The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Mon, 21 Oct, 8:01 AM UTC
18 Sources
[1]
IBM Introduces Granite 3.0: High Performing AI Models Built for Business
New Granite 3.0 8B & 2B models, released under the permissive Apache 2.0 license, show strong performance across many academic and enterprise benchmarks, able to outperform or match similar-sized models. New Granite Guardian 3.0 models deliver most comprehensive guardrail capabilities to advance safe and trustworthy AI New Granite 3.0 Mixture-of-Experts models enable extremely efficient inference and low latency, suitable for CPU-based deployments and edge computing New Granite Time Series model achieved state-of-the-art performance in zero/few-shot forecasting, outperforming models 10 times larger unveils next generation of Granite-powered watsonx Code Assistant for general purpose coding; Debuts new tools in watsonx.ai for building and deploying AI applications and agents Announces Granite will become the default model of Consulting Advantage, an AI-powered delivery platform used by 160,000 consultants to bring new solutions to clients faster Today, at (NYSE: IBM) annual TechXchange event the company announced the release of its most advanced family of AI models to date, Granite 3.0. third-generation Granite flagship language models can outperform or match similarly sized models from leading model providers on many academic and industry benchmarks, showcasing strong performance, transparency and safety. Consistent with the company's commitment to open-source AI, the Granite models are released under the permissive Apache 2.0 license, making them unique in the combination of performance, flexibility and autonomy they provide to enterprise clients and the community at large. Granite 3.0 family includes: General Purpose/Language: Granite 3.0 8B Instruct, Granite 3.0 2B Instruct, Granite 3.0 8B Base, Granite 3.0 2B Base Guardrails & Safety: Granite Guardian 3.0 8B, Granite Guardian 3.0 2B Mixture-of-Experts: Granite 3.0 3B-A800M Instruct, Granite 3.0 1B-A400M Instruct, Granite 3.0 3B-A800M Base, Granite 3.0 1B-A400M Base The new Granite 3.0 8B and 2B language models are designed as 'workhorse' models for enterprise AI, delivering strong performance for tasks such as Retrieval Augmented Geneneration (RAG), classification, summarization, entity extraction, and tool use. These compact, versatile models are designed to be fine-tuned with enterprise data and seamlessly integrated across diverse business environments or workflows. While many large language models (LLMs) are trained on publicly available data, a vast majority of enterprise data remains untapped. By combining a small Granite model with enterprise data, especially using the revolutionary alignment technique InstructLab - introduced by and RedHat in May - believes businesses can achieve task-specific performance that rivals larger models at a fraction of the cost (based on an observed range of 3x-23x less cost than large frontier models in several early proofs-of-concept1). The Granite 3.0 release reaffirms commitment to building transparency, safety, and trust in AI products. The Granite 3.0 technical report and responsible use guide provide a description of the datasets used to train these models, details of the filtering, cleansing, and curation steps applied, along with comprehensive results of model performance across major academic and enterprise benchmarks. Critically, provides an IP indemnity for all Granite models on watsonx.ai so enterprise clients can be more confident in merging their data with the models. Raising the bar: Granite 3.0 benchmarks The Granite 3.0 language models also demonstrate promising results on raw performance. On standard academic benchmarks defined by Hugging Face's OpenLLM Leaderboard, the Granite 3.0 8B Instruct model's overall performance leads on average against state-of-the-art-performance of similar-sized open source models from Meta and Mistral. On state-of-the-art AttaQ safety benchmark, the Granite 3.0 8B Instruct model leads across all measured safety dimensions compared to models from Meta and Mistral.2 Across the core enterprise tasks of RAG, tool use, and tasks in the Cybersecurity domain, the Granite 3.0 8B Instruct model shows leading performance on average compared to similar-sized open source models from Mistral and Meta.3 The Granite 3.0 models were trained on over 12 trillion tokens on data taken from 12 different natural languages and 116 different programming languages, using a novel two-stage training method, leveraging results from several thousand experiments designed to optimize data quality, data selection, and training parameters. By the end of the year, the 3.0 8B and 2B language models are expected to include support for an extended 128K context window and multi-modal document understanding capabilities. Demonstrating an excellent balance of performance and inference cost, offers its Granite Mixture of Experts (MoE) Architecture models, Granite 3.0 1B-A400M and Granite 3.0 3B-A800M, as smaller, lightweight models that could be deployed for low latency applications as well as CPU-based deployments. is also announcing an updated release of its pre-trained Granite Time Series models, the first versions of which were released earlier this year. These new models are trained on 3 times more data and deliver strong performance on all three major time series benchmarks, outperforming 10 times larger models from Google, Alibaba, and others. The updated models also provide greater modeling flexibility with support for external variables and rolling forecasts.4 Introducing Granite Guardian 3.0: ushering the next era of responsible AI As part of this release, is also introducing a new family of Granite Guardian models that permit application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks. The Granite Guardian 3.0 8B and 2B models provide the most comprehensive set of risk and harm detection capabilities available in the market today. In addition to harm dimensions such as social bias, hate, toxicity, profanity, violence, jailbreaking and more, these models also provide a range of unique RAG-specific checks such as groundedness, context relevance, and answer relevance. In extensive testing across 19 safety and RAG benchmarks, the Granite Guardian 3.0 8B model has higher overall accuracy on harm detection on average than all three generations of Llama Guard models from Meta. It also showed on par overall performance in hallucination detection on average with specialized hallucination detection models WeCheck and MiniCheck.5 While the Granite Guardian models are derived from the corresponding Granite language models, they can be used to implement guardrails alongside any open or proprietary AI models. Availability of Granite 3.0 models The entire suite of Granite 3.0 models and the updated time series models are available for download on HuggingFace under the permissive Apache 2.0 license. The instruct variants of the new Granite 3.0 8B and 2B language models and the Granite Guardian 3.0 8B and 2Bmodels are available today for commercial use on watsonx platform. A selection of the Granite 3.0 models will also be available as NVIDIA NIM microservices and through Google Cloud's Vertex AI Model Garden integrations with HuggingFace. To help provide developer choice and ease of use and support local, edge deployments, a curated set of the Granite 3.0 models are also available on Ollama and Replicate. The latest generation of Granite models expand robust open-source catalog of powerful LLMs. has collaborated with ecosystem partners like AWS, Docker, Domo, via its Qualcomm AI Hub, Salesforce, SAP, and others to integrate a variety of Granite models into these partners' offerings or make Granite models available on their platforms, offering greater choice to enterprises across the world. Assistants to Agents: realizing the future for enterprise AI is advancing enterprise AI through a spectrum of technologies - from models and assistants, to the tools needed to tune and deploy AI specifically for companies' unique data and use-cases. is also paving the way for future AI agents that can self-direct, reflect, and perform complex tasks in dynamic business environments. continues to evolve its portfolio of AI assistant technologies - from watsonx Orchestrate to help companies build their own assistants via low-code tooling and automation, to a wide set of pre-built assistants for specific tasks and domains such as customer service, human resources, sales, and marketing. Organizations around the world have used watsonx Assistant to help them build AI assistants for tasks like answering routine questions from customers or employees, modernizing their mainframes and legacy IT applications, helping students explore potential career paths, or providing digital mortgage support for home buyers. Today also unveiled the upcoming release of the next generation of watsonx Code Assistant, powered by Granite code models, to offer general-purpose coding assistance across languages like C, C++, Go, Java, and Python, with advanced application modernization capabilities for Enterprise Java Applications.6 Granite's code capabilities are also now accessible through a Visual Studio Code extension, IBM Granite.Code. also plans to release new tools to help developers build, customize and deploy AI more efficiently via watsonx.ai - including agentic frameworks, integrations with existing environments and low-code automations for common use-cases like RAG and agents.7 is focused on developing AI agent technologies which are capable of greater autonomy, sophisticated reasoning and multi-step problem solving. The initial release of the Granite 3.0 8B model features support for key agentic capabilities, such as advanced reasoning and a highly-structured chat template and prompting style for implementing tool use workflows. also plans to introduce a new AI agent chat feature to watsonx Orchestrate, which uses agentic capabilities to orchestrate AI Assistants, skills, and automations that help users increase productivity across their teams.8 plans to continue building agent capabilities across its portfolio in 2025, including pre-built agents for specific domains and use-cases. Expanded AI-powered delivery platform to supercharge consultants with AI is also announcing a major expansion of its AI-powered delivery platform, IBM Consulting Advantage. The multi-model platform contains AI agents, applications, and methods like repeatable frameworks that can empower 160,000 consultants to deliver better and faster client value at a lower cost. As part of the expansion, Granite 3.0 language models will become the default model in Consulting Advantage. Leveraging Granite's performance and efficiency, will be able to help maximize the return-on-investment for the generative AI projects of clients. Another key part of the expansion is the introduction of IBM Consulting Advantage for Cloud Transformation and Management and IBM Consulting Advantage for Business Operations. Each includes domain-specific AI agents, applications, and methods infused with best practices so consultants can help accelerate client cloud and AI transformations in tasks, like code modernization and quality engineering, or transform and execute operations across domains, like finance, HR and procurement. To learn more about Granite and AI for Business strategy, visit https://www.ibm.com/granite. 1 Cost calculations are based on API cost per million tokens pricing of watsonx for open models and for GPT4 models (assuming blend of 80% inout, 20% output) for customer proofs-of-concept. 4 The Tiny Time Mixer: Fast Pre-Trained Models for Enhanced Zero/Few Shot Forecasting on Multivariate Time Series 5 Evaluation results published in Granite Guardian GitHub Repo
[2]
IBM Introduces Granite 3.0: High Performing AI Models Built for Business
- New Granite 3.0 8B & 2B models, released under the permissive Apache 2.0 license, show strong performance across many academic and enterprise benchmarks, able to outperform or match similar-sized models- New Granite Guardian 3.0 models deliver IBM's most comprehensive guardrail capabilities to advance safe and trustworthy AI- New Granite 3.0 Mixture-of-Experts models enable extremely efficient inference and low latency, suitable for CPU-based deployments and edge computing- New Granite Time Series model achieved state-of-the-art performance in zero/few-shot forecasting, outperforming models 10 times larger- IBM unveils next generation of Granite-powered watsonx Code Assistant for general purpose coding; Debuts new tools in watsonx.ai for building and deploying AI applications and agents- Announces Granite will become the default model of Consulting Advantage, an AI-powered delivery platform used by IBM's 160,000 consultants to bring new solutions to clients faster Today, at IBM's (NYSE: IBM) annual TechXchange event the company announced the release of its most advanced family of AI models to date, Granite 3.0. IBM's third-generation Granite flagship language models can outperform or match similarly sized models from leading model providers on many academic and industry benchmarks, showcasing strong performance, transparency and safety. Consistent with the company's commitment to open-source AI, the Granite models are released under the permissive Apache 2.0 license, making them unique in the combination of performance, flexibility and autonomy they provide to enterprise clients and the community at large. IBM's Granite 3.0 family includes: General Purpose/Language: Granite 3.0 8B Instruct, Granite 3.0 2B Instruct, Granite 3.0 8B Base, Granite 3.0 2B BaseGuardrails & Safety: Granite Guardian 3.0 8B, Granite Guardian 3.0 2BMixture-of-Experts: Granite 3.0 3B-A800M Instruct, Granite 3.0 1B-A400M Instruct, Granite 3.0 3B-A800M Base, Granite 3.0 1B-A400M Base The new Granite 3.0 8B and 2B language models are designed as 'workhorse' models for enterprise AI, delivering strong performance for tasks such as Retrieval Augmented Geneneration (RAG), classification, summarization, entity extraction, and tool use. These compact, versatile models are designed to be fine-tuned with enterprise data and seamlessly integrated across diverse business environments or workflows. While many large language models (LLMs) are trained on publicly available data, a vast majority of enterprise data remains untapped. By combining a small Granite model with enterprise data, especially using the revolutionary alignment technique InstructLab - introduced by IBM and RedHat in May - IBM believes businesses can achieve task-specific performance that rivals larger models at a fraction of the cost (based on an observed range of 3x-23x less cost than large frontier models in several early proofs-of-concept1). The Granite 3.0 release reaffirms IBM's commitment to building transparency, safety, and trust in AI products. The Granite 3.0 technical report and responsible use guide provide a description of the datasets used to train these models, details of the filtering, cleansing, and curation steps applied, along with comprehensive results of model performance across major academic and enterprise benchmarks. Critically, IBM provides an IP indemnity for all Granite models on watsonx.ai so enterprise clients can be more confident in merging their data with the models. Raising the bar: Granite 3.0 benchmarks The Granite 3.0 language models also demonstrate promising results on raw performance. On standard academic benchmarks defined by Hugging Face's OpenLLM Leaderboard, the Granite 3.0 8B Instruct model's overall performance leads on average against state-of-the-art-performance of similar-sized open source models from Meta and Mistral. On IBM's state-of-the-art AttaQ safety benchmark, the Granite 3.0 8B Instruct model leads across all measured safety dimensions compared to models from Meta and Mistral.2 Across the core enterprise tasks of RAG, tool use, and tasks in the Cybersecurity domain, the Granite 3.0 8B Instruct model shows leading performance on average compared to similar-sized open source models from Mistral and Meta.3 The Granite 3.0 models were trained on over 12 trillion tokens on data taken from 12 different natural languages and 116 different programming languages, using a novel two-stage training method, leveraging results from several thousand experiments designed to optimize data quality, data selection, and training parameters. By the end of the year, the 3.0 8B and 2B language models are expected to include support for an extended 128K context window and multi-modal document understanding capabilities. Demonstrating an excellent balance of performance and inference cost, IBM offers its Granite Mixture of Experts (MoE) Architecture models, Granite 3.0 1B-A400M and Granite 3.0 3B-A800M, as smaller, lightweight models that could be deployed for low latency applications as well as CPU-based deployments. IBM is also announcing an updated release of its pre-trained Granite Time Series models, the first versions of which were released earlier this year. These new models are trained on 3 times more data and deliver strong performance on all three major time series benchmarks, outperforming 10 times larger models from Google, Alibaba, and others. The updated models also provide greater modeling flexibility with support for external variables and rolling forecasts.4 Introducing Granite Guardian 3.0: ushering the next era of responsible AI As part of this release, IBM is also introducing a new family of Granite Guardian models that permit application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks. The Granite Guardian 3.0 8B and 2B models provide the most comprehensive set of risk and harm detection capabilities available in the market today. In addition to harm dimensions such as social bias, hate, toxicity, profanity, violence, jailbreaking and more, these models also provide a range of unique RAG-specific checks such as groundedness, context relevance, and answer relevance. In extensive testing across 19 safety and RAG benchmarks, the Granite Guardian 3.0 8B model has higher overall accuracy on harm detection on average than all three generations of Llama Guard models from Meta. It also showed on par overall performance in hallucination detection on average with specialized hallucination detection models WeCheck and MiniCheck.5 While the Granite Guardian models are derived from the corresponding Granite language models, they can be used to implement guardrails alongside any open or proprietary AI models. Availability of Granite 3.0 models The entire suite of Granite 3.0 models and the updated time series models are available for download on HuggingFace under the permissive Apache 2.0 license. The instruct variants of the new Granite 3.0 8B and 2B language models and the Granite Guardian 3.0 8B and 2Bmodels are available today for commercial use on IBM's watsonx platform. A selection of the Granite 3.0 models will also be available as NVIDIA NIM microservices and through Google Cloud's Vertex AI Model Garden integrations with HuggingFace. To help provide developer choice and ease of use and support local, edge deployments, a curated set of the Granite 3.0 models are also available on Ollama and Replicate. The latest generation of Granite models expand IBM's robust open-source catalog of powerful LLMs. IBM has collaborated with ecosystem partners like AWS, Docker, Domo, Qualcomm Technologies, Inc. via its Qualcomm® AI Hub, Salesforce, SAP, and others to integrate a variety of Granite models into these partners' offerings or make Granite models available on their platforms, offering greater choice to enterprises across the world. Assistants to Agents: realizing the future for enterprise AI IBM is advancing enterprise AI through a spectrum of technologies - from models and assistants, to the tools needed to tune and deploy AI specifically for companies' unique data and use-cases. IBM is also paving the way for future AI agents that can self-direct, reflect, and perform complex tasks in dynamic business environments. IBM continues to evolve its portfolio of AI assistant technologies - from watsonx Orchestrate to help companies build their own assistants via low-code tooling and automation, to a wide set of pre-built assistants for specific tasks and domains such as customer service, human resources, sales, and marketing. Organizations around the world have used watsonx Assistant to help them build AI assistants for tasks like answering routine questions from customers or employees, modernizing their mainframes and legacy IT applications, helping students explore potential career paths, or providing digital mortgage support for home buyers. Today IBM also unveiled the upcoming release of the next generation of watsonx Code Assistant, powered by Granite code models, to offer general-purpose coding assistance across languages like C, C++, Go, Java, and Python, with advanced application modernization capabilities for Enterprise Java Applications.6 Granite's code capabilities are also now accessible through a Visual Studio Code extension, IBM Granite.Code. IBM also plans to release new tools to help developers build, customize and deploy AI more efficiently via watsonx.ai - including agentic frameworks, integrations with existing environments and low-code automations for common use-cases like RAG and agents.7 IBM is focused on developing AI agent technologies which are capable of greater autonomy, sophisticated reasoning and multi-step problem solving. The initial release of the Granite 3.0 8B model features support for key agentic capabilities, such as advanced reasoning and a highly-structured chat template and prompting style for implementing tool use workflows. IBM also plans to introduce a new AI agent chat feature to IBM watsonx Orchestrate, which uses agentic capabilities to orchestrate AI Assistants, skills, and automations that help users increase productivity across their teams.8 IBM plans to continue building agent capabilities across its portfolio in 2025, including pre-built agents for specific domains and use-cases. Expanded AI-powered delivery platform to supercharge IBM consultants with AI IBM is also announcing a major expansion of its AI-powered delivery platform, IBM Consulting Advantage. The multi-model platform contains AI agents, applications, and methods like repeatable frameworks that can empower 160,000 IBM consultants to deliver better and faster client value at a lower cost. As part of the expansion, Granite 3.0 language models will become the default model in Consulting Advantage. Leveraging Granite's performance and efficiency, IBM Consulting will be able to help maximize the return-on-investment for the generative AI projects of IBM clients. Another key part of the expansion is the introduction of IBM Consulting Advantage for Cloud Transformation and Management and IBM Consulting Advantage for Business Operations. Each includes domain-specific AI agents, applications, and methods infused with IBM's best practices so IBM consultants can help accelerate client cloud and AI transformations in tasks, like code modernization and quality engineering, or transform and execute operations across domains, like finance, HR and procurement. To learn more about Granite and IBM's AI for Business strategy, visit https://www.ibm.com/granite.
[3]
IBM Introduces Granite 3.0: High Performing AI Models Built for Business By Investing.com
, /PRNewswire/ -- Today, at IBM's (NYSE: IBM) annual TechXchange event the company announced the release of its most advanced family of AI models to date, Granite 3.0. IBM's third-generation Granite flagship language models can outperform or match similarly sized models from leading model providers on many academic and industry benchmarks, showcasing strong performance, transparency and safety. Consistent with the company's commitment to open-source AI, the Granite models are released under the permissive Apache 2.0 license, making them unique in the combination of performance, flexibility and autonomy they provide to enterprise clients and the community at large. IBM's Granite 3.0 family includes: The new Granite 3.0 8B and 2B language models are designed as 'workhorse' models for enterprise AI, delivering strong performance for tasks such as Retrieval Augmented Geneneration (RAG), classification, summarization, entity extraction, and tool use. These compact, versatile models are designed to be fine-tuned with enterprise data and seamlessly integrated across diverse business environments or workflows. While many large language models (LLMs) are trained on publicly available data, a vast majority of enterprise data remains untapped. By combining a small Granite model with enterprise data, especially using the revolutionary alignment technique InstructLab " introduced by IBM and RedHat in May " IBM believes businesses can achieve task-specific performance that rivals larger models at a fraction of the cost (based on an observed range of 3x-23x less cost than large frontier models in several early proofs-of-concept). The Granite 3.0 release reaffirms IBM's commitment to building transparency, safety, and trust in AI products. The Granite 3.0 technical report and responsible use guide provide a description of the datasets used to train these models, details of the filtering, cleansing, and curation steps applied, along with comprehensive results of model performance across major academic and enterprise benchmarks. Critically, IBM provides an IP indemnity for all Granite models on watsonx.ai so enterprise clients can be more confident in merging their data with the models. Raising the bar: Granite 3.0 benchmarks The Granite 3.0 language models also demonstrate promising results on raw performance. On standard academic benchmarks defined by Hugging Face's OpenLLM Leaderboard, the Granite 3.0 8B Instruct model's overall performance leads on average against state-of-the-art-performance of similar-sized open source models from Meta (NASDAQ:META) and Mistral. On IBM's state-of-the-art AttaQ safety benchmark, the Granite 3.0 8B Instruct model leads across all measured safety dimensions compared to models from Meta and Mistral. Across the core enterprise tasks of RAG, tool use, and tasks in the Cybersecurity domain, the Granite 3.0 8B Instruct model shows leading performance on average compared to similar-sized open source models from Mistral and Meta. The Granite 3.0 models were trained on over 12 trillion tokens on data taken from 12 different natural languages and 116 different programming languages, using a novel two-stage training method, leveraging results from several thousand experiments designed to optimize data quality, data selection, and training parameters. By the end of the year, the 3.0 8B and 2B language models are expected to include support for an extended 128K context window and multi-modal document understanding capabilities. Demonstrating an excellent balance of performance and inference cost, IBM offers its Granite Mixture of Experts (MoE) Architecture models, Granite 3.0 1B-A400M and Granite 3.0 3B-A800M, as smaller, lightweight models that could be deployed for low latency applications as well as CPU-based deployments. IBM is also announcing an updated release of its pre-trained Granite Time Series models, the first versions of which were released earlier this year. These new models are trained on 3 times more data and deliver strong performance on all three major time series benchmarks, outperforming 10 times larger models from Google (NASDAQ:GOOGL), Alibaba (NYSE:BABA), and others. The updated models also provide greater modeling flexibility with support for external variables and rolling forecasts. Introducing Granite Guardian 3.0: ushering the next era of responsible AI As part of this release, IBM is also introducing a new family of Granite Guardian models that permit application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks. The Granite Guardian 3.0 8B and 2B models provide the most comprehensive set of risk and harm detection capabilities available in the market today. In addition to harm dimensions such as social bias, hate, toxicity, profanity, violence, jailbreaking and more, these models also provide a range of unique RAG-specific checks such as groundedness, context relevance, and answer relevance. In extensive testing across 19 safety and RAG benchmarks, the Granite Guardian 3.0 8B model has higher overall accuracy on harm detection on average than all three generations of Llama Guard models from Meta. It also showed on par overall performance in hallucination detection on average with specialized hallucination detection models WeCheck and MiniCheck. While the Granite Guardian models are derived from the corresponding Granite language models, they can be used to implement guardrails alongside any open or proprietary AI models. Availability of Granite 3.0 models The entire suite of Granite 3.0 models and the updated time series models are available for download on HuggingFace under the permissive Apache 2.0 license. The instruct variants of the new Granite 3.0 8B and 2B language models and the Granite Guardian 3.0 8B and 2Bmodels are available today for commercial use on IBM's watsonx platform. A selection of the Granite 3.0 models will also be available as NVIDIA (NASDAQ:NVDA) NIM microservices and through Google Cloud's Vertex (NASDAQ:VRTX) AI Model Garden integrations with HuggingFace. To help provide developer choice and ease of use and support local, edge deployments, a curated set of the Granite 3.0 models are also available on Ollama and Replicate. The latest generation of Granite models expand IBM's robust open-source catalog of powerful LLMs. IBM has collaborated with ecosystem partners like AWS, Docker, Domo (NASDAQ:DOMO), Qualcomm (NASDAQ:QCOM) Technologies, Inc. via its Qualcomm ® AI Hub, Salesforce (NYSE:CRM), SAP, and others to integrate a variety of Granite models into these partners' offerings or make Granite models available on their platforms, offering greater choice to enterprises across the world. Assistants to Agents: realizing the future for enterprise AI IBM is advancing enterprise AI through a spectrum of technologies " from models and assistants, to the tools needed to tune and deploy AI specifically for companies' unique data and use-cases. IBM is also paving the way for future AI agents that can self-direct, reflect, and perform complex tasks in dynamic business environments. IBM continues to evolve its portfolio of AI assistant technologies " from watsonx Orchestrate to help companies build their own assistants via low-code tooling and automation, to a wide set of pre-built assistants for specific tasks and domains such as customer service, human resources, sales, and marketing. Organizations around the world have used watsonx Assistant to help them build AI assistants for tasks like answering routine questions from customers or employees, modernizing their mainframes and legacy IT applications, helping students explore potential career paths, or providing digital mortgage support for home buyers. Today IBM also unveiled the upcoming release of the next generation of watsonx Code Assistant, powered by Granite code models, to offer general-purpose coding assistance across languages like C, C++, Go, Java, and Python, with advanced application modernization capabilities for Enterprise Java Applications. Granite's code capabilities are also now accessible through a Visual Studio Code extension, IBM Granite.Code. IBM also plans to release new tools to help developers build, customize and deploy AI more efficiently via watsonx.ai " including agentic frameworks, integrations with existing environments and low-code automations for common use-cases like RAG and agents. IBM is focused on developing AI agent technologies which are capable of greater autonomy, sophisticated reasoning and multi-step problem solving. The initial release of the Granite 3.0 8B model features support for key agentic capabilities, such as advanced reasoning and a highly-structured chat template and prompting style for implementing tool use workflows. IBM also plans to introduce a new AI agent chat feature to IBM watsonx Orchestrate, which uses agentic capabilities to orchestrate AI Assistants, skills, and automations that help users increase productivity across their teams. IBM plans to continue building agent capabilities across its portfolio in 2025, including pre-built agents for specific domains and use-cases. Expanded AI-powered delivery platform to supercharge IBM consultants with AI IBM is also announcing a major expansion of its AI-powered delivery platform, IBM Consulting Advantage. The multi-model platform contains AI agents, applications, and methods like repeatable frameworks that can empower 160,000 IBM consultants to deliver better and faster client value at a lower cost. As part of the expansion, Granite 3.0 language models will become the default model in Consulting Advantage. Leveraging Granite's performance and efficiency, IBM Consulting will be able to help maximize the return-on-investment for the generative AI projects of IBM clients. Another key part of the expansion is the introduction of IBM Consulting Advantage for Cloud Transformation and Management and IBM Consulting Advantage for Business Operations. Each includes domain-specific AI agents, applications, and methods infused with IBM's best practices so IBM consultants can help accelerate client cloud and AI transformations in tasks, like code modernization and quality engineering, or transform and execute operations across domains, like finance, HR and procurement. To learn more about Granite and IBM's AI for Business strategy, visit https://www.ibm.com/granite. Cost calculations are based on API cost per million tokens pricing of IBM watsonx for open models and openAI for GPT4 models (assuming blend of 80% inout, 20% output) for customer proofs-of-concept. IBM Research technical paper: Granite 3.0 Language Models IBM Research technical paper: Granite 3.0 Language Models The : Fast Pre-Trained Models for Enhanced Zero/Few Shot Forecasting on Multivariate Time Series Evaluation results published in Granite Guardian GitHub Repo Planned availability for Q4 2024 Planned availability for Q4 2024 Planned availability for Q1 2025
[4]
IBM Introduces Granite 3.0: High Performing AI Models Built for Business
ARMONK, N.Y., Oct. 21, 2024 /PRNewswire/ -- Today, at IBM's (NYSE: IBM) annual TechXchange event the company announced the release of its most advanced family of AI models to date, Granite 3.0. IBM's third-generation Granite flagship language models can outperform or match similarly sized models from leading model providers on many academic and industry benchmarks, showcasing strong performance, transparency and safety. Consistent with the company's commitment to open-source AI, the Granite models are released under the permissive Apache 2.0 license, making them unique in the combination of performance, flexibility and autonomy they provide to enterprise clients and the community at large. IBM's Granite 3.0 family includes: The new Granite 3.0 8B and 2B language models are designed as 'workhorse' models for enterprise AI, delivering strong performance for tasks such as Retrieval Augmented Geneneration (RAG), classification, summarization, entity extraction, and tool use. These compact, versatile models are designed to be fine-tuned with enterprise data and seamlessly integrated across diverse business environments or workflows. While many large language models (LLMs) are trained on publicly available data, a vast majority of enterprise data remains untapped. By combining a small Granite model with enterprise data, especially using the revolutionary alignment technique InstructLab - introduced by IBM and RedHat in May - IBM believes businesses can achieve task-specific performance that rivals larger models at a fraction of the cost (based on an observed range of 3x-23x less cost than large frontier models in several early proofs-of-concept). The Granite 3.0 release reaffirms IBM's commitment to building transparency, safety, and trust in AI products. The Granite 3.0 technical report and responsible use guide provide a description of the datasets used to train these models, details of the filtering, cleansing, and curation steps applied, along with comprehensive results of model performance across major academic and enterprise benchmarks. Critically, IBM provides an IP indemnity for all Granite models on watsonx.ai so enterprise clients can be more confident in merging their data with the models. Raising the bar: Granite 3.0 benchmarks The Granite 3.0 language models also demonstrate promising results on raw performance. On standard academic benchmarks defined by Hugging Face's OpenLLM Leaderboard, the Granite 3.0 8B Instruct model's overall performance leads on average against state-of-the-art-performance of similar-sized open source models from Meta and Mistral. On IBM's state-of-the-art AttaQ safety benchmark, the Granite 3.0 8B Instruct model leads across all measured safety dimensions compared to models from Meta and Mistral. Across the core enterprise tasks of RAG, tool use, and tasks in the Cybersecurity domain, the Granite 3.0 8B Instruct model shows leading performance on average compared to similar-sized open source models from Mistral and Meta. The Granite 3.0 models were trained on over 12 trillion tokens on data taken from 12 different natural languages and 116 different programming languages, using a novel two-stage training method, leveraging results from several thousand experiments designed to optimize data quality, data selection, and training parameters. By the end of the year, the 3.0 8B and 2B language models are expected to include support for an extended 128K context window and multi-modal document understanding capabilities. Demonstrating an excellent balance of performance and inference cost, IBM offers its Granite Mixture of Experts (MoE) Architecture models, Granite 3.0 1B-A400M and Granite 3.0 3B-A800M, as smaller, lightweight models that could be deployed for low latency applications as well as CPU-based deployments. IBM is also announcing an updated release of its pre-trained Granite Time Series models, the first versions of which were released earlier this year. These new models are trained on 3 times more data and deliver strong performance on all three major time series benchmarks, outperforming 10 times larger models from Google, Alibaba, and others. The updated models also provide greater modeling flexibility with support for external variables and rolling forecasts. Introducing Granite Guardian 3.0: ushering the next era of responsible AI As part of this release, IBM is also introducing a new family of Granite Guardian models that permit application developers to implement safety guardrails by checking user prompts and LLM responses for a variety of risks. The Granite Guardian 3.0 8B and 2B models provide the most comprehensive set of risk and harm detection capabilities available in the market today. In addition to harm dimensions such as social bias, hate, toxicity, profanity, violence, jailbreaking and more, these models also provide a range of unique RAG-specific checks such as groundedness, context relevance, and answer relevance. In extensive testing across 19 safety and RAG benchmarks, the Granite Guardian 3.0 8B model has higher overall accuracy on harm detection on average than all three generations of Llama Guard models from Meta. It also showed on par overall performance in hallucination detection on average with specialized hallucination detection models WeCheck and MiniCheck. While the Granite Guardian models are derived from the corresponding Granite language models, they can be used to implement guardrails alongside any open or proprietary AI models. Availability of Granite 3.0 models The entire suite of Granite 3.0 models and the updated time series models are available for download on HuggingFace under the permissive Apache 2.0 license. The instruct variants of the new Granite 3.0 8B and 2B language models and the Granite Guardian 3.0 8B and 2Bmodels are available today for commercial use on IBM's watsonx platform. A selection of the Granite 3.0 models will also be available as NVIDIA NIM microservices and through Google Cloud's Vertex AI Model Garden integrations with HuggingFace. To help provide developer choice and ease of use and support local, edge deployments, a curated set of the Granite 3.0 models are also available on Ollama and Replicate. The latest generation of Granite models expand IBM's robust open-source catalog of powerful LLMs. IBM has collaborated with ecosystem partners like AWS, Docker, Domo, Qualcomm Technologies, Inc. via its Qualcomm® AI Hub, Salesforce, SAP, and others to integrate a variety of Granite models into these partners' offerings or make Granite models available on their platforms, offering greater choice to enterprises across the world. Assistants to Agents: realizing the future for enterprise AI IBM is advancing enterprise AI through a spectrum of technologies - from models and assistants, to the tools needed to tune and deploy AI specifically for companies' unique data and use-cases. IBM is also paving the way for future AI agents that can self-direct, reflect, and perform complex tasks in dynamic business environments. IBM continues to evolve its portfolio of AI assistant technologies - from watsonx Orchestrate to help companies build their own assistants via low-code tooling and automation, to a wide set of pre-built assistants for specific tasks and domains such as customer service, human resources, sales, and marketing. Organizations around the world have used watsonx Assistant to help them build AI assistants for tasks like answering routine questions from customers or employees, modernizing their mainframes and legacy IT applications, helping students explore potential career paths, or providing digital mortgage support for home buyers. Today IBM also unveiled the upcoming release of the next generation of watsonx Code Assistant, powered by Granite code models, to offer general-purpose coding assistance across languages like C, C++, Go, Java, and Python, with advanced application modernization capabilities for Enterprise Java Applications. Granite's code capabilities are also now accessible through a Visual Studio Code extension, IBM Granite.Code. IBM also plans to release new tools to help developers build, customize and deploy AI more efficiently via watsonx.ai - including agentic frameworks, integrations with existing environments and low-code automations for common use-cases like RAG and agents. IBM is focused on developing AI agent technologies which are capable of greater autonomy, sophisticated reasoning and multi-step problem solving. The initial release of the Granite 3.0 8B model features support for key agentic capabilities, such as advanced reasoning and a highly-structured chat template and prompting style for implementing tool use workflows. IBM also plans to introduce a new AI agent chat feature to IBM watsonx Orchestrate, which uses agentic capabilities to orchestrate AI Assistants, skills, and automations that help users increase productivity across their teams. IBM plans to continue building agent capabilities across its portfolio in 2025, including pre-built agents for specific domains and use-cases. Expanded AI-powered delivery platform to supercharge IBM consultants with AI IBM is also announcing a major expansion of its AI-powered delivery platform, IBM Consulting Advantage. The multi-model platform contains AI agents, applications, and methods like repeatable frameworks that can empower 160,000 IBM consultants to deliver better and faster client value at a lower cost. As part of the expansion, Granite 3.0 language models will become the default model in Consulting Advantage. Leveraging Granite's performance and efficiency, IBM Consulting will be able to help maximize the return-on-investment for the generative AI projects of IBM clients. Another key part of the expansion is the introduction of IBM Consulting Advantage for Cloud Transformation and Management and IBM Consulting Advantage for Business Operations. Each includes domain-specific AI agents, applications, and methods infused with IBM's best practices so IBM consultants can help accelerate client cloud and AI transformations in tasks, like code modernization and quality engineering, or transform and execute operations across domains, like finance, HR and procurement. To learn more about Granite and IBM's AI for Business strategy, visit https://www.ibm.com/granite. Cost calculations are based on API cost per million tokens pricing of IBM watsonx for open models and openAI for GPT4 models (assuming blend of 80% inout, 20% output) for customer proofs-of-concept. IBM Research technical paper: Granite 3.0 Language Models IBM Research technical paper: Granite 3.0 Language Models The Tiny Time Mixer: Fast Pre-Trained Models for Enhanced Zero/Few Shot Forecasting on Multivariate Time Series Evaluation results published in Granite Guardian GitHub Repo Planned availability for Q4 2024 Planned availability for Q4 2024 Planned availability for Q1 2025 Media Contact: Amy Angelini alangeli@us.ibm.com
[5]
IBM releases new Granite foundation models under 'permissive' Apache license - SiliconANGLE
IBM releases new Granite foundation models under 'permissive' Apache license Furthering its drive to build a distinctive position in enterprise artificial intelligence, IBM Corp. today is rolling out a series of new language models and tools to ensure their responsible use. The company is also unveiling a new generation of its watsonx Code Assistant for application development and modernization. All of these new capabilities are being bundled together in a multimodel platform for use by the company's 160,000 consultants. The new Granite 3.0 8B & 2B models come in "Instruct" and "Guardian" variants used for training and risk/harm detection, respectively. Both will be available under an Apache 2.0 license, Rob Thomas (pictured), IBM's senior vice president of software and chief commercial officer, called "the most permissive license for enterprises and partners to create value on top." The open-source license allows models to be deployed for as little as $100 per server, with intellectual property indemnification aimed at giving enterprise customers confidence in merging their data with the IBM models. "We've gone from a world of 'plus AI,' where clients were running their business and adding AI on top of it, to a notion of AI first, which is companies building their business model based on AI," Thomas said. IBM intends to lead in the use of AI for information technology automation through organic development and its acquisitions and pending acquisitions of infrastructure-focused firms like Turbonomic Inc., Apptio Inc. and HashiCorp Inc. "The book of business that we have built on generative AI is now $2 billion-plus across technology and consulting," Thomas said. "I'm not sure we've ever had a business that has scaled at this pace." The Instruct versions of Granite, which are used for training, come in 8 billion- and 2 billion-parameter versions. They were trained on more than 12 trillion tokens of training data in 12 languages and 116 programming languages, making them capable of coding, documentation and translation. By year's end, IBM said, it plans to extend the foundational models to a 128,000-token context length with multimodality. That refers to enhancing a model's ability to process significantly longer input sequences and handle multiple data types simultaneously. Context length is the number of tokens -- such as words, symbols and or other units of input data -- that the AI model can process and retain. Typical models have context lengths of between 1,000 and 8,000 tokens. IBM said the new Granite models are designed as enterprise "workhorses" for tasks such as retrieval-augmented generation or RAG, classification, summarization, agent training, entity extraction and tool use. They can be trained with enterprise data to deliver the task-specific performance of much larger models at up to 60 times lower cost. Internal benchmarks showed the Granite 8B model achieving better performance than comparable models from Google LLC and Mistral AI SAS and equivalent performance to comparable models from Meta Platforms Inc. An accompanying technical report and responsible use guide provide extensive documentation of training datasets used to train the models as well as details of the filtering, cleansing and curation steps that were applied and comparative benchmark data. An updated release of the pretrained Granite models IBM released earlier this year are trained on three times more data and provide greater modeling flexibility with support for external variables and rolling forecasts. The Granite Guardian 3.0 models are intended to provide safety protections by checking user prompts and model responses for a variety of risks. "You can concatenate both on the input before you make the inference query and the output to prevent the core model from jailbreaks and to prevent violence, profanity, et cetera," said Dario Gil, senior vice president and director of research at IBM. "We've done everything possible to make it as safe as possible." Jailbreaks are malicious attempts to bypass the restrictions or safety measures imposed on an AI system to make it behave in unintended or potentially harmful ways. Guardian also performs RAG-specific checks such as context relevance, answer relevance and "groundedness," which refers to the extent to which the model is connected to and informed by real-world data, facts or context. A set of smaller models called Granite Accelerators and Mixture of Experts are intended for low-latency and CPU-only applications. MoE is a type of machine learning architecture that combines multiple specialized models and dynamically selects and activates only a subset of them to enhance efficiency. "Accelerator allows you to implement speculative decoding so you can achieve twice the throughput of the core model with no loss of quality," Gil said. The MoE model can be trained with 10 trillion tokens but uses only 800 million used during inferencing for efficiency in edge use cases. The Instruct and Guardian variants of Granite 8B and 2B models are available immediately for commercial use on IBM's watsonx platform. A selection of Granite 3.0 models will also be available n partner platforms like Nvidia Corp.'s NIM stack and Google's Vertex. The entire suite of Granite 3.0 models and the updated Time Series models are available for download on HuggingFace Inc.'s open-source platform and Red Hat Enterprise Linux. The new Granite 3.0-based watsonx Code Assistant supports the C, C++, Go, Java and Python languages with new application modernization capabilities for enterprise Java Applications. IBM said the assistant has yielded 90% faster code documentation for certain tasks within its software development business. The code capabilities are accessible through a Visual Studio Code extension called IBM Granite.Code. New tools for developers include agentic frameworks, integrations with existing environments and low-code automations for common use cases such as RAG and agents. With agentic AI, or systems that are capable of autonomous behavior or decision-making, set to become the next big wave in AI development, IBM also said it's equipping its consulting division with a multimodal agentic platform. The new Consulting Advantage for Cloud Transformation and Management and Consulting Advantage for Business Operations consulting lines will include domain-specific AI agents, applications and methods trained on IBM intellectual property and best practices that consultants can apply to their clients' cloud and AI projects. About 80,000 IBM consultants are currently using Consulting Advantage, with most deploying only one or two agents at a time, said Mohamad Ali, senior vice president head of IBM Consulting. As usage grows, however, IBM Consulting will need to support over 1.5 million agents, making Granite's economics "absolutely essential because we will continue to scale this platform and we needed to be very cost-efficient," he said.
[6]
IBM debuts open source Granite 3.0 LLMs for enterprise AI
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Make no mistake about it, enterprise AI is big business, especially for IBM. IBM already has a $2 billion book of business related to generative AI and it's now looking to accelerate that growth. IBM is expanding its enterprise AI business today with the launch of the third generation of Granite large language models (LLMs). A core element of the new generation is the continued focus on real open source enterprise AI. Going a step further, IBM is ensuring that models can be fine-tuned for enterprise AI, with its InstructLab capabilities. The new models announced today include general purpose options with a 2 billion and 8 billion Granite 3.0. There are also Mixture-of-Experts (MoE) models that include Granite 3.0 3B A800M Instruct, Granite 3.0 1B A400M Instruct, Granite 3.0 3B A800M Base and Granite 3.0 1B A400M Base. Rounding out the update, IBM also has a new group with optimized guardrail and safety options that include Granite Guardian 3.0 8B and Granite Guardian 3.0 2B models. The new models will be available on IBM's watsonX service, as well as on Amazon Bedrock, Amazon Sagemaker and Hugging Face. "As we mentioned on our last earnings call, the book of business that we've built on generative AI is now $2 billion plus across technology and consulting," Rob Thomas, senior vice-president and chief commercial officer at IBM, said during a briefing with press and analysts. "As I think about my 25 years in IBM, I'm not sure we've ever had a business that has scaled at this pace." How IBM is looking to advance enterprise AI with Granite 3.0 Granite 3.0 introduces a range of sophisticated AI models tailored for enterprise applications. IBM expects that the new models will help to support a range of enterprise use cases including: customer service, IT automation, Business Process Outsourcing (BPO), application development and cybersecurity. The new Granite 3.0 models were trained by IBM's centralized data model factory team that is responsible for sourcing and curating the data used for training. Dario Gil, Senior Vice President and Director of IBM research, explained that the training process involved 12 trillion tokens of data, including both language data across multiple languages as well as code data. He emphasized that the key differences from previous generations were the quality of the data and the architectural innovations used in the training process. Thomas added that what's also important to recognize is where the data comes from. "Part of our advantage in building models is data sets that we have that are unique to IBM," Thomas said. "We have a unique, I'd say, vantage point in the industry, where we become the first customer for everything that we build that also gives us an advantage in terms of how we construct the models." IBM claims high performance benchmarks for Granite 3.0 According to Gil, the Granite models have achieved remarkable results on a wide range of tasks, outperforming the latest versions of models from Google, Anthropic and others. "What you're seeing here is incredibly highly performant models, absolutely state of the art, and we're very proud of that," Gil said. But it's not just raw performance that sets Granite apart. IBM has also placed a strong emphasis on safety and trust, developing advanced "Guardian" models that can be used to prevent the core models from being jailbroken or producing harmful content. The various model size options are also a critical element. "We care so deeply, and we've learned a lesson from scaling AI, that inference cost is essential," Gil noted. "That is the reason why we're so focused on the size of the category of models, because it has the blend of performance and inference cost that is very attractive to scale use cases in the enterprise." Why real open source matters for enterprise AI A key differentiator for Granite 3.0 is IBM's decision to release the models under the Open Source Initiative (OSI) approved Apache 2.0 open-source license. There are many other open models, such as Meta's Llama in the market, that are not in fact available under an OSI-approved license. That's a distinction that matters to some enterprises. "We decided that we're going to be absolutely squeaky clean on that, and decided to do an Apache 2 license, so that we give maximum flexibility to our enterprise partners to do what they need to do with the technology," Gil explained. The permissive Apache 2.0 license allows IBM's partners to build their own brands and intellectual property on top of the Granite models. This helps foster a robust ecosystem of solutions and applications powered by the Granite technology. "It's completely changing the notion of how quickly businesses can adopt AI when you have a permissive license that enables contribution, enables community and ultimately, enables wide distribution," Thomas said. Looking beyond generative AI to generative computing Looking forward, IBM is thinking about the next major paradigm shift, something that Gil referred to as - generative computing. In essence, generative computing refers to the ability to program computers by providing examples or prompts, rather than explicitly writing out step-by-step instructions. This aligns with the capabilities of LLMs like Granite, which can generate text, code, and other outputs based on the input they receive. "This paradigm where we don't write the instructions, but we program the computer, by example, is fundamental, and we're just beginning to touch what that feels like by interacting with LLMs," Gil said. "You are going to see us invest and go very aggressively in a direction where with this paradigm of generative computing, we're going to be able to implement the next generation of models, agentic frameworks and much more than that, it's a fundamental new way to program computers as a consequence of the Gen AI revolution."
[7]
IBM doubles down on open source AI with new Granite 3.0 models
Open source and AI have an uneasy relationship. AI can't exist without open source, but few companies want to open source their AI programs or large language models (LLM). Except, notably, for IBM, which previously open-sourced its Granite models. Now, Big Blue is doubling down on its open-source AI with the release of its latest Granite AI 3.0 models under the Apache 2.0 license. IBM has done this using pretraining data from publicly available datasets, such as GitHub Code Clean, Starcoder data, public code repositories, and GitHub issues. And IBM has gone to great lengths to avoid potential copyright or legal problems. Also: Can AI even be open-source? It's complicated Why have other major AI companies not done this? One big reason is that their datasets are filled with copyrighted or other intellectual property-protected data. If they open their data, they also open themselves to lawsuits. For example, News Corp publications such as the Wall Street Journal and the New York Post are suing Perplexity for stealing their content. The Granite models, by contrast, are LLMs specifically designed for business use cases, with a strong emphasis on programming and software development. IBM claims these new models were trained on three times as much data as the ones released earlier this year. They also come with greater modeling flexibility and support for external variables and rolling forecasts. In particular, the new Granite 3.0 8B and 2B language models are designed as "workhorse" models for enterprise AI, delivering robust performance for tasks such as Retrieval Augmented Generation (RAG), classification, summarization, entity extraction, and tool use. These models also come in Instruct and Guardian variants. The first, as the name promises, helps people learn a particular language. Guardian is designed to detect risks in user prompts and AI responses. This is vital because, as security expert Bruce Schindler noted at the Secure Open-Source Software (SOSS) Fusion conference, "prompt injection [attacks] work because I am sending the AI data that it is interpreting as commands" -- which can lead to disastrous answers. Also: Red Hat reveals major enhancements to Red Hat Enterprise Linux AI The Granite code models range from 3 billion to 34 billion parameters and have been trained on 116 programming languages and 3 to 4 terabytes of tokens, combining extensive code data and natural language datasets. These models are accessible through several platforms, including Hugging Face, GitHub, IBM's own Watsonx.ai, and Red Hat Enterprise Linux (RHEL) AI. A curated set of the Granite 3.0 models is also available on Ollama and Replicate. In addition, IBM has released a new version of its Watsonx Code Assistant for application development. There, Granite provides general-purpose coding assistance across languages like C, C++, Go, Java, and Python, with advanced application modernization capabilities for Enterprise Java Applications. Granite's code capabilities are now accessible through a Visual Studio Code extension, IBM Granite.Code. Also: How to use ChatGPT to write code: What it does well and what it doesn't The Apache 2.0 license allows for both research and commercial use, which is a significant advantage compared to other major LLMs, which may claim to be open source but bind their LLMs with commercial restrictions. The most notable example of this is Meta's Llama. By making these models freely available, IBM is lowering barriers to entry for AI development and use. IBM also believes, with reason, that because they're truly open source, developers and researchers can quickly build upon and improve the models. IBM also claims these models can deliver performance comparable to much larger and much more expensive models. Put it all together, and I, for one, am impressed. True, Granite won't help kids with their homework or write the great AI American novel, but it will help you develop useful programs and AI-based expert systems.
[8]
Granite 3.0: IBM launched open-source LLMs for enterprise AI
IBM is stepping up its game in the AI world, and it's not subtle about it. With generative AI already driving $2 billion in business, the tech giant is showing no signs of slowing down. Enter Granite 3.0 -- a powerhouse set of AI models designed to take enterprise AI to the next level. IBM's new Granite 3.0 isn't your run-of-the-mill AI. These models are built with enterprise use cases in mind, from customer service to cybersecurity. The real differentiator? IBM is sticking to real open source. Unlike other tech giants that toy with the idea of "open," IBM is making Granite available under the Apache 2.0 license, ensuring businesses can truly innovate without getting stuck in legal quagmires. As Dario Gil, Senior Vice President and Director of IBM Research, put it: "We decided that we're going to be absolutely squeaky clean on that, and decided to do an Apache 2 license, so that we give maximum flexibility to our enterprise partners to do what they need to do with the technology." IBM is focused on safety. The Granite Guardian 3.0 models come equipped with guardrails designed to ensure AI outputs don't go off the rails. With safety and security being top concerns in the AI space, these models are IBM's answer to keeping enterprises protected from rogue AI behavior. "What you're seeing here is incredibly highly performant models, absolutely state of the art, and we're very proud of that," said Gil. But it's not just about performance. IBM has also ensured that the models are designed to prevent jailbreaking and harmful outputs, something that's become increasingly crucial as AI scales in complexity. IBM isn't stopping with Granite 3.0. The company is looking even further ahead to what it's calling generative computing -- a big improvement for how we program and interact with machines. It's a vision where computers are programmed through examples and prompts, not rigid step-by-step instructions, a concept that ties closely to what large language models like Granite are already doing. "This paradigm where we don't write the instructions, but we program the computer by example, is fundamental," Gil said. IBM sees this as the future of computing, and Granite is just the beginning of that journey. As Rob Thomas, IBM's Senior Vice President and Chief Commercial Officer, aptly summed it up: "The book of business that we've built on generative AI is now $2 billion plus across technology and consulting. As I think about my 25 years in IBM, I'm not sure we've ever had a business that has scaled at this pace."
[9]
IBM AI Updates Include Granite 3.0, Watsonx Upgrades
An expansion to IBM Consulting Advantage and greater model access across partner platforms were also part of the updates. IBM has revealed its Granite 3.0 flagship language models, new capabilities in the Watsonx artificial intelligence platform, an expansion to its Consulting Advantage AI-powered delivery platform and greater model access across partner platforms. The news came during IBM's annual TechXchange event, which runs Monday to Thursday in Las Vegas. As part of the news, IBM also announced an update to its pre-trained Granite models. Rob Thomas, IBM's senior vice president of software and chief commercial officer, told CRN during a virtual press event ahead of the Granite 3.0 reveal that system integrators and other partners have seen opportunities selling customers on the Granite models plus opportunities in governing models as customers scale AI use. Thomas also pointed to the Armonk, N.Y.-based vendor's model indemnification policy as a unique differentiator for IBM as the AI vendor of choice for solution providers. "You think about the confidence that gives an MSP, or a system integration partner, that IBM is standing behind everything we've done from a training perspective, that's pretty dramatic," Thomas said. [RELATED: IBM's Deep Dive Into AI: CEO Arvind Krishna Touts The 'Massive' Enterprise Opportunity For Partners] During the virtual press event, Ritika Gunnar, IBM's general manager of data and AI, told CRN that the vendor's solution providers can leverage its smaller language models and the InstructLab open source model enhancement project co-launched with IBM subsidiary Red Hat to customize models with their data and create unique domains. "Because the base model itself has transparency and not only the data, but the rights that you have with that foundation model, and then you can apply your own domain-specific data to that, it becomes very lucrative, not only from a cost perspective, but from a transparency perspective and for a governance perspective for organizations to build with," Gunnar said. On the call, Dario Gil, IBM senior vice president and director of research, said that the licensing strategy IBM is pursuing for AI allows users to leverage their own brand. "Very often, bespoke licenses prevent you from attaching a brand on top of a base capability, Gil said. "We impose no such thing. They can take that. They can create their own brand. They can express it and however they see fit and grow with it." The vendor has pledged more AI agent capabilities across its portfolio in 2025, which will include pre-built agents for specific domains and use cases-adding to the popularity of agent AI offerings coming from rivals such as Microsoft, Google and Salesforce. In a blog post Monday, Kate Woolley-IBM's channel chief whose formal title is general manager of the IBM ecosystem-said that the vendor's tens of thousands of solution providers help end customers build trust in AI offerings, scale them and manage costs and complexity. "The IBM Ecosystem acts as the linchpin to ensure clients can access and put AI advances like these to work for their business," Woolley said. "Comprised of ISVs, hyperscalers, resellers and distributors, service providers, technology partners, consultancies, systems integrators and MSPs, these partners are bringing our state-of-the-art technology to business users at scale." IBM bills Granite 3.0 as its most advanced family of AI models to date, according to a statement Monday. Part of the 3.0 family are new Granite 8B and 2B "workhorse" models used for retrieval augmented generation (RAG), classification, summarization, entity extraction and tools use. Users can fine-tune the new 8B and 2B models with enterprise data and integrate them into any business environment or workflow, according to IBM. By the end of the year, the 8B and 2B models will support extended 128K context length and multi-modal document understanding capabilities, according to IBM. The new models will be embedded in the December release of Red Hat Enterprise Linux (RHEL) AI and OpenShift AI, Gil said during the press event.n The initial Granite 8B model release supports agentic AI capabilities including advanced reasoning and a chat template and prompting style for tool use workflows. Also part of the family are Granite mixture of experts (MoE) architecture models Granite 1B A400M and 3B A800M. Users can leverage these smaller, lightweight models for low-latency applications and central processing unit (CPU)-based deployments. The new family of Granite Guardian guardrail and safety models allow app developers to check user prompts and model responses for a variety of risks. They are derived from corresponding Granite language models, but they can implement guardrails alongside any open or proprietary AI model, according to the vendor. The models have RAG-specific checks for groundedness, context relevance and answer relevance while also accounting for social bias, hate, toxicity, profanity, violence and jailbreaking, among other dimensions, according to IBM. Granite 3.0 models are under a fully permissive Apache 2.0 license, aligning with the vendor's open-source AI commitment, according to IBM. The license differentiates the models from competing ones by giving enterprises and the community more rights along with the models' improved performance. Now available for commercial use on IBM's Watsonx artificial intelligence platform are instruct variants of the new Granite 8B and 2B language models and the 8B and 2B Granite Guardian models. IBM plans to make select Granite 3.0 models available on Nvidia's NIM stack, Google Vertex, Domo and other partner platforms, the vendor said Monday. A curated set of Granite language and mixture-of-experts (MoE) models are available on Ollama and Replicate.ai. Users can download on Hugging Face under the Apache 2.0 license the whole suite of 3.0 models and updated time series models, according to IBM. Other partnerships IBM highlighted in a separate statement Monday include Amazon Web Services (AWS) now offering several Granite models for its Sagemaker JumpStart machine learning (ML) hub through its AWS Marketplace. Users of the Amazon Bedrock generative AI apps builder can also access IBM Granite models, according to IBM. Qualcomm, Salesforce and SAP have made Granite models available to their developer communities and ecosystems, according to IBM. Samsung SDS, a consulting and outsourcing subsidiary of Samsung, is piloting Granite time series models for anomaly detection and open-sourced Granite models through Watsonx for supply chain relationship management use cases. IBM's new generation of Watsonx Code Assistant brings general-purpose coding assistance for C, C++, Go, Java, Python and other languages. The tool also helps with modernization capabilities for Enterprise Java apps. The vendor also said to expect agentic frameworks, existing environment integrations, low-code automations for RAG, agents and other common use cases and other new developer tools in Watsonx.ai. As for Watsonx Orchestrate, IBM has a new AI agent chat feature that will allow users to orchestrate AI assistants, skills and automations, according to the vendor. Among IBM's many AI updates was the expansion of its Consulting Advantage AI-powered delivery platform, which has adopted Granite 3.0 as its default language model. IBM Consulting Advantage has added a Cloud Transformation and Management offering and a Business Operations offering, each with domain-specific AI agents, applications and methods to go along with IBM intellectual property and best practices. The two new Consulting Advantage offerings aim to help speed up code modernization and other tasks and help operations across finance, procurement and other domains, according to IBM. The Consulting Advantage platform is used by 160,000 IBM consultants, according to the vendor. IBM Consulting is No. 6 on CRN's 2024 Solution Provider 500 Mohamad Ali, senior vice president and head of IBM Consulting, told CRN during the virtual press event that Granite models are "important to our economic model going forward, as well as our flexibility in the types of projects we can do." "Starting with the license-the ability to then do derivative projects and create new intellectual property either for us consulting or for our clients," Ali said. "We're working with a bank right now, the bank wants to keep all the IP to themselves, understandably. ... We have a hospitality client, and he said to me, 'This thing's too expensive.' And so he had to reserve something called prompt caching so he's not hitting the LLM all the time, because they're too expensive. Granite changes that."
[10]
IBM Unveils Granite 3.0 Models, Outperforms Llama 3.1
Developers can utilize the Granite 3.0 8B Instruct model for a range of natural language applications, including text generation, classification, summarization, entity extraction, and customer service chatbots. IBM has launched Granite 3.0, the latest generation of its large language models (LLMs) for enterprise applications. The Granite 3.0 collection includes several models, highlighted by the Granite 3.0 8B Instruct, which has been trained on over 12 trillion tokens across multiple languages. The Granite 3.0 8B Instruct model is intended for enterprise use, demonstrating competitive performance against similar models while excelling in specific business tasks. IBM claims that on academic benchmarks included in Hugging Face's OpenLLM Leaderboard v2, Granite 3.0 8B Instruct rivals similarly sized models from Meta and Mistral AI. Fine-tuning options through InstructLab allow organisations to customise models to their needs, potentially reducing costs. All Granite models are released under the Apache 2.0 license, with detailed disclosures of training datasets and methodologies included in the accompanying technical paper. The Granite 3.0 release includes: Developers can utilize the Granite 3.0 8B Instruct model for a range of natural language applications, including text generation, classification, summarization, entity extraction, and customer service chatbots. The model also supports programming tasks such as code generation, code explanation, and code editing, as well as agentic use cases that require tool calling. Upcoming updates planned for 2024 will increase model context windows to 128K tokens and introduce multimodal capabilities. The Granite 3.0 models are available for commercial use on the IBM watsonx platform and through partners like Google Cloud, Hugging Face, and NVIDIA. IBM emphasises safety and transparency in AI, with Granite 3.0 models incorporating robust safety features and extensive training data filtering to mitigate risks. The Granite Guardian models enhance input and output management across various dimensions, outperforming existing models in key safety benchmarks. IBM's new models leverage innovative training techniques, including the use of the Data Prep Kit for efficient data processing and a power scheduler for optimised learning rates. This enables faster convergence to optimal model weights while minimizing training costs. Granite 3.0 language models were trained on Blue Vela, powered entirely by renewable energy, reinforcing IBM's commitment to sustainability in AI development.
[11]
IBM releases new AI models for businesses as Gen AI competition heats up
IBM has announced the release of its latest family of AI models, promising a strong balance of power and cost efficiency for enterprises. The company says its Granite 3.0 models offer smaller and more business-focused use cases, as opposed to many existing large-scale general-purpose language models that are already available. More broadly, IBM also announced other AI improvements such as its latest watsonx Code Assistant, powered by Granite code models, which supports languages like C, C++, Go, Java and Python. IBM envisions the power of Granite 3.0 by combining what are essentially reasonably small models with enterprise data to unlock task-specific performance that rivals larger models, without the associated cost. The company declared that, across a range of testing, Granite 3.0 proved to be 3x-23x cheaper than large frontier models, which it failed to name. The company did state that its 8B Instruct model - the larger of the two new 8B and 2B models - was able to keep up with similarly sized options from Meta and Mistral on benchmarks set out by Hugging Face's OpenLLM Leaderboard. At the same time, IBM also announced updates to its Granite Guardian 3.0 to allow developers to implement safety guardrails by checking user prompts and LLM responses. This includes checking for things like social bias, hate, toxicity, profanity, violence and jailbreaking. Another test confirmed that IBM's solution gave Granite Guardian 3.0 8B improved accuracy on harm detection over Meta's three generations of Llama Guard models. All of the Granite 3.0 models are available for download on HuggingFace, with the 8B and 2B language models and the Granite Guardian 3.0 8B and 3B models also available fro commercial use through IBM's watsonx.
[12]
IBM Releases New AI Models Built for Business
Time Series Models: Outperforming competitors like Google and Alibaba. IBM announced the launch of its most advanced AI models to date, the Granite 3.0 family, designed to elevate enterprise AI and open-source development, at its annual TechXchange event. This new suite of models -- available under the permissive Apache 2.0 license -- delivers high performance, flexibility, and safety across a wide range of tasks. Also Read: Vodafone Idea Set to Finalise IT Outsourcing Deals, Reducing Dependence on IBM: Report "Granite 3.0: IBM's third-generation Granite flagship language models can outperform or match similarly sized models from leading providers on many academic and industry benchmarks, showcasing strong performance, transparency, and safety," IBM said on Monday. Key Features of Granite 3.0 Models: General-Purpose Models: The Granite 3.0 8B and 2B language models excel in enterprise tasks such as Retrieval Augmented Generation (RAG), classification, and summarisation, offering up to 23x cost savings compared to larger models. Safety Focus with Granite Guardian: The new Granite Guardian 3.0 models are tailored to enforce responsible AI practices, checking for risks like bias, toxicity, and hallucinations, ensuring more secure and reliable AI outputs. "In extensive testing across 19 safety and RAG benchmarks, the Granite Guardian 3.0 8B model has shown higher overall accuracy in harm detection on average than all three generations of Llama Guard models from Meta," IBM noted. Mixture of Experts (MoE): MoE models, such as the 1B-A400M and 3B-A800M, are lightweight and optimised for low-latency, CPU-based applications. Time Series Models: The Granite Time Series models have been upgraded, outperforming larger models from Google, Alibaba, and others, with enhanced flexibility for external variables and rolling forecasts. Also Read: Meta Unveils New AI Models and Tools to Drive Innovation According to IBM, the instruct variants of the new Granite 3.0 8B and 2B language models, as well as the Granite Guardian 3.0 8B and 2B models, are available today for commercial use on IBM's watsonx platform. A selection of the Granite 3.0 models will also be available as NVIDIA NIM microservices and through Google Cloud's Vertex AI Model Garden integrations with HuggingFace. IBM's Granite 3.0 models were trained on over 12 trillion tokens on data taken from 12 different natural languages and 116 different programming languages using two-stage training method. By the end of the year, the 3.0 8B and 2B language models are expected to include support for an extended 128K context window and multi-modal document understanding capabilities. IBM said it is advancing enterprise AI through a spectrum of technologies -- from models and assistants to the tools needed to tune and deploy AI specifically for companies' unique data and use cases. IBM is also paving the way for future AI agents that can self-direct, reflect, and perform complex tasks in dynamic business environments. IBM also plans to release new tools to help developers build, customise, and deploy AI more efficiently via watsonx.ai -- these include agentic frameworks, integrations with existing environments, and low-code automations for common use cases like RAG and agents. Also Read: OpenAI Raises USD 6.6 Billion to Accelerate AI Research and Expansion IBM also announced a major expansion of its AI-powered delivery platform, IBM Consulting Advantage. The multi-model platform includes AI agents, applications, and methods like repeatable frameworks that can empower 160,000 IBM consultants to deliver better and faster client value at a lower cost, according to the official release. As part of the expansion, Granite 3.0 language models will become the default model in Consulting Advantage. Another key part of the expansion is the introduction of IBM Consulting Advantage for Cloud Transformation and Management and IBM Consulting Advantage for Business Operations. Each includes domain-specific AI agents, applications, and methods infused with IBM's best practices so IBM consultants can help accelerate client cloud and AI transformations in tasks, like code modernization and quality engineering, or transform and execute operations across domains, like finance, HR and procurement, IBM said.
[13]
How IBM is harnessing AI models for business
IBM is rolling out new AI models, which it claims outperforms other popular large language models (LLMs). IBM has released the latest version of its artificial intelligence (AI) models for businesses that will help the technology giant in the race towards AI agents. The company on Monday announced its Granite 3.0 AI models, which are open source and tailored for enterprise use cases such as customer service, IT automation, Business Process Outsourcing (BPO), application development and cybersecurity. The company claims it outperforms AI models from Meta, Anthropic and Mistral AI and is tougher to jailbreak. The Granite models were trained on 12 trillion tokens of data, including across multiple languages as well as code data. The difference with IBM's large language model (LLM) compared to others is that it trains its data on enterprise data, rather than public data pulled from the Internet. "We have a unique, I'd say, vantage point in the industry, where we become the first customer for everything that we build that also gives us an advantage in terms of how we construct the models," Rob Thomas, senior vice president and chief commercial officer at IBM, said during a briefing with press and analysts. He also said that the LLM family "is the most close to open source that anybody has released". The difference between IBM's model and Meta's is that IBM released the models under the Open Source Initiative (OSI) approved Apache 2.0 open-source license. "We decided that we're going to be absolutely squeaky clean on that, and decided to do an Apache 2 license so that we give maximum flexibility to our enterprise partners to do what they need to do with the technology," said DarÃo Gil, IBM's senior vice president and director of research, at a press conference. The open source model also allows IBM's partners to build intellectual property on top of the Granite models. "It's completely changing the notion of how quickly businesses can adopt AI when you have a permissive license that enables contribution, enables community and ultimately, enables wide distribution," Thomas said. The company also mapped out its ambitions for AI agents, which is software that can perceive its environment and is based on data that take autonomous actions. "We're at the very beginning of this era around agents and ideally assistants and agents is almost two sides of the same coin," said Thomas. One of the ways IBM said it would focus on AI agents was through domain-specific agents, which are designed to understand unique jargon and content for certain industries. Human resources (HR) is one of its use cases and it has HR assistants that it is showcasing, said Ritika Gunnar, IBM's general manager for data and AI.
[14]
IBM Expands Open-Source AI with Granite 3.0, Empowering Enterprise Flexibility - IBM (NYSE:IBM)
IBM integrates Granite 3.0 into Nvidia and Google Cloud platforms, boosting AI adaptability for businesses. International Business Machines Corp IBM showcased its latest advancement in artificial intelligence with the release of Granite 3.0, the company's most sophisticated family of AI models. The announcement was made during IBM's annual TechXchange event. The Granite 3.0 models address the growing demand for AI transparency, safety, and performance, particularly in enterprise settings. Also Read: OpenAI's For Profit Structure Adds Valuation Complexity to Microsoft And Other Equity Investors: Report Available under the open-source Apache 2.0 license, Granite 3.0 aims to provide flexible AI solutions for businesses seeking cutting-edge performance with greater control and adaptability. Open Source Initiative (OSI) called out Meta Platforms Inc META for labeling its AI models as "open-source." OSI head Stefano Maffulli reportedly accused Meta of misleading users about open-source technology, raising concerns about the company's claims. Meanwhile, Meta chief Mark Zuckerberg has been a vocal supporter of open-source and criticized closed-source AI platforms, including ChatGPT parent OpenAI, for stifling creative freedom and development. IBM's Granite 3.0 series includes diverse models designed for various enterprise tasks. The Granite 3.0 models, which support more than 12 natural and 116 programming languages, were developed using an innovative two-stage training method. IBM continues emphasizing responsible AI development by introducing Granite Guardian 3.0, a family of models focused on safety and risk detection. IBM aims to further support enterprise AI initiatives by integrating these models into various platforms, such as Nvidia Corp NVDA NIM microservices and Alphabet Inc GOOG GOOGL Google Cloud's Vertex AI Model Garden. Nvidia Corp NVDA recently introduced an open-source AI model. The NVLM 1.0 model offers advanced capabilities for vision and language tasks, enhancing the company's AI toolkit for developers. Wedbush analyst Dan Ives highlighted IBM's potential driven by the AI boom, predicting that ongoing momentum, increasing enterprise spending, and a rebound in digital advertising will drive tech stocks upward by year-end. IBM stock gained over 70% in the last 12 months. Investors can gain exposure to the stock through Vanguard Div Appreciation ETF VIG and Vanguard High Dividend Yield ETF VYM. Price Action: IBM stock closed lower by 0.29% at $232.20 on Friday. Photo Credit: Shutterstock Also Read: This Bullish Analyst Highlights Taiwan Semi's 3nm Strength and Margin Expansion Potential, Raises Forecast Market News and Data brought to you by Benzinga APIs
[15]
IBM releases new AI models for businesses as genAI competition heats up
(Reuters) - IBM released the latest version of its artificial intelligence models catered towards businesses on Monday, looking to capitalize on the surge in enterprises adopting generative AI technology. "Granite 3.0" models will be made open-source, similar to other versions in IBM's Granite family of AI models. This approach differs from rivals such as Microsoft that charge customers for access to their models. In turn, IBM offers a paid tool called Watsonx that helps run models inside data centers after they have been customized. Some variants of the new Granite models are available starting Monday for commercial use on the Watsonx platform. A selection of these models will also be available on Nvidia's stack of software tools that enable businesses to incorporate AI models. The new Granite models were trained using AI chip leader Nvidia's H100 graphics processor units (GPUs), said Dario Gil, IBM's director of research. (Reporting by Arsheeya Bajwa in Bengaluru; Editing by Rashmi Aich)
[16]
IBM Unveils Next-Generation AI Models for Business
Open-source, customisable AI models: Available on IBM's Watsnox platform As part of IBM's efforts to bring AI to the table, "Granite 3.0" stands apart from its competitors in terms of accessibility. It aims to reduce the threshold of entry for businesses interested in generative AI. IBM also offers a paid tool called Watsonx, allowing organizations to launch customized models in operations inside their data centres. IBM is making the new models available for commercial use on the Watsonx platform. Some variants also will be available via NVIDIA's software tools suite, so AI capabilities can be delivered to businesses seamlessly. According to Dario Gil, IBM's Director of Research, the new models were trained using NVIDIA's latest H100 graphics processing units.
[17]
IBM Releases New AI Models for Businesses as GenAI Competition Heats Up
Some of IBM's AI models will be available on Nvidia's software tool stack IBM released the latest version of its Artificial Intelligence (AI) models catered towards businesses on Monday, looking to capitalise on the surge in enterprises adopting generative AI technology. 'Granite 3.0' models will be made open-source, similar to other versions in IBM's Granite family of AI models. This approach differs from rivals such as Microsoft that charge customers for access to their models. In turn, IBM offers a paid tool called Watsonx that helps run models inside data centers after they have been customised. Some variants of the new Granite models are available starting Monday for commercial use on the Watsonx platform. A selection of these models will also be available on Nvidia's stack of software tools that enable businesses to incorporate AI models. The new Granite models were trained using AI chip leader Nvidia's H100 graphics processor units (GPUs), said Dario Gil, IBM's director of research. © Thomson Reuters 2024
[18]
IBM Releases New AI Models for Businesses as GenAI Competition Heats Up
(Reuters) - IBM released the latest version of its artificial intelligence models catered towards businesses on Monday, looking to capitalize on the surge in enterprises adopting generative AI technology. "Granite 3.0" models will be made open-source, similar to other versions in IBM's Granite family of AI models. This approach differs from rivals such as Microsoft that charge customers for access to their models. In turn, IBM offers a paid tool called Watsonx that helps run models inside data centers after they have been customized. Some variants of the new Granite models are available starting Monday for commercial use on the Watsonx platform. A selection of these models will also be available on Nvidia's stack of software tools that enable businesses to incorporate AI models. The new Granite models were trained using AI chip leader Nvidia's H100 graphics processor units (GPUs), said Dario Gil, IBM's director of research. (Reporting by Arsheeya Bajwa in Bengaluru; Editing by Rashmi Aich)
Share
Share
Copy Link
IBM introduces Granite 3.0, a new family of high-performing AI models designed for business use, featuring improved performance, safety measures, and flexible deployment options.
IBM has unveiled its most advanced family of AI models to date, Granite 3.0, at its annual TechXchange event. This release marks a significant step forward in enterprise-focused artificial intelligence, combining performance, transparency, and safety 1.
The Granite 3.0 family includes various models designed for different purposes:
These models are built to handle tasks such as Retrieval Augmented Generation (RAG), classification, summarization, and tool use 2.
IBM claims that Granite 3.0 models can outperform or match similarly sized models from leading providers on many academic and industry benchmarks. The Granite 3.8B Instruct model reportedly leads in overall performance against state-of-the-art open-source models from Meta and Mistral on Hugging Face's OpenLLM Leaderboard 3.
The models were trained on over 12 trillion tokens from 12 natural languages and 116 programming languages. By year-end, IBM plans to extend the 3.8B and 2B language models to support a 128K context window and multi-modal document understanding capabilities 4.
IBM has introduced Granite Guardian models to implement safety guardrails, checking user prompts and LLM responses for various risks. These models provide comprehensive risk and harm detection capabilities, including checks for social bias, hate, toxicity, and RAG-specific issues like groundedness and context relevance 5.
Consistent with IBM's commitment to open-source AI, the Granite models are released under the Apache 2.0 license. This permissive license allows for flexible use and modification, potentially enabling deployment for as little as $100 per server [5].
IBM is integrating Granite 3.0 into its watsonx platform and other tools:
These integrations aim to accelerate AI adoption and solution delivery for IBM's enterprise clients [1].
Reference
[1]
[3]
[4]
IBM releases Granite 3.1, a powerful suite of open-source AI models designed for enterprise use, featuring extended context length, improved performance, and multilingual capabilities.
2 Sources
IBM strengthens its partnership with Amazon Web Services, introducing powerful Granite AI models and responsible AI solutions on AWS platforms, aiming to advance enterprise-grade AI adoption.
2 Sources
IBM introduces InstructLab and Granite models, transforming LLM training. Simultaneously, lessons from generative AI implementation in businesses highlight key strategies for successful adoption.
2 Sources
A comprehensive look at the current state of AI adoption in enterprises, covering early successes, ROI challenges, and the growing importance of edge computing in AI implementations.
4 Sources