IBM is doubling down on its artificial intelligence (AI) efforts with a series of new initiatives announced today at Big Blue’s annual Think conference.
The efforts fall under IBM’s new Watsonx product platform, which includes technologies and services to help organizations build and manage AI models, including generative AI. A key part of the new platform is IBM Watsonx AI, which provides a foundation model library to help enterprises choose from pretrained models that can be fine-tuned for enterprise application development.
As part of the model library, IBM is partnering with Hugging Face to give its enterprise users access to open models. The Watsonx AI models also include the Watson Code Assistant, a generative AI coding tool that will be integrated with IBM’s Red Hat Ansible products to help developers automate their workflows.
The Watsonx platform also includes the Watsonx data and Watsonx governance services, which help organizations use their own data while enforcing strong governance for access and privacy.
In a nearly hour-long roundtable session with the press ahead of the conference, IBM executives, including CEO Arvind Krishna, outlined the new efforts and provided some insight into how IBM is tackling the hot-button issues of explainable AI, competition and the continued need for humans in IT.
“I think we all acknowledge there’s a lot of excitement around AI recently,” Krishna said. “That said, there is also some caution with our enterprise clients, especially for those in regulated industries and those who care a lot about accuracy and scaling.”
IBM is all in on the enterprise AI use case
Rather than build off a generic generative AI platform that is intended for the general public, Krishna emphasized the IBM approach is focused on the needs of enterprise users.
The foundation (no pun intended) of IBM’s approach is the use of foundation models. IBM has been building out its own series of foundation models over the last several years and has even built out its own supercomputer to aid its development efforts. The basic idea is simple: create a very large language model (LLM) that can then serve as the foundation for specific use cases. With Watsonx AI, IBM is providing what Krishna referred to as a “workbench” to help support organizations with those use cases.
In the world of generative AI, it seems as though every vendor is either partnering with or competing against OpenAI and its runaway success with ChatGPT. Krishna did not directly mention OpenAI by name, though he did argue that IBM has a very focused enterprise use case for AI that is not the same as something that is targeted at the general public.
Krishna said Watsonx lets organizations tap into the potential for LLMs and generative AI, while providing much more control of the data. The IBM CEO said that his company is looking to provide generative AI that can run on-premises, or in a private instance on a public cloud, to help provide more privacy.
“It’s not for consumer use cases and it’s not a single instance trying to take care of all the enterprises in the world,” Krishna said about Watsonx. “We tend to work more with people who want to adapt it.”
AI governance is not the same as explainable AI
IBM executives also emphasized the need for governance, which is also addressed with the Watsonx platform.
Rob Thomas, SVP and chief commercial officer at IBM, explained that Watsonx governance includes everything that’s needed for an organization to have responsible AI. That includes life cycle management and model-drift detection.
“Regardless of what a model is doing, you can connect it into Watsonx governance, which gives you an understanding of data provenance,” Thomas said. “We think this will be a key part of how companies adopt AI, which is doing it in a measured and responsible way.”
Krishna, however, argued that responsible AI isn’t necessarily the same as explainable AI, nor does it need to be.
“Anybody who claims that a large AI model is explainable is not being completely truthful,” Krishna said. “They are not explainable in the sense of reasoning and logic, like we would do in a college humanities class — that’s just not accurate.”
However, he noted that they’re explainable as a function of detailing what data a model was trained on and what results the model is serving. Full explainability, in Krishna’s view, doesn’t quite exist today, but that’s where concepts like governance and guardrails to protect against potential risk can fit in.
Will AI replace humans? (Not yet)
An underlying fear for many about the emergence of AI is that it will replace the need for humans in many different jobs.
Krishna argued that AI is more of a productivity multiplier, enabling humans to get more done. For example, he noted that foundation models can be a big help for cybersecurity, but won’t replace the need for humans; rather, they make analysts significantly more productive.
Overall, though IBM has been working on AI for longer than just about any other company on the planet, Krishna also noted that there has been a big shift in recent months. He said that three to five years ago, many IBM clients talked about AI, and many had small teams experimenting with small projects. That conversation has changed in the last six months.
“Most clients now are looking at how to deploy this much more widely inside their enterprises and how they take advantage of it,” Krishna said. “We can see the excitement in our clients and I think that’s the biggest signal this is revolutionary and a significant step forward.”