Enterprises need to control their own generative AI, say data scientists

Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More

New poll data from enterprise MLOps platform Domino Data Lab found that data scientists believe generative AI will significantly impact enterprises over the next few years, but its capabilities cannot be outsourced — that is, enterprises need to fine-tune or control their own gen AI models.

The data, gathered from data and analytics professionals who attended Domino Data Lab’s recent Rev conference in New York City, found that 90% of data science leaders — who are typically a skeptical bunch — believe that the hype surrounding generative AI is justified. More than half believe it will have a significant impact on their business within the next 1-2 years.

However, simply leveraging AI features offered by software vendors won’t be enough for gen AI success. A full 94% of survey respondents believe their organizations must create their own gen AI offerings — more than half plan to leverage foundation models developed by third parties and to create differentiated customer experiences on top of them, while more than a third believe organizations must develop their own proprietary gen AI models.


According to Kjell Carlsson, head of data science strategy at Domino Data Lab, the survey confirmed that data science leaders believe in the transformative power of generative AI — but they nixed the idea that enterprises can get by if they simply use generative AI through third-party applications like Salesforce, SAP or Microsoft Office.

“They completely and resoundingly went and smashed that one down,” he said. Instead, organizations need to either fine-tune the hyperscalers’ large language models or build their own proprietary models.

“In my own conversations with data science leaders, they’re saying in theory, these very ultra large language models are great for prototyping, and end users want them to write their emails, but in terms of what we’re actually going to operationalize, we’re going to look at smaller LLMs and do additional fine-tuning on top of that, and potentially some human-in-the-loop reinforcement learning to get the level of accuracy we need.”

Besides data security, IP protection is another issue, he pointed out. “If it’s important and really driving value, then they want to own it and have a much greater degree of control,” he said.

There is no doubt that enterprises will invest in current generative AI offerings to make sure their end users have access, he said. But at the same time, they will invest in their own capabilities to create fine-tuned specialized generative AI models for their “real” use cases — “the use cases that are going to make them unique and differentiated.”

Originally appeared on: TheSpuzz