Organizations building artificial intelligence (AI) models face no shortage of quality challenges, most notably the need for explainable AI that minimizes the risk of bias.
For Redwood City, California-based startup TruEra, the path to explainable AI is paved with technologies that provide AI quality for models. Founded in 2019, TruEra has raised over $45 million in funding, including a recent round of investment that included the participation of Hewlett Packard Enterprise (HPE).
This week, TruEra announced the latest milestone in its growth, revealing that it has been selected for the Intel Disrupter Initiative, which provides participants with technical partnership and go-to-market support.
“The big picture here is that as machine learning is increasingly adopted in the enterprise, there’s a greater need to explain, test and monitor these models, because they’re used in higher-stakes use cases,” Will Uppington, cofounder and CEO of TruEra, told VentureBeat.
TruEra takes on the challenges of explainable AI
As the use of AI matures, there are emerging regulations around the world for its responsible usage.
The responsible use of AI is multifaceted, including prioritizing data privacy and providing mechanisms to enable the explainability of the methods used in models, to help encourage fairness and avoid bias.
Uppington noted that, regulations aside, the performance of AI systems, which require both speed and accuracy, needs to be monitored and measured. In Uppington’s view, a new monitoring infrastructure is needed whenever software undergoes a paradigm shift. He argued, however, that the monitoring infrastructure machine learning requires is different from that of existing types of software systems.
Machine learning systems are fundamentally data-driven analytical entities, he explained, with models iterated at a much more rapid rate than other types of software.
“The data that you’re seeing in production becomes the training data for your next iteration,” he said. “So today’s operational data is tomorrow’s training data that’s used to directly improve your product.”
As such, Uppington contends that in order to provide explainable AI, organizations first need the right AI model monitoring in place. The things a data scientist does to explain and analyze a model during development should be monitored throughout the model’s lifecycle. With that approach, Uppington said, an organization can learn from its operational data and feed those lessons back into the next iteration of the model.
Disrupting the AI market with Intel
The issue of AI quality, or lack thereof, is often seen as a barrier to adoption.
“AI quality and explainability have emerged as huge hurdles for enterprises, ones that often prevent them from achieving a return on their AI investments,” stated Arijit Bandyopadhyay, CTO of enterprise analytics and AI at Intel Corporation, in a media advisory. “In teaming with TruEra, Intel is helping to remove those hurdles by enabling enterprises to access AI evaluation, testing and monitoring capabilities that can help them leverage AI for measurable business impact.”
Uppington noted that as part of his company’s engagement with Intel, it is integrating with cnvrg.io, an Intel company that is building out machine learning training services and software. The goal of the integration is to make it easier for organizations to build, deploy and monitor AI quality using the cnvrg.io platform.
Intel is not the first, nor the only, silicon vendor that TruEra has partnered with. Barbara Lewis, chief marketing officer at TruEra, said her company already has a partnership with Nvidia, though she noted that partnership is not as deep as the new Intel Disrupter Initiative.
Looking forward, Uppington said that TruEra will continue to iterate its own technology to further help organizations improve AI quality and accuracy.
“We’re gonna be talking a lot more about just making it easier to systematically test and then do the root-cause analysis of your machine learning systems,” he said.