Why the explainable AI market is growing rapidly

Driven by digital transformation, organizations seem to face no ceiling on what they can reach in the next few years. One of the notable technologies helping enterprises scale these new heights is artificial intelligence (AI). But as AI advances across a growing number of use cases, a persistent problem of trust remains: humans still do not fully trust AI. At best, it operates under intense scrutiny, and we are still a long way from the human-AI synergy that data science and AI experts envision.

One underlying factor behind this disjointed reality is the complexity of AI. Another is the opaque approach AI-led projects often take to problem-solving and decision-making. To solve this challenge, several enterprise leaders looking to build trust and confidence in AI have set their sights on explainable AI (also called XAI) models.

Explainable AI enables IT leaders — especially data scientists and ML engineers — to query, understand and characterize model accuracy and ensure transparency in AI-powered decision-making.   

Why companies are getting on the explainable AI train

With the global explainable AI market estimated to grow from $3.5 billion in 2020 to $21 billion by 2030, according to a report by Research and Markets, it’s clear that more companies are getting on the explainable AI train. Alon Lev, CEO of Israel-based Qwak, a fully managed platform that unifies machine learning (ML) engineering and data operations, told VentureBeat in an interview that this trend “may be directly related to the new regulations that require specific industries to provide more transparency about the model predictions.” The growth of explainable AI is predicated on the need to build trust in AI models, he said.


He further noted that another growing trend in explainable AI is the use of SHAP (SHapley Additive exPlanations) values, a game-theoretic approach to explaining the output of ML models.

“We are seeing that our fintech and healthcare customers are more involved in the topic as they are sometimes required by regulation to explain why a model gave a specific prediction, how the prediction came about and what factors were considered. In these specific industries, we are seeing more models with explainable AI built in by default,” he added.
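For readers unfamiliar with the technique, computing SHAP values typically takes only a few lines with the open-source shap library. The sketch below is a generic illustration using a scikit-learn model and a bundled toy dataset, not Qwak’s tooling:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Bundled toy dataset and model as stand-ins; any trained model works.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# shap.Explainer picks an appropriate algorithm (TreeExplainer here) and
# assigns each feature a Shapley-value contribution to each prediction.
explainer = shap.Explainer(model)
shap_values = explainer(X.iloc[:200])

# Local explanation: which features pushed this one prediction up or down.
shap.plots.waterfall(shap_values[0])

# Global view: feature impact across all 200 explained rows.
shap.plots.beeswarm(shap_values)
```

This per-prediction attribution is what lets a regulated lender or healthcare provider answer not just what a model predicted, but which factors drove that prediction.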

A growing marketplace with tough problems to solve

There’s no dearth of startups in the AI and MLops space, with a long list of companies developing MLops solutions, including Comet, Iterative.ai, ZenML, Landing AI, Domino Data Lab, Weights and Biases and others. Qwak is another player in the space; it focuses on automating MLops processes and allows companies to manage models from the moment they are integrated with their products.

Claiming to accelerate MLops with a different approach, Domino Data Lab is focused on building on-premises systems that integrate with cloud-based GPUs as part of Nexus, its enterprise-facing initiative built with Nvidia as a launch partner. ZenML, for its part, offers a tooling and infrastructure framework that acts as a standardization layer, allowing data scientists to iterate on promising ideas and create production-ready ML pipelines.

Comet prides itself on providing a self-hosted and cloud-based MLops solution that allows data scientists and engineers to track, compare and optimize experiments and models. The aim is to deliver the insights and data needed to build more accurate AI models while improving productivity, collaboration and explainability across teams.
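As a rough illustration of what such experiment tracking looks like in practice, here is a minimal sketch using Comet’s Python SDK; the API key, project name, and logged values are placeholders rather than a real workflow:

```python
from comet_ml import Experiment

# Placeholder credentials; a real setup reads these from the environment.
experiment = Experiment(
    api_key="YOUR_API_KEY",
    project_name="xai-demo",  # hypothetical project name
)

# Log hyperparameters and per-epoch metrics so runs can be compared
# side by side in the Comet UI.
experiment.log_parameter("learning_rate", 0.01)
for epoch in range(3):
    experiment.log_metric("accuracy", 0.80 + 0.05 * epoch, step=epoch)

experiment.end()
```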

In the world of AI development, the most perilous journey is the one from prototype to production. Research has shown that the majority of AI projects never make it into production, with failure rates as high as 87% in a cutthroat market. However, this doesn’t imply that established companies and startups aren’t finding success riding the wave of AI innovation.

Addressing the challenges Qwak faces when deploying its ML and explainable AI solutions to users, Lev said that while Qwak doesn’t create its own ML models, it provides the tools that let its customers efficiently train, adapt, test, monitor and productionize the models they build. “The challenge we solve in a nutshell is the dependency of the data scientists on engineering tasks,” he said.

By shortening the model build cycle and taking away the underlying engineering drudgery, Lev claims, Qwak helps both data scientists and engineers deploy ML models continuously and automate the process using its platform.

Qwak’s differentiators

In a tough marketplace with various competitors, Lev claims Qwak is the only MLops/ML engineering platform that covers the full ML workflow from feature creation and data preparation through to deploying models into production.

“Our platform is simple to use for both data scientists and engineers, and the platform deployment is as simple as a single line of code. The build system will standardize your project’s structure and help data scientists and ML engineers generate auditable and retrainable models. It will also automatically version all models’ code, data and parameters, building deployable artifacts. On top of that, its model version tracks disparities between multiple versions, warding off data and concept drift.”
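Qwak’s drift tracking itself is proprietary, but the idea behind warding off data drift can be sketched generically: compare the feature distribution a model was trained on against recent production traffic and flag significant divergence. The example below uses a two-sample Kolmogorov-Smirnov test from SciPy as one simple stand-in:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-ins: the training distribution of one feature, and
# production traffic whose mean has quietly shifted.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.3, scale=1.0, size=1_000)

# A small p-value means the two samples are unlikely to come from the
# same distribution, i.e., the feature has drifted since training.
statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Possible data drift detected (KS statistic={statistic:.3f})")
```

In a production system a check like this would run per feature on a schedule, alerting or triggering retraining when divergence crosses a threshold.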

Founded in 2021 by Alon Lev (former VP of data operations at Payoneer), Yuval Fernbach (former ML specialist at Amazon), Ran Romano (former head of data and ML engineering at Wix.com) and Lior Penso (former business development manager at IronSource), the team at Qwak claims to have changed how companies approach getting explainable AI ready for production.
