How to use responsible AI to manage risk


While AI-driven solutions are quickly becoming mainstream across industries, it has also become clear that their deployment requires careful management to prevent unintentional damage. As with most tools, AI can expose individuals and enterprises to an array of risks — risks that could otherwise be mitigated through diligent assessment of potential consequences early in the process.

This is where “responsible AI” comes in — that is, a governance framework that documents how a specific organization should address the ethical and legal challenges surrounding AI. A key motivation for responsible AI endeavors is resolving uncertainty about who is accountable if something goes wrong.

According to Accenture’s latest Tech Vision report, only 35% of global consumers trust how AI is being implemented. And 77% think companies must be held liable for their misuse of AI.

But the development of ethical, trustworthy AI standards is largely left to the discretion of those who write and deploy a company's AI models. This means that the steps required to regulate AI and ensure transparency vary from business to business.

And without any defined policies and processes in place, there is no way to establish accountability and make informed decisions about how to keep AI applications compliant and brands profitable.

Another major challenge of machine learning is the enormous AI expertise divide between policymakers on one side and data scientists and developers on the other. Stakeholders who understand risk management don't necessarily have the tools to apply that skill set to machine learning operations and put the right governance and controls in place.

These problems served as inspiration for Palo Alto-based Credo AI, which was founded in 2020 to bridge the gap between policymakers' technical knowledge and data scientists' understanding of ethics, in order to ensure AI's sustainable development.

Putting responsible AI into practice

“We created the first responsible AI governance platform because we saw an opportunity to help companies keep their AI systems and machine learning models aligned with human values,” Navrina Singh, founder and CEO of the company, told VentureBeat.

After closing a $12.8 million Series A financing round led by Sands Capital, Singh hopes to bring responsible AI governance to more enterprises around the world. The funding will be used to accelerate product development, build a strong go-to-market team to further Credo AI's leadership in the responsible AI category and strengthen the company's tech policy function to support emerging standards and regulations.

With Fortune 500 customers in financial services, banking, retail, insurance and defense, Singh and her team want to empower enterprises to measure, monitor and manage AI-introduced risks at scale.

The future of AI governance

Despite tech giants like Google and Microsoft slowly pulling back the curtain to share how they’re approaching AI in their workplace, responsible AI is a relatively new field that is still evolving.

Singh notes that she envisions a future in which enterprises prioritize responsible AI the way they do cybersecurity. "Similar to what we're seeing on the climate change and cyber safety fronts, we're going to start seeing more disclosures around data and AI systems," she said.

Though legislative changes to AI oversight appear unlikely in the near future, one thing is certain: policymakers and private firms must work in tandem to get all stakeholders on the same page, in terms of both compliance and accountability.

Originally appeared on: TheSpuzz