What business executives need to know about AI


Virtually every enterprise decision-maker across the economic spectrum knows by now that artificial intelligence (AI) is the wave of the future. Yes, AI has its challenges, and its ultimate contribution to the business model is still largely unknown, but at this point it’s not a matter of whether to deploy AI but how.

For most of the C-suite, even those running the IT side of the house, AI is still a mystery. The basic idea is simple enough – software that can ingest data and make changes in response to that data — but the details surrounding its components, implementation, integration and ultimate purpose are a bit more complicated. AI isn’t merely a new generation of technology that can be provisioned and deployed to serve a specific function; it represents a fundamental change in the way we interact with the digital universe.

Intelligent oversight of AI

So even as the front office is saying “yes” to AI projects left and right, it wouldn’t hurt to gain a more thorough understanding of the technology to ensure it is being employed productively.

One of the first things busy executives should do is gain a clear understanding of AI terms and the various development paths currently underway, says Mateusz Lach, AI and digital business consultant at Nexocode. After all, it’s difficult to push AI into the workplace if you don’t understand the difference between AI, ML, DL and traditional software. At the same time, you should have a basic working knowledge of the various learning models being employed (reinforcement, supervised, model-based …), as well as the ways AI is applied (natural language processing, neural networks, predictive analytics, etc.).
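The distinction between traditional software and machine learning can be made concrete with a toy sketch. Everything here is invented for illustration (the fraud-flagging scenario, function names, amounts and thresholds): traditional code hard-codes its rules, while a "learned" model derives them from labeled examples.

```python
def traditional_flag(amount: float) -> bool:
    """Traditional software: a developer hard-codes the rule."""
    return amount > 500.0


def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """A deliberately simple stand-in for 'learning': derive the decision
    threshold from labeled data instead of hard-coding it -- here, the
    midpoint between the largest normal amount and the smallest flagged one."""
    normal = max(amount for amount, flagged in examples if not flagged)
    fraudulent = min(amount for amount, flagged in examples if flagged)
    return (normal + fraudulent) / 2


# Made-up historical transactions: (amount, was_fraudulent).
history = [(120.0, False), (480.0, False), (900.0, True), (1500.0, True)]
threshold = learn_threshold(history)


def learned_flag(amount: float) -> bool:
    """ML-style software: the rule comes from the data, so it changes
    whenever the training data changes."""
    return amount > threshold
```

The point of the contrast: to change `traditional_flag`, someone edits code; to change `learned_flag`, someone supplies different data. That difference is why data quality dominates so many AI decisions.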

With this foundation in hand, it becomes easier to see how the technology can be applied to specific operational challenges. And perhaps most importantly, understanding the role of data in the AI model, and how quality data is of prime importance, will go a long way toward making the right decisions as to where, when and how to employ AI.  

It also helps to understand where the significant challenges in AI deployment lie. Tech consultant Neil Raden argues that the toughest going lies in the “last mile” of any given project, where AI must finally prove that it can solve problems and enhance value. This requires developing effective means of measurement and calibration, preferably with the capability to place results in multiple contexts, given that success can be defined in different ways by different groups. Fortunately, the more experience you gain with AI, the more of these steps you will be able to automate, which should lessen many of the problems associated with the last mile.
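One way to read the point about placing results in multiple contexts is to compute the same metric per stakeholder group rather than only in aggregate. A minimal sketch, with made-up group names and records, assuming predictions and outcomes have already been collected:

```python
from collections import defaultdict


def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns overall accuracy plus a per-group breakdown, since a model
    that looks fine in aggregate may be failing one group badly."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    per_group = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_group


# Hypothetical predictions from two business units.
records = [
    ("sales", "churn", "churn"),
    ("sales", "stay", "churn"),
    ("support", "churn", "churn"),
    ("support", "stay", "stay"),
]
overall, per_group = accuracy_by_group(records)
```

Here the aggregate figure (75%) hides the fact that the model is only right half the time for the sales group, which is exactly the kind of gap a single headline metric obscures.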

View from above

Creating the actual AI models is best left to the line-of-business workers and data scientists who know what needs to be done and how to do it, but it’s still important for the higher-ups to understand some of the key design principles and capabilities that differentiate successful models from failures. Andrew Clark, CTO at AI governance firm Monitaur, says models should be designed around three key principles:

  • Context – the scope, risks, limitations and overall business justification for the model should be clearly defined and well-documented.
  • Verifiability – each decision and step in the development process should be verified and interrogated to understand where data comes from, how it was processed and what regulatory factors come into play.
  • Objectivity – ideally, the model should be evaluated and understood by someone not involved in the project, which is made easier if it has been designed around adequate context and verifiability.

As well, models should exhibit a number of other important qualities, such as reperformance (aka, consistency), interpretability (the ability to be understood by non-experts), and a high degree of deployment maturity, preferably using standard processes and governance rules.
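Qualities like these suggest keeping a structured record alongside each model. The sketch below is one hypothetical shape for such a record; the field names are invented for illustration and are not drawn from Monitaur or any specific governance framework.

```python
from dataclasses import dataclass


@dataclass
class ModelRecord:
    """Illustrative governance record: context, provenance for
    verifiability, and a slot for an outside reviewer (objectivity)."""
    name: str
    context: str          # scope, risks, business justification
    data_sources: list    # where the training data came from
    regulatory_notes: str # factors that come into play
    reviewed_by: str = "" # filled in by someone outside the project

    def is_review_ready(self) -> bool:
        """A record is ready for outside review only once the context,
        data provenance and regulatory notes are all documented."""
        return bool(self.context and self.data_sources
                    and self.regulatory_notes)


# A made-up example record for a hypothetical churn model.
record = ModelRecord(
    name="churn-predictor",
    context="Flags at-risk accounts; not used for pricing decisions",
    data_sources=["crm_export"],
    regulatory_notes="Uses no fields beyond standard CRM attributes",
)
```

Even a lightweight record like this gives an executive something concrete to ask for: if the fields cannot be filled in, the model is not ready for review, let alone deployment.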

Like any enterprise initiative, the executive view of AI should center on maximizing reward and minimizing risk. A recent article from PwC in the Harvard Business Review highlights some ways this can be done, starting with the creation of a set of ethical principles to act as a “north star” for AI development and utilization. Equally important is establishing clear lines of ownership over each project, as well as building a detailed review and approval process at multiple stages of the AI lifecycle. But executives should guard against letting these safeguards become stagnant, since both the economic conditions and regulatory requirements governing the use of AI will likely be highly dynamic for some time.

Above all, enterprise executives should strive for flexibility in their AI strategies. Like any business resource, AI must prove itself worthy of trust, which means it should not be released into the data environment until its performance can be assured – and even then, only in ways that can be rolled back without painful consequences to the business model.

Yes, the pressure to push AI into production environments is strong and growing stronger, but wiser heads should know that the price of failure can be quite high, not just for the organization but for individual careers as well.

Originally appeared on: TheSpuzz