MLsec could be the answer to adversarial AI and machine learning attacks 


With private investment in artificial intelligence (AI) reaching roughly $93.5 billion in 2021, it's no secret that many organizations are implementing AI and machine learning (ML) to improve their businesses. What's easier to overlook is the security risk that AI adoption creates.

Every AI and ML model that an organization uses can be a potential target for cyberattacks. The good news is that a growing number of providers are recognizing these models as part of the modern enterprise attack surface.

One such provider is HiddenLayer, which today announced the launch of the HiddenLayer MLsec Platform designed to detect adversarial ML attacks. The announcement comes hot on the heels of raising $6 million in seed funding earlier this year. 

HiddenLayer uses a model scanner to analyze machine learning model events in real time to identify malicious activity without directly accessing an organization’s ML models. 
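HiddenLayer hasn't published its detection logic, but the general idea of flagging attacks from inference telemetry alone, without touching the model itself, can be sketched. Everything below (the event format, the near-duplicate heuristic, the window and threshold values) is an illustrative assumption, not HiddenLayer's implementation:

```python
from collections import deque

def evasion_probe_score(events, window=100, threshold=0.8):
    """Toy detector: flag a stream of inference events as suspicious when a
    large fraction of recent inputs fall into the same coarse feature bucket,
    a pattern typical of iterative evasion or model-extraction probing.
    Each event is a hashable tuple of bucketed input features, so no access
    to the model's weights or raw training data is required."""
    recent = deque(maxlen=window)  # sliding window of recent event buckets
    alerts = []
    for i, ev in enumerate(events):
        # count how many recent events share this event's bucket
        dupes = sum(1 for r in recent if r == ev)
        recent.append(ev)
        if len(recent) == window and dupes / window >= threshold:
            alerts.append(i)  # record the index where probing was detected
    return alerts
```

A real product would use richer features and statistical baselines, but the design point is the same: the detector observes query patterns at the model's boundary rather than the model internals.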


AI and ML models as part of the attack surface 

As AI adoption grows, it's becoming increasingly clear that ML models themselves are part of the attack surface. According to McKinsey, 63% of enterprises cite cybersecurity as an AI risk, making it the most recognized risk associated with AI adoption.

These concerns are well-founded: vulnerabilities in AI or ML models can give cybercriminals an entry point into an environment as part of adversarial machine learning (AML) attacks.

One of the most notorious examples occurred in 2019, when Skylight researchers discovered a vulnerability in Cylance's AI-based antivirus product.

In a blog post outlining the event, the researchers said, “AI-based products offer a new and unique attack surface. Namely, if you could truly understand how a certain model works, and the type of features it uses to reach a decision, you would have the potential to fool it consistently, creating a universal bypass.” 
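The "universal bypass" the researchers describe can be illustrated with a toy model. The weights, feature names, and threshold below are invented for illustration and have nothing to do with Cylance's actual internals; the point is that once an attacker knows which features a scorer weights as benign, appending content that triggers them drags any sample below the detection threshold:

```python
# Toy linear "malware" scorer whose feature weights the attacker has
# reverse-engineered. All names and values here are hypothetical.
WEIGHTS = {"obfuscated_strings": 2.0, "packed_sections": 1.5,
           "signed_binary": -1.0, "known_game_strings": -3.0}
THRESHOLD = 1.0  # score >= THRESHOLD => flagged as malicious

def score(features):
    # sum the weight of every feature present in the sample
    return sum(WEIGHTS.get(f, 0.0) for f in features)

malware = ["obfuscated_strings", "packed_sections"]
assert score(malware) >= THRESHOLD            # correctly detected

# Append content that triggers a strongly benign-weighted feature:
bypassed = malware + ["known_game_strings"]
assert score(bypassed) < THRESHOLD            # same payload now evades detection
```

Because the appended content works regardless of which malware sample it is attached to, the same trick defeats the model "consistently," which is exactly what makes such bypasses universal.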

As a result, any enterprise that leverages AI must be prepared to defend it from threat actors, which HiddenLayer does with automated detection and response capabilities. 

“The single largest concern about continuing the investment and expansion into AI/ML is cybersecurity, per McKinsey’s State of AI Report. The HL MLsec Platform provides the industry’s first scalable and real-time security suite to enable organizations and governments to expand the use of AI/ML without risk to their entire security posture,” said CEO of HiddenLayer, Christopher Sestito. 

“Further, every industry has embraced artificial intelligence in some form or fashion, helping them grow their revenue or save costs in the trillions of dollars. As with any new technology, it is susceptible to cybersecurity attacks,” Sestito said. 

The vendors addressing adversarial machine learning 

As awareness of adversarial machine learning attacks grows alongside AI adoption, a number of vendors are looking to reduce the chance of malicious exploitation of AI and ML models.

One such provider is Robust Intelligence, which provides a platform for testing, monitoring and improving machine learning models. The solution can not only detect vulnerabilities in machine learning models that threat actors can exploit but also stress test models before deployment. 
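Pre-deployment stress testing of this kind typically means perturbing inputs and checking whether predictions stay stable. The sketch below is a generic illustration of that idea, not Robust Intelligence's product; the noise level, trial count, and example model are all assumptions:

```python
import random

def stress_test(model, inputs, noise=0.05, trials=20, seed=0):
    """Toy pre-deployment stress test: perturb each input slightly and
    report the fraction of trials where the model's prediction is unchanged.
    `model` is any callable mapping a list of numeric features to a label."""
    rng = random.Random(seed)  # fixed seed for reproducible results
    stable, total = 0, 0
    for x in inputs:
        base = model(x)  # prediction on the clean input
        for _ in range(trials):
            # add small uniform noise to every feature
            xp = [v + rng.uniform(-noise, noise) for v in x]
            total += 1
            if model(xp) == base:
                stable += 1
    return stable / total  # 1.0 means fully stable under this noise level

# Example: a threshold model is brittle for inputs near its decision boundary.
model = lambda x: int(sum(x) > 1.0)
robustness = stress_test(model, [[0.99, 0.0], [0.2, 0.1]])
```

A low score on inputs near the decision boundary is the kind of signal such tools surface before a model ships.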

Last year, Robust Intelligence raised $30 million as part of a series B funding round. Another competitor, which most recently raised $13 million in funding in 2020, offers an AI stress-testing solution with threat modeling and model-hardening capabilities.

However, Sestito argues that one of the key differentiators between HiddenLayer and other providers is that its solution doesn’t require access to private data or model IP. 

“There are many companies focused on MLops to help operationalize AI, but not on security. Traditional cybersecurity companies are focused on legacy threats like malware, spam, phishing, etc., that attack computer systems. We are the first company to address cybersecurity threats targeting AI,” Sestito said.

Originally appeared on: TheSpuzz