How one researcher used ChatGPT to fool a hacker 

The release of GPT-4 back in March changed enterprise security forever. While hackers can jailbreak these tools and generate malicious code, security teams and vendors have also begun experimenting with generative AI’s detection capabilities. However, one security researcher has quietly developed an innovative new use case for ChatGPT: deception.  

On April 22, Xavier Bellekens, CEO of deception-as-a-service provider Lupovis, released a blog post outlining how he used ChatGPT to create a printer honeypot that tricked a hacker into trying to breach a nonexistent system, demonstrating the role generative AI has to play in deception cybersecurity. 

“I started doing a quick proof of concept [that] took me about two or three hours essentially, and the idea was you build some sort of decoy honeypot, and the plan is to lure adversaries towards you, as opposed to letting them roam into your network,” Bellekens told VentureBeat in an exclusive interview. 

Fooling hackers with ChatGPT 

As part of the exercise, Bellekens asked ChatGPT for instructions and code for building a medium-interaction printer honeypot: one that would support all the functions of a printer, respond to scans and identify itself as a printer, and present a login page where the username is “admin” and the password “password.” 
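
Bellekens’ blog post isn’t reproduced here, so the following is only a minimal sketch of what such a ChatGPT-generated decoy might look like, assuming a Python/Flask web front end and the deliberately weak credentials from the exercise:

```python
# Hypothetical sketch of a decoy printer, not Bellekens's actual code.
# Flask serves a printer-style banner and a fake admin login page.
from flask import Flask, request

app = Flask(__name__)

LOGIN_PAGE = """
<html><head><title>HP LaserJet 4200n - Login</title></head>
<body><h1>Printer Administration</h1>
<form method="post" action="/login">
  Username: <input name="username"><br>
  Password: <input type="password" name="password"><br>
  <input type="submit" value="Log in">
</form></body></html>
"""

@app.route("/")
def index():
    # A printer-like Server header helps scanners fingerprint the decoy
    # as a real device (banner value is illustrative).
    return LOGIN_PAGE, 200, {"Server": "HP-ChaiSOE/1.0"}

@app.route("/login", methods=["POST"])
def login():
    # Accept the intentionally weak credentials from the exercise.
    if (request.form.get("username") == "admin"
            and request.form.get("password") == "password"):
        return "<html><body><h1>Printer Settings</h1></body></html>"
    return "Invalid credentials", 401

if __name__ == "__main__":
    # Port 80 requires privileges; a test run could use 8080 instead.
    app.run(host="0.0.0.0", port=80)
```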

In about 10 minutes, he had developed a decoy printer with code that functioned “relatively well.” Next, Bellekens hosted the “printer” on Vultr, again using ChatGPT to generate code that logged incoming connections and sent them to a database. The newly created printer started attracting attention almost immediately. 
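
The article doesn’t name the database, so the logging sketch below assumes SQLite:

```python
# Illustrative connection logger; SQLite is an assumption, not confirmed
# by the article.
import sqlite3
from datetime import datetime, timezone

DB_PATH = "honeypot.db"

def init_db():
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS connections (
                   ts TEXT, src_ip TEXT, path TEXT,
                   username TEXT, password TEXT
               )"""
        )

def log_connection(src_ip, path, username=None, password=None):
    # Opening a connection per call keeps this safe under Flask's
    # threaded development server.
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "INSERT INTO connections VALUES (?, ?, ?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(),
             src_ip, path, username, password),
        )
```

The decoy’s route handlers would then call log_connection(request.remote_addr, request.path, ...) on every hit, capturing brute-force attempts along with ordinary scans.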

“Within a couple of minutes I started having incoming connections and folks trying to brute force it. I was like ‘hey, it’s actually working, so maybe I should start getting some data to see where those bots are coming from,’” Bellekens said. 

To better analyze the connections, Bellekens cross-referenced connecting IP addresses with a Lupovis tool called Prowl, which provides information on a connection’s postal code, city and country, and confirms whether it is a machine or human entity. 
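
Prowl’s interface isn’t described in the article, so the sketch below only illustrates the general enrichment step; lookup_ip is a hypothetical stand-in for whatever query mechanism the tool actually exposes, and the returned fields mirror the ones mentioned above.

```python
# Hypothetical enrichment pass; lookup_ip is a stand-in, not Prowl's real API.
def enrich_connections(rows, lookup_ip):
    """rows: (timestamp, src_ip) pairs from the connection log.
    lookup_ip: callable returning a dict with 'postal_code', 'city',
    'country' and 'is_machine' keys (assumed shape)."""
    report = []
    for ts, src_ip in rows:
        info = lookup_ip(src_ip)
        report.append({
            "ts": ts,
            "ip": src_ip,
            "postal_code": info.get("postal_code"),
            "city": info.get("city"),
            "country": info.get("country"),
            # Separating bots from humans is the key signal for a honeypot.
            "is_machine": info.get("is_machine"),
        })
    return report
```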

However, it wasn’t just bots that were connecting to the printer. In one instance, Bellekens found that an individual had logged into the printer, which warranted a closer investigation. 

“I looked at that time period in a bit more detail and indeed they logged in without brute force, so I knew that one of the scanners had worked, and they went to click on a couple of buttons to change some of the settings in there. So that was actually quite quick to see that they got fooled by a ChatGPT decoy,” Bellekens said. 
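
Assuming the SQLite schema from the logging sketch above, that telltale pattern, a source that succeeds on its first and only login attempt, can be pulled out with a simple query:

```python
# Illustrative query over the assumed connection log: a source with one
# attempt and one success logged in without brute force, suggesting a
# human replaying credentials gathered by an earlier scan.
import sqlite3

def first_try_logins(db_path="honeypot.db"):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """SELECT src_ip, COUNT(*) AS attempts,
                  SUM(CASE WHEN username = 'admin'
                            AND password = 'password' THEN 1 ELSE 0 END) AS hits
           FROM connections
           WHERE path = '/login'
           GROUP BY src_ip"""
    ).fetchall()
    return [ip for ip, attempts, hits in rows if attempts == 1 and hits == 1]
```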

Why is this exercise significant? 

At a high level, this honeypot exercise highlights the role that generative AI tools like ChatGPT have to play in the realm of deception cybersecurity, an approach to defensive operations where a security team creates decoy infrastructure to mislead attackers while gaining insights into the exploitation techniques they use to gain access to the environment. 

VentureBeat reached out to a number of other third-party security researchers, who were enthusiastic about the test’s results. 

“This is probably the coolest project I’ve seen so far,” said Michael-Angelo Zummo, senior intelligence analyst at threat intelligence provider Cybersixgill. “Setting up a honeypot to detect threat actors through ChatGPT opens up a world of opportunities. This experiment only involved a printer, which still successfully attracted at least one human who was curious enough to log in and press buttons.” 

Similarly, Henrique Teixeira, a Gartner senior analyst, said this “exercise is an example of LLM [large language model] helping to augment humans’ ability to execute difficult tasks. In this case, the task at hand was Python programming.” More broadly, “this exercise is a significant example that enables citizen developers to be more productive.”

Exploring deception cybersecurity  

While it’s too early to argue that ChatGPT will revolutionize deception cybersecurity, this pilot does indicate that generative AI has the potential to streamline the creation of decoys in the deception technology market, which ResearchAndMarkets valued at $1.9 billion as of 2020 and estimated will reach $4.2 billion by 2026. 

But what is deception cybersecurity exactly? “Deception is a very popular threat detection technique in cybersecurity that ‘tricks’ attackers by using fake assets (or honeypots). Typically, it can use automated mapping to collect intelligence with security frameworks like MITRE ATT&CK, for example,” Teixeira said.  

Using generative AI to create a single virtual printer is one thing, but if this use case could be expanded to set up an entire emulated network, it would become much easier for a security team to harden their defenses against threat actors by obscuring potential entry points. 
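
As a purely speculative sketch of that expansion, a single process could emulate several fake devices at once; the device banners and ports below are illustrative and not drawn from the article:

```python
# Speculative multi-decoy sketch: one Flask app per fake device, each in
# its own thread, so one process emulates a small decoy network. Real
# decoys would bind each service's native port instead of 8001-8003.
import threading
from flask import Flask

def make_decoy(name, banner, port):
    app = Flask(name)

    @app.route("/")
    def index():
        page = f"<html><body><h1>{banner} - Login</h1></body></html>"
        return page, 200, {"Server": banner}

    threading.Thread(
        target=lambda: app.run(host="0.0.0.0", port=port), daemon=True
    ).start()

decoys = [
    ("printer", "HP-ChaiSOE/1.0", 8001),
    ("camera", "Hikvision-Webs", 8002),
    ("router", "RomPager/4.07", 8003),
]
for name, banner, port in decoys:
    make_decoy(name, banner, port)

threading.Event().wait()  # keep the daemon threads alive
```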

It’s important to note that the general development of AI is changing the face of deception cybersecurity, leading toward what one Gartner report (requires subscription) calls an automated moving-target defense (AMTD) strategy, where an organization uses automation to move or change the attack surface in real time. 

Essentially, an organization identifies a target asset and sets a timing interval to automate movement, reconfiguration, morphing or encryption to trick attackers. Adding generative AI as part of this strategy to generate decoys at scale could be a powerful force multiplier. 
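
A conceptual sketch of such a timing loop, with illustrative banner and port values (not taken from the Gartner report), might be:

```python
# Conceptual AMTD-style loop: rotate the decoy's identity on a fixed
# interval so any reconnaissance an attacker gathered goes stale.
import random
import threading

BANNERS = ["HP LaserJet 4200n", "Brother HL-L2350DW", "Epson WF-3820"]
PORTS = [80, 631, 9100]  # HTTP, IPP and raw-print ports a printer might expose

decoy_state = {"banner": BANNERS[0], "port": PORTS[2]}

def morph_decoy(interval_seconds=3600):
    # Pick a new banner and listening port, then reschedule the next morph.
    decoy_state["banner"] = random.choice(BANNERS)
    decoy_state["port"] = random.choice(PORTS)
    threading.Timer(interval_seconds, morph_decoy, [interval_seconds]).start()

morph_decoy()  # kick off the hourly rotation
```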

Gartner predicted that AMTD alone is likely to mitigate most zero-day exploits within a decade and said that by 2025, 25% of cloud applications will leverage AMTD features and concepts as part of built-in prevention approaches. 

As AI-driven solutions and tools like ChatGPT continue to evolve, organizations will have a valuable opportunity to experiment with deception cybersecurity and go on the offensive against threat actors.
