AI Mimicking Human Behavior in Phishing Attacks

As AI continues to evolve at an astonishing pace, it is starting to show its capabilities in various fields, from creating art and 3D worlds to becoming a reliable partner in the workplace. However, a recent study by IBM X-Force suggests that generative AI and large language models (LLMs) are not quite as deceptive as humans, at least not yet.

In a phishing experiment conducted by the X-Force team, ChatGPT generated a convincing phishing email in just five minutes from five simple prompts. The email was then sent to 800 employees at a global healthcare company. While the AI-generated email proved nearly as enticing as a human-written one, its click-through rate was slightly lower.

“As AI continues to evolve, we’ll continue to see it mimic human behavior more accurately, which may lead to even closer results, or AI ultimately beating humans one day,” said Stephanie (Snow) Carruthers, IBM’s chief people hacker.

ChatGPT identified the top areas of concern for industry employees, such as career advancement, job stability, and fulfilling work, and recommended social engineering and marketing techniques to exploit them. The model advised building the phishing email around trust, authority, social proof, personalization, mobile optimization, and a call to action. It also suggested that the email appear to come from the internal human resources manager.

Although the AI-generated email was persuasive, the human team's phishing email performed slightly better in terms of click-through rate. Emotional intelligence, personalization, and a concise subject line were identified as the reasons for the human win. The human team connected with employees emotionally by focusing on a specific example within their company and including each recipient's name in the email. Moreover, the human-written subject line was short and straightforward, while the AI's was lengthier and potentially raised suspicion.

However, it is important to note that AI-driven phishing attempts are becoming more sophisticated and no longer exhibit obvious grammar or spelling errors. Carruthers emphasized the need to educate employees about the warning signs beyond traditional red flags.

“We need to abandon the stereotype that all phishing emails have bad grammar,” Carruthers said. “That’s simply not the case anymore.”

Phishing remains a top tactic among attackers because it exploits human weaknesses, persuading individuals to click on malicious links or disclose sensitive information. The research also highlighted the productivity gains generative AI offers attackers, enabling them to create convincing phishing emails far more quickly.

Organizations should take a proactive approach to enhance their social engineering programs, strengthen identity and access management tools, update threat detection systems, and regularly train employees to recognize and defend against evolving threats.

“As a community, we need to test and investigate how attackers can capitalize on generative AI,” Carruthers emphasized. “By understanding how attackers can leverage this new technology, we can help organizations better prepare for and defend against these evolving threats.”
