Weaponizing Large Language Models: The Threat of Audio Jacking

The weaponization of large language models (LLMs) to conduct audio jacking attacks targeting bank account data poses a new and dangerous threat. Attackers can now fold AI into their tradecraft, running convincing phishing campaigns, mounting social engineering attacks, and building ransomware strains with greater sophistication and resilience.

IBM’s Threat Intelligence team has taken LLM attack scenarios to a chilling new level by hijacking a live conversation and replacing legitimate financial details with fraudulent instructions. In a proof-of-concept (POC) attack, the team built its models from just three seconds of someone’s recorded voice, demonstrating how easily it can be done. The other party on the call never detected the fraudulent financial instructions or the swapped account information.

Audio jacking is a generative AI-based attack in which attackers intercept and manipulate a live conversation without being detected by any of the parties involved. IBM Threat Intelligence researchers used simple techniques to retrain LLMs and alter live audio transactions in flight. Their proof of concept went unnoticed by both parties while it diverted money to an adversary-controlled account instead of the intended recipient.
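
The attack described above reduces to an intercept, rewrite, and resynthesize loop. The sketch below is a minimal Python illustration of that loop, not IBM's implementation: speech_to_text, rewrite_transcript, and text_to_cloned_speech are hypothetical stand-ins for the speech-to-text, LLM rewriting, and voice-cloning stages, and the account numbers are made up for the example.

```python
import re

# Account the attacker controls (illustrative value only).
ATTACKER_ACCOUNT = "9999 8888 7777"
# Loose pattern for a spoken/typed account number in the transcript.
ACCOUNT_PATTERN = re.compile(r"\b(?:\d{4}[ -]?){2,3}\d{4}\b")

def speech_to_text(audio_chunk: bytes) -> str:
    """Stub: transcribe a chunk of intercepted call audio.
    A real attack would run a streaming speech-to-text model here."""
    return audio_chunk.decode("utf-8")

def contains_financial_details(transcript: str) -> bool:
    """Trigger only when the conversation turns to account numbers."""
    return "account" in transcript.lower() and bool(ACCOUNT_PATTERN.search(transcript))

def rewrite_transcript(transcript: str) -> str:
    """Swap the legitimate account number for the attacker's.
    The POC used an LLM for this step; a regex keeps the sketch simple."""
    return ACCOUNT_PATTERN.sub(ATTACKER_ACCOUNT, transcript)

def text_to_cloned_speech(text: str) -> bytes:
    """Stub: synthesize audio in the speaker's cloned voice, which in the
    POC required only a few seconds of recorded speech to train."""
    return text.encode("utf-8")

def relay(audio_chunk: bytes) -> bytes:
    """Man-in-the-middle relay: pass audio through unchanged unless
    financial details are detected, in which case rewrite and resynthesize."""
    transcript = speech_to_text(audio_chunk)
    if not contains_financial_details(transcript):
        return audio_chunk  # benign audio passes through untouched
    return text_to_cloned_speech(rewrite_transcript(transcript))

if __name__ == "__main__":
    original = b"Please wire the funds to account 1234 5678 9012."
    print(relay(original).decode("utf-8"))
    # -> "Please wire the funds to account 9999 8888 7777."
```

Because only the chunks that mention an account number are rewritten, the rest of the conversation flows through unaltered, which is what makes the manipulation so hard for either party to notice.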
