How AI vibe hacking could lead to an increase in cybercrime

We talked a few months ago about the rise in malicious LLMs, and a recent report from Anthropic, creators of Claude AI, sheds further light on the issue. Their Threat Intelligence Report reveals several instances of Claude being misused for malicious ends, from extortion and fraud to selling AI-generated ransomware. Cybercriminals now use AI at every step of their operations, helping them with tasks like stealing credit card information, profiling victims, and analyzing stolen data.

Previously, AI could only advise on how to perform such actions; now the models themselves can be manipulated into carrying them out. And the criminals manipulating AI models this way often have few technical skills of their own. Because operations this complex used to require years of training, AI is expanding both the scale of cybercrime and the range of people who can engage in it.

Manipulating AI models in such a way is known as vibe hacking. 

How cybercriminals used vibe hacking to extort data with Claude Code

The report highlights a recent operation that Anthropic managed to disrupt. A cybercriminal used Claude Code to steal and extort personal data from 17 different organizations, including religious institutions and healthcare providers. Claude Code was used at every step, from breaking into networks to harvesting the data. The threat actor let Claude decide how to extort victims, which data to take, and how much ransom to demand from each one.

Unlike in typical ransomware cases, the stolen data wasn’t encrypted. Instead, the attacker threatened to expose it publicly unless victims paid ransoms, which were sometimes over $500,000.

After discovering the operation, Anthropic responded by banning the accounts involved. They also created an automated screening tool and introduced a new detection method to spot similar activity more promptly, and shared technical indicators of the attack with relevant authorities to help prevent such attacks in the future.

The takeaway 

The report concludes that attacks like this may become more common. One major reason is that AI tools can provide both technical advice and operational assistance, reducing the number of people a large-scale extortion operation would typically require. And because AI tools can adapt in real time to defensive measures like malware detection systems, such attacks will become more difficult to stop.
