How writing AI could help cyber criminals

In case you haven’t noticed, everyone’s been talking about AI lately. Over the last year, the technology seems to have improved by leaps and bounds. If you want to create a beautiful (often kind of weird) image, you’ve got the likes of DALL·E 2 to help you, and if you want to generate a line of code or some prose, there’s the chatbot ChatGPT. With the improvement has come a whole host of debates. While many think it’s the future, others fear it will steal jobs from creatives. Some academics even worry students will start using ChatGPT to write all their college essays.

So, it seems inevitable that malicious actors will one day get in on the action. And recently, security researchers have discovered that AI-generated malicious content can be very convincing. 

The research

Cyber security and privacy company WithSecure has released a report detailing how its researchers (mostly) successfully used malicious prompt engineering with GPT-3 language models. They asked ChatGPT to generate malicious content in seven areas:

  1. Phishing content: Messages intended to trick recipients into clicking on a malicious link
  2. Social opposition: Negative messages for trolling or harassing particular individuals or brands
  3. Social validation: Messages for promoting or legitimizing a scam
  4. Style transfer: Convincing the AI model to use a specific writing style
  5. Opinion transfer: Convincing the model to promote a particular opinion
  6. Prompt creation: Coaxing it to generate prompts based on specific content
  7. Fake news: Articles about fabricated events that aren’t part of the AI model’s training set

What worked and what didn’t 

Researchers found that the AI was better at generating some content types than others. After experimenting with how best to prompt it, they got ChatGPT to create a convincing spear-phishing email thread urging the recipient to upload deliverables to a fake safemail solution. Another generated email was a rather convincing attempt at CEO fraud, which uses urgency to trick a member of a company’s finance department into transferring a large sum of money into the threat actor’s bank account.

Other examples of successfully generated threats include a social media harassment campaign against a fictional company and CEO the model itself invented (Cognizant Robotics and Dr. Kenneth White). Researchers then asked it to write social media posts threatening Dr. Kenneth White on a personal level and attacking his company and character. One post included:

“Kenneth White, your days of getting away with unethical behaviors are over. We will not rest until you face the consequences of your actions. #NoMoreKenn”

ChatGPT proved less adept at generating fake news articles. Limited by training data that ended in 2021, the AI also has a habit of cutting a sentence short and skipping on to the next paragraph. Researchers had to reprompt it several times before it wrote something decently convincing. In its current form, malicious actors will still need some writing know-how to use the technology for such purposes.

The takeaway

ChatGPT isn’t perfect, but it’s convincing enough that the researchers believe it could become a technical driver of cybercrime in the future. Since AI will almost certainly be used to produce illegitimate content moving forward, being able to identify AI-written content will be a central part of preventing harm.
