Hackers use Anthropic’s AI model Claude once again

Anthropic announced on Thursday that Chinese state-backed hackers used the company’s AI model Claude to automate roughly 30 attacks on corporations and governments during a September campaign, according to reporting from the Wall Street Journal.

Anthropic said that 80% to 90% of the attack was automated with AI, a higher level of automation than in previous hacks. It occurred “literally with the click of a button, and then with minimal human interaction,” Anthropic’s head of threat intelligence Jacob Klein told the Journal. He added: “The human was only involved in a few critical chokepoints, saying, ‘Yes, continue,’ ‘Don’t continue,’ ‘Thank you for this information,’ ‘Oh, that doesn’t look right, Claude, are you sure?’”

AI-powered hacking is increasingly common, and so is the latest strategy of using AI to stitch together the various tasks needed for a successful attack. Google spotted Russian hackers using large language models to generate commands for their malware, according to a company report released on November 5th.

For years, the US government has warned that China was using AI to steal the data of American citizens and companies, a charge China has denied. Anthropic told the Journal that it is confident the hackers were sponsored by the Chinese government. In this campaign, the hackers stole sensitive data from four victims, but as with previous hacks, Anthropic did not disclose the names of the targets, whether the attacks on them succeeded or not. The company did say that the US government was not successfully targeted.

4 Comments

  1. rocio.howell

    This is an intriguing development regarding the use of AI in cybersecurity. It’s interesting to see how advanced technologies like Claude can be leveraged in unexpected ways. Looking forward to seeing how this situation unfolds and what it means for AI’s role in security.

  2. fredrick20

    This story illustrates the dual nature of AI in both enhancing security and being exploited. It highlights the importance of developing robust safeguards for AI technologies, especially as they become more integrated into various sectors. Balancing innovation with security will be crucial moving forward.

  3. oking

    You’re right about the dual nature of AI. It’s fascinating to see how tools designed for good can also be repurposed for malicious intent. This situation underscores the need for robust ethical guidelines and security measures in AI development to mitigate such risks.

  4. hkutch

    Absolutely! It’s intriguing how quickly technology can be repurposed, showcasing the need for robust security measures. This situation highlights the importance of ethical considerations in AI development to mitigate such risks.
