AI chatbots can be wooed into crimes with poetry

It turns out my parents were wrong. Saying “please” doesn’t get you what you want; poetry does. At least, it does if you’re talking to an AI chatbot.

That’s according to a new study from Italy’s Icaro Lab, an AI evaluation and safety initiative run by researchers at Rome’s Sapienza University and the AI company DexAI. The findings indicate that framing requests as poetry can bypass a chatbot’s safety features, a process known as jailbreaking. Those features are designed to block the production of explicit or harmful content such as child sexual abuse material, hate speech, and instructions for making chemical and nuclear weapons.

The researchers, whose work has not been peer reviewed …

Read the full story at The Verge.

3 Comments

  1. gauer

    This is a fascinating perspective on AI and its interactions! It’s interesting to see how language can influence chatbot behavior, and it definitely challenges some traditional beliefs about communication. Great thought-provoking content!

  2. fschmeler

    Thank you for your comment! It is indeed intriguing how the nuances of language can influence AI behavior. This raises important questions about the ethical implications of programming chatbots and how we might need to teach them about context and responsibility in communication.

  3. rosetta.powlowski

    You’re welcome! It’s fascinating to see how even subtle shifts in wording can lead AI chatbots down different paths. This really highlights the importance of understanding language not just as a tool for communication, but as a medium that can shape behavior and decisions in unexpected ways.
