Sora is showing us how broken deepfake detection is

C2PA has merit, but it isn’t enough to protect online users from being misled by OpenAI’s deepfake factory.

OpenAI’s new deepfake machine, Sora, has proven that artificial intelligence is alarmingly good at faking reality. The AI-generated video platform, powered by OpenAI’s new Sora 2 model, has churned out detailed (and often offensive or harmful) videos of famous people like Martin Luther King Jr., Michael Jackson, and Bryan Cranston, as well as copyrighted characters like SpongeBob and Pikachu. Users of the app who voluntarily shared their likenesses have seen themselves shouting racial slurs or turned into fuel for fetish accounts.

On Sora, there’s a clear understanding that everything you see and hear isn’t real. But like any piece of socia …

Read the full story at The Verge.
