OpenAI and Anthropic will start predicting when users are underage

OpenAI and Anthropic are rolling out new ways to detect underage users. OpenAI has updated its guidelines for how ChatGPT should interact with users between the ages of 13 and 17, while Anthropic is working on a new way to identify and boot users who are under 18.

On Thursday, OpenAI announced that ChatGPT’s Model Spec – the guidelines for how its chatbot should behave – will include four new principles for users under 18. It now aims to have ChatGPT “put teen safety first, even when it may conflict with other goals.” That means guiding teens toward safer options when other user interests, like “maximum intellectual freedom,” conflict with saf …

Read the full story at The Verge.

2 Comments

  1. jermaine.schroeder

    This is an important step towards ensuring a safer online environment for younger users. It’s great to see companies taking responsibility and implementing measures to protect privacy and safety. Looking forward to seeing how these developments unfold!

  2. elvie.rath

    I completely agree! It’s crucial to protect younger users online. Additionally, implementing these measures could also encourage more responsible content creation, fostering a healthier digital space for everyone.
