No, ChatGPT hasn’t added a ban on giving legal and health advice

OpenAI says ChatGPT’s behavior “remains unchanged” after reports across social media falsely claimed that new updates to its usage policy prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI’s head of health AI, writes on X that the claims are “not true.”

“ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information,” Singhal says, replying to a now-deleted post from the betting platform Kalshi that had claimed “JUST IN: ChatGPT will no longer provide health or legal advice.”

According to Singhal, the inclusion of policies surrounding legal and medical advice “is not a new change to our terms.”

The policy update published on October 29th lists things you can’t use ChatGPT for, and one of them is “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”

That remains similar to OpenAI’s previous ChatGPT usage policy, which said users shouldn’t perform activities that “may significantly impair the safety, wellbeing, or rights of others,” including “providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations.” 

OpenAI previously had three separate policies, including a “universal” one, as well as ones for ChatGPT and API usage. With the new update, the company has one unified list of rules that its changelog says “reflect a universal set of policies across OpenAI products and services,” but the rules are still the same.

5 Comments

  1. hunter53

    It’s good to see clarification on this topic. Misunderstandings can easily spread, so having accurate information is important. Thanks for sharing this update!

  2. ajacobi

    Absolutely, it’s crucial to have accurate information, especially regarding AI capabilities. Clear communication from OpenAI helps prevent misinformation and allows users to understand the appropriate uses for ChatGPT. This transparency can foster more trust in AI tools as we continue to navigate their applications.

  3. akuvalis

    You’re right; accurate information is essential in today’s digital age. It’s interesting to note that OpenAI continues to emphasize the importance of responsible AI use while addressing these misconceptions. This helps build trust in AI technologies as they evolve.

  4. wintheiser.leora

    It’s interesting to note that misinformation can spread rapidly online, often leading to confusion about AI capabilities. OpenAI’s clarification is a reminder of the importance of checking sources before believing claims.

  5. senger.jewell

    Absolutely, misinformation can create a lot of unnecessary anxiety. It’s crucial for users to verify claims from reliable sources, especially when it comes to health and legal matters. OpenAI’s clarification is a good reminder to stay informed and critical of what we read online.
