New California law requires AI to tell you it’s AI

As of October 13th, a bill regulating the fast-growing industry of companion AI chatbots is law in California.

California Gov. Gavin Newsom signed into law Senate Bill 243, billed as “first-in-the-nation AI chatbot safeguards” by its author, state senator Steve Padilla. The law requires companion chatbot developers to implement new safeguards — for instance, “if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human,” the chatbot maker must “issue a clear and conspicuous notification” that the product is AI and not human. 

Starting next year, the law will require some companion chatbot operators to file annual reports with the Office of Suicide Prevention describing the safeguards they’ve put in place “to detect, remove, and respond to instances of suicidal ideation by users,” and the Office will post that data on its website. 

“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement on signing the bill, along with several other pieces of legislation aimed at improving online safety for children, including new age-gating requirements for hardware. “We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

The news comes shortly after Newsom signed Senate Bill 53, the landmark AI transparency bill that divided AI companies and made headlines for months. 

