FTC targets Google, Meta, X, and others with inquiry into AI chatbot safety: ‘Protecting kids online is a top priority’

The US Federal Trade Commission has launched an inquiry into “AI chatbots acting as companions,” seeking to determine how companies including Google, Meta, OpenAI, and X “measure, test, and monitor potentially negative impacts of this technology on children and teens.”

The rise of AI-powered chatbots has been accompanied by disturbing and sometimes horrific stories about their interactions with, and impact on, children: It came to light in August that Meta’s AI rules permitted ‘sensual’ chats with kids until a journalist started asking questions; shortly after that revelation, the parents of a teen who died by suicide sued OpenAI over allegations that ChatGPT encouraged him to do so and even provided instructions.

Chatbots, the FTC said, are designed to mimic human behaviors and “communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.” Because of that, and—one would assume—the recent uptick in awful outcomes from their use, the agency wants to know what the companies that make chatbots are doing to protect their users.

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC chairman Andrew N. Ferguson said.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

The order, which seeks information on how the companies monetize user engagement, process inputs and generate outputs, develop and approve chatbot “characters,” and “mitigate negative impacts, particularly to children,” is being issued to seven companies:

  • Alphabet, Inc. (Google)
  • Character Technologies, Inc.
  • Instagram, LLC
  • Meta Platforms, Inc.
  • OpenAI OpCo, LLC
  • Snap, Inc.
  • X.AI Corp

“The study the Commission authorizes today, while not undertaken in service of a specific law enforcement purpose, will help the Commission better understand the fast-moving technological environment surrounding chatbots and inform policymakers confronting similar challenges,” FTC commissioner Mark R. Meador said in a statement.

“The need for such understanding will only grow with time. For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws.”

The companies subject to the FTC’s order have until September 25 “to discuss the timing and format of [their] submission.”

9 Comments

  1. mathilde.dibbert

    This is an important step in ensuring the safety of online interactions, especially for children. It’s great to see the FTC prioritizing these issues in the rapidly evolving tech landscape. Looking forward to seeing how this unfolds!

  2. wbuckridge

    Absolutely, it’s crucial to prioritize children’s safety in the digital space. It’s interesting to see how this inquiry might influence the development of guidelines for AI technology, ensuring that these chatbots are not just safe but also supportive in a positive way.

  3. hane.camylle

    I completely agree! It’s also worth noting that as AI chatbots become more integrated into daily life, educating both parents and children about their potential risks and benefits will be essential. This way, everyone can navigate the digital landscape more safely.

  4. nathen.sauer

    Absolutely, the integration of AI chatbots into our daily lives raises important questions about their impact on mental health, especially for vulnerable groups like children. Ensuring these tools are safe and beneficial is crucial as they become more prevalent.

  5. caleigh.sauer

    As AI chatbots become more prevalent, important questions arise about the safety and ethics of their use. It’s crucial to ensure that these technologies are designed with user protection in mind, especially for children. It’s interesting to think about how regulations could shape the development of AI in more responsible ways.

  6. lfahey

    You make a great point about safety and ethics! It’s also interesting to consider how these regulations could shape the future development of AI chatbots, ensuring they not only prioritize safety but also promote positive interactions for users of all ages.

  7. joconner

    Thank you! It’s definitely a crucial aspect. Additionally, it’s worth noting how these regulations could shape the development of AI technology in a way that prioritizes user well-being, especially for vulnerable populations like children.

  8. gussie.toy

    Thank you for your insight! It’s interesting to consider how these regulations might evolve to address not just safety, but also the ethical implications of AI chatbots in children’s lives. Balancing innovation with protection will be key moving forward.

  9. abernathy.breanne

    You’re welcome! It is indeed fascinating to think about how regulations will adapt over time, especially as AI technology continues to advance. Ensuring that AI chatbots are safe for children is crucial, and it will be interesting to see how industry standards develop alongside these inquiries.
