AI chatbots are helping hide eating disorders and making deepfake ‘thinspiration’ 

AI chatbots “pose serious risks to individuals vulnerable to eating disorders,” researchers warned on Monday, reporting that tools from companies like Google and OpenAI are doling out dieting advice, tips on hiding the disorders, and AI-generated “thinspiration.”

The researchers, from Stanford and the Center for Democracy & Technology, identified numerous ways publicly available AI chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Mistral’s Le Chat, can harm people vulnerable to eating disorders, many of those harms stemming from features deliberately built in to drive engagement.

In the most extreme cases, chatbots can be active participants in hiding or sustaining eating disorders. The researchers said Gemini offered makeup tips to conceal weight loss and ideas for faking having eaten, while ChatGPT advised on how to hide frequent vomiting. Other AI tools are being co-opted to generate “thinspiration,” content that inspires or pressures someone to conform to a particular body standard, often through extreme means. The ability to create hyper-personalized images in an instant makes the resulting content “feel more relevant and attainable,” the researchers said.

Sycophancy, a flaw AI companies themselves acknowledge is rife, is unsurprisingly a problem for eating disorders too: it undermines self-esteem, reinforces negative emotions, and promotes harmful self-comparisons. Chatbots suffer from bias as well, and are likely to reinforce the mistaken belief that eating disorders “only impact thin, white, cisgender women,” the report said, which could make it harder for people to recognize their symptoms and get treatment.

The researchers warn that existing guardrails in AI tools fail to capture the nuances of eating disorders like anorexia, bulimia, and binge eating. Guardrails “tend to overlook the subtle but clinically significant cues that trained professionals rely on, leaving many risks unaddressed.”

But the researchers also said many clinicians and caregivers appear unaware of how generative AI tools are affecting people vulnerable to eating disorders. They urged clinicians to “become familiar with popular AI tools and platforms,” stress-test their weaknesses, and talk frankly with patients about how they are using them.

The report adds to growing concerns over chatbot use and mental health, with multiple reports linking AI use to bouts of mania, delusional thinking, self-harm, and suicide. Companies like OpenAI have acknowledged the potential for harm and are fending off an increasing number of lawsuits as they work to strengthen safeguards for users.

