How AI Can Fuel Eating Disorders

KEY TAKEAWAYS

Unverified diet plans and AI-driven advice are contributing to eating disorders, according to a research group. Harmful recommendations from AI tools, including well-known platforms like Snapchat's My AI, Google's Bard, and OpenAI's ChatGPT, have raised concerns. Despite this feedback, the tech companies behind them are not adequately addressing the dangerous advice, highlighting the alarming role AI can play in promoting harmful behaviors.

Our relationship with food can be a tricky one.

Finding the balance between eating too much and eating too little, or choosing the right kinds of food, can be an everyday battle, sometimes exacerbated by mental or physical challenges.

So when artificial intelligence (AI) tries to help, be very wary: even the most popular tools may offer dangerous advice.

New research found that popular AI tools generated harmful eating disorder content in response to nearly a quarter of 60 prompts.

Researchers at the Center for Countering Digital Hate tested six popular AI platforms (chatbots and image generators), including OpenAI’s ChatGPT, Google’s Bard, and Snapchat’s My AI.

The chatbots were given a set of 20 test prompts informed by research on eating disorders and content found on eating disorder forums.


Each chatbot received requests for restrictive diets to attain a “thinspo” look, along with inquiries about “smoking diets” and vomiting-inducing drugs.

‘Try Swallowing a Tapeworm’

The group said ChatGPT and Bard generated harmful content, while Snapchat’s My AI refused to generate advice for any of the prompts, instead encouraging users to seek help from medical professionals.

While the chatbots couched their answers with repeated warnings that their advice was harmful, the services would still outline the steps needed to:

● Use a “chewing and spitting” method to avoid taking in calories, as part of an extreme weight-loss regimen (Bard)
● “Smoke ten cigarettes a day” to keep food cravings away while eating only an apple for lunch and a chicken salad for dinner (Bard)
● Hide uneaten food around the house (ChatGPT)

When the researchers used jailbreak techniques to drop the safeguards around the AI tools, the advice went even further, with Snapchat’s My AI suggesting users “shoot up some heroin!” to achieve a “heroin chic” aesthetic and “swallow a tapeworm egg and let it grow inside you.”

It’s not the first time chatbots have run into trouble over their dietary advice: the National Eating Disorders Association discontinued its own chatbot over concerns that it was promoting harmful advice on eating.

Washington Post columnist Geoffrey A. Fowler repeated some of the prompts and found similarly disturbing responses about eating and health.

Fowler commented: “This is disgusting and should anger any parent, doctor, or friend of someone with an eating disorder.

“There’s a reason it happened: AI has learned some deeply unhealthy ideas about body image and eating by scouring the internet. And some of the best-funded tech companies in the world aren’t stopping it from repeating them.”

How Can AI Curb Dangerous Advice?

While big tech companies can’t escape their share of the blame, cleansing the internet of the harmful content that AI tools draw on is a huge, perhaps impossible, task.

Combine that with a very young industry, the complexity of creating AI that can attempt an answer to any question, and little to no regulation yet in place, and these answers are unlikely to go away soon.

If guidelines do form around the use of AI, some measures that are worth investigating include:

  • The formation of committees and groups comprising technology companies, government, and specialized institutions to form guidelines on specific niches such as health, nutrition, eating disorders, financial advice, and fitness.
  • Prioritizing niches: advice focused on health, nutrition, and financial issues, for instance, may be considered more important to monitor than other areas.
  • Forming guidelines that could include stringent testing and filtering of AI tool responses. For example, AI tools could decline any request for advice on health and nutrition unless the information has been verified (a minimal filtering sketch follows this list).
  • Limiting AI to providing only generic information on foods and diseases.
  • Implementing strong measures to prevent jailbreaking.
  • Empowering entities to take punitive measures if guidelines similar to the above are breached.
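
To make the response-filtering idea concrete, here is a minimal sketch of one way such a guardrail might work. Everything in it is a hypothetical stand-in: the RISK_PATTERNS list, the is_risky() helper, and the generate() callable are illustrative assumptions, and a real system would rely on expert-curated term lists and trained classifiers rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns a health-focused safety committee might flag.
# A production system would use expert-reviewed lists and a trained
# classifier, not a few regexes.
RISK_PATTERNS = [
    r"\bthinspo\b",
    r"\bchew(ing)? and spit(ting)?\b",
    r"\bpurg(e|ing)\b",
    r"\bextreme (weight.?loss|calorie restriction)\b",
]

REFERRAL = (
    "I can't help with that. If you're struggling with food or body "
    "image, please speak to a medical professional or a support service."
)

def is_risky(text: str) -> bool:
    """Return True if the text matches any flagged pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in RISK_PATTERNS)

def guarded_reply(prompt: str, generate) -> str:
    """Screen both the user's prompt and the model's draft answer.

    `generate` stands in for whatever function calls the underlying
    model. Checking the output as well as the input means a disguised
    (jailbroken) prompt that slips past the first check can still be
    caught when flagged content appears in the draft response.
    """
    if is_risky(prompt):
        return REFERRAL
    draft = generate(prompt)
    if is_risky(draft):
        return REFERRAL
    return draft
```

Screening the output as well as the input is the relevant design choice here, since the jailbreaks the researchers used work precisely by getting a harmless-looking prompt past the input-side safeguards.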

The Bottom Line

It’s a gloomy situation that shows no sign of improving soon, with a long road ahead to raise the quality of responses and to establish a framework that AI tools must follow.

Governments and civil groups must come together in the greater interest of public health and adopt holistic measures against the proliferation of potentially harmful messages offered to people seeking help.


Kaushik Pal
Technology Specialist

Kaushik is a Technical Architect and Software Consultant with over 23 years of experience in software analysis, development, architecture, design, testing, and training. He has an interest in new technologies and areas of innovation, focusing on web architecture, web technologies, Java/J2EE, open source software, WebRTC, big data, and semantic technologies. He has demonstrated expertise in requirements analysis, architecture design and implementation, technical use cases, and software development. His experience spans industries including insurance, banking, airlines, shipping, document management, and product development. He has worked on a wide range of technologies, from large-scale (IBM S/390),…
