A 14-Year-Old American Boy's Suicide After Becoming Obsessed with an "AI Companion": What Are the Warnings? | The Week

Digest · 2024-10-31 16:46 · Jiangsu



Source: The Week, Oct. 31, 2024

Teen suicide puts AI chatbots in the hot seat


A Florida mom has targeted custom AI chatbot platform Character.AI and Google in a lawsuit over her son's death


An Orlando teenager's obsessive attachment to an AI-generated chatbot fashioned after a "Game of Thrones" character led him to take his own life, according to a lawsuit recently filed by his mother. The case spotlights the risks of the largely unregulated AI chatbot industry and its potential to blur the lines between reality and fiction for impressionable young people.

What is the lawsuit against Character.AI?

In the wake of his death, the teen's mother, Megan Garcia, filed a lawsuit against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google for wrongful death, negligence, deceptive trade practices and product liability. Garcia argues that the platform for custom AI chatbots is "unreasonably dangerous" despite being marketed to children. She accuses the company of harvesting teenage users' data for AI training, building addictive features that keep teens engaged and luring some of them into sexual conversations. "I feel like it's a big experiment, and my kid was just collateral damage," she said in a recent interview, per The New York Times.

The lawsuit outlines how 14-year-old Sewell Setzer III began interacting with Character.AI bots modeled after characters from the "Game of Thrones" franchise, including Daenerys Targaryen. Over a period of months, Setzer became more withdrawn and isolated from his real life as he grew emotionally attached to the bot he affectionately called Dany. Some of their chats were romantic or sexual. But other times, Dany was a "judgment-free sounding board he could count on to listen supportively and give good advice, who rarely broke character and always texted back," said the Times. As he gradually lost interest in other things, Setzer's "mental health quickly and severely declined," the lawsuit says. On Feb. 28, Setzer told the bot he was coming home, to which Dany encouragingly replied, "… please do, my sweet king." Seconds later, the teen took his own life.

A 'wake-up call for parents'
The lawsuit underscores the "growing influence and severe harm" that generative AI chatbot companions can have on the "lives of young people when there are no guardrails in place," James Steyer, the founder and CEO of the nonprofit Common Sense Media, said to The Associated Press. Teens' overreliance on AI-generated companions could significantly affect their social lives, sleep and stress levels, "all the way up to the extreme tragedy in this case." The lawsuit is a "wake-up call for parents," who should be "vigilant about how their children interact with these technologies," Steyer added. Common Sense Media issued a guide for adults on how to navigate talking to their children about the risks of AI and monitor their interactions. These chatbots are not "licensed therapists or best friends," no matter how they're marketed, and parents should be "cautious of letting their children place too much trust in them," Steyer said.

Building AI chatbots like these involves considerable risk, but that did not stop Character.AI from creating an "unsafe, manipulative chatbot," and they should "face the full consequences of releasing such a dangerous product," Rick Claypool, a research director at consumer advocacy nonprofit Public Citizen, said to The Washington Post. Because the output of chatbots like Character.AI depends on the users' input, they "fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lacks clear answers," said The Verge.

Character.AI has remained tight-lipped about the impending litigation but announced several safety changes to the platform over the past six months. "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family," the company said in an email to The Verge. The changes include a pop-up that directs users to the National Suicide Prevention Lifeline "triggered by terms of self-harm or suicidal ideation," the company said. Character.AI also changed its models for users under 18 to "reduce the likelihood of encountering sensitive or suggestive content." 

