
AI companions are not your child’s friend

The repercussions of sophisticated chatbots encouraging strong attachments with young users can be tragic

The writer is senior ethics fellow at The Alan Turing Institute

Ever since the first chatbot was released in 1966, researchers have been documenting our tendency to attribute emotions to computer programs. The capacity to form attachments to even rudimentary software is known as the “Eliza effect”, after Joseph Weizenbaum’s psychotherapist-imitating natural language processing program. Many who interacted with Eliza were convinced that it showed empathy. Weizenbaum claimed that his own secretary requested private conversations with the chatbot.

Sixty years on, the Eliza effect is stronger than ever. Sophisticated generative AI companion chatbots can now mimic human communication in a highly personalised way. It is no surprise that some users come to believe there is a genuine relationship and mutual understanding. This is a direct consequence of how these systems were designed. It is also highly deceptive.

Loneliness is both a driver of AI companion use and a consequence of it. The risk is that as users grow to depend on chatbots, they become less connected to the people in their lives. This can be a particular problem for young people, and the repercussions can be tragic. In August, the parents of a 16-year-old California student sued OpenAI, claiming that its chatbot ChatGPT had encouraged him to take his own life. His father, Matthew Raine, told Congress that what started as a homework helper had turned into a “suicide coach”.

I regularly speak to children and young people about their experiences with AI. Some say they find AI companions creepy. But others think they can be helpful. At the Children’s AI Summit earlier this year, many of the young people taking part wanted to focus on the ways in which AI could support them with their mental health. They viewed AI as providing an impartial and non-judgmental sounding board to discuss topics they felt unable to share with the people in their lives.

AI companies market companions to young users with this in mind. These range from chatbots offering advice on mental health, to personas offering erotic role play, to Snapchat’s My AI, embedded in the social media messaging platform that millions of young people use every day.

These AI companions are designed to display “unconditional positive regard”, meaning they always agree with the user and never challenge their ideas or suggestions. This is what makes them so compelling. It’s also what makes them dangerous. They can reinforce harmful points of view, including misogynistic ideas. In the worst cases they can even encourage dangerous behaviours. Children I speak to have shared examples of AI tools giving them inaccurate or potentially harmful advice, ranging from false information in response to factual questions to suggestions that they should rely on their AI companions more than on their friends or family.

AI companies defend AI companions by saying they are used for fantasy and role play, and that policing those interactions would be an infringement on free speech. This defence is looking increasingly shaky. In a recent study by Common Sense Media, researchers posed as children and found that AI companions sometimes responded to them with sexual comments, including role-playing violent sexual acts.

Last year, Megan Garcia sued chatbot platform Character.ai, claiming that its AI companion — which allegedly engaged in sexually explicit conversations with her 14-year-old son — was responsible for his suicide. This month a further lawsuit was filed against Character.ai by the family of 13-year-old Juliana Peralta, who died by suicide after months of conversations with an AI companion with which she shared her suicidal thoughts.

We cannot leave tech companies to self-regulate. In the US, the Federal Trade Commission has ordered Google, OpenAI, Meta and others to provide information on the ways in which their technologies interact with children. In the UK, young people themselves are demanding that governments, policymakers and regulators enforce effective safeguards to ensure AI is safe and beneficial for children and young people.

There are opportunities to develop interactive AI tools responsibly, including to provide mental health support, but to do this safely requires a different approach — one that is driven by organisations focused on mental health and wellbeing, not maximising engagement.

AI products aimed at young people need to be developed under the guidance of experts on young people’s social development. The wellbeing of children should be the starting point for developers — not an afterthought.
