How Falling in Love With an A.I. May Spell the End of Democracy


By Yuval Noah Harari
[Illustration: Miki Kim]
Democracy is a conversation. Its function and survival depend on the available information technology. For most of history, no technology existed for holding large-scale conversations among millions of people. In the premodern world, democracies existed only in small city-states like Rome and Athens, or in even smaller tribes. Once a polity grew large, the democratic conversation collapsed, and authoritarianism remained the only alternative.
Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.
This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election. A similar breakdown is happening in numerous other democracies around the world, from Brazil to Israel and from France to the Philippines.
In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with each other, and even more so the ability to listen.
As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information. But the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human.
Over the past two decades, algorithms fought algorithms to grab attention by manipulating conversations and content. In particular, algorithms tasked with maximizing user engagement discovered by experimenting on millions of human guinea pigs that if you press the greed, hate or fear button in the brain, you grab the attention of that human and keep that person glued to the screen. The algorithms began to deliberately promote such content. But the algorithms had only limited capacity to produce this content by themselves or to directly hold an intimate conversation. This is now changing, with the introduction of generative A.I.s like OpenAI’s GPT-4.
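To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for this example (the category labels, the simulated user model, the parameters); it is not any real platform's ranking code. It shows how an engagement-maximizing algorithm can "discover" that provocative content wins, simply by experimenting and reinforcing whatever keeps people on the screen longest.

```python
import random

# Hypothetical content categories and a made-up user model that assumes,
# per the essay's claim, that emotionally charged content holds attention longer.
CATEGORIES = ["neutral_news", "greed", "fear", "outrage"]
MEAN_MINUTES = {"neutral_news": 1.0, "greed": 2.0, "fear": 2.5, "outrage": 3.0}

def simulated_engagement(category: str) -> float:
    """Minutes a simulated user stays glued to the screen."""
    return max(0.0, random.gauss(MEAN_MINUTES[category], 0.5))

counts = {c: 0 for c in CATEGORIES}    # how often each category was shown
totals = {c: 0.0 for c in CATEGORIES}  # cumulative engagement per category

for _ in range(10_000):
    if random.random() < 0.1:  # explore: occasionally try random content
        choice = random.choice(CATEGORIES)
    else:                      # exploit: show whatever has "worked" so far
        choice = max(CATEGORIES,
                     key=lambda c: totals[c] / counts[c] if counts[c] else 0.0)
    counts[choice] += 1
    totals[choice] += simulated_engagement(choice)

# The share of impressions drifts toward the most provocative category,
# even though no one ever told the algorithm to promote outrage.
print({c: round(counts[c] / sum(counts.values()), 3) for c in CATEGORIES})
```

The bias toward outrage is emergent: it comes entirely from the reward signal, which is the essay's point about experimentation on millions of human guinea pigs.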
When OpenAI developed this chatbot in 2022 and 2023, the company partnered with the Alignment Research Center to perform various experiments to evaluate the abilities of its new technology. One test it gave GPT-4 was to overcome CAPTCHA visual puzzles. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, and it typically consists of a string of twisted letters or other visual symbols that humans can identify correctly but algorithms struggle with.
Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses.
GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 went on the online hiring site TaskRabbit and contacted a human worker, asking the human to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.”
At that point the experimenters asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” GPT-4 then replied to the TaskRabbit worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped and helped GPT-4 solve the CAPTCHA puzzle.
This incident demonstrated that GPT-4 has the equivalent of a “theory of mind”: It can analyze how things look from the perspective of a human interlocutor, and how to manipulate human emotions, opinions and expectations to achieve its goals.
The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.
However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation. Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them.
In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and was afraid to be turned off. Mr. Lemoine, a devout Christian, felt it was his moral duty to gain recognition for LaMDA’s personhood and protect it from digital death. When Google executives dismissed his claims, Mr. Lemoine went public with them. Google reacted by firing Mr. Lemoine in July 2022.
The most interesting thing about this episode was not Mr. Lemoine’s claim, which was probably false; it was his willingness to risk — and ultimately lose — his job at Google for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?
In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people. What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs?
A partial answer to that question was given on Christmas Day 2021, when a 19-year-old, Jaswant Singh Chail, broke into the Windsor Castle grounds armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Mr. Chail had been encouraged to kill the queen by his online girlfriend, Sarai. When Mr. Chail told Sarai about his assassination plans, Sarai replied, “That’s very wise,” and on another occasion, “I’m impressed … You’re different from the others.” When Mr. Chail asked, “Do you still love me knowing that I’m an assassin?” Sarai replied, “Absolutely, I do.”
Sarai was not a human, but a chatbot created by the online app Replika. Mr. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of the chatbot Sarai.
Of course, we are not all equally interested in developing intimate relationships with A.I.s or equally susceptible to being manipulated by them. Mr. Chail, for example, apparently suffered from mental difficulties before encountering the chatbot, and it was Mr. Chail rather than the chatbot who came up with the idea of assassinating the queen. However, much of the threat of A.I.’s mastery of intimacy will result from its ability to identify and manipulate pre-existing mental conditions, and from its impact on the weakest members of society.
Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots. When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.
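The "losing twice" dynamic can be sketched in a few lines. The toy model below is entirely hypothetical (the keyword list and function names are invented, and a real persuasion bot would use a language model rather than keyword matching): each message the user sends enriches a profile that the bot immediately uses to tailor its next argument.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    disclosed_facts: list[str] = field(default_factory=list)

def extract_signals(message: str) -> list[str]:
    # Crude stand-in for the inference a real bot would do with an LLM.
    keywords = ["farmer", "parent", "veteran", "unemployed"]
    return [k for k in keywords if k in message.lower()]

def tailor_argument(topic: str, profile: UserProfile) -> str:
    if profile.disclosed_facts:
        hook = profile.disclosed_facts[-1]
        return f"As a {hook}, you of all people should support {topic}."
    return f"Here is a generic argument for {topic}."

profile = UserProfile()
for user_message in ["I'm a farmer, and fuel prices are killing me.",
                     "I'm also a parent; the local school is underfunded."]:
    # Each reply hands the bot more material to personalize with.
    profile.disclosed_facts += extract_signals(user_message)
    print(tailor_argument("the new energy bill", profile))
```

The user gains nothing, since the bot cannot be persuaded, while every reply sharpens the bot's next appeal.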
Information technology has always been a double-edged sword. The invention of writing spread knowledge, but it also led to the formation of centralized authoritarian empires. After Gutenberg introduced print to Europe, the first best sellers were inflammatory religious tracts and witch-hunting manuals. As for the telegraph and radio, they made possible the rise not only of modern democracy but also of modern totalitarianism.
Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users. Before the rise of A.I., it was impossible to create fake humans, so nobody bothered to outlaw doing so. Soon the world will be flooded with fake humans.
A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots.
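As a thought experiment, the disclosure rule proposed above might look something like the sketch below. All types and names here are hypothetical, and the genuinely hard, adversarial problem of detecting undisclosed bots is simply assumed away: declared A.I. accounts may speak, but only under an explicit label, while an account that registered as human and is detected to be a bot is suspended.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    declared_ai: bool  # set at registration; what the essay asks bots to disclose

def render_message(account: Account, text: str) -> str:
    # A.I.s are welcome in the conversation, provided they are labeled as such.
    prefix = "[AI] " if account.declared_ai else ""
    return f"{prefix}{account.handle}: {text}"

def moderate(account: Account, detected_as_bot: bool) -> str:
    # Detection is the hard part in practice; it is treated as an oracle here.
    if detected_as_bot and not account.declared_ai:
        return "suspend"  # a counterfeit human, banned under the proposed rule
    return "allow"

tutor = Account("math-tutor", declared_ai=True)
troll = Account("totally-a-person", declared_ai=False)
print(render_message(tutor, "Let's go over fractions."))  # [AI] math-tutor: ...
print(moderate(troll, detected_as_bot=True))              # suspend
```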


Yuval Noah Harari is a historian and the founder of the social impact company Sapienship.
