「The Economist」Should large AI models be open source? (Second week of November)

Education   2024-11-14 05:10   Shanghai



Why open-source AI models are good for the world

Their critics dwell on the dangers and underestimate the benefits




OPEN INNOVATION lies at the heart of the artificial-intelligence (AI) boom. The neural network transformer—the T in GPT—that underpins OpenAI's chatbot was first published as research by engineers at Google. TensorFlow and PyTorch, used to build those neural networks, were created by Google and Meta, respectively, and shared with the world. Today, some argue that AI is too important and sensitive to be available to everyone, everywhere. Models that are open-source—ie, that make underlying code available to all, to remix and reuse as they please—are often seen as dangerous.
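The openness described here is concrete: the transformer block that Google's engineers published ships as a standard layer in the open-source PyTorch library. Below is a minimal sketch, assuming only that PyTorch is installed; the dimensions are toy values chosen for illustration, not those of any production model.

```python
# Minimal sketch: the transformer block from Google's published research
# is available as an off-the-shelf layer in the open-source PyTorch library.
# All dimensions below are toy values chosen purely for illustration.
import torch
import torch.nn as nn

d_model, n_heads, seq_len, batch = 64, 4, 16, 2  # toy sizes

block = nn.TransformerEncoderLayer(
    d_model=d_model,              # width of each token embedding
    nhead=n_heads,                # number of self-attention heads
    dim_feedforward=4 * d_model,  # hidden width of the MLP sublayer
    batch_first=True,             # inputs shaped (batch, seq, features)
)

tokens = torch.randn(batch, seq_len, d_model)  # stand-in for token embeddings
out = block(tokens)               # self-attention followed by an MLP
print(out.shape)                  # torch.Size([2, 16, 64])
```

Anyone can read, modify, or redistribute that layer's source code, which is the sense in which these tools were "shared with the world".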





Several charges are levelled against open-source AI. One is that it is helping America’s rivals. On November 1st it emerged that researchers in China had taken Llama 2, Meta’s open large language model, and adapted it for military purposes. Another argument against open-source AI is its use by terrorists and criminals, who can strip a model of carefully built safeguards against malicious or harmful activity. Anthropic, a model-maker, has called for urgent regulation, warning about the unique risks of open models, such as their ability to be fine-tuned using data on, say, making a bioweapon.






True, open-source models can be abused, like any other tech. But such thinking puts too much weight on the dangers of open-source AI and too little on the benefits. The information needed to build a bioweapon already exists on the internet and, as Mark Zuckerberg argues, open-source AI done right should help defenders more than attackers. Besides, by some measures, China’s home-grown models are already as good as Meta’s.





Meanwhile, the benefits of open software are plain to see. It underpins the technology sector as a whole, and powers the devices billions of people use every day. The software foundation of the web, the standards of which were released into the public domain by Tim Berners-Lee from CERN, is open-source; so, too, is the Ogg Vorbis compression algorithm used by Spotify to stream music to millions.

Making software free has long helped developers make their code stronger. It has allowed them to prove the trustworthiness of their work, harness vast amounts of volunteer labour and, in some cases, make money by selling tech support to those who use it. Openness should underpin innovation in AI as well. If the technology has as much potential as its backers say, then it is a way to ensure that power is not concentrated in the hands of a few Californian firms.





Closed models will have their place, for uses that are sensitive, or tasks that need to be conducted at the cutting edge. But models that are open or partly open will be crucial, too. The Open Source Initiative, an industry body, defines a model as open-source if you can download it and use it as you want, and if a description of the underlying training data is provided. None of the open models of the big labs, such as Alibaba and Meta, qualifies. But by offering partially open platforms, the labs provide insight into their models, allowing others to learn from, and sometimes build on, their techniques.
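To make the "download it and use it as you want" criterion concrete, here is a minimal sketch assuming the Hugging Face transformers library; the model id is an illustrative example of an openly released model (from Alibaba's Qwen family), not one named by the article.

```python
# Minimal sketch of open weights in practice: download a released model
# and run it locally. Assumes the `transformers` library is installed;
# the model id is an illustrative example of an open-weight release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B"  # example open-weight model; swap as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open innovation lies at the heart of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that, by the Open Source Initiative's definition above, downloadable weights alone do not make a model open-source: a description of the underlying training data is also required, which is exactly where the big labs' models fall short.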





One reason the Open Source Initiative says Meta’s models are not open-source is that access to them is restricted, notably because their use is limited to applications with fewer than 700m monthly users. But Meta may yet find it in its own interest to open up further. The more it does, the more attractive its platform could become to developers, and the more likely that a future superstar application is nurtured on its technology.

Governments, too, should allow open-source AI to thrive, imposing safety regulations uniformly and eschewing restrictions and intellectual-property protections that force research under lock and key. With artificial intelligence, as with a lot of other software, innovation flourishes in the open. 




