US, UK and EU sign on to the Council of Europe’s high-level AI safety treaty
We’re not very close to any specifics on how, exactly, AI regulations will be implemented and enforced, but today a swathe of countries including the U.S., the U.K. and the European Union signed up to a treaty on AI safety laid out by the Council of Europe (COE), an international standards and human rights organization.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law — as the treaty is formally called — is described by the COE as “the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.”
At a meeting today in Vilnius, Lithuania, the treaty was formally opened for signature. Alongside the aforementioned trio of major markets, other signatories include Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and Israel.
The list means the COE’s framework has netted a number of countries where some of the world’s biggest AI companies are either headquartered or are building substantial operations. But perhaps as important are the countries not included so far: none in Asia or the Middle East, and not Russia, for example.
The high-level treaty sets out to focus on how AI intersects with three main areas: human rights, which includes protecting against data misuse and discrimination, and ensuring privacy; protecting democracy; and protecting the “rule of law.” Essentially the third of these commits signing countries to setting up regulators to protect against “AI risks.” (It doesn’t specify what those risks might be, but it’s also a circular requirement referring to the other two main areas it’s addressing.)
The more specific aim of the treaty is as lofty as the areas it hopes to address. “The treaty provides a legal framework covering the entire lifecycle of AI systems,” the COE notes. “It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral.”
(For background: The COE is not a lawmaking entity, but was founded in the wake of World War II with a function to uphold human rights, democracy and Europe’s legal systems. It draws up treaties which are legally binding for its signatories and enforces them; for example, it’s the organization behind the European Court of Human Rights.)
Artificial intelligence regulation has been a hot potato in the world of technology, tossed among a complicated matrix of stakeholders.
Various antitrust, data protection, financial and communications watchdogs — possibly thinking of how they failed to anticipate other technological innovations and problems — have made some early moves to try to frame how they might have a better grip on AI.
The idea seems to be that if AI does represent a mammoth change to how the world operates, and if not watched carefully, not all of those changes may turn out to be for the best, so it’s important to be proactive. However, there is also clearly nervousness among regulators about overstepping the mark and being accused of crimping innovation by acting too early or applying too broad a brush.
AI companies have also jumped in early to proclaim that they, too, are just as interested in what’s come to be described as AI Safety. Cynics describe private interest as regulatory capture; optimists believe that companies need seats at the regulatory table to communicate better about what they are doing and what might be coming next to inform appropriate policies and rulemaking.
Politicians are also ever-present, sometimes backing regulators, but sometimes taking an even more pro-business stance that centers the interests of companies in the name of growing their countries’ economies. (The last U.K. government fell into this AI cheerleading camp.)
That mix has produced a smorgasbord of frameworks and pronouncements, such as those coming out of events like the U.K.’s AI Safety Summit in 2023, the G7-led Hiroshima AI Process, or the resolution adopted by the UN earlier this year. We’ve also seen country-based AI safety institutes established and regional regulations such as the SB 1047 bill in California, the European Union’s AI Act and more.
It sounds like the COE’s treaty is hoping to provide a way for all of these efforts to align.
“The treaty will ensure countries monitor its development and ensure any technology is managed within strict parameters,” the U.K. Ministry of Justice noted in a statement on the signing of the treaty. “Once the treaty is ratified and brought into effect in the U.K., existing laws and measures will be enhanced.”
“We must ensure that the rise of AI upholds our standards, rather than undermining them,” said COE Secretary General Marija Pejčinović Burić in a statement. “The Framework Convention is designed to ensure just that. It is a strong and balanced text — the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives.”
“The Framework Convention is an open treaty with a potentially global reach. I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible,” she added.
While the original framework convention was first negotiated and adopted by the COE’s committee of ministers in May 2024, it will formally enter into force “on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it.”
In other words, countries that signed on Thursday will still individually need to ratify it, and from then it will take another three months before the provisions go into effect.
It’s not clear how long that process might take. The U.K., for example, has said it intends to work on AI legislation but has not put a firm timeline on when a draft bill might be introduced. On the COE framework specifically, it only says that it will have more updates on its implementation “in due course.”