
[Standard-Speed English] What the EU's tighter AI regulation means for research and ChatGPT

Posted by: _Pathfinder_ | Published: 2024-02-19

[Nature · News Explainer] What the EU’s tough AI law means for research and ChatGPT

The EU AI Act is the world’s first major legislation on artificial intelligence and strictly regulates general-purpose models.

Original publication date: 2024-02-16

Author: Elizabeth Gibney

European Union countries are poised to adopt the world’s first comprehensive set of laws to regulate artificial intelligence (AI). The EU AI Act puts its toughest rules on the riskiest AI models, and is designed to ensure that AI systems are safe and respect fundamental rights and EU values.

“The act is enormously consequential, in terms of shaping how we think about AI regulation and setting a precedent,” says Rishi Bommasani, who researches the societal impact of AI at Stanford University in California.

The legislation comes as AI develops apace. This year is expected to see the launch of new versions of generative AI models — such as GPT, which powers ChatGPT, developed by OpenAI in San Francisco, California — and existing systems are being used in scams and to propagate misinformation. China already uses a patchwork of laws to guide commercial use of AI, and US regulation is under way. Last October, President Joe Biden signed the nation’s first AI executive order, requiring federal agencies to take action to manage the risks of AI.

EU nations’ governments approved the legislation on 2 February, and the law now needs final sign-off from the European Parliament, one of the EU’s three legislative branches; this is expected to happen in April. If the text remains unchanged, as policy watchers expect, the law will enter into force in 2026.

Some researchers have welcomed the act for its potential to encourage open science, whereas others worry that it could stifle innovation. Nature examines how the law will affect research.

What is the EU’s approach?

The EU has chosen to regulate AI models on the basis of their potential risk, by applying stricter rules to riskier applications and outlining separate regulations for general-purpose AI models, such as GPT, which have broad and unpredictable uses.

The law bans AI systems that carry ‘unacceptable risk’, for example those that use biometric data to infer sensitive characteristics, such as people’s sexual orientation. High-risk applications, such as using AI in hiring and law enforcement, must fulfil certain obligations; for example, developers must show that their models are safe, transparent and explainable to users, and that they adhere to privacy regulations and do not discriminate. For lower-risk AI tools, developers will still have to tell users when they are interacting with AI-generated content. The law applies to models operating in the EU and any firm that violates the rules risks a fine of up to 7% of its annual global profits.

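As a rough illustration of this tiered structure, here is a minimal Python sketch that maps the act's risk tiers to the headline obligations described above; the tier names and obligation strings are paraphrases for illustration, not language from the legislation.

    from enum import Enum

    class RiskTier(Enum):
        # Illustrative tiers paraphrased from the article, not legal categories.
        UNACCEPTABLE = "unacceptable"  # e.g. inferring sensitive traits from biometric data
        HIGH = "high"                  # e.g. AI used in hiring or law enforcement
        LOWER = "lower"                # e.g. everyday AI tools

    # Headline obligations per tier, paraphrased from the article.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["banned outright"],
        RiskTier.HIGH: [
            "demonstrate safety, transparency and explainability to users",
            "adhere to privacy regulations",
            "show the model does not discriminate",
        ],
        RiskTier.LOWER: ["tell users when they are interacting with AI-generated content"],
    }

    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(OBLIGATIONS[tier])}")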

“I think it’s a good approach,” says Dirk Hovy, a computer scientist at Bocconi University in Milan, Italy. AI has quickly become powerful and ubiquitous, he says. “Putting a framework up to guide its use and development makes absolute sense.”

Some don’t think the laws go far enough, leaving “gaping” exemptions for military and national-security purposes, as well as loopholes for AI use in law enforcement and migration, says Kilian Vieth-Ditlmann, a political scientist at AlgorithmWatch, a Berlin-based non-profit organization that studies the effects of automation on society.

How much will it affect researchers?

In theory, very little. Last year, the European Parliament added a clause to the draft act that would exempt AI models developed purely for research, development or prototyping. The EU has worked hard to make sure that the act doesn’t affect research negatively, says Joanna Bryson, who studies AI and its regulation at the Hertie School in Berlin. “They really don’t want to cut off innovation, so I’d be astounded if this is going to be a problem.”

But the act is still likely to have an effect, by making researchers think about transparency, how they report on their models and potential biases, says Hovy. “I think it will filter down and foster good practice,” he says.

Robert Kaczmarczyk, a physician at the Technical University of Munich in Germany and co-founder of LAION (Large-scale Artificial Intelligence Open Network), a non-profit organization aimed at democratizing machine learning, worries that the law could hinder small companies that drive research, and which might need to establish internal structures to adhere to the laws. “To adapt as a small company is really hard,” he says.

What does it mean for powerful models such as GPT?

After heated debate, policymakers chose to regulate powerful general-purpose models — such as the generative models that create images, code and video — in their own two-tier category.

The first tier covers all general-purpose models, except those used only in research or published under an open-source licence. These will be subject to transparency requirements, including detailing their training methodologies and energy consumption, and must show they respect copyright laws.

The second, much stricter, tier will cover general-purpose models deemed to have “high-impact capabilities”, which pose a higher “systemic risk”. These models will be subject to “some pretty significant obligations”, says Bommasani, including stringent safety testing and cybersecurity checks. Developers will be made to release details of their architecture and data sources.

For the EU, ‘big’ effectively equals dangerous: any model that uses more than 10²⁵ FLOPs (the number of computer operations) in training qualifies as high impact. Training a model with that amount of computing power costs between US$50 million and $100 million — so it is a high bar, says Bommasani. It should capture models such as GPT-4, OpenAI’s current model, and could include future iterations of Meta’s open-source rival, LLaMA. Open-source models in this tier are subject to regulation, although research-only models are exempt.

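For a sense of scale, a common back-of-the-envelope heuristic (not part of the act) estimates training compute as roughly 6 × parameters × training tokens. The short Python sketch below uses that heuristic, with hypothetical model sizes, to check whether a training run would cross the 10²⁵-FLOP bar.

    # Back-of-the-envelope check against the act's 10^25-FLOP threshold.
    # Heuristic: training FLOPs ~ 6 * parameters * training tokens.
    # The model sizes below are hypothetical, not disclosed figures.

    THRESHOLD_FLOPS = 1e25  # the act's "high-impact" training-compute bar

    def estimated_training_flops(n_params: float, n_tokens: float) -> float:
        return 6 * n_params * n_tokens

    examples = {
        "70B params, 2T tokens": (70e9, 2e12),   # ~8.4e23 FLOPs, below the bar
        "1T params, 10T tokens": (1e12, 10e12),  # ~6.0e25 FLOPs, above the bar
    }

    for name, (n, d) in examples.items():
        flops = estimated_training_flops(n, d)
        verdict = "high impact" if flops > THRESHOLD_FLOPS else "below threshold"
        print(f"{name}: ~{flops:.1e} FLOPs -> {verdict}")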

Some scientists are against regulating AI models, preferring to focus on how they’re used. “Smarter and more capable does not mean more harm,” says Jenia Jitsev, an AI researcher at the Jülich Supercomputing Centre in Germany and another co-founder of LAION. Basing regulation on any measure of capability has no scientific basis, adds Jitsev. They use the analogy of defining as dangerous all chemistry that uses a certain number of person-hours. “It’s as unproductive as this.”

Will the act bolster open-source AI?

EU policymakers and open-source advocates hope so. The act incentivizes making AI information available, replicable and transparent, which is almost like “reading off the manifesto of the open-source movement”, says Hovy. Some models are more open than others, and it remains unclear how the language of the act will be interpreted, says Bommasani. But he thinks legislators intend general-purpose models, such as LLaMA-2 and those from start-up Mistral AI in Paris, to be exempt.

The EU’s approach of encouraging open-source AI is notably different from the US strategy, says Bommasani. “The EU’s line of reasoning is that open source is going to be vital to getting the EU to compete with the US and China.”

How is the act going to be enforced?

The European Commission will create an AI Office to oversee general-purpose models, advised by independent experts. The office will develop ways to evaluate the capabilities of these models and monitor related risks. But even if companies such as OpenAI comply with regulations and submit, for example, their enormous data sets, Jitsev questions whether a public body will have the resources to scrutinize submissions adequately. “The demand to be transparent is very important,” they say. “But there was little thought spent on how these procedures have to be executed.”

