
《经济学人》双语:人工智能是天使还是魔鬼?(Part 2)

作者:自由英语之路  发布时间:2023-04-25

原文标题:
How to worry wisely about AI
Rapid progress in AI is arousing fear as well as excitement. How concerned should you be?

如何明智地看待人工智能
人工智能的迅猛发展既带来了兴奋,也引发了担忧。你应该有多担忧呢?

Technology and society
科技与社会


[Paragraph 10]

The degree of existential risk posed by AI has been hotly debated. Experts are divided.

人工智能带来的生存风险程度一直备受争议。专家们意见不一。


In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”.

在 2022 年对人工智能研究人员进行的一项调查中,48% 的人认为人工智能至少有 10% 的可能性会产生“极坏影响(例如,人类灭绝)”。


But 25% said the risk was 0%; the median researcher put the risk at 5%.

但也有 25% 的人认为风险为 0%;研究人员给出的风险估计的中位数为 5%。
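“The median researcher”指的是把所有人的估计值排序后处于正中间位置的那位研究者。中位数(median)与平均数(mean)不同,不会被极端回答拉高或拉低。下面用一小段 Python 示意这一点(其中的数字为假设的示意值,并非该调查的真实数据):

```python
import statistics

# 假设的风险估计值(%),仅作示意,并非调查原始数据
# Hypothetical risk estimates, for illustration only
estimates = [0, 0, 0, 0, 5, 5, 5, 10, 10, 50, 90]

# 中位数:排序后位于中间的值 (middle value after sorting)
print(statistics.median(estimates))  # 5

# 平均数会被少数极端回答拉高 (the mean is pulled up by a few extreme answers)
print(statistics.mean(estimates))
```

可以看到,即便有四分之一的人回答 0%、也有人给出很高的估计,中位数仍是 5%,这正是原文想表达的分歧格局。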


The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts.

最可怕的情形是,先进的人工智能通过制造毒药或病毒,或说服人类实施恐怖行为,造成大规模伤害。


It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.
AI不一定怀有邪恶的意图:研究人员担心的是,未来人工智能的目标可能与其人类创造者的目标不一致。


[Paragraph 11]

Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today’s technology.

这些情景不应被轻易否定。但它们都涉及大量猜测,而且需要在今天的技术基础上实现巨大飞跃。


And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future.

而且许多设想假定未来的人工智能能够不受限制地获取能源、资金和算力,但这些在今天都是切实的约束,未来也可以拒绝提供给失控的人工智能。


Moreover, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.)

此外,与其他预测者相比,专家们往往会夸大自己领域的风险。(而正在创办自己的人工智能初创公司的马斯克,自然乐于看到竞争对手停工。)


Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.
实施严格的监管,或者说暂停开发AI,目前似乎是一种过度反应。暂停开发也难以执行。


[Paragraph 12]

Regulation is needed, but for more mundane reasons than saving humanity.

监管确实有必要,但理由比拯救人类要平常得多。


Existing AI systems raise real concerns about bias, privacy and intellectual-property rights.

现有的人工智能系统引起了人们对偏见、隐私和知识产权的真正担忧。


As the technology advances, other problems could become apparent.

随着技术进步,其他问题可能会逐渐显现。


The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.
关键是要平衡人工智能的前景和风险评估,并做好适应的准备。 


[Paragraph 13]

So far governments are taking three different approaches.

到目前为止,各国政府正采取3种不同的策略。


At one end of the spectrum is Britain, which has proposed a “light-touch” approach with no new rules or regulatory bodies, but applies existing regulations to AI systems.

英国处于一个极端,它提出了一种“温和干预”的方法,即没有新规则或监管机构,但会将现有法规应用于人工智能系统。


The aim is to boost investment and turn Britain into an “AI superpower”.

目的是促进投资,将英国变成一个“人工智能超级大国”。


America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.

美国也采取了类似的做法,不过拜登政府现在正在征求公众对人工智能规则的意见。

[Paragraph 14]

The EU is taking a tougher line.

欧盟正在采取更强硬的策略。


Its proposed law categorises different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music-recommendation to self-driving cars.

其拟议的法律根据风险程度对 AI 的不同用途进行分类,并且随着风险程度的增加(例如从音乐推荐到自动驾驶汽车),进行更严格的监控和披露。


Some uses of AI are banned altogether, such as subliminal advertising and remote biometrics. Firms that break the rules will be fined.

人工智能的某些用途被完全禁止,例如潜意识广告和远程生物识别。违反规定的公司将面临罚款。


For some critics, these regulations are too stifling.
在一些批评者看来,这些规定束缚太多。


[Paragraph 15]

But others say an even sterner approach is needed.

但其他人认为需要采取更严厉的措施。


Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release.

政府应该像对待药品一样对待人工智能,在公开发布之前有专门的监管机构对其进行严格的测试和预先批准。


China is doing some of this, requiring firms to register AI products and undergo a security review before release.
中国正在做这方面的工作,要求企业注册人工智能产品,并在发布前接受安全审查。


[Paragraph 16]

What to do? The light-touch approach is unlikely to be enough.

怎么办?温和干预可能不够。


If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new rules.

如果人工智能与汽车、飞机和药品一样是一项重要的技术——有充分的理由相信它是如此——那么与它们一样,人工智能也需要新的规则。


Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible.

因此,欧盟模式最接近正确方向,尽管其分类体系过于繁复,采用基于原则的方法会更灵活。


Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.
强制披露系统如何训练、如何运行以及如何受到监控,并要求接受检查,这与其他行业的类似规则是相当的。


[Paragraph 17]

This could allow for tighter regulation over time, if needed.

这样一来,如果将来有需要,还可以逐步收紧监管。


A dedicated regulator may then seem appropriate; so too may intergovernmental treaties, similar to those that govern nuclear weapons, should plausible evidence emerge of existential risk.

届时,设立专门的监管机构可能是合适的;如果出现人工智能构成生存风险的可信证据,还可以签订类似于管控核武器的政府间条约。


To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.
为了监控这种风险,政府可以仿照粒子物理实验室 CERN 组建一个机构,该机构还可以研究 AI 安全与伦理。在这些领域,企业缺乏动力投入到社会所期望的程度。


[Paragraph 18]

This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully.

这项强大的技术带来了新的风险,但也带来了非凡的机遇。平衡两者意味着要谨慎行事。


A measured approach today can provide the foundations on which further rules can be added in future.

今天采取审慎的策略可以为未来增加更多规则提供基础。


But the time to start building those foundations is now.
但开始打造这些基础的时机就是现在。

(恭喜读完,本篇英语词汇量697/1406左右)
原文出自:2023年4月22日《The Economist》Leaders版块。

精读笔记来源于:自由英语之路

本文翻译整理: Irene

本文编辑校对: Irene
仅供个人英语学习交流使用。

【补充资料】(来自于网络)
温和干预Light-touch Approach又称“轻度监管,宽松政策”,通常是指对某个问题或领域的监管或干预采取较为温和、灵活的方式,以尽可能减少对其自主性和自由度的限制。这种方法通常强调依靠市场力量和自我调节能力,与过度干预相比,其干预力度和程度更小。

欧洲粒子物理学研究中心CERN(Conseil Européen pour la Recherche Nucléaire),是世界上最大的基础科学研究机构之一。其总部位于瑞士日内瓦附近,拥有22个成员国。CERN致力于研究基本粒子的物理学,包括了使用加速器加速带电粒子的能量以及开发探测器来观察反应产生的现象等方面。CERN曾经发现了许多重要的粒子,如W和Z玻色子、夸克、胶子等,并成功地发现了已知物质的基本构成。此外,CERN还是Web技术的诞生地,它发明了万维网技术,为现代信息通讯技术做出了巨大的贡献。


【重点句子】(3个)
Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.
实施严格的监管,或者说暂停开发AI,目前似乎是一种过度反应。暂停开发也难以执行。

Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release.
政府应该像对待药品一样对待人工智能,在公开发布之前有专门的监管机构对其进行严格的测试和预先批准。

This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully.
这项强大的技术带来了新的风险,但也带来了非凡的机遇。平衡两者意味着要谨慎行事。

