
An Open Letter from Humanity | Pause Giant AI Experiments for 6 Months

Author: 产品君 | Published: 2023-03-29

At 9:00 a.m. Beijing time on March 29, more than 1,000 scientists published a joint open letter calling for a pause of at least 6 months on the training of AI systems more powerful than GPT-4, formally firing the first shot in humanity's resistance to AI.


Big-name signatories include Elon Musk, Gary Marcus, Y. Bengio, S. Russell, Max Tegmark, V. Krakovna, P. Maes, Grady Booch, Andrew Yang, and Tristan Harris, for a total of 1,125 scientists and entrepreneurs.


If this is not driven by commercial interest, then they must truly have discovered something extraordinary about AI.


Below is the original text of the open letter:

Pause Giant AI Experiments: An Open Letter


We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.


AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.


Some of the notable signatories:

Yoshua Bengio, Professor at the Université de Montréal, Turing Award laureate, deep learning pioneer, and head of the Montreal Institute for Learning Algorithms (Mila)

Stuart Russell, Professor of Computer Science at UC Berkeley, Director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: A Modern Approach

Elon Musk, CEO of SpaceX, Tesla, and Twitter

Steve Wozniak, co-founder of Apple

Yuval Noah Harari, author and Professor at the Hebrew University of Jerusalem

Andrew Yang, co-chair of the Forward Party, 2020 presidential candidate, New York Times bestselling author, and Presidential Ambassador for Global Entrepreneurship

Connor Leahy, CEO of Conjecture

Jaan Tallinn, co-founder of Skype and co-founder of the Centre for the Study of Existential Risk and the Future of Life Institute

Evan Sharp, co-founder of Pinterest

Chris Larsen, co-founder of Ripple

Emad Mostaque, CEO of Stability AI

Valerie Pisano, President and CEO of Mila

John J. Hopfield, Professor Emeritus at Princeton University and inventor of associative neural networks

Rachel Bronson, President of the Bulletin of the Atomic Scientists

Max Tegmark, Professor at the MIT Center for Artificial Intelligence & Fundamental Interactions and President of the Future of Life Institute

Anthony Aguirre, Professor of Physics at the University of California, Santa Cruz, and Executive Director of the Future of Life Institute

Victoria Krakovna, Research Scientist at DeepMind and co-founder of the Future of Life Institute

Emilia Javorsky, physician-scientist and Director at the Future of Life Institute

Sean O'Heigeartaigh, Executive Director of the Centre for the Study of Existential Risk at Cambridge

Tristan Harris, Executive Director of the Center for Humane Technology

Marc Rotenberg, President of the Center for AI and Digital Policy

Nico Miailhe, founder and President of The Future Society (TFS)

Zachary Kenton, Senior Research Scientist at DeepMind

Ramana Kumar, Research Scientist at DeepMind

Gary Marcus, AI researcher and Professor Emeritus at New York University

Steve Omohundro, CEO of Beneficial AI Research

