正文翻译
Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.
萨姆·奥特曼表示,催生了ChatGPT的研究策略已经用尽,未来人工智能的进步将需要新的想法。
THE STUNNING CAPABILITIES of ChatGPT, the chatbot from startup OpenAI, have triggered a surge of new interest and investment in artificial intelligence. But late last week, OpenAI’s CEO warned that the research strategy that birthed the bot is played out. It’s unclear exactly where future advances will come from.
初创公司OpenAI的聊天机器人ChatGPT的惊人能力,引发了新一轮对人工智能的兴趣和投资热潮。但就在上周晚些时候,OpenAI的首席执行官警告说,催生该机器人的研究策略已经用尽。目前尚不清楚未来的进展究竟将从何而来。
OpenAI has delivered a series of impressive advances in AI that works with language in recent years by taking existing machine-learning algorithms and scaling them up to previously unimagined size. GPT-4, the latest of those projects, was likely trained using trillions of words of text and many thousands of powerful computer chips. The process cost over $100 million.
近年来,OpenAI通过将现有的机器学习算法扩展到以前难以想象的规模,在自然语言处理的人工智能领域取得了一系列令人印象深刻的进步。这些项目中最新的一个是GPT-4,它很可能是用数万亿文字和数千个强大的计算机芯片进行训练的。这个过程耗资超过1亿美元。
But the company’s CEO, Sam Altman, says further progress will not come from making models bigger. “I think we're at the end of the era where it's going to be these, like, giant, giant models,” he told an audience at an event held at MIT late last week. “We'll make them better in other ways.”
但该公司首席执行官萨姆·奥特曼表示,进一步的进展不会来自制造更大的模型。“我认为我们已经走到了这个时代的尽头,也就是那种超大、超大模型的时代,”他上周晚些时候在麻省理工学院举行的一次活动上对听众说。“我们会用其他方式让模型变得更好。”
Altman’s declaration suggests an unexpected twist in the race to develop and deploy new AI algorithms. Since OpenAI launched ChatGPT in November, Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with using the new breed of chatbot to help with work or personal tasks.
奥特曼的声明表明,在开发和部署新人工智能算法的竞赛中出现了一个意想不到的转折。自从OpenAI在11月推出ChatGPT以来,微软已经使用底层技术为其必应搜索引擎添加了一个聊天机器人,谷歌也推出了一个名为Bard的聊天机器人作为竞争对手。许多人急于尝试使用这种新型聊天机器人来帮助工作或个人任务。
Meanwhile, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI, are throwing enormous resources into building ever larger algorithms in an effort to catch up with OpenAI’s technology. The initial version of ChatGPT was based on a slightly upgraded version of GPT-3, but users can now also access a version powered by the more capable GPT-4.
与此同时,包括Anthropic、AI21、Cohere和Character.AI在内的许多资金充足的初创公司正在投入巨大资源构建越来越大的算法,以努力赶上OpenAI的技术。ChatGPT的最初版本是基于GPT-3的略微升级版本,但现在用户也可以访问由能力更强的GPT-4驱动的版本。
Altman’s statement suggests that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place. In the paper describing GPT-4, OpenAI says its estimates suggest diminishing returns on scaling up model size. Altman said there are also physical limits to how many data centers the company can build and how quickly it can build them.
奥特曼的声明表明,GPT-4可能是OpenAI制造更大模型并输入更多数据的战略中出现的最后一个重大进步。他没有说明可能取代它的将会是哪种研究策略或技术。在描述GPT-4的论文中,OpenAI表示,其估计表明,扩大模型规模的收益正在减少。奥特曼说,对于公司可以建造多少数据中心以及建造速度有多快,也存在物理限制。
Nick Frosst, a cofounder at Cohere who previously worked on AI at Google, says Altman’s feeling that going bigger will not work indefinitely rings true. He, too, believes that progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. “There are lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model,” he says. Frosst says that new AI model designs, or architectures, and further tuning based on human feedback are promising directions that many researchers are already exploring.
Cohere的联合创始人尼克·弗罗斯特曾在谷歌从事人工智能工作,他表示,奥特曼关于一味扩大规模并非长久之计的感觉是正确的。他也认为,Transformer(GPT-4及其竞争对手核心的那类机器学习模型)的进展将来自规模之外。“有很多方法可以让Transformer变得好得多、有用得多,其中很多并不涉及向模型添加参数,”他说。弗罗斯特表示,新的人工智能模型设计(即架构)以及基于人类反馈的进一步调优,是许多研究人员已经在探索的有前景的方向。
Each version of OpenAI’s influential family of language algorithms consists of an artificial neural network, software loosely inspired by the way neurons work together, which is trained to predict the words that should follow a given string of text.
OpenAI有影响力的语言算法家族的每个版本都由一个人工神经网络组成,该软件的灵感来自于神经元协同工作的方式,经过训练可以预测给定文本字符串后应跟随的单词。
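To make that training objective concrete, here is a toy sketch in Python. This is not OpenAI’s code: the corpus is invented, and a simple bigram count model stands in for GPT’s neural network, since both learn which word is most likely to follow a given context.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model trains on trillions of words.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

A neural language model replaces the count table with millions or billions of learned weights, but the objective it is trained on is the same next-word prediction shown here.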
The first of these language models, GPT-2, was announced in 2019. In its largest form, it had 1.5 billion parameters, a measure of the number of adjustable connections between its crude artificial neurons.
这些语言模型中的第一个,GPT-2,于2019年宣布。在其最大形式中,它有15亿个参数,这是其原始人工神经元之间可调节连接数量的度量。
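As a rough illustration of what a parameter count measures, the sketch below tallies the adjustable weights and biases in a hypothetical stack of fully connected layers. The layer sizes are invented; GPT-2 is a Transformer, not this architecture, but "1.5 billion parameters" is the same kind of tally at vastly larger scale.

```python
def count_params(layer_sizes):
    """Parameters of a dense network: weights (in*out) plus biases (out)."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A tiny example network with layers of 784, 256, and 10 units.
print(count_params([784, 256, 10]))  # 203530
```

Every one of those numbers is adjusted during training, which is why parameter count serves as a shorthand for a model’s size and cost.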
At the time, that was extremely large compared to previous systems, thanks in part to OpenAI researchers finding that scaling up made the model more coherent. And the company made GPT-2’s successor, GPT-3, still bigger, with a whopping 175 billion parameters. That system’s broad abilities to generate poems, emails, and other text helped convince other companies and research institutions to push their own AI models to similar and even greater size.
当时,与以往系统相比,这一规模极为庞大,部分原因是OpenAI研究人员发现,扩大规模能让模型生成的文本更加连贯。该公司又让GPT-2的继任者GPT-3变得更大,参数达到惊人的1750亿。该系统生成诗歌、电子邮件和其他文本的广泛能力,促使其他公司和研究机构将自己的人工智能模型推向类似甚至更大的规模。
After ChatGPT debuted in November, meme makers and tech pundits speculated that GPT-4, when it arrived, would be a model of vertigo-inducing size and complexity. Yet when OpenAI finally announced the new artificial intelligence model, the company didn’t disclose how big it is—perhaps because size is no longer all that matters. At the MIT event, Altman was asked if training GPT-4 cost $100 million; he replied, “It’s more than that.”
ChatGPT在11月首次亮相后,表情包制作者和科技评论家纷纷猜测,GPT-4一旦问世,将是一个规模和复杂度都令人眩晕的模型。然而,当OpenAI最终发布这一新的人工智能模型时,公司并没有透露它有多大——也许是因为规模已不再是唯一重要的因素。在麻省理工学院的活动中,奥特曼被问及训练GPT-4是否花费了1亿美元;他回答说:“不止这个数。”
Although OpenAI is keeping GPT-4’s size and inner workings secret, it is likely that some of its intelligence already comes from looking beyond just scale. One possibility is that it used a method called reinforcement learning from human feedback, which was used to enhance ChatGPT. It involves having humans judge the quality of the model’s answers to steer it toward providing responses more likely to be judged as high quality.
尽管OpenAI对GPT-4的规模和内部工作原理保密,但它的部分智能很可能已经来自规模之外的方法。一种可能是它使用了一种名为“基于人类反馈的强化学习”的方法,这种方法曾被用来增强ChatGPT。它让人类评判模型答案的质量,从而引导模型给出更有可能被评为高质量的回答。
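A minimal sketch of that idea follows. Everything here is invented for illustration: each answer is reduced to a single feature score, human preferences are given as pairs, and a one-weight Bradley-Terry-style model stands in for the neural reward model used in practice.

```python
import math

# Invented data: human labelers preferred the first answer in each pair.
preferences = [(0.9, 0.2), (0.8, 0.4), (0.7, 0.1)]  # (chosen, rejected)

w = 0.0  # single reward-model weight, learned from the comparisons
for _ in range(200):
    for chosen, rejected in preferences:
        # Probability the reward model ranks the human-chosen answer
        # above the rejected one (logistic of the score difference).
        p = 1 / (1 + math.exp(-(w * chosen - w * rejected)))
        w += 0.1 * (1 - p) * (chosen - rejected)  # nudge toward agreement

def reward(feature):
    """Score an answer; higher means more likely judged high quality."""
    return w * feature
```

After training, answers resembling those humans preferred score higher, and that reward signal is what steers the language model’s behavior in the full RLHF pipeline.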
The remarkable capabilities of GPT-4 have stunned some experts and sparked debate over the potential for AI to transform the economy but also spread disinformation and eliminate jobs. Some AI experts, tech entrepreneurs including Elon Musk, and scientists recently wrote an open letter calling for a six-month pause on the development of anything more powerful than GPT-4.
GPT-4的卓越能力震撼了一些专家,并引发了关于人工智能可能如何转变经济的辩论,同时也引发了关于散布虚假信息和消除就业岗位的担忧。一些人工智能专家、包括埃隆·马斯克在内的科技企业家和科学家最近写了一封公开信,呼吁暂停开发任何比GPT-4更强大的技术六个月。
At MIT last week, Altman confirmed that his company is not currently developing GPT-5. “An earlier version of the letter claimed OpenAI is training GPT-5 right now,” he said. “We are not, and won't for some time.”
上周在麻省理工学院,奥特曼确认他的公司目前没有在开发GPT-5。“早期版本的公开信声称OpenAI目前正在训练GPT-5,”他说。“但其实我们没有,而且短时间内也不会。”