Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

'VEIL OF IGNORANCE'

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math - where there is only one right answer - implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew the investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.
source: https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/
