Source: www.nytimes.com
Original article: A.I. Pioneers Call for Protections Against ‘Catastrophic Risks’
Date: 2024-09-16

A.I. Pioneers Call for a Global Oversight Mechanism

A group of prominent artificial intelligence experts has issued a joint statement urging governments to adopt oversight measures against the catastrophic risks the fast-developing technology could pose. The technology is advancing so quickly, they argue, that individual companies and governments cannot decide on their own how to oversee and manage it.

Yoshua Bengio, one of the group's founding members, said the meetings are meant to foster international exchange and cooperation, and to help close the "Pandora's box" opened by the researchers' own work. Fu Hongyu, director of A.I. governance at Alibaba's research institute, AliResearch, who did not take part in the dialogue, voiced a similar view: regulating A.I. will require collaboration.

Andrew Yao, scientists from several of China's leading A.I. research institutions, and Geoffrey Hinton (who joined remotely) took part in the discussion. The meeting was held in Venice, at a building owned by the billionaire philanthropist Nicolas Berggruen, to explore how to avert catastrophic outcomes from artificial intelligence.

As competition between the United States and China in technology intensifies, A.I. safety has drawn growing attention. In a separate, broader government initiative, representatives from 28 countries signed a declaration in Britain last November agreeing to cooperate on evaluating the risks of artificial intelligence, though such gatherings have stopped short of setting specific policy goals.


Original article excerpt:

Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology.

The release of ChatGPT and a string of similar services that can create text and images on command have shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it.

In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that “loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.”

If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University.

“If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?” Dr. Hadfield said.

On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by a nonprofit research group in the United States called Far.AI.

Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors.

The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.

Scientists from the United States, China, Britain, Singapore, Canada and elsewhere signed the statement.

Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China’s top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing.

The group also included scientists from several of China’s leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.

Their latest gathering in Venice took place at a building owned by the billionaire philanthropist Nicolas Berggruen. The president of the Berggruen Institute think tank, Dawn Nakagawa, participated in the meeting and signed the statement released on Monday.

The meetings are a rare venue for engagement between Chinese and Western scientists at a time when the United States and China are locked in a tense competition for technological primacy. In recent months, Chinese companies have unveiled technology that rivals the leading American A.I. systems.

Government officials in both China and the United States have made artificial intelligence a priority in the past year. In July, a Chinese Communist Party conclave that takes place every five years called for a system to regulate A.I. safety. Last week, an influential technical standards group in China published an A.I. safety framework.

Last October, President Biden signed an executive order that required companies to report to the federal government about the risks that their A.I. systems could pose, like their ability to create weapons of mass destruction or potential to be used by terrorists.

President Biden and China’s leader, Xi Jinping, agreed when they met last year that officials from both countries should hold talks on A.I. safety. The first took place in Geneva in May.

In a broader government initiative, representatives from 28 countries signed a declaration in Britain last November, agreeing to cooperate on evaluating the risks of artificial intelligence. They met again in Seoul in May. But these gatherings have stopped short of setting specific policy goals.

Distrust between the United States and China adds to the difficulty of achieving alignment.

“Both countries are hugely suspicious of each other’s intentions,” said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, who was not part of the dialogue. “They’re worried that if they pump the brakes because of safety concerns, that will allow the other to zoom ahead,” Mr. Sheehan said. “That suspicion is just going to be baked in.”

The scientists who met in Venice this month said their conversations were important because scientific exchange is shrinking amid the competition between the two geopolitical superpowers.

In an interview, Dr. Bengio, one of the founding members of the group, cited talks between American and Soviet scientists at the height of the Cold War that helped bring about coordination to avert nuclear catastrophe. In both cases, the scientists involved felt an obligation to help close the Pandora’s box opened by their research.

Technology is changing so quickly that it is difficult for individual companies and governments to decide how to approach it, and collaboration is crucial, said Fu Hongyu, the director of A.I. governance at Alibaba’s research institute, AliResearch, who did not participate in the dialogue.

“It’s not like regulating a mature technology,” Mr. Fu said. “Nobody knows what the future of A.I. looks like.”
