General purpose AI could lead to array of new risks, experts say in report ahead of AI summit

LONDON (AP) — Advanced artificial intelligence systems have the potential to create extreme new risks, such as fueling widespread job losses, enabling terrorism or running amok, experts said in a first-of-its-kind international report Wednesday cataloging the range of dangers posed by the technology.

The International Scientific Report on the Safety of Advanced AI is being released ahead of a major AI summit in Paris next month. The paper is backed by 30 countries including the U.S. and China, marking rare cooperation between the two countries as they battle over AI supremacy. That rivalry was highlighted this week when Chinese startup DeepSeek stunned the world with its budget chatbot, built in spite of U.S. export controls on advanced chips to the country.

The report by a group of independent experts is a “synthesis” of existing research intended to help guide officials working on drawing up guardrails for the rapidly advancing technology, Yoshua Bengio, a prominent AI scientist who led the study, told the Associated Press in an interview.

“The stakes are high,” the report says, noting that while a few years ago the best AI systems could barely spit out a coherent paragraph, now they can write computer programs, generate realistic images and hold extended conversations.

While some AI harms are already widely known, such as deepfakes, scams and biased results, the report said that “as general-purpose AI becomes more capable, evidence of additional risks is gradually emerging” and risk management techniques are only in their early stages.

It comes amid warnings this week about artificial intelligence from the Vatican and the group behind the Doomsday Clock.

The report focuses on general-purpose AI, typified by chatbots such as OpenAI’s ChatGPT that can carry out many different kinds of tasks. The risks fall into three categories: malicious use, malfunctions and widespread “systemic” risks.

Bengio, who with two other AI pioneers won computer science’s top prize in 2019, said the 100 experts who came together on the report don’t all agree on what to expect from AI in the future. Among the biggest disagreements within the AI research community is the timing of when the fast-developing technology will surpass human capabilities across a variety of tasks and what that will mean.

“They disagree also about the scenarios,” Bengio said. “Of course, nobody has a crystal ball. Some scenarios are very beneficial. Some are terrifying. I think it’s really important for policymakers and the public to take stock of that uncertainty.”

Researchers delved into the details surrounding possible dangers. AI makes it easier, for example, to learn how to create biological or chemical weapons because AI models can provide step-by-step plans. But it’s “unclear how well they capture the practical challenges” of weaponizing and delivering the agents, the report said.

General-purpose AI is also likely to transform a range of jobs and “displace workers,” the report says, noting that some researchers believe it could create more jobs than it takes away, while others think it will drive down wages or employment rates, though there’s plenty of uncertainty over how it will play out.

AI systems could also run out of control, either because they actively undermine human oversight or humans pay less attention, the report said.

However, a raft of factors make it hard to manage the risks, including AI developers knowing little about how their models work, the authors said.

The paper was commissioned at an inaugural global summit on AI safety hosted by Britain in November 2023, where nations agreed to work together to contain potentially “catastrophic risks.” At a follow-up meeting hosted by South Korea last year, AI companies pledged to develop AI safety while world leaders backed setting up a network of public AI safety institutes.

The report, also backed by the United Nations and the European Union, is meant to weather changes in governments, such as the recent presidential transition in the U.S., leaving it up to each country to choose how it responds to AI risks. President Donald Trump rescinded former President Joe Biden’s AI safety policies on his first day in office, and has since directed his new administration to craft its own approach. But Trump hasn’t made any move to disband the AI Safety Institute that Biden formed last year, part of a growing international network of such centers.

World leaders, tech bosses and civil society are expected to convene again at the Paris AI Action Summit on Feb. 10-11. French officials have said countries will sign a “common declaration” on AI development, and agree to a pledge on sustainable development of the technology.

Bengio said the report’s aim was not to “propose a particular way to evaluate systems or anything.” The authors stayed away from prioritizing particular risks or making specific policy recommendations. Instead they laid out what the scientific literature on AI says “in a way that’s digestible by policymakers.”

“We need to better understand the systems we’re building and the risks that come with them so that we can take these better decisions in the future,” he said.

__

AP Technology Writer Matt O’Brien in Providence, Rhode Island, contributed to this report.

By KELVIN CHAN
AP Business Writer