Artificial intelligence could pose existential risks, and governments need to know how to keep the technology from being “exploited by evil people,” former Google CEO Eric Schmidt warned Wednesday.
The future of AI is at the center of debate among technologists and policymakers as they consider what the technology will look like and how it should be regulated.
ChatGPT, the chatbot that made headlines last year, undoubtedly raised awareness of artificial intelligence even further as major companies around the world launched competing products to showcase their AI capabilities.
At the Wall Street Journal CEO Council Summit in London, Schmidt said he fears AI is an “existential risk.”
“And existential risk is defined as very many, many, many people being harmed or killed,” Schmidt said.
“There are scenarios where these systems will be able to detect zero-day exploits in cyberattacks very soon, but not today. You can solve problems or discover new kinds of biology. Now, this is fiction today, but the reasoning is probably true. And when that happens, we want to be prepared to know how to keep these things from being exploited by evil people.”
A zero-day exploit is a previously unknown security vulnerability in software or systems that hackers can attack before developers have a chance to patch it.
Schmidt, who served as Google’s chief executive officer from 2001 to 2011, did not offer a firm view on how AI should be regulated, calling it a “broader issue for society.” He said, however, that it was unlikely the United States would create a new regulator dedicated to AI.
Schmidt isn’t the first major tech insider to warn about the risks of AI.
Sam Altman, CEO of OpenAI, which developed ChatGPT, admitted in March that he was “a little scared” of artificial intelligence, saying he was concerned about authoritarian governments developing the technology.
Tesla CEO Elon Musk has said in the past that he believes AI is one of the “biggest risks” to civilization.
Even at Google itself, Alphabet CEO Sundar Pichai, who recently oversaw the launch of the company’s own chatbot, Bard, said the technology would “impact every product from every company” and that society needs to be prepared for the changes it will bring.
Schmidt served on the U.S. National Security Commission on Artificial Intelligence, which in 2019 began a review of the technology, including potential regulatory frameworks. The commission released its report in 2021, warning that the United States was ill-prepared for the AI era.
