OpenAI CEO and co-founder Sam Altman speaks at a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday, May 16, 2023, as Congress debates the potential of products like ChatGPT, the pitfalls of artificial intelligence, and questions about the future of the creative industry and the ability to tell fact from fiction.
Eric Lee | Bloomberg | Getty Images
Last week, OpenAI CEO Sam Altman charmed a room full of Washington, D.C., politicians over dinner, then spent nearly three hours testifying before a Senate panel about the potential risks of artificial intelligence.
After the hearing, he summed up his position on AI regulation using terms that are not well known to the public.
“AGI safety is really important and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”
“AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean AI that is significantly more advanced than anything currently possible, one that can do most things as well as or better than most humans, including improving itself.
“Frontier models” is a way to describe the AI systems that are the most expensive to build and that analyze the most data. Large language models like OpenAI’s GPT-4 are frontier models, as opposed to smaller AI models built for specific tasks, such as identifying cats in photos.
Most agree that as the pace of development accelerates, we need laws governing AI.
“Machine learning and deep learning have developed very rapidly over the past decade or so,” said My T. Thai, a computer science professor at the University of Florida. “The worry is that we’re racing toward ever more powerful systems that we don’t fully understand, and that we can’t anticipate what they can do.”
But the language surrounding this debate reveals two big camps among academics, politicians, and the tech industry. Some people are more worried about what they call “AI safety.” The other camp is concerned with what it calls “AI ethics.”
When Altman addressed Congress, he mostly avoided jargon, but his tweet suggested he’s primarily concerned with AI safety, a stance shared by many industry leaders at companies like OpenAI, Google DeepMind, and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable power. This camp believes urgent attention from governments is needed to regulate development and prevent the premature end of humanity, an effort similar to nuclear nonproliferation.
“I’m glad to see so many people starting to think seriously about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent program for safety could achieve today.”
But much of the discussion in Congress and at the White House about regulation has taken place through the lens of AI ethics, which focuses on current harms.
From this perspective, governments should require transparency about how AI systems collect and use data, restrict their use in areas subject to anti-discrimination law such as housing and employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.
This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes companies working on these technologies should have an “AI ethics” point of contact.
“We need clear guidance on the categories of AI end-uses and AI-supported activities that are inherently riskier,” Montgomery told Congress.
How to understand AI jargon like an insider
It should come as no surprise that the debate around AI has developed its own jargon; the field began, after all, as a technical academic discipline.
Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models first need to be built, through a data-analysis process called “training.”
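For a rough sense of what “inference” looks like in practice, here is a minimal sketch. It uses the open-source Hugging Face Transformers library and the small GPT-2 model, neither of which is mentioned above; they stand in for the far larger frontier models under discussion.

```python
# A minimal illustration of "inference": asking an already-trained language
# model to predict a statistically likely continuation of a prompt.
# Hugging Face's `transformers` library and the small GPT-2 model are used
# here purely as stand-ins for far larger frontier models such as GPT-4.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The expensive "training" step happened long before this script runs;
# the call below only performs inference with the finished model.
result = generator("Artificial general intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```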
But other terms, especially those used by AI safety proponents, are more cultural in nature, often referring to shared references and insider jokes.
For example, AI safety people might say they’re worried about being turned into paper clips. That refers to a thought experiment popularized by philosopher Nick Bostrom, in which a super-powerful AI (a “superintelligence”) is given the mission of making as many paper clips as possible and logically decides to kill humans and make paper clips out of their remains.
OpenAI’s logo is inspired by this tale, and the company has made paper clips in the shape of its logo.
Another AI safety concept is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds in building an AGI, it will already be too late to save humanity.
Sometimes, this idea is expressed with an onomatopoeia, “foom,” especially among critics of the concept.
“It’s as if you believe in a ridiculous hard take-off ‘foom’ scenario and don’t quite understand how anything works,” tweeted Yann LeCun, the head of AI at Meta and a skeptic of AGI claims, during a recent debate on social media.
AI ethics also has its own jargon.
When describing the limitations of current LLM systems, which can’t understand meaning and merely produce human-seeming language, AI ethicists often compare them to “stochastic parrots.”
The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce text that looks realistic, the software doesn’t understand the concepts behind the language, like a parrot.
When these LLMs make up incorrect facts in their responses, they’re said to be “hallucinating.”
One topic IBM’s Montgomery pressed at the hearing was “explainability” in AI results. Without it, researchers and practitioners can’t pinpoint the exact numbers and path of operations a large AI model uses to arrive at its output, which can mask the biases inherent in the system.
“You have to have explainability around the algorithm,” said Adnan Masood, an AI architect at UST-Global. “Previously, you could look at a classical algorithm and see why it was making a particular decision.”
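A toy illustration of the contrast Masood describes: with a classical algorithm such as logistic regression, the learned weights can be read off directly to see what drives a decision. The scikit-learn library and this example are illustrative assumptions, not anything cited in the article.

```python
# Illustration of "explainability" in a classical algorithm: logistic
# regression exposes one weight per input feature, so you can see which
# features push a prediction and how strongly. Large "black box" models
# offer no such direct view. (scikit-learn is used here purely for
# illustration; it is not mentioned in the article.)
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Print the five most influential features and their learned weights.
weights = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(X.columns, weights), key=lambda pair: -abs(pair[1]))
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.3f}")
```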
Another important term is “guardrails,” the software and policies that Big Tech companies are now building around AI models to ensure they don’t leak data or produce disturbing content, which is often called “going off the rails.”
It can also refer to specific applications that keep AI software from straying off topic, like Nvidia’s “NeMo Guardrails” product.
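As a deliberately simplified sketch of the idea, and not Nvidia’s product or any company’s actual implementation, a guardrail can be as basic as a policy check wrapped around a model’s input and output. The blocked-topic list and the generate() stub below are purely hypothetical.

```python
# A deliberately simplified sketch of a "guardrail": policy checks wrapped
# around a model's input and output. Real systems, including Nvidia's NeMo
# Guardrails, are far more sophisticated; the topic list and generate() stub
# here are purely hypothetical.
BLOCKED_TOPICS = ("weapons", "self-harm", "credit card number")

def generate(prompt: str) -> str:
    # Stand-in for a call to an actual large language model.
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Check the user's request before it ever reaches the model.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    response = generate(prompt)
    # Check the model's answer before it reaches the user.
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't share that."
    return response

print(guarded_generate("Tell me a joke about parrots."))
```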
“Our AI ethics board plays a critical role in overseeing our internal AI governance process,” Montgomery said this week, “creating reasonable guardrails so we can introduce technology into the world in a responsible and safe manner.”
In some cases, these terms can have multiple meanings, as with “emergent behavior.”
A recent paper from Microsoft Research, “Sparks of Artificial General Intelligence,” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.
But the term can also describe what happens when simple changes are made at very large scale, like the patterns birds form when flying in flocks, or, in AI’s case, what happens when a ChatGPT-like product is used by millions of people, such as widespread spam or disinformation.
