See also: Parrots, Paperclips, Safety and Ethics: Why the Artificial Intelligence Debate Sounds Like a Foreign Language
Here is a list of terms used by AI insiders.
AGI — AGI stands for “artificial general intelligence.” As a concept, it refers to an AI far more advanced than anything currently possible, one capable of doing most things as well as or better than most humans, including improving itself.
Example: “To me, AGI is the equivalent of a median human that you could hire as a co-worker,” Sam Altman said at a recent Greylock VC event.
AI ethics — Describes the desire to prevent AI from causing immediate harm. It often focuses on questions such as how AI systems collect and process data, and on the potential for bias in areas such as housing and employment.
AI safety — Describes the longer-term fear that AI will advance so suddenly that a superintelligent AI might harm or even destroy humanity.
Alignment — The practice of tuning an AI model so that it produces the outputs its creators desire. In the short term, alignment refers to software-building and content-moderation practices. But it can also refer to the much larger, and still theoretical, task of ensuring that any AGI would be friendly toward humanity.
For example: “What these systems are aligned to, whose values, what those bounds are, should somehow be set by society at large and by governments. Creating that, whether it takes the form of an AI constitution or something else, has to come very broadly from society,” Sam Altman told a Senate hearing last week.
Emergent behavior — The technical term for when some AI models display abilities that were not originally intended. It can also describe surprising results from AI tools deployed widely to the public.
For example: “However, even as a first step, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and it exhibits emergent behaviors and capabilities whose sources and mechanisms are difficult to pinpoint at this time,” Microsoft researchers wrote in “Sparks of Artificial General Intelligence.”
Fast takeoff or hard takeoff — A phrase suggesting that if someone succeeds in building an AGI, it will already be too late to save humanity.
Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” OpenAI CEO Sam Altman wrote in a blog post.
Foom — Also known as “hard takeoff.” It’s an onomatopoeia, and it has also been described in several blog posts and essays as an acronym for “Fast Onset of Overwhelming Mastery.”
For example: “You sound like you believe in the ridiculous hard-takeoff ‘foom’ scenario and don’t quite understand how any of this works,” tweeted Yann LeCun, Meta’s chief AI scientist.
GPUs — The chips used to train AI models and run inference; they are descendants of the chips used to play advanced computer games. The most commonly used model at the moment is Nvidia’s A100. (A short code sketch follows the example below.)
Example: From Stability AI founder Emad Mostaque:
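To make the term concrete, here is a minimal sketch of putting work onto a GPU, assuming the PyTorch library is installed; the device check and matrix sizes are purely illustrative.

```python
# Minimal sketch: run a computation on a GPU with PyTorch, if one is available.
import torch

# Use the GPU ("cuda") when present; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Two arbitrary matrices; the sizes here are illustrative only.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)

# The matrix multiplication runs on whichever device the tensors live on.
c = a @ b
print(f"Computed a {tuple(c.shape)} result on {device}")
```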
Guardrails — The software and policies that big tech companies are now building around AI models to ensure they don’t leak data or generate disturbing content, which is often called “going off the rails.” It can also refer to specific applications that keep the AI from going off topic, like Nvidia’s “NeMo Guardrails” product. (A toy sketch follows the example below.)
Example: “This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” Christina Montgomery, the chair of IBM’s AI ethics board and a vice president at the company, told Congress this week.
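As a toy illustration of the idea (this is not Nvidia’s NeMo Guardrails or any real product’s API), a guardrail can be as simple as checking a model’s draft reply against a policy before it reaches the user; the blocked topics and refusal message below are hypothetical.

```python
# Toy guardrail sketch: screen a model's draft reply against a policy list
# before showing it to the user. The topics here are hypothetical; real
# guardrail systems are far more sophisticated.
BLOCKED_TOPICS = ["weapons", "self-harm"]  # hypothetical policy list

def apply_guardrail(draft_reply: str) -> str:
    """Return the draft reply, or a refusal if it touches a blocked topic."""
    lowered = draft_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return draft_reply

print(apply_guardrail("Here is a recipe for banana bread."))     # passes through
print(apply_guardrail("Here is how to build weapons at home."))  # refused
```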
Inference — The act of using an AI model to make predictions or to generate text, images, or other content. Inference can require significant computing power. (A brief code sketch follows the example below.)
For example: “The problem with inference is if the workload spikes very rapidly, which is what happened to ChatGPT. It went to like a million users in five days. There is no way your GPU capacity can keep up with that,” Sid Sheth, founder of D-Matrix, previously told CNBC.
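For a concrete sense of what inference looks like in code, here is a minimal sketch using the Hugging Face transformers library; the small “gpt2” model is chosen only because it is freely available, and the prompt is arbitrary.

```python
# Minimal inference sketch: load a small pretrained language model and
# generate text from a prompt (requires the "transformers" library).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Each call like this is one inference request; at ChatGPT's scale, millions
# of such requests are what strain GPU capacity.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```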
Large language model — The kind of AI model that powers ChatGPT and Google’s new generative AI features. Its defining feature is that it uses terabytes of data to find the statistical relationships between words, which is how it produces human-like text. (A toy sketch follows the example below.)
For example: Google’s latest large language model, announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks, CNBC reported earlier this week.
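As a toy-scale illustration of the statistical idea, the sketch below counts which word tends to follow which in a tiny corpus and uses those counts to predict the next word; real large language models learn far richer relationships with neural networks trained on terabytes of text.

```python
# Toy sketch of the statistical idea behind language models: count which
# word follows which, then predict the most likely next word from the counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Bigram counts: how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

# Predict the most likely word to follow "the".
prediction = next_word_counts["the"].most_common(1)[0][0]
print(f"Most likely word after 'the': {prediction}")
```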
Paperclips — An important symbol for AI safety proponents because it captures the chance that an AGI could destroy humanity. It refers to a thought experiment published by philosopher Nick Bostrom about a “superintelligence” given the task of making as many paperclips as possible, which decides to turn all humans, the Earth, and ever-greater portions of the universe into paperclips. OpenAI’s logo is a reference to this tale.
Example: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,” Bostrom wrote in his thought experiment.
Singularity — An older term that isn’t used often today, but it refers to the moment that technological change becomes self-reinforcing, or the moment an AGI is created. It’s a metaphor: literally, a singularity is the point of a black hole with infinite density.
Example: “The advent of artificial general intelligence is called the singularity because it’s very difficult to predict what will happen after that,” Tesla CEO Elon Musk said in an interview with CNBC this week.