AI is confusing — here’s your cheat sheet
What the words behind generative AI tools actually mean.
If you can’t tell the difference between AGI and RAG, don’t worry! We’re here for you.
By Jay Peters, a news editor who writes about technology, video games, and virtual worlds. He’s submitted several accepted emoji proposals to the Unicode Consortium.
Jul 22, 2024, 8:00 AM EDT
Artificial intelligence is the hot new thing in tech — it feels like every company is talking about how it’s making strides by using or developing AI. But the field of AI is also so filled with jargon that it can be remarkably difficult to understand what’s actually happening with each new development.
To help you better understand what’s going on, we’ve put together a list of some of the most common AI terms. We’ll do our best to explain what they mean and why they’re important.
What exactly is artificial intelligence?
Artificial intelligence: Often shortened to AI, the term “artificial intelligence” is technically the discipline of computer science that’s dedicated to making computer systems that can think like a human.
But right now, we’re mostly hearing about AI as a technology, or even as an entity, and what exactly that means is harder to pin down. It’s also frequently used as a marketing buzzword, which makes its definition more mutable than it should be.
Google, for example, talks a lot about how it’s been investing in AI for years. That refers both to how many of its products are improved by artificial intelligence and to how the company offers tools like Gemini that appear to be intelligent. There are the underlying AI models that power many AI tools, like OpenAI’s GPT. Then there’s Meta CEO Mark Zuckerberg, who has used AI as a noun to refer to individual chatbots.
As more companies try to sell AI as the next big thing, the ways they use the term and other related nomenclature might get even more confusing. There are a bunch of phrases you are likely to come across in articles or marketing about AI, so to help you better understand them, I’ve put together an overview of many of the key terms in artificial intelligence that are currently being bandied about. Ultimately, however, it all boils down to trying to make computers smarter.
(Note that I’m only giving a rudimentary overview of many of these terms. Many of them can get very technical, but this article should give you a grasp of the basics.)
Machine learning: Machine learning systems are trained (we’ll explain more about what training is later) on data so they can make predictions about new information. That way, they can “learn.” Machine learning is a field within artificial intelligence and is critical to many AI technologies.
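To make that train-then-predict idea concrete, here’s a minimal sketch using the scikit-learn library. The library choice and the toy house-price data are my own illustration, not anything the companies mentioned here actually use:

```python
# A minimal machine learning sketch: train on known examples,
# then make a prediction about new, unseen information.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.linear_model import LinearRegression

# Toy training data: house sizes (square feet) and sale prices.
X_train = [[800], [1200], [1500], [2000]]        # features
y_train = [160_000, 240_000, 300_000, 400_000]   # labels

model = LinearRegression()
model.fit(X_train, y_train)      # "training": the model learns from the data

# The trained model can now predict a price for a size it has never seen.
print(model.predict([[1750]]))
```

Real systems use far more data and far more complex models, but the basic loop is the same: show the system examples, then ask it about something new.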
Artificial general intelligence (AGI): Artificial intelligence that’s as smart as or smarter than a human. (OpenAI in particular is investing heavily in AGI.) This could be incredibly powerful technology, but for a lot of people, it’s also potentially the most frightening prospect of AI — think of all the movies we’ve seen about superintelligent machines taking over the world! If that isn’t enough, there is also work being done on “superintelligence,” or AI that’s much smarter than a human.
Generative AI: An AI technology capable of generating new text, images, code, and more. Think of all the interesting (if occasionally problematic) answers and images that you’ve seen being produced by ChatGPT or Google’s Gemini. Generative AI tools are powered by AI models that are typically trained on vast amounts of data.
Hallucinations: No, we’re not talking about weird visions. Because generative AI tools are only as good as the data they’re trained on, they can “hallucinate,” or confidently make up what they think are the best responses to questions. These hallucinations (or, if you want to be completely honest, bullshit) mean the systems can make factual errors or give gibberish answers. There’s even some controversy as to whether AI hallucinations can ever be “fixed.”
Bias: Hallucinations aren’t the only problem that has come up when dealing with AI — and this one might have been predicted, since AIs are, after all, programmed by humans. Depending on their training data, AI tools can demonstrate biases. For example, in 2018, Joy Buolamwini, a computer scientist at MIT Media Lab, and Timnit Gebru, the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), co-authored a paper illustrating that facial recognition software had higher error rates when attempting to identify the gender of darker-skinned women.
I keep hearing a lot of talk about models. What are those?
AI model: AI models are trained on data so that they can perform tasks or make decisions on their own.
Large language models, or LLMs: A type of AI model that can process and generate natural language text. Anthropic’s Claude, which, according to the company, is “a helpful, honest, and harmless assistant with a conversational tone,” is an example of an LLM.
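As a rough illustration of what “generate natural language text” means in practice, here’s a sketch using Hugging Face’s transformers library and the small, openly available GPT-2 model. The library and model are my own choices for the example; Claude itself is accessed through Anthropic’s own products and API:

```python
# Sketch: asking a small language model to continue a prompt.
# Requires the transformers library (pip install transformers).
from transformers import pipeline

# Load a tiny, older open model just to demonstrate the idea.
generator = pipeline("text-generation", model="gpt2")

# The model predicts likely next words, one token at a time.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```

Under the hood, an LLM repeatedly predicts the most plausible next chunk of text, which is why the same prompt can produce different answers each time.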
Diffusion models: AI models that can be used for things like generating images from text prompts. They are trained by first adding noise (such as static) to an image and then reversing the process, so that the model learns how to recreate a clear image. There are also diffusion models that work with audio and video.
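Here’s a very loose sketch of the “adding noise” half of that process, with NumPy standing in for a real diffusion framework. The step count and noise amounts are made up for illustration; the hard part — training a model to reverse each step — is only gestured at in the comments:

```python
# Forward diffusion sketch: gradually drown an image in static.
# A real diffusion model is trained to undo each noising step, so
# running the learned reversal from pure noise yields a clean image.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))     # stand-in for a real training image

noisy = image
for step in range(10):           # toy noise schedule: 10 steps
    noise = rng.normal(0.0, 0.1, image.shape)   # the "static"
    noisy = np.clip(noisy + noise, 0.0, 1.0)

# After enough steps, `noisy` is nearly pure static. Training teaches
# the model to predict (and subtract) the noise added at each step,
# which is what lets it generate images from scratch.
```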
Foundation models: These generative AI models are trained on a huge amount of data and, as a result, can be the foundation for a wide variety of applications without specific training for those tasks. (The term was coined by Stanford researchers in 2021.) OpenAI’s GPT, Google’s Gemini, Meta’s Llama, and Anthropic’s Claude are all examples of foundation models. Many companies are also marketing their AI models as multimodal, meaning they can process multiple types of data, such as text, images, and video.
Frontier models: In addition to foundation models, AI companies are working on what they call “frontier models,” which is basically just a marketing term for their unreleased future models. Theoretically, these models could be far more powerful than the AI models that are available today, though there are also concerns that they could pose significant risks.