The AI Slang Glossary
Plain English, no exceptions.
“There is no good reason these words should make anyone feel foolish. They are, in most cases, ordinary concepts wearing an expensive coat.”

20 terms
AI Agent
The Skeptic
An AI system that doesn't just answer a question — it takes actions in sequence to complete a task. A basic AI tool responds to each prompt individually. An agent can be given a goal, break it into steps, use tools (search engines, calendars, email), and work through the steps without you prompting each one. They are genuinely useful. They also do things autonomously, which raises questions worth understanding before you hand over access to your inbox.
Why it matters
This is where AI is heading fastest. You'll encounter the term more and more in the next twelve months.
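The loop behind an agent can be sketched in a few lines. Everything below is illustrative: the tools are fakes and the plan is hard-coded, where a real agent would have the AI produce and revise the plan itself.

```python
# Toy sketch of the agent idea: a goal becomes a sequence of steps, and
# each step is dispatched to a "tool". The tools here are stand-ins, not
# real integrations.

def search_tool(query: str) -> str:
    return f"results for '{query}'"          # stand-in for a real search call

def email_tool(body: str) -> str:
    return f"drafted email: {body}"          # stand-in for a real email API

TOOLS = {"search": search_tool, "email": email_tool}

def run_agent(plan):
    """Work through (tool_name, argument) steps without further prompting."""
    results = []
    for tool_name, arg in plan:
        results.append(TOOLS[tool_name](arg))
    return results

steps = [("search", "venue availability"), ("email", "booking request")]
print(run_agent(steps))
```

The point is the structure: one goal in, several tool calls out, no human prompting between steps.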
Last updated: March 2026
API
The Tinkerer
Application Programming Interface. A way for one piece of software to talk to another. When someone says "we connected our app to the ChatGPT API," they mean they wrote code that sends requests to ChatGPT and gets answers back — without anyone opening a browser. APIs are how AI gets built into apps, websites, and tools you already use.
Why it matters
You don't need to use APIs yourself, but understanding the word helps you follow conversations about how AI is being integrated into everyday tools.
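To make the idea concrete, here is roughly what one program sends to another over an AI API: structured text (JSON) posted to a URL. The model name below is a placeholder, not a real service, and the network call itself is omitted.

```python
import json

# An API request is just structured text assembled by code. A real request
# would be POSTed to the provider's URL with an API key attached.

def build_chat_request(prompt: str) -> str:
    payload = {
        "model": "example-model",                      # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

request_body = build_chat_request("Summarise this paragraph.")
print(request_body)
```

No browser, no chat window: just one program handing text to another and reading the reply.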
Last updated: March 2026
Automation
The Pragmatist
Using technology — including AI tools — to perform a task that would otherwise require human time and attention. In an AI context, automation often means setting up a tool to handle a repeated task: drafting responses to routine emails, organising data, generating regular reports. The goal is to remove yourself from the loop for low-value repetitive work.
Why it matters
This is the most direct path to AI paying for itself. One well-designed automated process can return hours per week.
Last updated: March 2026
Chatbot
The Skeptic
Any AI tool you interact with through a conversation-style interface — you type something, it types back. ChatGPT is a chatbot. So are Claude, Gemini, and Grok. The word has been around much longer than modern AI — early chatbots followed simple rules and were mostly terrible. Today's chatbots are powered by large language models, which makes them vastly more capable.
Why it matters
Don't let the word fool you. Modern chatbots are not the useless pop-up boxes from 2015. They are genuinely powerful tools wearing an old name.
Last updated: March 2026
Context Window
Kevin
The amount of text an AI tool can "hold in mind" at once during a conversation. Older models had small context windows — they'd forget what you said at the beginning of a long conversation. Newer models have much larger windows. When a tool starts giving answers that seem to ignore something you said earlier, it has usually run out of context window.
Why it matters
Knowing this exists explains a common frustration and gives you a practical fix: start a new conversation, or paste back the relevant context.
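A rough sketch of what "running out of context window" means in practice. Word count stands in for real token counting, and dropping the oldest turns is one common strategy, not the only one.

```python
# If the conversation exceeds the model's context budget, something has to
# be dropped — typically the oldest turns. That is why early details
# "disappear" in long chats.

def trim_to_budget(messages, budget_tokens):
    """Keep the most recent messages that fit within budget_tokens."""
    kept, used = [], 0
    for msg in reversed(messages):           # walk newest first
        cost = len(msg.split())              # crude token estimate
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))              # restore chronological order

history = ["first question", "a long detailed answer", "follow up"]
print(trim_to_budget(history, budget_tokens=5))
```

Notice that the first question is the first thing to go, which matches the frustration described above.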
Last updated: March 2026
Fine-Tuning
The Tinkerer
Taking an existing AI model and training it further on a specific set of data so it becomes better at a particular task. Think of a general-purpose AI as a broadly educated graduate — fine-tuning is like sending them to a specialist course. A company might fine-tune a model on their own support tickets so it gives answers that match their products and tone.
Why it matters
You'll see this term when companies talk about "custom AI." It's not magic — it's targeted additional training.
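Fine-tuning data is less exotic than it sounds: typically a file of example conversations, one JSON object per line. The field names below follow a common chat-style convention; any real provider's required format may differ.

```python
import json

# One fine-tuning example: a question paired with the answer you want the
# model to learn to give. A real dataset would contain hundreds or
# thousands of lines like this.

examples = [
    {"messages": [
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "You can track it under Account > Orders."},
    ]},
]

jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

The "custom AI" a company advertises is often just a stock model plus a file like this.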
Last updated: March 2026
Generative AI
The Gentleman
AI that creates new content — text, images, audio, video, code — rather than simply analysing or organising existing content. When people say "AI" in most everyday conversations, they mean generative AI. ChatGPT generates text. DALL-E generates images. Both are generative AI tools.
Why it matters
The distinction between generative AI and other types of AI (the kind that recommends your next Netflix show, for instance) is useful for understanding what you're actually using.
Last updated: March 2026
Hallucination
The Tinkerer
When an AI tool produces information that is confidently stated but factually wrong. Not a glitch, not a virus — the AI is doing exactly what it's designed to do (predict the most likely next word based on patterns), and sometimes that produces plausible-sounding nonsense. It's an inherent limitation, not a scandal. It means you should verify important facts, especially for anything consequential.
Why it matters
This is the most common reason people lose trust in AI tools after their first experience. Understanding why it happens makes it manageable rather than alarming.
Last updated: March 2026
Image Generation
The Adventurer
Using AI to create images from text descriptions. You type "a watercolour painting of a cat reading a newspaper" and the AI produces an image matching that description. Tools like DALL-E, Midjourney, and Stable Diffusion do this. The results can be stunning, bizarre, or both — and they're improving rapidly.
Why it matters
Image generation is one of the most accessible and immediately impressive uses of AI. No artistic skill required — just a clear description.
Last updated: March 2026
Knowledge Cutoff
Kevin
The date after which an AI model has no training data. If a model has a knowledge cutoff of April 2024, it genuinely does not know about anything that happened after that date — elections, product launches, news events. Some tools compensate by connecting to the internet for current information, but the base model itself stops at that line.
Why it matters
If an AI gives you outdated information, it's not lying. It literally doesn't know. Check whether your tool can browse the web for current data.
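The idea reduces to a simple date comparison. The cutoff below is made up for illustration, not tied to any real model.

```python
from datetime import date

# A model's knowledge simply stops at its cutoff. Anything after that
# date is invisible to it unless the tool can browse the web.

KNOWLEDGE_CUTOFF = date(2024, 4, 30)   # hypothetical cutoff

def within_training_data(event_date: date) -> bool:
    return event_date <= KNOWLEDGE_CUTOFF

print(within_training_data(date(2023, 11, 1)))   # True
print(within_training_data(date(2025, 1, 15)))   # False
```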
Last updated: March 2026
Large Language Model (LLM)
The Tinkerer
The type of AI behind tools like ChatGPT, Claude, and Gemini. "Large" refers to the enormous amount of text it was trained on. "Language model" means it works by predicting which words are most likely to come next in a given context — which, when done with enough data and computing power, produces responses that can feel surprisingly thoughtful. It is not thinking. It is very sophisticated pattern recognition.
Why it matters
Understanding what these tools actually are — pattern matchers, not minds — makes you a better user of them.
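A toy version of "predicting the next word" makes the idea concrete. Real models are vastly more sophisticated, but the framing is the same: learn which words tend to follow which, then predict the likeliest follower.

```python
from collections import Counter, defaultdict

# Count which word follows which in some text, then predict the most
# frequent follower. This is pattern matching, not thinking — exactly the
# point the definition makes.

def train_bigrams(text):
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))   # "cat" follows "the" most often
```

Scale the same idea up by billions of words and parameters and you get something that feels thoughtful while still being pattern prediction.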
Last updated: March 2026
Model
The Gentleman
The trained AI system that powers a tool. When someone says "GPT-4" or "Claude 3.5," they're referring to a specific model. A model is the result of training — the thing that has absorbed patterns from data and can now generate responses. One model can power many different tools and products. ChatGPT is a product; GPT-4 is the model behind it.
Why it matters
Knowing the difference between a model and a product helps you understand why the same tool can get dramatically better overnight — they updated the model.
Last updated: March 2026
Multimodal
The Tinkerer
An AI model that can work with more than one type of input — not just text, but also images, audio, or video. GPT-4o is multimodal: you can show it a photo and ask what's in it, or upload a document and ask it to summarise the contents. A text-only model can't do that.
Why it matters
Multimodal tools are becoming the standard. If your AI tool can't handle images or files yet, the next version probably will.
Last updated: March 2026
Output
Kevin
Whatever the AI produces in response to your prompt. Could be text, an image, code, a table, a list — anything the tool generates. The quality of the output depends almost entirely on the quality of the input. Better prompts, better output. That's the whole game.
Why it matters
If your output is disappointing, the fix is almost always in your prompt, not in the tool.
Last updated: March 2026
Prompt
Kevin
The instruction or question you type into an AI tool. If you ask ChatGPT "write me an email declining a meeting," the whole thing — every word of it — is your prompt. The quality of what an AI produces depends heavily on the quality of what you ask. A vague prompt produces a vague result. A specific prompt produces something useful.
Why it matters
Learning to write better prompts is the single most effective way to get more out of any AI tool.
Last updated: March 2026
Prompt Engineering
Kevin
The practice of carefully crafting your prompts to get better results from AI tools. It sounds more technical than it is. In practice, it means being specific about what you want, giving the AI a role to play, and telling it what format you want the answer in. Role + Task + Format. That's the structure. That's what Prompt School teaches.
Why it matters
You don't need a computer science degree. You need to know how to ask clearly. That's a skill anyone can learn in an afternoon.
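The Role + Task + Format structure can be written as a simple template. Nothing here is model-specific; it just assembles a clearer request than a bare question would be.

```python
# Role + Task + Format, as a tiny template function. The example values
# are made up — swap in your own role, task, and format.

def build_prompt(role: str, task: str, fmt: str) -> str:
    return (
        f"You are {role}. "
        f"{task} "
        f"Respond as {fmt}."
    )

prompt = build_prompt(
    role="an experienced HR manager",
    task="Draft a polite email declining a meeting invitation.",
    fmt="a short email of no more than three sentences",
)
print(prompt)
```

Compare the assembled prompt with "decline this meeting" and the difference in specificity is the whole lesson.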
Last updated: March 2026
RAG (Retrieval-Augmented Generation)
The Tinkerer
A technique where an AI tool looks up relevant information from a specific source — like a company's documents or a database — before generating its answer. Instead of relying solely on what it learned during training, it retrieves current, specific data first. This makes answers more accurate and grounded in real, up-to-date information.
Why it matters
RAG is how companies make AI tools that can answer questions about their own products, policies, and data without fine-tuning the whole model.
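A minimal sketch of the retrieve-then-generate flow. Real systems score documents with embeddings and vector search; here plain word overlap stands in for retrieval, and the "generation" step is just prompt assembly.

```python
# Step 1: find the document most relevant to the question.
# Step 2: paste it into the prompt so the answer is grounded in it.

def retrieve(question, documents):
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_grounded_prompt(question, documents):
    source = retrieve(question, documents)
    return f"Using this source: '{source}', answer: {question}"

docs = [
    "Refunds are processed within 14 days of return.",
    "Our office is closed on public holidays.",
]
print(build_grounded_prompt("How long do refunds take?", docs))
```

The model never needs to have been trained on the refund policy; it reads it at answer time.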
Last updated: March 2026
Temperature
The Tinkerer
A setting that controls how creative or predictable an AI's output will be. Low temperature (closer to 0) makes the AI stick to safe, expected answers. High temperature (closer to 1 or 2) makes it more creative and unpredictable. For factual tasks, low temperature is better. For brainstorming or creative writing, higher temperature can be useful.
Why it matters
Some AI tools let you adjust this. Knowing what it does means you can tune the tool for different jobs.
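Under the hood, temperature rescales the model's word scores before it picks one. This small sketch shows the effect: low temperature concentrates probability on the top choice, high temperature spreads it out. The scores are invented for illustration.

```python
import math

# Dividing scores by the temperature before converting them to
# probabilities sharpens (low T) or flattens (high T) the distribution.

def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]                       # made-up word scores
print(softmax_with_temperature(scores, 0.2))   # top choice dominates
print(softmax_with_temperature(scores, 2.0))   # choices even out
```

That is the whole mechanism behind "safe and expected" versus "creative and unpredictable".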
Last updated: March 2026
Token
Kevin
The basic unit of text that an AI model processes. A token is roughly three-quarters of a word in English. Short common words are usually one token each, while a longer word like "hamburger" may be split into several (such as "ham", "bur", "ger"). AI tools charge by the token and measure context windows in tokens. When you hit a limit, it's usually a token limit.
Why it matters
Understanding tokens helps you understand pricing, context window limits, and why very long documents sometimes get cut off.
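The three-quarters rule from the definition turns into a one-line budgeting estimate. Real tokenizers vary by model, so treat this as a rough guide, not an exact count.

```python
# Rule of thumb: 1 token ≈ 3/4 of a word, so tokens ≈ words × 4/3.
# Useful for estimating costs and context-window usage, nothing more.

def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return round(words * 4 / 3)

print(estimate_tokens("I love hamburgers"))
```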
Last updated: March 2026
Training Data
The Tinkerer
The text that an AI model learned from before it was made available to the public — books, articles, websites, code, and other written material, totalling billions of words. The model doesn't "remember" any of it the way you remember reading a book. It absorbed patterns from it. What an AI knows, and the limits of what it knows, come from what was in that training data.
Why it matters
It explains why AI tools sometimes know obscure things, sometimes miss obvious things, and have a knowledge cutoff date after which they don't know what happened in the world.
Last updated: March 2026
Want the full glossary?
Members get every term, regularly updated as new terminology appears. Plus Prompt School, the Prompt Vault, and everything else.
Start Free Trial — No Card Required