AI Glossary 2026
Plain-English definitions of every AI term you need to know — from AI agents and RAG to MCP, vibe coding, and fine-tuning.
Core AI
RAG (Retrieval-Augmented Generation): A technique that improves AI answers by retrieving relevant documents from an external knowledge base before generating a response.
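A minimal sketch of the retrieve-augment-generate loop. The keyword-overlap retriever and `fake_llm` function are invented stand-ins for what would really be a vector search and a model API call.

```python
# Minimal RAG sketch. The retriever is naive keyword overlap and the
# "LLM" is a stub -- both are illustrative, not a real API.

DOCS = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
    "RAG retrieves documents before generating an answer.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def fake_llm(prompt: str) -> str:
    """Stand-in for a language model call."""
    return "Answer based on: " + prompt.splitlines()[0]

def rag_answer(query: str) -> str:
    context = retrieve(query, DOCS)           # 1. retrieve
    prompt = f"{context}\nQuestion: {query}"  # 2. augment the prompt
    return fake_llm(prompt)                   # 3. generate

print(rag_answer("How tall is the Eiffel Tower?"))
```

The point of the pattern is step 2: the model answers from retrieved context instead of relying only on what it memorised during training.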
Vector Database: A database optimized for storing and searching high-dimensional vector embeddings — the numeric representations that power semantic AI search.
Embeddings: Numerical vector representations of text (or images, audio) that capture semantic meaning — the foundation of AI search and memory.
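The two definitions above fit together: embeddings turn text into vectors, and a vector database finds the stored vector closest to a query vector. A toy sketch, with made-up 3-D vectors standing in for the hundreds of dimensions a real embedding model produces:

```python
# Toy embeddings and nearest-neighbour lookup. The vectors are invented
# for illustration; real embedding models output them from text.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

STORE = {                       # a vector database in miniature
    "dog":   [0.90, 0.80, 0.10],
    "puppy": [0.85, 0.75, 0.20],  # similar meaning -> nearby vector
    "tax":   [0.10, 0.20, 0.90],  # unrelated meaning -> distant vector
}

def nearest(query_vec: list[float], store: dict) -> str:
    """Semantic search: return the stored key most similar to the query."""
    return max(store, key=lambda k: cosine(query_vec, store[k]))

print(nearest(STORE["puppy"], STORE))
```

Production systems use approximate nearest-neighbour indexes rather than this brute-force `max`, but the idea is the same.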
AI Agents
AI Agent: An AI system that perceives its environment, makes decisions, and takes actions autonomously to achieve a goal.
Agentic AI: AI systems that can autonomously plan, act, and iterate toward goals over multiple steps, without human intervention at each step.
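The observe-decide-act loop behind both definitions can be sketched without any LLM at all. Here the "environment" is a number-guessing game and the agent's policy is binary search; every name is invented for illustration.

```python
# Hedged sketch of an agent loop: decide -> act -> observe -> repeat
# until the goal is met, with no human in the loop.

def run_agent(target: int, low: int = 0, high: int = 100) -> list[int]:
    """Agent that iterates toward its goal, returning its action history."""
    guesses = []
    while True:
        guess = (low + high) // 2   # decide on an action
        guesses.append(guess)       # act on the environment
        if guess == target:         # observe: goal reached, stop
            return guesses
        if guess < target:          # observe feedback, update state, loop
            low = guess + 1
        else:
            high = guess - 1

print(run_agent(37))
```

An LLM-based agent replaces the `decide` line with a model call and the environment with tools, but the loop structure is the same.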
Multi-Agent System: An AI architecture where multiple specialized AI agents collaborate, each handling a sub-task, to complete complex goals no single agent could handle alone.
Dev Tools
MCP (Model Context Protocol): An open standard that lets AI models connect to external tools, data sources, and services through a unified interface.
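Concretely, the protocol exchanges JSON-RPC 2.0 messages. The `tools/call` method name below follows the published Model Context Protocol specification, but the tool name and arguments are invented for illustration:

```python
# Shape of an MCP-style tool-call request (JSON-RPC 2.0). The
# "get_weather" tool and its arguments are hypothetical.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool name
        "arguments": {"city": "Paris"},   # hypothetical arguments
    },
}

print(json.dumps(request, indent=2))
```

Because every server speaks this same message shape, a model client written once can talk to any MCP server.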
Vibe Coding: A coding approach where you describe what you want in natural language and let an AI write the code, focusing on intent rather than syntax.
Tool Calling (Function Calling): The ability of an LLM to call external functions or APIs as part of generating a response, enabling it to interact with the real world.
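In practice the model does not execute anything itself: it emits a structured "call this function with these arguments" object, your code runs it, and the result goes back to the model. A sketch with a stubbed model standing in for a real API:

```python
# Minimal tool-calling loop. `fake_model` mimics the structured tool-call
# object a real LLM API returns; all names here are illustrative.

def get_time(city: str) -> str:
    """A tool the model is allowed to call (canned answer for the sketch)."""
    return f"12:00 in {city}"

TOOLS = {"get_time": get_time}  # registry of callable tools

def fake_model(prompt: str) -> dict:
    """Stand-in for an LLM deciding which tool to invoke."""
    return {"tool": "get_time", "args": {"city": "Tokyo"}}

def answer(prompt: str) -> str:
    call = fake_model(prompt)                      # model emits a tool call
    result = TOOLS[call["tool"]](**call["args"])   # host code executes it
    return f"Tool result: {result}"                # result returns to the model

print(answer("What time is it in Tokyo?"))
```

Real APIs add a schema describing each tool's parameters so the model knows what arguments are valid.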
Prompting
Prompt Engineering: The practice of designing and refining inputs to AI models to get the most useful, accurate, and consistent outputs.
System Prompt: The hidden instructions given to an AI model at the start of a session that define its role, behavior, constraints, and persona.
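In chat-style LLM APIs the system prompt is simply the first message in the conversation, with a dedicated role. The message schema below mirrors common chat-completion APIs; the wording is invented:

```python
# How a system prompt typically appears in a chat-completion payload.
# The schema (role/content messages) mirrors common LLM chat APIs.

messages = [
    {"role": "system",   # sent first, invisible to the end user
     "content": "You are a terse assistant. Answer in one sentence."},
    {"role": "user",     # the visible conversation starts here
     "content": "Explain embeddings."},
]

print([m["role"] for m in messages])  # order matters: system comes first
```

Because the system message is resent with every request, it steers the model's behavior for the whole session.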
Chain-of-Thought (CoT) Prompting: A technique where you instruct an AI to reason through a problem step by step before giving its final answer, improving accuracy on complex tasks.
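The difference is entirely in the prompt text. Two illustrative prompts for the same question, one direct and one asking for steps first (the exact phrasing is a matter of taste):

```python
# Direct prompt vs. chain-of-thought prompt for the same question.
# The wording is illustrative; the key change is asking for steps first.

question = "A shirt costs $20 after a 20% discount. What was the original price?"

direct_prompt = f"{question}\nAnswer:"

cot_prompt = (
    f"{question}\n"
    "Think step by step, then give the final answer.\n"
    "Steps:"
)

print(cot_prompt)
```

The intended chain here: the discounted price is 0.8 of the original, so the original was 20 / 0.8 = $25 — intermediate steps the direct prompt never asks the model to write down.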
Safety & Alignment
Evals: Systematic methods for measuring AI model and application quality — the testing framework for AI products.
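At its core an eval is a set of labelled cases, a model, and a score. A tiny harness sketch, with `fake_model` standing in for a real API call:

```python
# Tiny eval harness: run a model over labelled cases and report accuracy.
# `fake_model` is a stub; in practice it would be an LLM API call.

CASES = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def fake_model(prompt: str) -> str:
    """Stand-in model with canned answers for the sketch."""
    return {"2 + 2": "4", "capital of France": "Paris"}[prompt]

def run_eval(model, cases) -> float:
    """Return the fraction of cases the model answers correctly."""
    passed = sum(model(c["input"]) == c["expected"] for c in cases)
    return passed / len(cases)

print(run_eval(fake_model, CASES))
```

Real eval suites add fuzzier scoring (semantic match, LLM-as-judge) because exact string comparison misses correct answers phrased differently.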
Hallucination: When an AI model confidently generates false or fabricated information that is not supported by its training data or provided context.
AI Alignment: The challenge of ensuring AI systems pursue goals that are beneficial to humans, rather than goals that are misspecified, harmful, or contrary to human values.
