
What are Embeddings?

Numerical vector representations of text (or images, audio) that capture semantic meaning — the foundation of AI search and memory.

Definition

Embeddings are numerical representations of data — typically arrays of hundreds or thousands of floating-point numbers — produced by an embedding model. They capture the semantic meaning of text: similar meanings produce vectors that are close together in the vector space. Embeddings are the fundamental technology behind semantic search, RAG systems, recommendation engines, clustering, and any AI application that needs to find conceptually similar content.

Why it matters

Embeddings power every modern AI search experience. When Notion's AI finds relevant pages, when Spotify recommends songs, when a support bot retrieves the right documentation — embeddings are doing the work. Understanding embeddings helps you build better RAG pipelines, debug retrieval quality, and evaluate semantic search systems.

How it works

Text is fed into an embedding model (e.g., OpenAI's text-embedding-3-small, Cohere's embedding models, or open-source sentence-transformers models). The model outputs a vector of numbers — 1,536 dimensions for text-embedding-3-small. Similar text produces similar vectors, measurable via cosine similarity. These vectors are stored in a vector database and searched at query time by comparing the query's embedding to the stored embeddings.
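The comparison step above can be sketched with plain NumPy. The vectors here are tiny, hand-picked stand-ins (real embedding models output hundreds or thousands of dimensions); only the cosine-similarity math is the point.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: near 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" — illustrative values, not model output.
cancel = np.array([0.9, 0.1, 0.0, 0.2])
unsubscribe = np.array([0.85, 0.15, 0.05, 0.25])
pizza = np.array([0.0, 0.9, 0.4, 0.0])

print(cosine_similarity(cancel, unsubscribe))  # high: similar meaning
print(cosine_similarity(cancel, pizza))        # low: unrelated
```

A vector database runs this same comparison at scale, typically with approximate nearest-neighbor indexes rather than a brute-force loop.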

Examples in practice

Semantic search across documentation

A user searches "how do I cancel my subscription." The query is embedded and compared to all documentation embeddings. The system returns the cancellation policy article even if it uses the word "unsubscribe" rather than "cancel."
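The retrieval step in this scenario can be sketched as a ranking over pre-computed document vectors. The embeddings and titles below are hypothetical, illustrative values standing in for real model output; a production system would call an embedding model for both documents and query.

```python
import numpy as np

# Hypothetical pre-computed document embeddings (real values would come
# from an embedding model at indexing time).
docs = {
    "How to unsubscribe from your plan": np.array([0.8, 0.1, 0.3]),
    "Setting up two-factor authentication": np.array([0.1, 0.9, 0.2]),
    "Updating your payment method": np.array([0.4, 0.2, 0.8]),
}

def search(query_vec: np.ndarray, docs: dict) -> list:
    """Return document titles ranked by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = {
        title: float(np.dot(q, v / np.linalg.norm(v)))
        for title, v in docs.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical embedding of "how do I cancel my subscription" — close in
# vector space to the unsubscribe article despite sharing no keywords.
query = np.array([0.75, 0.15, 0.35])
print(search(query, docs)[0])  # the unsubscribe article ranks first
```

Because matching happens in vector space rather than on keywords, "cancel" and "unsubscribe" land near each other and the right article surfaces.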

Common questions about Embeddings

What are embeddings in machine learning?
Embeddings are numerical representations (vectors) of data that capture semantic meaning. Text with similar meaning produces similar vectors. This enables computers to measure how conceptually related two pieces of text are — the foundation of semantic search, RAG, and recommendation systems.
What is the best embedding model in 2026?
OpenAI's text-embedding-3-small is the most widely used (cheap, fast, good quality). text-embedding-3-large is better quality for high-stakes applications. Cohere and Google also have strong embedding models. For on-premises deployments, sentence-transformers (open-source) is popular. For most RAG applications, text-embedding-3-small is the right default.
