AI Glossary — Every AI Term Explained | Free.ai

Complete glossary of AI terms. LLM, tokens, inference, fine-tuning, RAG, and 50+ more explained simply.

A B C D E F G H I L M N O P Q R S T V Z

A

API (Application Programming Interface)

A way to access AI tools programmatically via HTTP requests.
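A minimal sketch of what an API call looks like, using only the Python standard library. The endpoint URL and model name here are hypothetical placeholders, not a real provider's API:

```python
import json
import urllib.request

# Hypothetical endpoint and model name, for illustration only.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(prompt, api_key):
    # The prompt is wrapped in a JSON payload and sent over HTTP
    # with the API key in an Authorization header.
    payload = {
        "model": "example-model",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Sending the request would be: urllib.request.urlopen(build_request(...))
```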

Agentic AI

AI systems that can autonomously plan, use tools, and take actions to accomplish goals.

Attention Mechanism

A technique that allows AI models to focus on relevant parts of the input when generating output.
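The core idea can be sketched as scaled dot-product attention: score each input position against a query, turn the scores into weights with a softmax, and average the values by those weights. A toy pure-Python version (real models do this over large matrices on GPUs):

```python
import math

def attention(query, keys, values):
    # Score each key by its dot product with the query,
    # scaled by sqrt(dimension) to keep scores in a stable range.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax turns scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output is the weighted average of the value vectors:
    # positions relevant to the query contribute more.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```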

B

Benchmark

A standardized test used to compare AI model performance. Examples: MMLU, HumanEval, MT-Bench.

C

Computer Vision

AI that can understand and analyze images and video content.

Context Window

The maximum amount of text an AI model can process at once, measured in tokens. GPT-4o, for example, has a 128K-token context window.

D

Diffusion Model

An AI image generation technique that starts with noise and gradually refines it into a coherent image. Used by FLUX, Stable Diffusion.

E

Embedding

A numerical representation of text, images, or other data that AI models can process and compare.
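Embeddings are compared by measuring the angle between vectors. A minimal sketch of cosine similarity, the standard comparison used in semantic search:

```python
import math

def cosine_similarity(a, b):
    # Embeddings pointing in similar directions score near 1;
    # unrelated (orthogonal) embeddings score near 0.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```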

F

Few-Shot Learning

Giving an AI model a few examples in the prompt to guide its output.
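A sketch of how a few-shot prompt is typically assembled: worked input/output pairs first, then the new input the model should complete in the same pattern. The "Input:"/"Output:" labels are just one common convention:

```python
def few_shot_prompt(examples, new_input):
    # Each example pair shows the model the desired input -> output pattern.
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    # End with the new input and a dangling "Output:" for the model to fill in.
    lines.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(lines)
```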

Fine-Tuning

Training a pre-trained AI model on specialized data to improve performance on specific tasks.

G

GAN (Generative Adversarial Network)

An earlier image generation technique that trains two competing neural networks: a generator that creates images and a discriminator that tries to tell them apart from real ones.

GPU (Graphics Processing Unit)

Specialized hardware that runs AI models much faster than CPUs. NVIDIA A100, H100, etc.

H

Hallucination

When an AI model generates false or fabricated information that sounds confident and plausible.

I

Inference

The process of running an AI model to generate a response. When you send a message to ChatGPT, the model performs inference.

L

LLM (Large Language Model)

A neural network trained on massive text datasets that can generate, understand, and manipulate human language. Examples: GPT-4, Qwen, Claude.

M

Multimodal AI

AI models that can process multiple types of input — text, images, audio, video.

N

NLP (Natural Language Processing)

The field of AI focused on understanding and generating human language.

O

OCR (Optical Character Recognition)

AI technology that extracts text from images, PDFs, and scanned documents.

Open Source AI

AI models released with open licenses (MIT, Apache 2.0) allowing anyone to use, modify, and deploy them.

P

Parameter

A trainable weight in an AI model. Larger models have more parameters (7B, 70B, 400B).

Prompt

The input text you give to an AI model. Better prompts lead to better outputs.

Prompt Engineering

The practice of crafting effective prompts to get the best results from AI models.

Q

Quantization

A technique to compress AI models (e.g., from 16-bit to 4-bit) so they use less memory while maintaining quality.
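The simplest form is linear (affine) quantization: map each float weight to a small integer on a fixed grid, storing only the integers plus a scale and offset. A toy sketch (real quantization schemes like GPTQ or AWQ are more sophisticated):

```python
def quantize(weights, bits=8):
    # Map floats onto integers in [0, 2**bits - 1].
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2**bits - 1)
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    # Recover approximate floats; error is at most one grid step.
    return [v * scale + lo for v in q]
```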

R

RAG (Retrieval-Augmented Generation)

A technique where AI retrieves relevant documents before generating a response, improving accuracy.
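A toy sketch of the retrieve-then-generate flow, using naive word overlap as the retriever (production systems use embedding similarity over a vector database instead):

```python
def retrieve(question, documents):
    # Score each document by how many question words it shares.
    q_words = {w.strip(".,?!") for w in question.lower().split()}
    return max(documents,
               key=lambda d: len(q_words & {w.strip(".,?!")
                                            for w in d.lower().split()}))

def build_rag_prompt(question, documents):
    # The retrieved document is prepended as context, so the model
    # can ground its answer in it instead of relying on memory alone.
    context = retrieve(question, documents)
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"
```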

S

STT (Speech-to-Text)

AI technology that converts spoken audio into written text. Also called ASR (Automatic Speech Recognition).

T

TTS (Text-to-Speech)

AI technology that converts written text into natural-sounding spoken audio.

Temperature

A parameter that controls AI output randomness. Low temperature = more focused. High temperature = more creative.
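Under the hood, temperature divides the model's raw scores (logits) before the softmax. A minimal sketch showing the effect: low temperature sharpens the distribution toward the top choice, high temperature flattens it:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing by temperature sharpens (T < 1) or flattens (T > 1)
    # the probability distribution the model samples from.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```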

Token

The basic unit of text processing in AI models. Roughly 1 token = 4 characters of English text. Used for billing and context limits.
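The 4-characters-per-token rule of thumb gives a quick cost estimate without running a real tokenizer (exact counts vary by model and language):

```python
def estimate_tokens(text):
    # Rough heuristic from the definition above: ~4 characters per token.
    return max(1, round(len(text) / 4))
```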

Transformer

The neural network architecture behind modern AI models. Introduced in the 2017 paper "Attention Is All You Need."

V

VRAM

Video RAM — the memory on a GPU used to store AI model weights during inference.

Z

Zero-Shot Learning

An AI model performing a task without any specific examples — just from its general training.

Ready to try AI tools?

Put these concepts into practice with 200+ free AI tools.

Explore Free AI Tools
