AI Context Window 2026: What It Is + Limits for Every Model
TL;DR: An AI context window is the maximum amount of text (measured in tokens) a model can read and reason over in one conversation. In 2026, frontier models range from 128K tokens (GPT-5.2 mini, DeepSeek V3.2) up to 2M tokens (Gemini 3 Ultra). Larger windows let AI handle full books, long codebases, or hours of meeting transcripts without losing the thread.
Key Takeaways
- A context window is the number of tokens (~words) an AI can read at once. 1 token ≈ ¾ of an English word.
- Gemini 3 Ultra leads with a 2,000,000-token window — the largest in any consumer AI in 2026.
- Claude Sonnet 4.6 and Claude Opus 4.7 offer 1,000,000-token windows for long codebases and books.
- GPT-5.2 (ChatGPT Plus) uses a 256,000-token context — enough for ~190,000 words or 700 pages.
- Free tiers are usually capped well below the model's max: ChatGPT Free is ~32K and Claude Free is ~100K. Gemini is the exception: Gemini 3 Pro keeps its full 1M-token window on the free tier.
- Once a conversation exceeds the window, older messages are truncated, summarized, or evicted — the model effectively 'forgets' them.
An AI context window is the maximum amount of text — measured in tokens — that an AI model can read and reason over in a single conversation. It's the model's working memory: prompts you've typed, the AI's responses, attached files, and any system instructions all share the same budget. Exceed it, and the AI starts forgetting the earliest parts of the conversation.
In 2026, context windows vary enormously across consumer AI. The smallest free-tier limit is about 32,000 tokens (~24,000 words). The largest, on Gemini 3 Ultra, is 2,000,000 tokens — about 1.5 million words, or 6,000 pages of text. This guide explains what a context window actually is in plain English, lists the exact 2026 limits for every major AI model, and shows when you actually need a large window versus when 128K is plenty.
What is a token, exactly?
A token is the unit AI models use to break up text. Roughly speaking, one token is three-quarters of an English word — or about 4 characters. Common words like "the" or "and" are usually one token each. Longer or unusual words ("antidisestablishmentarianism") may be split into 4–6 tokens. Code, punctuation, and non-Latin scripts use tokens differently.
Useful rules of thumb:
- 1,000 tokens ≈ 750 English words ≈ 3 paragraphs
- 10,000 tokens ≈ a 30-page document
- 100,000 tokens ≈ a short novel (≈ 75,000 words)
- 1,000,000 tokens ≈ 750,000 words — roughly ten novels, or about 90 hours of meeting transcript
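As a back-of-envelope sketch, the rules of thumb above can be turned into a rough estimator. This is a heuristic only: real tokenizers vary by model, and `estimate_tokens` is a name chosen here for illustration.

```python
def estimate_tokens(text: str) -> int:
    """Estimate the token count of English text.

    Averages the character-based (~4 chars per token) and
    word-based (~3/4 of a word per token) rules of thumb.
    Real tokenizers differ, especially for code and
    non-Latin scripts.
    """
    by_chars = len(text) / 4
    by_words = len(text.split()) / 0.75
    return round((by_chars + by_words) / 2)

essay = "the quick brown fox " * 250  # ~1,000 words
print(estimate_tokens(essay))         # ~1,300, matching the rule of thumb
```

For serious budgeting, use the tokenizer that ships with the model you're calling; the heuristic is only for quick mental math.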
Every major AI model's context window (2026)
The table below shows the maximum context window — paid and free tier — for each frontier AI model available in mid-2026. "Max output" is the largest single response the model can generate inside that window.
| Model | Vendor | Paid context (tokens) | Free context (tokens) | Max output (tokens) |
|---|---|---|---|---|
| GPT-5.2 | OpenAI | 256,000 | 32,000 | 16,000 |
| GPT-5.2 mini | OpenAI | 128,000 | 32,000 | 16,000 |
| Claude Sonnet 4.6 | Anthropic | 1,000,000 | 100,000 | 64,000 |
| Claude Opus 4.7 | Anthropic | 1,000,000 | Not on free | 64,000 |
| Gemini 3 Pro | Google | 1,000,000 | 1,000,000 | 64,000 |
| Gemini 3 Ultra | Google | 2,000,000 | Not on free | 64,000 |
| Grok 4 | xAI | 256,000 | 32,000 | 16,000 |
| Grok 4 Heavy | xAI | 256,000 | Not on free | 32,000 |
| Perplexity Pro | Perplexity | 200,000 | 32,000 | 16,000 |
| DeepSeek V3.2 | DeepSeek | 128,000 | 128,000 | 16,000 |
| Llama 4 Maverick | Meta | 1,000,000 | 1,000,000 | 32,000 |
| Qwen 3 Max | Alibaba | 256,000 | 256,000 | 16,000 |
| Mistral Large 2 | Mistral | 128,000 | 32,000 | 16,000 |
| Microsoft Copilot | Microsoft | 128,000 | 32,000 | 16,000 |
Why context window size matters
A bigger context window unlocks specific use cases that smaller windows simply can't handle:
- Long document analysis. Drop an entire 300-page contract, research paper, or technical manual into Claude or Gemini and ask cross-referencing questions across the whole document.
- Codebase reasoning. Paste a full repository (or a large module) and ask the AI to refactor across files, find bugs, or explain unfamiliar architecture. Claude Opus 4.7 and Gemini 3 Pro are the standout choices here in 2026.
- Multi-hour transcripts. An 8-hour meeting transcript runs about 64,000 words ≈ 85,000 tokens. Only models with 128K+ windows can take the full transcript without chunking.
- Persistent project memory. Long conversations that accumulate dozens of design decisions, code snippets, and revisions stay coherent only inside a window big enough to hold the whole exchange.
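To sanity-check whether a workload fits a given model, the window sizes from the table above can be dropped into a quick script. The `WINDOWS` dictionary and `fits` helper are illustrative names, not a real API.

```python
# Context window sizes (tokens) from the table above.
WINDOWS = {
    "GPT-5.2": 256_000,
    "Claude Opus 4.7": 1_000_000,
    "Gemini 3 Pro": 1_000_000,
    "Gemini 3 Ultra": 2_000_000,
}

def fits(model: str, words: int, tokens_per_word: float = 1.3) -> bool:
    """True if `words` of English text fit inside the model's window."""
    return words * tokens_per_word <= WINDOWS[model]

# An 8-hour transcript is roughly 64,000 words (~85,000 tokens):
print(fits("GPT-5.2", 64_000))    # True: fits comfortably
print(fits("GPT-5.2", 300_000))   # False: a book series needs a 1M window
```

Remember that attached files, system instructions, and the model's own replies all share the same budget, so leave headroom rather than filling the window exactly.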
The "lost in the middle" problem
Bigger isn't automatically better. Models with very long context windows have a well-documented weakness: they pay more attention to the beginning and end of a long input than to the middle. This is called the lost-in-the-middle problem. A model can technically read 1,000,000 tokens, but if a critical fact is buried 600,000 tokens deep, it may be missed.
Two 2026 developments mitigate this:
- Improved long-context training. Claude Opus 4.7 and Gemini 3 Pro were specifically trained on retrieval tasks across their full windows. Both score 95%+ on "needle-in-a-haystack" tests up to their maximum lengths.
- Hybrid retrieval-augmented generation (RAG). Many AI products now combine a long context window with retrieval, surfacing only the relevant chunks of a very long document into a smaller working context. This is what most enterprise AI tools do under the hood.
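The retrieval step can be sketched in a few lines. This is a simplified illustration: production systems score chunks with embedding similarity, while this sketch uses plain keyword overlap to stay dependency-free, and both function names are invented for the example.

```python
def chunk(text: str, size: int = 200) -> list[str]:
    """Split a document into chunks of ~size words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(question: str, document: str, k: int = 3) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunk(document),
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Only the k best-matching chunks are placed in the model's
# context, instead of the entire document.
```

The payoff is that the model reasons over a few thousand relevant tokens instead of hundreds of thousands, which sidesteps the lost-in-the-middle problem entirely.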
Free vs. paid context: a real example
Most free tiers in 2026 dramatically cap context. A model that handles 1,000,000 tokens on paid plans typically gets 32,000–100,000 on free. A practical comparison:
- Claude Free = 100,000 token context. Roughly 75,000 words. Fits a short novel.
- Claude Pro ($20/mo) = 1,000,000 tokens. Roughly 750,000 words. Fits the entire Lord of the Rings trilogy plus the Hobbit.
- ChatGPT Free = 32,000 tokens. About 24,000 words. A long blog post.
- ChatGPT Plus ($20/mo) = 256,000 tokens. About 190,000 words. A full-length textbook.
- Gemini Free = 1,000,000 tokens (the most generous free tier in 2026). About 750,000 words.
If you regularly work with long documents, the free tier of Gemini 3 Pro is uniquely generous — it's the only major model offering a 1M-token window without payment. Many users keep a Google account specifically for long-context tasks even when they prefer another AI for chat.
How to make the most of any context window
- Put the most important content at the start or end. Mitigates the lost-in-the-middle problem.
- Be explicit about what to keep. In long conversations, restate critical facts every 20–30 messages so the model has a fresh anchor in case earlier content gets summarized.
- Use system prompts for persistent constraints. Models prioritize system messages, so they stay in attention even as the conversation grows.
- Split work into focused sessions. Two 100K-token sessions are usually higher quality than one 200K-token session, because attention is more concentrated.
- Switch models when you hit a limit. A multi-model app like Perspective AI lets you pivot mid-conversation from GPT-5.2 to Claude Opus 4.7 (or Gemini 3 Pro) when you need more room — without losing the thread or re-uploading files.
Bottom line
Context windows in 2026 span two orders of magnitude. For everyday chat, coding snippets, and email drafting, a 128K-token model is plenty. For long document review, codebase work, or multi-hour transcripts, jump to Claude Opus 4.7 or Gemini 3 Pro at 1M tokens. For the extreme tail of long-form research and book-length analysis, only Gemini 3 Ultra's 2M-token window is large enough — and even then, structure your prompts so the most important content isn't buried in the middle.
FAQ
What is an AI context window in simple terms?
An AI context window is the maximum amount of conversation history (your prompts plus the AI's replies plus any attached files) that a model can hold in working memory at once. It's measured in tokens, where one token is roughly three-quarters of an English word. When you exceed the window, the oldest content is dropped, summarized, or replaced — so the AI starts losing track of earlier details.
Which AI has the largest context window in 2026?
Gemini 3 Ultra has the largest context window of any consumer AI in 2026 at 2,000,000 tokens — roughly 1.5 million English words, or about 6,000 pages of text. Gemini 3 Pro offers 1,000,000 tokens on the free tier and the Google AI Pro plan. Claude Sonnet 4.6 and Claude Opus 4.7 both ship with 1,000,000-token windows. GPT-5.2 (ChatGPT) is 256,000 tokens, Grok 4 is 256,000, and most open-source models like DeepSeek V3.2 are 128,000 to 256,000.
How many tokens are in one word?
One English word is roughly 1.3 tokens on average — or said the other way, one token is about three-quarters of a word. A 1,000-word essay is approximately 1,300 tokens. A 100,000-token context window therefore holds roughly 75,000 words, or about 250 standard pages. Non-Latin scripts (Chinese, Japanese, Arabic) consume more tokens per character, so context budgets shrink in those languages.
What happens when you exceed an AI's context window?
Three common behaviors. (1) Hard truncation — the oldest messages are silently dropped and the model loses memory of them. ChatGPT, Claude, and most APIs do this. (2) Sliding window — the model keeps only the most recent N tokens. (3) Auto-summarization — some products (like Claude Projects or GPT custom instructions) compress older context into a summary before evicting it. In all three cases, the model becomes less reliable about facts you mentioned hours or many messages ago.
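Behavior (2), the sliding window, can be sketched in a few lines. This is an illustrative sketch, not any vendor's actual implementation: the system prompt is pinned, and the oldest ordinary messages are evicted first once the token budget runs out.

```python
def sliding_window(messages, budget, count_tokens):
    """Keep the system prompt plus the newest messages that fit.

    messages: list of (role, text) tuples, oldest first.
    budget: total token allowance for the kept messages.
    count_tokens: function estimating tokens for a string.
    """
    system, rest = [], messages
    if messages and messages[0][0] == "system":
        # Pin the system message; it always stays in context.
        system, rest = [messages[0]], messages[1:]
        budget -= count_tokens(system[0][1])
    kept = []
    for role, text in reversed(rest):   # walk newest-first
        cost = count_tokens(text)
        if cost > budget:
            break                       # everything older is evicted
        kept.append((role, text))
        budget -= cost
    return system + kept[::-1]          # restore oldest-first order
```

Because eviction is silent in most products, restating key facts periodically (as suggested earlier) is the practical defense.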
Do I need a 1M-token context window for everyday use?
For most chat use cases — emails, code snippets, short documents, brainstorming — no. A 128K-token window (~95,000 words) is enough. You only need 1M+ tokens when you upload entire books, long technical manuals, multi-hour transcripts, or full codebases in a single conversation. Cost and latency also go up sharply with longer contexts, so most pros use the largest window only when the task requires it.
Why do AI companies advertise huge context windows if smaller is fine?
Two reasons. First, marketing differentiation — context length is one of the few specs that's easy for customers to compare. Second, real use cases exist for legal review, codebase migration, long-form research, and meeting analysis. The catch is that AI quality often degrades inside very long contexts: the model can technically read 1M tokens but pays less attention to the middle of the document (this is called the 'lost-in-the-middle' problem). Some 2026 models, like Gemini 3 and Claude Opus 4.7, have specifically been trained to mitigate this.
Are context windows the same for input and output?
Usually no. The advertised number is total context — input plus output combined. A 256,000-token window typically caps the response at 16,000-64,000 tokens, leaving most of the budget for input. Claude Opus 4.7's 1M window allows up to 64K-token outputs, GPT-5.2's 256K window allows 16K outputs. If you need a long generated response (a full report, a long story), check the max output limit, not just the total window.
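The budgeting is simple subtraction, using the numbers above (the helper name is illustrative):

```python
def max_input_tokens(total_window: int, reserved_output: int) -> int:
    """Input budget left after reserving room for the response."""
    return total_window - reserved_output

# GPT-5.2: 256K total window, 16K max output
print(max_input_tokens(256_000, 16_000))    # 240000 tokens of input
# Claude Opus 4.7: 1M total window, 64K max output
print(max_input_tokens(1_000_000, 64_000))  # 936000 tokens of input
```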
Does Perspective AI let me use the biggest context window in one place?
Yes. Perspective AI gives you access to every model's full context window through a single subscription. Use Gemini 3 Pro's 1M tokens for long document analysis, switch to Claude Opus 4.7's 1M-token reasoning mid-conversation for coding work, and drop to GPT-5.2's 256K for everyday chat — all without re-uploading or re-pasting context. Starts at $14.99/mo.
Need every model's biggest context, in one app?
Perspective AI gives you Claude's 1M-token window, Gemini's 2M-token window, GPT-5.2's 256K, and 50+ more models from $14.99/mo. Switch mid-conversation when you hit a limit. One subscription, every context length.
Try Perspective AI Free →