Quick Answer
A context window is the total token budget an LLM can process in a single request — prompt + conversation history + generated answer combined.
- GPT-4 Turbo: 128K tokens (~300 pages)
- Claude Sonnet 4.5: 200K tokens (~500 pages)
- Gemini 1.5 Pro: 1M-2M tokens (~1500+ pages)
What Does Context Window Mean?
Think of it as the model's working memory. Anything outside the window is invisible — the model literally cannot see earlier chat messages once they fall off the back.
If your prompt is 100K tokens and the window is 128K, you only have 28K left for the answer. Exceed the limit and the API returns an error or silently truncates input (OpenAI API reference, 2024).
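You can check this budget client-side before sending a request. Here is a minimal sketch using OpenAI's tiktoken tokenizer; the window size and output reservation below are illustrative numbers, not any provider's defaults:

```python
# Hedged sketch: check how much of the window remains for the answer.
# CONTEXT_WINDOW and MAX_OUTPUT are illustrative, not provider defaults.
import tiktoken

CONTEXT_WINDOW = 128_000   # total budget: input + output combined
MAX_OUTPUT = 4_096         # tokens we want to reserve for the answer

enc = tiktoken.get_encoding("cl100k_base")

def remaining_output_budget(prompt: str) -> int:
    """Tokens left for generation after the prompt is counted."""
    return CONTEXT_WINDOW - len(enc.encode(prompt))

my_prompt = "Summarize the attached contract..."  # your real prompt here
# A ~100K-token prompt would leave ~28K tokens of headroom, as above.
if remaining_output_budget(my_prompt) < MAX_OUTPUT:
    raise ValueError("Prompt too long: no room left for the answer")
```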
How It Works
Transformers use self-attention, where every token attends to every other token. Memory scales roughly quadratically with window size (O(n²)) without optimizations. Modern models use techniques like sliding window attention, FlashAttention, and RoPE (rotary position embeddings) to push windows past 1M tokens.
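To illustrate one of these optimizations, here is a toy NumPy sketch of a causal sliding window attention mask (not any production model's implementation): with window w, each token attends to at most w predecessors, so active entries grow as O(n·w) instead of O(n²).

```python
# Toy sketch of a causal sliding-window attention mask. With window w,
# each row (query) has at most w True entries instead of up to n.
import numpy as np

def sliding_window_mask(n: int, w: int) -> np.ndarray:
    """True where query token i is allowed to attend to key token j."""
    i = np.arange(n)[:, None]   # query positions
    j = np.arange(n)[None, :]   # key positions
    causal = j <= i             # never attend to future tokens
    local = (i - j) < w         # only the last w tokens stay visible
    return causal & local

print(sliding_window_mask(n=8, w=3).astype(int))
```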
Examples
- Short chat: 500 tokens of history — trivially fits 4K window
- PDF analysis: 80-page contract = ~40K tokens, needs a 64K+ window (rough conversion sketched after this list)
- Codebase Q&A: Entire repo of 500K tokens — needs 1M window or RAG
- Book summarization: 300-page novel = ~120K tokens
- Long agent loop: 50 tool calls + outputs can balloon past 100K quickly
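The page-to-token figures above come from rules of thumb (roughly four characters per token and 400-500 tokens per dense page of English). A hedged estimator using those assumed ratios:

```python
# Rough token estimators behind the figures above. Both ratios are
# rules of thumb for English prose; real counts depend on the tokenizer.
TOKENS_PER_PAGE = 500     # assumption: dense, single-spaced page
CHARS_PER_TOKEN = 4       # assumption: typical for English text

def tokens_from_pages(pages: int) -> int:
    return pages * TOKENS_PER_PAGE

def tokens_from_text(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

print(tokens_from_pages(80))   # ~40_000, the contract example above
```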
Context Window vs Memory
"Memory" in AI products often means long-term memory — storing facts across sessions in a database or vector store. Context window is short-term: it resets when the conversation ends.
Long context is not a replacement for RAG. Putting 500K tokens into every request is slow and expensive. RAG retrieves only the relevant 2K-5K tokens.
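A minimal retrieval sketch makes the contrast concrete. The embed function below is a placeholder for a real embedding model, and the chunking and top-k values are illustrative:

```python
# Minimal retrieval sketch. `embed` is a placeholder for a real embedding
# model; random unit vectors stand in so the example runs on its own.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: replace with a real model or API call."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [float(q @ embed(c)) for c in chunks]
    top = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:k]
    return [chunks[i] for i in top]

document_chunks = ["chunk about payment terms", "chunk about termination",
                   "chunk about liability", "chunk about governing law"]
# Only the retrieved few thousand tokens go into the prompt, not the corpus.
context = "\n\n".join(retrieve("termination clause", document_chunks, k=2))
```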
When to Use Large Context Windows
- Analyzing a single long document in full
- Complex agent tasks with deep tool-use history
- Preserving structure that chunking would break (legal contracts, code files)
- Multi-turn conversations you do not want summarized
FAQs
Does a bigger window mean better recall? Not always. Research on "lost in the middle" (Stanford, 2023) shows that recall drops for content buried in the middle of very long prompts.
Are input and output capped together? Yes: the sum cannot exceed the window.
Does the window reset between messages? API-wise, yes — you resend history each request. The model itself is stateless.
Are bigger windows more expensive? Usually yes. Some providers cache input tokens to discount repeated context.
Can I increase a model's context window? No. The window is fixed when the model is trained; to get a bigger one, pick a different model variant.
What happens if I exceed the window? The API returns an error (typically HTTP 400) or silently truncates the oldest tokens, depending on the provider. A client-side trimming sketch follows these FAQs.
Does the system prompt count? Yes — every token counts.
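Since the API is stateless and overflow drops the oldest tokens, clients usually trim conversation history themselves before each request. A hedged sketch, assuming the first message is the system prompt and using a rough character-based token count:

```python
# Hedged sketch of client-side history trimming: keep the system prompt,
# drop the oldest turns until the request fits the budget. Assumes
# messages[0] is the system prompt; count_tokens is a rough stand-in
# for a real tokenizer (4 characters per token).
def count_tokens(message: dict) -> int:
    return len(message["content"]) // 4

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    system, turns = messages[0], messages[1:]
    while turns and count_tokens(system) + sum(map(count_tokens, turns)) > budget:
        turns.pop(0)   # oldest user/assistant turn falls off the back
    return [system] + turns

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "First question..."},
    {"role": "assistant", "content": "First answer..."},
    {"role": "user", "content": "Latest question..."},
]
request_messages = trim_history(history, budget=120_000)
```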
Conclusion
Pick a window that fits your longest expected input with headroom. More is not always better: cost and "lost in the middle" effects matter. Compare models on Misar Blog.