
The Art of Context: Managing AI Memory

Dec 29, 2025 • 25 min read

An LLM's "Context Window" is its short-term working memory. While GPT-4o has a 128k context window (approx. 300 pages), treating it as an infinite dump is a recipe for high costs, high latency, and poor reasoning.

1. The Core Mental Model: RAM vs Hard Drive

Analogy:
  • Context Window = RAM. It is fast, expensive, and volatile. It is where "thinking" happens.
  • Vector Database (RAG) = Hard Drive. It is slow, cheap, and permanent. It is where "archives" live.
Your job as an AI Engineer is to act as the Operating System: effectively paging data from the Hard Drive to RAM only when needed.
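Here is a minimal sketch of that paging pattern. The ARCHIVE list and the keyword-overlap scoring are toy stand-ins for a real vector database and embedding search; the point is that only the paged-in entries ever occupy the context window.

ARCHIVE = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping policy: orders over $50 ship free.",
    "Warranty policy: electronics carry a 1-year warranty.",
]

def page_in(query, archive, top_k=1):
    # "Read from the Hard Drive": score each archived document by naive
    # keyword overlap with the query and keep only the best matches.
    query_words = set(query.lower().split())
    scored = sorted(
        archive,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query):
    # "Load into RAM": only the paged-in documents enter the context window.
    context = "\n".join(page_in(query, ARCHIVE))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy?"))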

2. The "Lost in the Middle" Phenomenon

Research from Stanford (Liu et al., 2023) documents this clearly: LLMs are good at retrieving information from the beginning of the context (the system prompt) and the end (the latest user question), but recall drops sharply for information buried in the middle.

[Recall Accuracy]
Start: 95% 🟩
Middle: 55% 🟥 (Danger Zone)
End: 95% 🟩

The Takeaway: Do not blindly paste a 50-page contract and ask a question about page 25. The relevant clause sits in the mid-context danger zone, so the model is likely to miss it and hallucinate an answer instead.
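One common mitigation is to place the strongest retrieved chunks at the edges of the context and push weaker matches toward the middle, where recall is worst. A minimal sketch, assuming the chunks are already sorted best-first:

def reorder_for_recall(chunks):
    # Alternate placement: best chunk first, second-best last, and so on
    # inward, so the weakest matches end up in the middle of the context.
    front, back = [], []
    for i, chunk in enumerate(chunks):
        if i % 2 == 0:
            front.append(chunk)
        else:
            back.insert(0, chunk)
    return front + back

ranked = ["best", "second", "third", "fourth", "fifth"]
print(reorder_for_recall(ranked))
# ['best', 'third', 'fifth', 'fourth', 'second']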

3. Deep Dive: The KV Cache

Why do we pay for "Input tokens"? Why isn't history free?

When you send a request, the LLM must compute the Key (K) and Value (V) matrices for every single token in the history before it can run attention over them. That recomputation is what you pay for on every request, even when the history has not changed.

Prompt Caching: Providers like Anthropic and OpenAI now offer prompt caching. If the prefix of your prompt (e.g., a massive System Instruction) is identical to a recent request, they reuse the cached KV matrices instead of recomputing them. On the cached portion this can cut costs by up to 90% and latency by up to 80%.
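A sketch of opting in explicitly with Anthropic's documented cache_control marker (OpenAI's prompt caching is applied automatically to repeated prompt prefixes, with no extra parameters). The model alias and system text below are placeholders; check the current provider docs before relying on this shape.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

LONG_SYSTEM_INSTRUCTIONS = "You are a contract-review assistant. ..."  # imagine ~10k tokens

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_INSTRUCTIONS,
            # Everything up to this marker is cached and reused on the next call.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarize clause 4.2."}],
)
print(response.content[0].text)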

4. Counting Tokens with Tiktoken

Before you manage context, you must measure it. In Python, use the official tiktoken library.

import tiktoken

def count_tokens(text, model="gpt-4o"):
    # Look up the tokenizer this model uses (o200k_base for gpt-4o).
    encoder = tiktoken.encoding_for_model(model)
    return len(encoder.encode(text))

prompt = "Hello, world!"
print(count_tokens(prompt)) # Output: 4

5. Advanced Strategy: The "Hybrid Context Budget"

Don't just fill the window linearly. Partition it into fixed slots, as in the plan below (a code sketch follows the list).

The 8k Budget Plan

  • System Instructions (Fixed): 1,000 tokens. "You are helpful..."
  • Long-Term Knowledge (RAG): 3,000 tokens. "Relevant laws from 2024..."
  • Conversation History (Sliding): 2,000 tokens. "User: Hi..."
  • Scratchpad (Thinking Space): 2,000 tokens. Reserved for the answer generation.
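A minimal sketch of enforcing this plan, reusing the tiktoken counting idea from section 4; the slot contents and budget names are placeholders.

import tiktoken

ENCODER = tiktoken.encoding_for_model("gpt-4o")

def trim_to_budget(text, budget):
    # Hard-truncate a slot to its token budget.
    tokens = ENCODER.encode(text)
    return ENCODER.decode(tokens[:budget])

# 1,000 + 3,000 + 2,000 = 6,000; the remaining 2,000 of the 8k plan
# stays reserved for the model's answer (the Scratchpad).
BUDGETS = {"system": 1000, "rag": 3000, "history": 2000}

def build_context(system, rag_chunks, history):
    return "\n\n".join([
        trim_to_budget(system, BUDGETS["system"]),
        trim_to_budget("\n".join(rag_chunks), BUDGETS["rag"]),
        trim_to_budget("\n".join(history), BUDGETS["history"]),
    ])

prompt = build_context(
    "You are a helpful legal assistant.",
    ["Relevant law from 2024: ..."],
    ["User: Hi", "Assistant: Hello! How can I help?"],
)

Note that for conversation history you usually want to drop the oldest messages rather than truncate the tail; the manager in the next section handles that.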

6. Code: Implementing a Smart Conversation Manager

Here is a Python class to auto-prune history.

class ContextManager:
    def __init__(self, max_tokens=4000):
        self.history = []
        self.max_tokens = max_tokens

    def add_message(self, role, content):
        self.history.append({"role": role, "content": content})
        self._prune()

    def _prune(self):
        # Drop the oldest non-system message until we fit the budget,
        # always preserving the System Prompt at index 0.
        while self._estimate_tokens() > self.max_tokens and len(self.history) > 1:
            self.history.pop(1)

    def _estimate_tokens(self):
        # Rough heuristic: ~4 characters per token in English. Dividing by 3
        # over-estimates slightly, so we prune a little early rather than too late.
        return sum(len(m["content"]) / 3 for m in self.history)
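Usage: add the system prompt first so it sits at index 0 and survives pruning.

manager = ContextManager(max_tokens=200)
manager.add_message("system", "You are a terse assistant.")
for i in range(50):
    manager.add_message("user", f"Message number {i} with some filler text.")
print(manager.history[0]["role"])  # 'system' -- always kept
print(len(manager.history))        # old user messages have been pruned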

7. Real World Use Case: The RPG NPC

Imagine a character in a video game (a Skyrim-style AI companion). How does it remember that you killed its brother 50 hours of playtime ago? Three mechanisms work together (sketched in code after the list):

  • Summarization Chain: Every 10 turns, an LLM summarizes the chat into "The player was rude to me."
  • Entity Extraction: Important facts ("Player Name: Dragonborn") are saved to a JSON profile.
  • Inject on Load: When you talk to them again, the summary and JSON profile are injected into the specific "Context Budget" slot.
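A minimal sketch of how the three mechanisms fit together. call_llm is a hypothetical stand-in for whatever chat-completion client the game uses.

import json

def call_llm(prompt):
    raise NotImplementedError("wire up your model provider here")

class NPCMemory:
    def __init__(self):
        self.turns = []      # raw recent dialogue
        self.summary = ""    # rolling summary (Summarization Chain)
        self.profile = {}    # extracted facts (Entity Extraction)

    def record_turn(self, speaker, text):
        self.turns.append(f"{speaker}: {text}")
        if len(self.turns) >= 10:
            # Summarization Chain: compress the last 10 turns into the summary.
            self.summary = call_llm(
                f"Previous summary: {self.summary}\n"
                "New dialogue:\n" + "\n".join(self.turns) +
                "\nUpdate the summary in 2 sentences."
            )
            self.turns.clear()

    def remember_fact(self, key, value):
        # Entity Extraction: persist hard facts outside the context window.
        self.profile[key] = value

    def build_context(self):
        # Inject on Load: summary + profile fill the NPC's context budget slot.
        return (
            f"What you remember: {self.summary}\n"
            f"Known facts: {json.dumps(self.profile)}\n"
            "Recent dialogue:\n" + "\n".join(self.turns)
        )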

8. FAQ

Why not just use 1 Million Token context?

Cost and Speed. Processing a full 1M-token prompt can take a minute or more and cost on the order of $5-10 per request. That makes it unusable for real-time chat.

Does the model learn from my context?

No. Once the request is done, the model resets. It does not "learn" permanently unless fine-tuned by the provider.

9. Conclusion

Context management is an optimization problem. Your goal is to maximize the signal-to-noise ratio within the token budget. A clean context equals a smarter agent.