Building an AI OS with Agents + Tools + Memory

If you were to build an AI Operating System today — using the best available open-source components — what would its architecture look like? This guide takes a pragmatic engineering approach: designing a system that manages agents, tools, and memory as first-class resources, exposing them through a unified API surface, and running reliably as a background service. Think of it as the minimum viable AI OS — capturing the core OS responsibilities for AI workloads without over-engineering.


Component 1: Model Pool (The CPU Abstraction)

Just as an OS abstracts over CPU cores, an AI OS abstracts over model instances. Applications don't call a specific GPU or CPU core — they request compute. Similarly, applications shouldn't call a specific LLM endpoint — they should request an inference context with specified capability requirements.

# model_pool.py -- The AI OS Model Manager

from dataclasses import dataclass
from enum import Enum
import asyncio

class ModelTier(Enum):
    FAST = "fast"        # Small, low latency (Llama 3.1 8B)
    BALANCED = "balanced"  # Mid-size trade-off (Qwen 2.5 32B)
    CAPABLE = "capable"   # Large, high quality (Llama 3.1 70B / GPT-4o)

@dataclass
class ModelRequest:
    tier: ModelTier
    task_type: str       # "reasoning", "coding", "summarization", "embedding"
    max_tokens: int
    latency_budget_ms: int  # How patient is the caller?

class ModelPool:
    """
    Routes inference requests to the appropriate model instance.
    Manages load balancing, fallback, and cost optimization.
    """
    def __init__(self):
        # LocalOllamaBackend, CloudBackend, and MultiBackend are thin client
        # wrappers (implementations not shown) over Ollama and a cloud API.
        self.backends = {
            ModelTier.FAST: LocalOllamaBackend("llama3.1:8b"),
            ModelTier.BALANCED: LocalOllamaBackend("qwen2.5:32b"),
            ModelTier.CAPABLE: MultiBackend([
                LocalOllamaBackend("llama3.1:70b"),
                CloudBackend("gpt-4o"),  # fallback if local is busy
            ]),
        }

    async def infer(self, request: ModelRequest, messages: list) -> str:
        backend = self.backends[request.tier]

        # Cost-aware routing: try local first, fall back to cloud
        if isinstance(backend, MultiBackend):
            for b in backend.backends:
                if await b.is_available():
                    return await b.complete(messages, max_tokens=request.max_tokens)
            raise RuntimeError(f"No backend available for tier {request.tier.value}")

        return await backend.complete(messages, max_tokens=request.max_tokens)

    async def embed(self, text: str) -> list[float]:
        """Always use local embedding model -- never send to cloud."""
        return await self.backends[ModelTier.FAST].embed(text)
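A caller shouldn't have to hard-code a tier. One option is a small heuristic that maps the request's latency budget and task type to a ModelTier; this is a sketch, and the thresholds are illustrative assumptions rather than part of the pool above:

```python
# tier_heuristic.py -- illustrative tier selection (thresholds are assumptions)
from enum import Enum

class ModelTier(Enum):
    FAST = "fast"
    BALANCED = "balanced"
    CAPABLE = "capable"

def choose_tier(latency_budget_ms: int, task_type: str) -> ModelTier:
    """Tight budgets force the small model; reasoning and coding
    get the large model when the caller can afford to wait."""
    if latency_budget_ms < 1000:
        return ModelTier.FAST
    if task_type in ("reasoning", "coding") and latency_budget_ms >= 10_000:
        return ModelTier.CAPABLE
    return ModelTier.BALANCED

print(choose_tier(500, "summarization").value)   # fast
print(choose_tier(15_000, "reasoning").value)    # capable
```

The kernel shown later could call a helper like this instead of pinning every task to ModelTier.BALANCED.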

Component 2: Hierarchical Memory (The Storage Abstraction)

# memory.py -- Three-tier memory system

import uuid
from datetime import datetime, timezone

import chromadb

class HierarchicalMemory:
    """
    Three memory tiers (analogous to CPU cache / RAM / disk):
    - Short-term: Active context window (in-process, tokens)
    - Working:    Episodic task memory (in-memory, current session)
    - Long-term:  Persistent vector store (ChromaDB, survives restarts)
    """
    def __init__(self, persist_dir: str = "./ai_os_memory"):
        self.client = chromadb.PersistentClient(path=persist_dir)
        self.long_term = self.client.get_or_create_collection("ltm")
        self.working = {}   # session_id -> list[dict]

    def remember(self, session_id: str, content: str, metadata: dict = None):
        """Store a memory in both working and long-term stores."""
        ts = datetime.now(timezone.utc).isoformat()  # utcnow() is deprecated

        # Working memory: fast but session-scoped
        self.working.setdefault(session_id, []).append({
            "content": content,
            "timestamp": ts,
            **(metadata or {}),
        })

        # Long-term memory: persisted, searchable. Use a random id --
        # a length-based id would collide once a restart clears working memory.
        self.long_term.add(
            documents=[content],
            metadatas=[{"session": session_id, "ts": ts, **(metadata or {})}],
            ids=[f"{session_id}_{uuid.uuid4().hex[:8]}"],
        )

    def recall(self, query: str, session_id: str = None, n: int = 5) -> list[str]:
        """Semantic search over long-term memory."""
        where = {"session": session_id} if session_id else None
        results = self.long_term.query(
            query_texts=[query],
            n_results=n,
            where=where,
        )
        return results["documents"][0]

    def get_working_context(self, session_id: str, last_n: int = 10) -> str:
        """Get recent working memory formatted for LLM context injection."""
        memories = self.working.get(session_id, [])[-last_n:]
        return "\n".join(f"[{m['timestamp']}] {m['content']}" for m in memories)

Component 3: Tool Registry (The Driver Model)

# tool_registry.py -- Pluggable tool discovery and management

import inspect
from typing import Any, Callable

class ToolRegistry:
    """
    Manages available tools, their schemas, permissions, and rate limits.
    Analogous to the OS driver model: tools register themselves,
    the OS decides which agents can use which tools.
    """
    def __init__(self):
        self._tools: dict[str, dict] = {}

    def register(self, fn: Callable = None, *, permissions: list[str] = None, rate_limit: int = None):
        """Register a function as a tool. Works bare (`registry.register(fn)`)
        or as a parameterized decorator (`@registry.register(permissions=[...])`)."""
        def _register(f: Callable):
            self._tools[f.__name__] = {
                "fn": f,
                "schema": self._extract_schema(f),
                "permissions": permissions or ["*"],  # who can use this
                "rate_limit": rate_limit,             # calls per minute
                "call_count": 0,
            }
            return f  # so decorated functions remain callable directly
        return _register(fn) if fn is not None else _register

    def get_schemas_for_agent(self, agent_permissions: list[str]) -> list[dict]:
        """Return tool schemas visible to an agent based on its permissions."""
        schemas = []
        for name, tool in self._tools.items():
            allowed = tool["permissions"] == ["*"] or any(
                p in agent_permissions for p in tool["permissions"]
            )
            if allowed:
                schemas.append(tool["schema"])
        return schemas

    async def execute(self, tool_name: str, args: dict, caller_permissions: list[str]) -> Any:
        tool = self._tools.get(tool_name)
        if not tool:
            raise ValueError(f"Unknown tool: {tool_name}")

        # Permission check: "*" means public, otherwise require an overlap
        if tool["permissions"] != ["*"] and not any(
            p in caller_permissions for p in tool["permissions"]
        ):
            raise PermissionError(f"Agent lacks permission to use {tool_name}")

        tool["call_count"] += 1
        fn = tool["fn"]
        return await fn(**args) if inspect.iscoroutinefunction(fn) else fn(**args)

    def _extract_schema(self, fn: Callable) -> dict:
        """Auto-generate OpenAI-compatible tool schema from function signature."""
        # In production: use pydantic or inspect.signature for full schema
        return {
            "type": "function",
            "function": {
                "name": fn.__name__,
                "description": fn.__doc__ or "",
                "parameters": {"type": "object", "properties": {}}
            }
        }

# Usage: register tools at startup
registry = ToolRegistry()

@registry.register(permissions=["*"])
def web_search(query: str) -> str:
    """Search the web for current information."""
    pass

@registry.register(permissions=["admin", "finance"])
def read_financial_data(period: str) -> dict:
    """Read sensitive financial data. Restricted to finance roles."""
    pass
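The `_extract_schema` stub above emits empty `parameters`. As its comment suggests, `inspect.signature` plus type hints can fill them in; here is a sketch with a deliberately small Python-to-JSON-Schema type map (a production version would handle Optional, nested models, and enums):

```python
# schema_sketch.py -- derive an OpenAI-style tool schema from a signature
import inspect
from typing import Callable, get_type_hints

TYPE_MAP = {str: "string", int: "integer", float: "number",
            bool: "boolean", list: "array", dict: "object"}

def extract_schema(fn: Callable) -> dict:
    """Build a tool schema from `fn`'s signature and type hints."""
    hints = get_type_hints(fn)
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        py_type = hints.get(name, str)  # default to string if unannotated
        props[name] = {"type": TYPE_MAP.get(py_type, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> the model must supply it
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": fn.__doc__ or "",
            "parameters": {"type": "object", "properties": props, "required": required},
        },
    }

def web_search(query: str, max_results: int = 5) -> str:
    """Search the web for current information."""

print(extract_schema(web_search)["function"]["parameters"])
```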

Assembling the AI OS Kernel

# kernel.py -- The AI OS main loop

from fastapi import FastAPI
import uuid

app = FastAPI(title="AI OS Kernel")

# Core OS subsystems
model_pool = ModelPool()
memory = HierarchicalMemory()
tool_registry = ToolRegistry()

@app.post("/session/create")
async def create_session(user_id: str, permissions: list[str]):
    # A fuller kernel would persist permissions with the session; here the
    # caller re-supplies them on each /run request for simplicity.
    session_id = str(uuid.uuid4())
    memory.remember(session_id, f"Session started for {user_id}", {"type": "system"})
    return {"session_id": session_id}

@app.post("/session/{session_id}/run")
async def run_agent_task(session_id: str, task: str, permissions: list[str]):
    """
    Main AI OS entry point: run a task within a session.
    The OS manages model selection, memory injection, and tool access.
    """
    # 1. Retrieve relevant long-term memory
    past_context = memory.recall(task, session_id=session_id)
    working_ctx = memory.get_working_context(session_id)

    # 2. Get the tool schemas this agent may use (a tool-calling backend
    #    would receive these as its `tools` parameter; wiring elided here)
    available_tools = tool_registry.get_schemas_for_agent(permissions)

    # 3. Run inference with full context
    messages = [
        {"role": "system", "content": f"You are an AI assistant. Relevant past context:\n{past_context}\n\nRecent session:\n{working_ctx}"},
        {"role": "user", "content": task},
    ]

    response = await model_pool.infer(
        ModelRequest(tier=ModelTier.BALANCED, task_type="reasoning", max_tokens=2000, latency_budget_ms=10000),
        messages=messages,
    )

    # 4. Store result in memory
    memory.remember(session_id, f"Task: {task} | Result: {response[:200]}", {"type": "task_result"})

    return {"result": response, "session_id": session_id}

Conclusion

The AI OS kernel described here — ModelPool, HierarchicalMemory, ToolRegistry, and a session-aware API — can be built today with open-source components in a few hundred lines of Python. It's not a complete OS, but it's a genuine first approximation of the abstraction layer that AI-first applications need. The gaps — true multi-agent scheduling, security isolation between agents, cross-session memory management — point toward where the research and engineering community is heading. Start with this minimal kernel, instrument it heavily, and let real usage patterns drive what to add next.

Written by

Vivek

AI Engineer

Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning — no fluff, just working code and real-world context.

GPT-4o · LangChain · Next.js · Vector DBs · RAG · Vercel AI SDK