🦙

LlamaIndex

Connect your AI agents to private knowledge bases with RAG.

Every useful AI application eventually needs to answer questions about data the LLM wasn't trained on: your company's internal docs, a customer's PDF contracts, a database of product information. That's the problem RAG (Retrieval-Augmented Generation) solves, and LlamaIndex is one of the leading Python frameworks for building production RAG systems.

LlamaIndex handles the full RAG pipeline: ingesting documents in any format (PDF, Markdown, SQL, HTML), chunking them into manageable pieces, generating embeddings and storing them in a vector database, and retrieving the most relevant context at query time. The framework abstracts the complexity so you can focus on the retrieval strategy that matters for your use case.
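The chunking step above is worth seeing concretely. Here's a minimal plain-Python sketch of fixed-size chunking with overlap; LlamaIndex's own node parsers do this with token-aware splitting, and the `chunk_size`/`overlap` values below are arbitrary illustrative numbers, not library defaults.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character-based chunks.

    Overlap keeps context that straddles a chunk boundary retrievable
    from both neighboring chunks.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "x" * 500
chunks = chunk_text(doc)
print(len(chunks))  # 4 chunks: starts at 0, 150, 300, 450
```

Each chunk then gets embedded and stored; at query time only the top-scoring chunks are stuffed into the prompt, which is what keeps RAG cheap compared to sending whole documents to the model.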

In this track, I cover LlamaIndex from setup to a complete project — a "Chat with PDF" application that correctly answers nuanced questions about the documents you upload. You'll understand embedding strategies, top-k retrieval, reranking, and how to evaluate your RAG system's accuracy using Ragas.
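Top-k retrieval, the core move in every RAG system, is just nearest-neighbor search over embedding vectors. A toy sketch with made-up 3-dimensional vectors (real embeddings come from a model and have hundreds of dimensions, and a vector database does this search at scale):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], corpus: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k chunks most similar to the query vector."""
    scored = sorted(corpus.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [chunk_id for chunk_id, _ in scored[:k]]

# Pretend embeddings for three document chunks (illustrative values).
corpus = {
    "refunds":  [0.9, 0.1, 0.0],
    "shipping": [0.1, 0.9, 0.0],
    "warranty": [0.8, 0.2, 0.1],
}

print(top_k([1.0, 0.0, 0.0], corpus))  # → ['refunds', 'warranty']
```

Reranking adds a second, more expensive scoring pass over just these top-k candidates, which is why it can afford a stronger model than the initial vector search.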

📚 Learning Path

  1. RAG fundamentals: embeddings and retrieval
  2. LlamaIndex ingestion pipelines
  3. Vector database integration
  4. RAG evaluation and accuracy scoring
  5. Build: Chat with PDF end-to-end
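Step 4 above, evaluation, deserves a concrete picture. The simplest retrieval metric is hit rate: the fraction of eval questions whose gold chunk appears in the retrieved top-k. Ragas computes richer LLM-judged metrics (faithfulness, answer relevancy), but the principle is the same: score the pipeline on a test set instead of eyeballing a few answers. The eval data below is invented for illustration.

```python
def hit_rate(results: list[tuple[list[str], str]]) -> float:
    """Fraction of questions where the expected chunk was retrieved.

    results: one (retrieved chunk ids, expected chunk id) pair per question.
    """
    hits = sum(1 for retrieved, gold in results if gold in retrieved)
    return hits / len(results)

evals = [
    (["refunds", "warranty"], "refunds"),   # hit
    (["shipping", "refunds"], "warranty"),  # miss
    (["warranty", "refunds"], "warranty"),  # hit
]
print(hit_rate(evals))  # 2 of 3 questions hit → ~0.67
```

A metric like this turns tuning decisions (chunk size, top-k, reranker on or off) into measurable experiments rather than guesswork.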

2 Guides in This Track

RAG with LlamaIndex

Ingesting and querying your own data.

Read Guide →

Project: Chat with PDF

End-to-end RAG pipeline.

Read Guide →