Vectorless RAG

Retrieval without embedding similarity search: instead of chunking documents into a vector store, an LLM navigates document structure directly (tree indexes, tables of contents, section hierarchies), or retrieval is driven by BM25, SQL queries, or agentic traversal.
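To make the structure-navigation idea concrete, here is a minimal sketch of retrieval over a section tree. It assumes the document has been parsed into a hierarchy of titled sections; in a real system an LLM would judge which branch to descend, so the keyword-overlap scorer below is only a hypothetical stand-in for that call, and all names (`Section`, `navigate`, the sample handbook) are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    """One node in a document's section hierarchy (a tree index)."""
    title: str
    summary: str
    text: str = ""
    children: list["Section"] = field(default_factory=list)

def score(query: str, node: Section) -> int:
    """Stand-in for an LLM relevance judgment: count shared words
    between the query and the node's title plus summary."""
    q = set(query.lower().split())
    doc = set((node.title + " " + node.summary).lower().split())
    return len(q & doc)

def navigate(query: str, node: Section) -> Section:
    """Descend the tree greedily, one branch per level, until a leaf.
    No embeddings or vector store: only the document's own structure."""
    while node.children:
        node = max(node.children, key=lambda c: score(query, c))
    return node

# Hypothetical table of contents for an employee handbook.
toc = Section("Handbook", "employee handbook", children=[
    Section("Benefits", "health insurance and retirement plans",
            text="Employees may enroll in the 401(k) plan after 90 days."),
    Section("Travel", "expense reports and booking policy",
            text="Book flights through the corporate portal."),
])

leaf = navigate("how do I enroll in the retirement plan", toc)
print(leaf.title)  # the section chosen by traversal
print(leaf.text)   # the text handed to the LLM as retrieved context
```

The leaf's text is then placed in the LLM's context, exactly as a vector-retrieved chunk would be; only the selection mechanism differs.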

Prerequisites: complete the Foundations and Naive RAG sections first.

Content coming soon.