Architecture

Vectorless transforms documents into hierarchical semantic trees and uses LLM-powered reasoning to navigate them. This page describes the end-to-end pipeline.

High-Level Flow

┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Document   │────▶│    Index     │────▶│   Storage    │
│   (PDF/MD)   │     │   Pipeline   │     │    (Disk)    │
└──────────────┘     └──────────────┘     └──────┬───────┘
                                                 │
                     ┌──────────────┐     ┌──────▼───────┐
                     │    Result    │◀────│  Retrieval   │
                     │  (Evidence)  │     │   Pipeline   │
                     └──────────────┘     └──────────────┘

Index Pipeline

The indexing pipeline processes documents through ordered stages:

Stage             Priority  Description
Parse             10        Parse document into raw nodes (Markdown headings, PDF pages)
Build             20        Construct arena-based tree with thinning and content merge
Validate          22        Tree integrity checks
Split             25        Split oversized leaf nodes (>4000 tokens)
Enhance           30        Generate LLM summaries (Full, Selective, or Lazy strategy)
Enrich            40        Calculate metadata and page ranges; resolve cross-references
Reasoning Index   45        Build keyword-to-node mappings, synonym expansion, summary shortcuts
Navigation Index  50        Build NavEntry + ChildRoute data for agent navigation
Optimize          60        Final tree optimization

Each stage is independently configurable. The pipeline supports incremental re-indexing via content fingerprinting.
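The priority-ordered stages and the fingerprint gate can be sketched as follows. This is an illustration, not the actual Vectorless API: the stage registry, function names, and the choice of SHA-256 are all assumptions.

```python
import hashlib

# Hypothetical stage registry mirroring the priority table above.
STAGES = [(10, "parse"), (20, "build"), (22, "validate"), (25, "split"),
          (30, "enhance"), (40, "enrich"), (45, "reasoning_index"),
          (50, "navigation_index"), (60, "optimize")]

def fingerprint(text: str) -> str:
    """Stable content fingerprint; an unchanged hash lets re-indexing be skipped."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def run_pipeline(doc_id: str, text: str, store: dict) -> str:
    """Run stages in priority order, skipping documents whose content is unchanged."""
    fp = fingerprint(text)
    if store.get(doc_id) == fp:
        return "skipped"          # incremental re-index: nothing changed
    for _priority, stage in sorted(STAGES):
        pass                      # each stage would transform the tree here
    store[doc_id] = fp
    return "indexed"
```

Re-running the pipeline on unchanged content becomes a cheap hash comparison rather than a full re-index.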

Tree Structure

Each node in the tree contains:

TreeNode
├── title — Section heading
├── content — Raw text (leaf nodes)
├── summary — LLM-generated summary
├── structure — Hierarchical index (e.g., "1.2.3")
├── depth — Tree depth (root = 0)
├── references[] — Resolved cross-references ("see Section 2.1" → NodeId)
├── token_count — Estimated token count
└── page_range — Start/end page (PDF)
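The field list above maps naturally onto a small record type. The actual implementation is arena-based; this Python sketch only mirrors the diagram, and every type choice here is an assumption.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TreeNode:
    """Sketch of the per-node fields from the diagram above (names illustrative)."""
    title: str                          # section heading
    structure: str                      # hierarchical index, e.g. "1.2.3"
    depth: int                          # tree depth, root = 0
    content: str = ""                   # raw text (leaf nodes only)
    summary: str = ""                   # LLM-generated summary
    references: list = field(default_factory=list)   # resolved NodeIds
    token_count: int = 0                # estimated token count
    page_range: Optional[tuple] = None  # (start_page, end_page) for PDFs
```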

Retrieval Pipeline

The retrieval pipeline is a supervisor loop driven entirely by LLM reasoning. Every decision — which documents to query, how to navigate, whether evidence is sufficient — is made by the model, not by heuristics.

Principles

  • Reason, don't vector. — Every retrieval decision is an LLM decision.
  • Model fails, we fail. — No silent degradation. No heuristic fallbacks.
  • No thought, no answer. — Only LLM-reasoned output counts as an answer.

Flow

Engine.ask()
  → Dispatcher
    → Query Understanding (LLM) → QueryPlan (intent, concepts, strategy)
    → Orchestrator (always — single or multi-doc)
        → Analyze (LLM reviews DocCards, selects documents + tasks)
        → Supervisor Loop:
            Dispatch Workers → Evaluate (LLM sufficiency check)
            → if insufficient → Replan (LLM) → loop
        → Rerank (dedup → BM25 score → evidence formatting)

Query Understanding

Every query first passes through LLM-based understanding:

Field          Description
Intent         Factual, Analytical, Navigational, or Summary
Strategy Hint  focused, exploratory, comparative, or summary
Key Concepts   LLM-extracted concepts (distinct from keywords)
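A minimal container for this output might look like the following. The class and field names are assumptions, not the actual Vectorless types; the allowed values come from the table above.

```python
from dataclasses import dataclass, field

INTENTS = {"factual", "analytical", "navigational", "summary"}
STRATEGIES = {"focused", "exploratory", "comparative", "summary"}

@dataclass
class QueryPlan:
    """Illustrative container for query-understanding output."""
    intent: str                 # one of INTENTS
    strategy_hint: str          # one of STRATEGIES
    key_concepts: list = field(default_factory=list)

    def __post_init__(self):
        # Validate against the known vocabularies from the tables above.
        assert self.intent.lower() in INTENTS, f"unknown intent: {self.intent}"
        assert self.strategy_hint.lower() in STRATEGIES
```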

Orchestrator (Supervisor)

The Orchestrator is the central coordinator. It always runs — even for single-document queries. Its supervisor loop:

  1. Analyze — LLM reviews DocCards (lightweight metadata) and selects relevant documents with specific tasks
  2. Dispatch — Fan-out Workers in parallel (one per document)
  3. Evaluate — LLM checks if collected evidence is sufficient to answer the query
  4. Replan (if insufficient) — LLM identifies missing information and dispatches additional Workers

When the user specifies document IDs directly, the Orchestrator skips the analysis phase and dispatches Workers immediately.
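The four-step loop above can be sketched with stand-in llm and dispatch callables. All names, return shapes, and the bounded-rounds guard are hypothetical, not the system's actual interfaces.

```python
def supervise(query, doc_cards, llm, dispatch, max_rounds=3):
    """Hypothetical supervisor loop: analyze → dispatch → evaluate → replan."""
    tasks = llm.analyze(query, doc_cards)         # 1. LLM selects docs + tasks
    evidence = []
    for _ in range(max_rounds):
        evidence += dispatch(tasks)               # 2. fan out one Worker per doc
        verdict = llm.evaluate(query, evidence)   # 3. LLM sufficiency check
        if verdict["sufficient"]:
            break
        tasks = llm.replan(query, evidence, verdict["missing"])   # 4. replan
    return evidence
```

Note that the loop returns evidence only; answer synthesis never happens here.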

Worker (Evidence Collector)

Each Worker navigates a single document's tree to collect evidence through a command-based loop:

  1. Bird's-eye — ls the root for an overview
  2. Plan — LLM generates a navigation plan based on keyword index hits
  3. Navigate — Loop: LLM selects command → execute → observe result → repeat
  4. Return — Collected evidence only — no answer synthesis
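A simplified version of this command loop, with hypothetical tree and llm interfaces (the real Worker's protocol and termination logic are richer than this sketch):

```python
def collect_evidence(tree, llm, max_steps=20):
    """Hypothetical Worker loop: observe → LLM picks a command → execute."""
    evidence = []
    observation = tree.execute("ls")        # bird's-eye view of the root
    plan = llm.plan(observation)            # navigation plan from index hits
    for _ in range(max_steps):
        command = llm.next_command(plan, observation)
        if command == "done":
            break
        observation = tree.execute(command)
        if command.startswith("cat "):      # cat output is collected as evidence
            evidence.append(observation)
    return evidence                         # evidence only — no answer synthesis
```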

Available Commands

Command             Description
ls                  List children at current position (with summaries and leaf counts)
cd <name>           Enter a child node
cd ..               Go back to parent
cat <name>          Read node content (automatically collected as evidence)
head <name>         Preview first N lines (does NOT collect evidence)
find <keyword>      Search the document's ReasoningIndex for a keyword
findtree <pattern>  Search for nodes by title pattern (case-insensitive)
grep <pattern>      Regex search across content in current subtree
wc <name>           Show content size (lines, words, chars)
pwd                 Show current navigation path
check               Evaluate if collected evidence is sufficient
done                End navigation
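To illustrate how such commands might be parsed and dispatched, here is a toy cursor over a nested-dict tree. Only a few commands are shown, and nothing here reflects the actual implementation:

```python
class Cursor:
    """Toy navigation cursor; `tree` is a nested dict with a 'children' key."""
    def __init__(self, tree):
        self.tree, self.path = tree, []

    def _node(self):
        # Walk from the root down the current path.
        node = self.tree
        for name in self.path:
            node = node["children"][name]
        return node

    def run(self, line):
        verb, _, arg = line.strip().partition(" ")
        if verb == "ls":
            return sorted(self._node().get("children", {}))
        if verb == "cd":
            self.path = self.path[:-1] if arg == ".." else self.path + [arg]
            return self.pwd()
        if verb == "cat":
            return self._node()["children"][arg].get("content", "")
        if verb == "pwd":
            return self.pwd()
        raise ValueError(f"unknown command: {verb}")

    def pwd(self):
        return "/" + "/".join(self.path)
```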

Workers prioritize keyword-based navigation over manual exploration:

  1. When keyword index hits are available, Workers use find with the exact keyword to jump directly to relevant sections
  2. Workers use ls when no keyword hints exist or when discovering unknown structure
  3. Workers use findtree when the section title pattern is known but not the exact name

Dynamic Re-planning

After a check command finds insufficient evidence, the Worker triggers a re-plan — the LLM generates a new navigation plan based on what's missing. This allows the Worker to adapt its strategy mid-navigation.

Rerank Pipeline

After all Workers complete, the Orchestrator runs the final pipeline:

  1. Dedup — Remove duplicate and low-quality evidence
  2. BM25 Scoring — Rank evidence by keyword relevance
  3. Evidence Formatting — Return original document text with source attribution
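The dedup-then-score steps can be sketched with standard Okapi BM25. The parameter defaults (k1=1.5, b=0.75) are conventional choices, not values confirmed by the source, and the tokenizer is a deliberate simplification:

```python
import math, re

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def rerank(query, evidence, k1=1.5, b=0.75):
    """Dedup evidence, then rank each piece against the query with BM25."""
    unique = list(dict.fromkeys(evidence))           # order-preserving dedup
    docs = [tokenize(e) for e in unique]
    n = len(docs)
    avgdl = (sum(len(d) for d in docs) / n) if n else 0.0
    avgdl = avgdl or 1.0                             # guard against empty docs

    def idf(term):
        df = sum(term in d for d in docs)
        return math.log(1 + (n - df + 0.5) / (df + 0.5))

    def score(d):
        s = 0.0
        for t in set(tokenize(query)):
            tf = d.count(t)
            s += idf(t) * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        return s

    ranked = sorted(zip(unique, docs), key=lambda p: score(p[1]), reverse=True)
    return [text for text, _ in ranked]              # original text, untouched
```

The ranked output remains the original evidence strings, consistent with the no-paraphrasing guarantee below.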

The system returns raw evidence text — no LLM synthesis or paraphrasing. This ensures the user sees the exact document content that matches their query.

DocCard Catalog

When multiple documents are indexed, Vectorless maintains a lightweight catalog.bin containing DocCard metadata for each document. This allows the Orchestrator to analyze and select relevant documents without loading the full document trees — a significant optimization for workspaces with many documents.

Cross-Document Graph

When multiple documents are indexed, Vectorless automatically builds a relationship graph based on shared keywords and Jaccard similarity. The graph is constructed as a background task after each indexing operation.
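Jaccard similarity over per-document keyword sets is straightforward to compute. In this sketch the 0.2 threshold is an arbitrary illustration, not the system's actual cutoff:

```python
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B| over two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def build_graph(keywords_by_doc: dict, threshold: float = 0.2):
    """Link every document pair whose keyword overlap clears the threshold."""
    docs = sorted(keywords_by_doc)
    edges = []
    for i, d1 in enumerate(docs):
        for d2 in docs[i + 1:]:
            sim = jaccard(keywords_by_doc[d1], keywords_by_doc[d2])
            if sim >= threshold:
                edges.append((d1, d2, round(sim, 3)))
    return edges
```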

Zero Infrastructure

The entire system requires only an LLM API key. No vector database, no embedding models, no additional infrastructure. Trees and metadata are persisted to the local filesystem in the workspace directory.