A knowledge base is a collection of reference documents attached to an agent. When enabled on a metric, the knowledge base is provided as context to the LLM evaluator — allowing it to check whether your agent’s responses are accurate against a specific source of truth rather than relying on the model’s general knowledge.
Why Use a Knowledge Base
Without a knowledge base, an LLM metric can only reason about what it observes in the conversation transcript. With one, it can cross-reference agent responses against your actual documentation — catching factually incorrect answers, hallucinations, and gaps in coverage.
Common use cases:
- Verify that agents answer questions using approved FAQ or policy content
- Detect when an agent contradicts product documentation
- Track accuracy across different knowledge sources (e.g., a billing policy vs. a returns policy)
- Ensure compliance with healthcare, legal, or regulatory information
Adding Knowledge Base Entries
Knowledge base entries are configured per agent, on the agent’s Knowledge Base tab.
- Navigate to your agent’s configuration page
- Open the Knowledge Base tab
- Click Add Knowledge Base Entry
- Choose a source type (see below)
- Give the entry a descriptive name (e.g., Hotel FAQ, Returns Policy)
- Optionally add tags for organization
- Click Upload or Save
All entries are associated with the agent and become available as context for metrics.
Supported Source Types
| Type | Description |
|---|---|
| plain_text | Paste or type text content directly |
| json | Structured JSON data |
| file | Upload a .txt, .pdf, or .docx file — Coval extracts and indexes the content automatically |
| web_url | Reference a URL (content is stored as a reference; not fetched/indexed automatically) |
| zendesk | Zendesk Help Center article reference |
| shelf | Shelf knowledge base reference |
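For example, a json entry might carry structured FAQ content such as the following. This is an illustrative snippet only; the structure shown is invented for this example, not a required Coval schema:

```json
{
  "faq": [
    {
      "question": "What time is check-in?",
      "answer": "Check-in begins at 3:00 PM; early check-in is subject to availability."
    },
    {
      "question": "Is breakfast included?",
      "answer": "Breakfast is included with all room rates."
    }
  ]
}
```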
For best results with LLM evaluation, use plain_text or file entries — these are fully chunked and embedded, making them the most reliable source for semantic retrieval during evaluation.
How It Works
When you upload a plain_text, json, or file entry, Coval automatically:
- Extracts the text content (parsing PDFs and DOCX files as needed)
- Splits the content into overlapping chunks
- Embeds each chunk using a vector embedding model
- Stores the chunks indexed to the agent
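The chunk-and-embed steps above can be sketched roughly as follows. This is a minimal character-based illustration; the chunk size and overlap values are invented for the example, and Coval's actual chunking parameters and embedding model may differ:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks (sizes here are illustrative)."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        # Each chunk shares `overlap` characters with the previous one,
        # so facts that straddle a boundary still appear intact in one chunk.
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each resulting chunk would then be passed to an embedding model and stored in a vector index keyed to the agent.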
At evaluation time, when a metric has Knowledge Base enabled, the relevant chunks are retrieved and included in the LLM evaluator’s context alongside the conversation transcript.
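Conceptually, retrieving "the relevant chunks" works like the sketch below: score every stored chunk against the query by vector similarity and keep the top k. The cosine-similarity function, the toy vectors, and the top-k value are assumptions for illustration, not Coval internals:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], chunk_vecs: list[list[float]],
             chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are most similar to the query."""
    scored = sorted(zip(chunks, chunk_vecs),
                    key=lambda cv: cosine(query_vec, cv[1]),
                    reverse=True)
    return [chunk for chunk, _ in scored[:k]]
```

The retrieved chunks are what end up in the evaluator's prompt alongside the transcript, so the LLM judge grades against your documents rather than its general knowledge.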
Using Knowledge Base in Metrics
Once you’ve added entries to your agent’s knowledge base, you can enable KB context on any LLM Judge metric (Binary, Numerical, or Categorical) or on Composite Evaluation metrics.
To enable for a metric:
- Open the metric configuration
- Scroll to the Knowledge Base toggle
- Enable it
If you don’t enable the Knowledge Base toggle on a metric, that metric evaluates without KB context — even when entries are configured on the agent — and may produce inaccurate results, since the evaluator falls back on the model’s general knowledge instead of your source of truth.
See Knowledge Base Metrics for prompt writing guidance and examples.
Generating Test Cases from Knowledge Base Documents
If you upload a Conversation Design Document (CDD) — a structured file describing your agent’s expected conversation flows — Coval can automatically extract test scenarios from it and generate a test set.
After uploading a CDD entry, click Generate Test Cases to run the extraction. Coval uses multiple prompting strategies to pull out scenarios, conditions, and expected responses, then creates a linked test set you can use in simulations.