A knowledge base is a collection of reference documents attached to an agent. When enabled on a metric, the knowledge base is provided as context to the LLM evaluator — allowing it to check whether your agent’s responses are accurate against a specific source of truth rather than relying on the model’s general knowledge.

Why Use a Knowledge Base

Without a knowledge base, an LLM metric can only reason about what it observes in the conversation transcript. With one, it can cross-reference agent responses against your actual documentation — catching factually incorrect answers, hallucinations, and gaps in coverage. Common use cases:
  • Verify that agents answer questions using approved FAQ or policy content
  • Detect when an agent contradicts product documentation
  • Track accuracy across different knowledge sources (e.g., a billing policy vs. a returns policy)
  • Ensure compliance with healthcare, legal, or regulatory information

Adding Knowledge Base Entries

Knowledge base entries are configured per agent, on the agent’s Knowledge Base tab.
  1. Navigate to your agent’s configuration page
  2. Open the Knowledge Base tab
  3. Click Add Knowledge Base Entry
  4. Choose a source type (see below)
  5. Give the entry a descriptive name (e.g., Hotel FAQ, Returns Policy)
  6. Optionally add tags for organization
  7. Click Upload or Save
All entries are associated with the agent and become available as context for metrics.

Supported Source Types

Type        Description
plain_text  Paste or type text content directly
json        Structured JSON data
file        Upload a .txt, .pdf, or .docx file — Coval extracts and indexes the content automatically
web_url     Reference a URL (content is stored as a reference; not fetched/indexed automatically)
zendesk     Zendesk Help Center article reference
shelf       Shelf knowledge base reference
For best results with LLM evaluation, use plain_text or file entries — these are fully chunked and embedded, making them the most reliable source for semantic retrieval during evaluation.

How It Works

When you upload a plain_text, json, or file entry, Coval automatically:
  1. Extracts the text content (parsing PDFs and DOCX files as needed)
  2. Splits the content into overlapping chunks
  3. Embeds each chunk using a vector embedding model
  4. Stores the chunks indexed to the agent
At evaluation time, when a metric has Knowledge Base enabled, the relevant chunks are retrieved and included in the LLM evaluator’s context alongside the conversation transcript.
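The ingest-and-retrieve flow above can be sketched in a few lines. This is an illustrative toy, not Coval's implementation: the function names are hypothetical, and the "embedding" here is a simple bag-of-words counter standing in for a real vector embedding model.

```python
# Sketch of the pipeline: chunk -> embed -> index -> retrieve.
# All names are illustrative; real systems use neural embeddings.
from collections import Counter
import math

def chunk(text, size=60, overlap=20):
    """Split text into overlapping character chunks (step 2 above)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a vector model (step 3)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, index, k=2):
    """Return the k chunks most similar to the query (evaluation time)."""
    q = embed(query)
    return sorted(index, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = "Checkout is at 11am. Late checkout until 2pm costs $30. Pets are welcome in ground-floor rooms."
index = chunk(doc)                                  # stored per agent
top = retrieve("What time is checkout", index, k=1) # fed to the evaluator
```

The overlap between chunks keeps sentences that straddle a chunk boundary recoverable from at least one chunk, which is why step 2 uses overlapping rather than disjoint splits.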

Using Knowledge Base in Metrics

Once you’ve added entries to your agent’s knowledge base, you can enable KB context on any LLM Judge metric (Binary, Numerical, or Categorical) or on Composite Evaluation metrics. To enable for a metric:
  1. Open the metric configuration
  2. Scroll to the Knowledge Base toggle
  3. Enable it
If you don’t enable the Knowledge Base toggle on a metric, the metric will evaluate without KB context and may produce inaccurate results even when the entries are configured on the agent.
See Knowledge Base Metrics for prompt writing guidance and examples.
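Conceptually, enabling the toggle means the retrieved chunks are placed alongside the transcript in the evaluator's prompt. A minimal sketch of that assembly, with entirely hypothetical prompt wording:

```python
# Illustrative only: how KB chunks and the transcript might be combined
# into one evaluator prompt. The wording and structure are assumptions.
def build_evaluator_prompt(transcript, kb_chunks, criterion):
    kb_section = "\n".join(f"- {c}" for c in kb_chunks)
    return (
        f"Evaluation criterion: {criterion}\n\n"
        f"Reference material (knowledge base):\n{kb_section}\n\n"
        f"Conversation transcript:\n{transcript}\n\n"
        "Judge whether the agent's answers are consistent with the reference material."
    )

prompt = build_evaluator_prompt(
    transcript="User: When is checkout?\nAgent: Checkout is at 11am.",
    kb_chunks=["Checkout is at 11am.", "Late checkout until 2pm costs $30."],
    criterion="Factual accuracy against the hotel FAQ",
)
```

With the toggle off, the evaluator sees only the transcript portion, which is why a factual-accuracy metric can mis-score answers it has no source of truth for.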

Generating Test Cases from Knowledge Base Documents

If you upload a Conversation Design Document (CDD) — a structured file describing your agent’s expected conversation flows — Coval can automatically extract test scenarios from it and generate a test set. After uploading a CDD entry, click Generate Test Cases to run the extraction. Coval uses multiple prompting strategies to pull out scenarios, conditions, and expected responses, then creates a linked test set you can use in simulations.
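The extraction step can be pictured as mapping each documented flow to a test case. The CDD shape below is an assumption for illustration, not Coval's actual schema, and real extraction uses LLM prompting rather than a fixed parser:

```python
# Hypothetical CDD structure; Coval's real schema may differ.
cdd = {
    "flows": [
        {"name": "Booking", "condition": "user asks to book a room",
         "expected_response": "collect dates and room type"},
        {"name": "Cancellation", "condition": "user asks to cancel",
         "expected_response": "confirm booking ID before cancelling"},
    ]
}

def extract_test_cases(cdd):
    """Turn each documented flow into a simulation test case."""
    return [
        {
            "scenario": flow["name"],
            "prompt": flow["condition"],
            "expected": flow["expected_response"],
        }
        for flow in cdd["flows"]
    ]

test_set = extract_test_cases(cdd)
```

Each generated case pairs a scenario trigger with the expected agent behavior, which is what makes the resulting test set directly usable in simulations.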