
If your agent already exports traces to LangSmith, Coval can pull those runs into the trace viewer, run trace-based metrics (timings, LLM judges, custom metrics), and populate the transition heatmap — without re-instrumenting your agent. Connect your LangSmith project once in settings, and Coval handles the rest automatically after each simulation.

How it works

  1. Your agent sends runs to LangSmith as it does today (LangChain SDK auto-instrumentation, or any OpenTelemetry exporter pointed at https://api.smith.langchain.com/otel/v1/traces).
  2. When a Coval simulation finishes, Coval resolves your LangSmith project name to its UUID via GET /sessions, then queries POST /runs/query with a start_time window and a has(metadata, ...) filter on simulation_output_id.
  3. For every trace that had at least one direct metadata match, Coval issues a follow-up POST /runs/query filtered by trace_id to pull in the rest of the runs in that trace — so tagging only the root run still imports its descendants. Untagged traces in the window are never fetched.
  4. Each LangSmith Run is normalized to an OpenTelemetry span and written to the same ClickHouse-backed trace store that native OTLP ingestion uses.
  5. The trace viewer, trace metrics, and transition heatmap work against the imported spans exactly as they do for native OTLP traces.
Imported spans are tagged with service.name = langsmith so they are easy to distinguish in the trace viewer.
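The metadata-filtered query in step 2 can be sketched as a payload builder. This is a minimal sketch, not the exact payload Coval sends: the `has(metadata, ...)` filter expression and the `session` / `start_time` field names follow LangSmith's `/runs/query` shape as described above, but `build_runs_query` itself is a hypothetical helper.

```python
import json

def build_runs_query(session_id: str, start_time: str,
                     simulation_output_id: str) -> dict:
    """Build a LangSmith POST /runs/query payload selecting runs tagged
    with a given simulation_output_id (sketch, field names assumed)."""
    # has(metadata, ...) matches a key/value pair inside run metadata.
    metadata_match = json.dumps({"simulation_output_id": simulation_output_id})
    return {
        "session": [session_id],   # project UUID resolved via GET /sessions
        "start_time": start_time,  # restrict to the simulation's time window
        "filter": f"has(metadata, '{metadata_match}')",
        "limit": 100,              # one page; see Limits below
    }

payload = build_runs_query(
    "11111111-2222-3333-4444-555555555555",
    "2024-01-01T00:00:00Z",
    "sim-abc123",
)
```

The follow-up query in step 3 reuses the same payload shape with a `trace_id` filter in place of the metadata filter.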

Prerequisites

  • A Coval account (sign up)
  • A LangSmith account with a project your agent writes runs to
  • A LangSmith API key (Settings → API Keys in the LangSmith UI — lsv2_pt_...)

Connect LangSmith

  1. Open Settings → Integrations in Coval.
  2. Expand the LangSmith Integration panel.
  3. Fill in the required fields and save.
| Field | Required | Notes |
| --- | --- | --- |
| API Key | Yes | LangSmith API key (lsv2_pt_...). Workspace-scoped is fine. |
| Project Name | Yes | The LangSmith project name (a.k.a. session name) your agent writes to. |
| Host | Yes | Defaults to https://api.smith.langchain.com. Use https://eu.api.smith.langchain.com for EU GCP or https://aws.api.smith.langchain.com for AWS US. |
Coval stores the API key server-side and never returns it to the browser. To rotate the key, use the Replace key button in the credentials card.

Correlation

To tie runs back to the right Coval simulation, set simulation_output_id (or session_id / coval_simulation_output_id) in the run’s metadata. Coval’s filter expression matches on metadata key/value pairs server-side, then verifies the match client-side as a fail-closed safety net. With the LangChain SDK:
```python
from langsmith import traceable

@traceable(metadata={"simulation_output_id": simulation_output_id})
def handle_call(payload):
    ...
```
Or set metadata at trace start:
```python
from langsmith.run_helpers import trace

with trace(
    "conversation",
    metadata={"simulation_output_id": simulation_output_id},
) as run_tree:
    ...
```
OpenTelemetry SDK exporting to LangSmith’s /otel/v1/traces:
```python
from opentelemetry import trace as otel_trace

tracer = otel_trace.get_tracer("my-agent")

with tracer.start_as_current_span("conversation") as span:
    # LangSmith's OTel ingestion only maps attributes prefixed with
    # `langsmith.metadata.` into the run `metadata` field that
    # `/runs/query` filters on.
    span.set_attribute("langsmith.metadata.simulation_output_id", simulation_output_id)
    ...
```
When the root run carries the metadata, Coval also pulls in its child runs in the same trace — you don’t need to tag every span individually.
Coval fails closed when a correlation hint is set but no runs match — it returns an empty result rather than importing every run in the time window. This avoids cross-contamination between concurrent simulations on the same project.
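The client-side, fail-closed verification can be sketched as a pure filter over the returned runs. The `extra.metadata` field path follows where LangSmith stores run metadata (as referenced in the token table below); `verify_correlated` and `CORRELATION_KEYS` are hypothetical names, not Coval's actual implementation.

```python
# The three metadata keys accepted as correlation hints.
CORRELATION_KEYS = ("simulation_output_id", "session_id",
                    "coval_simulation_output_id")

def verify_correlated(runs: list[dict], expected_id: str) -> list[dict]:
    """Fail-closed check: keep only runs whose metadata carries the
    expected simulation id under one of the accepted keys. Returns an
    empty list (imports nothing) when no run matches."""
    return [
        run for run in runs
        if any(
            run.get("extra", {}).get("metadata", {}).get(key) == expected_id
            for key in CORRELATION_KEYS
        )
    ]
```

Returning an empty list rather than falling back to "all runs in the window" is what prevents cross-contamination between concurrent simulations.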

Verify runs landed

After a simulation finishes, open the result in Coval and click View Traces. Imported spans appear with service.name = langsmith and the original LangSmith run attributes preserved. LLM runs (run_type = "llm") are normalized to span_name = "llm" (matching Coval’s native metric queries) regardless of the original LangSmith run name. Coval also extracts token counts from the various places LangSmith stores them (outputs.usage_metadata, outputs.llm_output.token_usage, extra.metadata.usage) and exposes them as OTel GenAI semantic conventions:
| LangSmith field | OTel GenAI alias |
| --- | --- |
| outputs.usage_metadata.input_tokens (or prompt_tokens) | gen_ai.usage.input_tokens |
| outputs.usage_metadata.output_tokens (or completion_tokens) | gen_ai.usage.output_tokens |
| outputs.usage_metadata.total_tokens | gen_ai.usage.total_tokens |
| extra.metadata.ls_model_name (or extra.invocation_params.model) | gen_ai.request.model |
| inputs | input |
| outputs | output |
So token usage and LLM-judge metrics work without any extra mapping.
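The mapping above can be sketched as a small normalizer. This covers only the `outputs.usage_metadata` and `extra.metadata` paths from the table; Coval's real normalizer also checks the other locations mentioned (outputs.llm_output.token_usage, extra.metadata.usage), and `extract_genai_attributes` is a hypothetical name.

```python
def extract_genai_attributes(run: dict) -> dict:
    """Map LangSmith usage fields onto OTel GenAI attribute names,
    per the alias table above (partial sketch)."""
    usage = (run.get("outputs") or {}).get("usage_metadata") or {}
    meta = (run.get("extra") or {}).get("metadata") or {}
    attrs = {}
    # Each field has a primary name and a legacy fallback.
    input_tokens = usage.get("input_tokens", usage.get("prompt_tokens"))
    output_tokens = usage.get("output_tokens", usage.get("completion_tokens"))
    if input_tokens is not None:
        attrs["gen_ai.usage.input_tokens"] = input_tokens
    if output_tokens is not None:
        attrs["gen_ai.usage.output_tokens"] = output_tokens
    if "total_tokens" in usage:
        attrs["gen_ai.usage.total_tokens"] = usage["total_tokens"]
    if "ls_model_name" in meta:
        attrs["gen_ai.request.model"] = meta["ls_model_name"]
    return attrs
```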

Limits

  • Runs are imported once per simulation, synchronously, with a 90-second budget that includes a brief retry-on-empty, so runs flushed slightly after the simulation ends are still picked up.
  • Up to 500 runs per simulation are imported (100/page × 5 pages).
  • If a simulation already has native OTLP traces, the import is skipped to avoid duplicate spans.
  • LangSmith’s /runs/query endpoint is available on all plans. Bulk export to S3 (Plus/Enterprise only) is not used.
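The 500-run cap (100/page × 5 pages) amounts to a paged fetch loop like the following sketch, where `fetch_page(offset, limit)` is a hypothetical stand-in for one POST /runs/query call:

```python
def import_runs(fetch_page, max_runs: int = 500, page_size: int = 100) -> list:
    """Pull runs page by page until the run cap is reached or a page
    comes back empty (sketch of the pagination limit, not Coval's code)."""
    runs: list = []
    offset = 0
    while len(runs) < max_runs:
        page = fetch_page(offset, min(page_size, max_runs - len(runs)))
        if not page:
            break  # no more runs in the window
        runs.extend(page)
        offset += len(page)
    return runs[:max_runs]
```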

Troubleshooting

| Symptom | Likely cause / fix |
| --- | --- |
| No spans in the viewer, correct time window | Check the LangSmith Integration card in Settings. If the Configured chip is missing, re-save the credentials. Confirm the Project Name matches the project your agent writes to. |
| Spans appear in LangSmith but not in Coval | Set metadata.simulation_output_id on the root run — see Correlation above. |
| 401 Unauthorized in logs | The API key was rotated or revoked. Click Replace key in Settings. |
| 404 on /sessions | The configured Host is wrong for your account region. Try https://eu.api.smith.langchain.com (EU) or https://aws.api.smith.langchain.com (AWS US). |
