If your agent already exports traces to LangSmith, Coval can pull those runs into the trace viewer, run trace-based metrics (timings, LLM judges, custom metrics), and populate the transition heatmap — without re-instrumenting your agent. Connect your LangSmith project once in settings, and Coval handles the rest automatically after each simulation.
How it works
- Your agent sends runs to LangSmith as it does today (LangChain SDK auto-instrumentation, or any OpenTelemetry exporter pointed at `https://api.smith.langchain.com/otel/v1/traces`).
- When a Coval simulation finishes, Coval resolves your LangSmith project name to its UUID via `GET /sessions`, then queries `POST /runs/query` with a `start_time` window and a `has(metadata, ...)` filter on `simulation_output_id`.
- For every trace that had at least one direct metadata match, Coval issues a follow-up `POST /runs/query` filtered by `trace_id` to pull in the rest of the runs in that trace — so tagging only the root run still imports its descendants. Untagged traces in the window are never fetched.
- Each LangSmith Run is normalized to an OpenTelemetry span and written to the same ClickHouse-backed trace store that native OTLP ingestion uses.
- The trace viewer, trace metrics, and transition heatmap work against the imported spans exactly as they do for native OTLP traces.
- Imported spans are tagged with `service.name = langsmith` so they are easy to distinguish in the trace viewer.
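The two-step query above can be sketched as payload builders. Field names follow LangSmith's public `/runs/query` API, but treat the exact shapes (especially the `has(metadata, ...)` filter string and the `trace` field) as assumptions to verify against the API reference:

```python
def metadata_query(session_id: str, simulation_output_id: str,
                   start_time: str, end_time: str) -> dict:
    """First pass: find runs tagged with the simulation id inside the window."""
    return {
        "session": [session_id],  # project UUID resolved via GET /sessions
        "start_time": start_time,
        "end_time": end_time,
        "filter": (
            "has(metadata, "
            f"'{{\"simulation_output_id\": \"{simulation_output_id}\"}}')"
        ),
        "limit": 100,
    }

def trace_query(session_id: str, trace_id: str) -> dict:
    """Second pass: fetch every run in a matched trace, tagged or not."""
    return {"session": [session_id], "trace": trace_id, "limit": 100}

q = metadata_query(
    "00000000-0000-0000-0000-000000000000", "sim-1234",
    "2024-01-01T00:00:00Z", "2024-01-01T01:00:00Z",
)
```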
Prerequisites
- A Coval account (sign up)
- A LangSmith account with a project your agent writes runs to
- A LangSmith API key (Settings → API Keys in the LangSmith UI — `lsv2_pt_...`)
Connect LangSmith
- Open Settings → Integrations in Coval.
- Expand the LangSmith Integration panel.
- Fill in the required fields and save.
| Field | Required | Notes |
|---|---|---|
| API Key | Yes | LangSmith API key (lsv2_pt_...). Workspace-scoped is fine. |
| Project Name | Yes | The LangSmith project name (a.k.a. session name) your agent writes to. |
| Host | Yes | Defaults to https://api.smith.langchain.com. Use https://eu.api.smith.langchain.com for EU GCP or https://aws.api.smith.langchain.com for AWS US. |
Coval stores the API key server-side and never returns it to the browser. To rotate the key, use the Replace key button in the credentials card.
Correlation
To tie runs back to the right Coval simulation, set `simulation_output_id` (or `session_id` / `coval_simulation_output_id`) in the run’s metadata. Coval’s filter expression matches on metadata key/value pairs server-side, then verifies the match client-side as a fail-closed safety net.
LangChain SDK:
/otel/v1/traces:
Coval fails closed when a correlation hint is set but no runs match — it returns an empty result rather than importing every run in the time window. This avoids cross-contamination between concurrent simulations on the same project.
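The fail-closed rule can be illustrated with a small sketch (names are illustrative, not Coval's actual implementation):

```python
def select_runs(runs: list[dict], simulation_output_id: str) -> list[dict]:
    """Keep only runs whose metadata carries the expected simulation id.

    When nothing matches, return an empty list rather than falling back to
    every run in the time window (fail closed).
    """
    return [
        run for run in runs
        if run.get("extra", {}).get("metadata", {}).get("simulation_output_id")
        == simulation_output_id
    ]

runs = [
    {"id": "a", "extra": {"metadata": {"simulation_output_id": "sim-1"}}},
    {"id": "b", "extra": {"metadata": {}}},  # untagged: never imported
]
print([r["id"] for r in select_runs(runs, "sim-1")])  # → ['a']
print(select_runs(runs, "sim-other"))                 # → [] (fail closed)
```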
Verify runs landed
After a simulation finishes, open the result in Coval and click View Traces. Imported spans appear with `service.name = langsmith` and the original LangSmith run attributes preserved.
LLM runs (`run_type = "llm"`) are normalized to `span_name = "llm"` (matching Coval’s native metric queries) regardless of the original LangSmith run name. Coval also extracts token counts from the various places LangSmith stores them (`outputs.usage_metadata`, `outputs.llm_output.token_usage`, `extra.metadata.usage`) and exposes them as OTel GenAI semantic conventions:
| LangSmith field | OTel GenAI alias |
|---|---|
| outputs.usage_metadata.input_tokens (or prompt_tokens) | gen_ai.usage.input_tokens |
| outputs.usage_metadata.output_tokens (or completion_tokens) | gen_ai.usage.output_tokens |
| outputs.usage_metadata.total_tokens | gen_ai.usage.total_tokens |
| extra.metadata.ls_model_name (or extra.invocation_params.model) | gen_ai.request.model |
| inputs | input |
| outputs | output |
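The fallback order for token counts can be sketched as follows — a simplified illustration of the mapping table above, where the input keys are LangSmith run fields and the output keys are OTel GenAI attributes:

```python
def extract_usage(run: dict) -> dict:
    """Map token counts from the first LangSmith location that has them
    onto OTel GenAI attribute names (simplified sketch)."""
    outputs = run.get("outputs") or {}
    extra = run.get("extra") or {}
    # Fallback chain mirrors the locations listed above.
    usage = (
        outputs.get("usage_metadata")
        or (outputs.get("llm_output") or {}).get("token_usage")
        or (extra.get("metadata") or {}).get("usage")
        or {}
    )
    attrs = {}
    input_tokens = usage.get("input_tokens", usage.get("prompt_tokens"))
    output_tokens = usage.get("output_tokens", usage.get("completion_tokens"))
    if input_tokens is not None:
        attrs["gen_ai.usage.input_tokens"] = input_tokens
    if output_tokens is not None:
        attrs["gen_ai.usage.output_tokens"] = output_tokens
    if usage.get("total_tokens") is not None:
        attrs["gen_ai.usage.total_tokens"] = usage["total_tokens"]
    return attrs

run = {"outputs": {"usage_metadata": {
    "input_tokens": 12, "output_tokens": 3, "total_tokens": 15}}}
attrs = extract_usage(run)
print(attrs)  # → {'gen_ai.usage.input_tokens': 12, 'gen_ai.usage.output_tokens': 3, 'gen_ai.usage.total_tokens': 15}
```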
Limits
- Runs are imported once per simulation, synchronously, with a 90-second budget that includes a brief retry-on-empty, so runs flushed slightly after the simulation ends are still picked up.
- Up to 500 runs per simulation are imported (100/page × 5 pages).
- If a simulation already has native OTLP traces, the import is skipped to avoid duplicate spans.
- LangSmith’s `/runs/query` endpoint is available on all plans. Bulk export to S3 (Plus/Enterprise only) is not used.
Troubleshooting
| Symptom | Likely cause |
|---|---|
| No spans in the viewer, correct time window | Check the LangSmith Integration card in Settings. If the Configured chip is missing, re-save the credentials. Confirm the Project Name matches the project your agent writes to. |
| Spans appear in LangSmith but not in Coval | Set metadata.simulation_output_id on the root run — see Correlation above. |
| 401 Unauthorized in logs | API key was rotated or revoked. Click Replace key in Settings. |
| 404 on /sessions | The configured Host is wrong for your account region. Try https://eu.api.smith.langchain.com (EU) or https://aws.api.smith.langchain.com (AWS US). |
See also
- OpenTelemetry Traces — push traces directly to Coval.
- Import Traces from Langfuse — same flow, Langfuse source.
- Import Traces from Arize Phoenix — same flow, Arize/Phoenix source.
- Coval Wizard (Beta) — auto-instrument Pipecat/LiveKit/Vapi agents.

