If your agent already exports traces to Langfuse, Coval can pull those traces into the trace viewer, run trace-based metrics (timings, LLM judges, custom metrics), and populate the transition heatmap — without re-instrumenting your agent. Connect your Langfuse account once in settings, and Coval handles the rest automatically after each simulation.

How it works

  1. Your agent sends traces to Langfuse as it does today.
  2. When a Coval simulation finishes, Coval fetches the traces that fall inside the simulation’s time window from Langfuse’s Public API.
  3. Langfuse observations are mapped to OpenTelemetry spans and written to the same ClickHouse-backed trace store that native OTLP ingestion uses.
  4. The trace viewer, trace metrics, and transition heatmap work against the imported spans exactly as they do for native OTLP traces.
Imported spans are tagged with service.name = langfuse so they are easy to distinguish in the trace viewer.
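The fetch in step 2 goes through Langfuse's public `GET /api/public/traces` endpoint, which takes Basic auth (public key as username, secret key as password) and `fromTimestamp`/`toTimestamp` filters. A minimal sketch of what such a request looks like — the window values below are hypothetical, and this is an illustration of the API shape, not Coval's importer code:

```python
import base64
from urllib.parse import urlencode

def build_trace_request(host, public_key, secret_key, start_iso, end_iso, page=1):
    """Build the URL and auth header for a Langfuse Public API trace-list call."""
    query = urlencode({
        "fromTimestamp": start_iso,
        "toTimestamp": end_iso,
        "page": page,
        "limit": 100,  # page size; see Limits below for the paging budget
    })
    # Basic auth: Langfuse public key as username, secret key as password
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return f"{host}/api/public/traces?{query}", {"Authorization": f"Basic {token}"}

url, headers = build_trace_request(
    "https://us.cloud.langfuse.com", "pk-lf-...", "sk-lf-...",
    "2024-01-01T00:00:00Z", "2024-01-01T00:05:00Z",
)
print(url)
```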

Prerequisites

  • A Coval account (sign up)
  • A Langfuse project with at least one completed trace
  • Langfuse Public Key and Secret Key from your Langfuse project settings

Connect Langfuse

  1. Open Settings → Integrations in Coval.
  2. Expand the Langfuse Integration panel.
  3. Paste your Public Key and Secret Key.
  4. Set Host if you self-host Langfuse. Leave it on the default https://us.cloud.langfuse.com for Langfuse Cloud.
  5. Save.
Coval stores the secret key server-side and never returns it to the browser. To rotate the key, use the Replace key button in the credentials card.
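These are the same credentials your agent's Langfuse SDK typically reads from environment variables; the key values below are placeholders:

```shell
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_HOST="https://us.cloud.langfuse.com"  # or your self-hosted URL
```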

Correlation

To tie traces back to the right Coval simulation, Coval first tries to match Langfuse trace metadata on any of:
  • simulation_output_id
  • session_id
  • coval_simulation_output_id
If your agent already sets a session ID equal to the Coval simulation output ID, import is precise. Otherwise, Coval falls back to the simulation’s time window (simulation.start_time to simulation.end_time) and imports every trace in that window. For agents that handle more than one conversation at a time, set one of the metadata keys above to the simulation output ID so imports stay precise. For example, with the Langfuse Python SDK:
```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment
langfuse = Langfuse()

with langfuse.start_as_current_span(
    name="turn",
    metadata={"simulation_output_id": simulation_output_id},
) as span:
    ...
```

Verify traces landed

After a simulation finishes, open the result in Coval and click View Traces. Imported spans appear with service.name = langfuse, with the original Langfuse attributes preserved under langfuse.* keys. OTel GenAI semantic-convention attributes (gen_ai.request.model, gen_ai.usage.*) are emitted for LLM generations, so trace-based metrics work out of the box.
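To illustrate the attribute layout described above, here is a hypothetical mapping from a Langfuse generation observation to span attributes. The field names on the Langfuse side are examples for the sketch, not Coval's exact importer code:

```python
def observation_to_attributes(obs: dict) -> dict:
    """Sketch: map a Langfuse generation observation to OTel-style span attributes."""
    attrs = {"service.name": "langfuse"}
    # Preserve original Langfuse fields under langfuse.* keys
    for key, value in obs.items():
        attrs[f"langfuse.{key}"] = value
    # Emit GenAI semantic-convention attributes for LLM generations
    if obs.get("type") == "GENERATION":
        attrs["gen_ai.request.model"] = obs.get("model")
        usage = obs.get("usage") or {}
        attrs["gen_ai.usage.input_tokens"] = usage.get("input")
        attrs["gen_ai.usage.output_tokens"] = usage.get("output")
    return attrs

attrs = observation_to_attributes(
    {"type": "GENERATION", "model": "gpt-4o", "usage": {"input": 12, "output": 34}}
)
print(attrs["gen_ai.request.model"])  # gpt-4o
```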

Limits

  • Import runs once per simulation, synchronously, with a 30-second budget.
  • Up to 500 traces per simulation time window are imported (100/page × 5 pages).
  • If a simulation already has native OTLP traces, the import is skipped to avoid duplicate spans.
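The paging and budget limits above can be sketched as a bounded fetch loop. `fetch_page` here is a stand-in for the actual Langfuse API call, and the stub below only simulates a paged result set:

```python
import time

def import_traces(fetch_page, budget_s=30.0, max_pages=5, page_size=100):
    """Fetch up to max_pages pages of traces, stopping early if the time budget expires."""
    deadline = time.monotonic() + budget_s
    traces = []
    for page in range(1, max_pages + 1):
        if time.monotonic() >= deadline:
            break  # budget exhausted; return what we have
        batch = fetch_page(page=page, limit=page_size)
        traces.extend(batch)
        if len(batch) < page_size:
            break  # short page means we reached the last one
    return traces

# Stub standing in for the API: 230 traces spread across pages of 100
def fake_fetch(page, limit):
    remaining = max(0, 230 - (page - 1) * limit)
    return [f"trace-{(page - 1) * limit + i}" for i in range(min(limit, remaining))]

print(len(import_traces(fake_fetch)))  # 230
```

With 5 pages of 100, the cap works out to the 500-trace limit stated above.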

Troubleshooting

| Symptom | Likely cause |
| --- | --- |
| No spans in the viewer, correct time window | Check the Langfuse Integration card in Settings. If the Configured chip is missing, re-save the credentials. |
| Spans appear but don’t match the simulation | Set metadata.simulation_output_id on your traces (see Correlation above). |
| 401 Unauthorized in logs | Keys were rotated in Langfuse. Click Replace key in Settings. |

See also