
Overview

The Coval Reviews API lets you programmatically create review projects, assign reviewers, and submit ground-truth annotations. This is useful for integrating human review into CI/CD pipelines, bulk-labeling workflows, or custom review dashboards.
All requests require an X-API-Key header. See the API Keys guide for setup.

Key Concepts

  • Review Projects group simulations, metrics, and assignees together. Creating a project auto-generates annotations for every (simulation, metric, assignee) combination.
  • Review Annotations are individual review tasks. Each annotation links a simulation output to a metric and an assignee. Providing a ground-truth value auto-completes the annotation.
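Because annotations are generated for every (simulation, metric, assignee) combination, a project's annotation count is the size of the cross product. A minimal Python sketch of that behavior (function name hypothetical, not part of any Coval SDK):

```python
from itertools import product

def generated_annotations(simulation_ids, metric_ids, assignees):
    """Enumerate the (simulation, metric, assignee) combinations a new
    review project auto-generates, one annotation per tuple."""
    return [
        {"simulation_output_id": s, "metric_id": m, "assignee": a}
        for s, m, a in product(simulation_ids, metric_ids, assignees)
    ]

# 2 simulations x 2 metrics x 2 assignees -> 8 annotations
todo = generated_annotations(
    ["sim-output-001", "sim-output-002"],
    ["metric-accuracy", "metric-latency"],
    ["alice@company.com", "bob@company.com"],
)
```

Keep this multiplication in mind when sizing projects: linking many simulations and metrics to many reviewers grows the queue quickly.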
Using Claude Code? We have skills to support human review in your workflow.

Step-by-Step: Create and Complete a Review Project

1. Create a review project

Link your simulations, metrics, and assignees into a project. This auto-generates one annotation per (simulation, metric, assignee) combination.
Finding your IDs: Retrieve metric IDs via GET /v1/metrics and simulation IDs via GET /v1/simulations. Both endpoints return an id field for each resource.
curl -X POST https://api.coval.dev/v1/review-projects \
  -H "X-API-Key: <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "display_name": "Q1 Voice Agent Review",
    "description": "Review accuracy and latency for Q1 voice simulations",
    "assignees": ["alice@company.com", "bob@company.com"],
    "linked_simulation_ids": ["sim-output-001", "sim-output-002"],
    "linked_metric_ids": ["metric-accuracy", "metric-latency"],
    "project_type": "PROJECT_COLLABORATIVE",
    "notifications": true
  }'
| Field | Type | Required | Description |
| ----- | ---- | -------- | ----------- |
| display_name | string | Yes | Human-readable project name |
| assignees | string[] | Yes | Reviewer email addresses (at least one) |
| linked_simulation_ids | string[] | Yes | Simulation output IDs to review |
| linked_metric_ids | string[] | Yes | Metric IDs to evaluate |
| description | string | No | Optional project description |
| project_type | string | No | PROJECT_COLLABORATIVE (shared queue) or PROJECT_INDIVIDUAL (per-reviewer queues). Defaults to PROJECT_INDIVIDUAL |
| notifications | boolean | No | Enable email notifications for assignees. Defaults to true |
Use PROJECT_COLLABORATIVE when you want one ground-truth label per conversation. Use PROJECT_INDIVIDUAL to measure inter-annotator agreement.
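If you script project creation, it helps to validate required fields and apply the documented defaults before sending the request. A sketch of a request-body builder (the helper and its validation are illustrative, not an official client):

```python
import json

# Required fields per the table above; defaults follow the documented behavior.
REQUIRED = ("display_name", "assignees", "linked_simulation_ids", "linked_metric_ids")

def build_create_project_body(**fields):
    """Validate and serialize a POST /v1/review-projects body."""
    missing = [k for k in REQUIRED if not fields.get(k)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    fields.setdefault("project_type", "PROJECT_INDIVIDUAL")
    fields.setdefault("notifications", True)
    return json.dumps(fields)

body = build_create_project_body(
    display_name="Q1 Voice Agent Review",
    assignees=["alice@company.com"],
    linked_simulation_ids=["sim-output-001"],
    linked_metric_ids=["metric-accuracy"],
)
```

The resulting string can be passed as the request body (e.g., `curl -d "$body"` or any HTTP client).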
2. List annotations for the project

After creating a project, annotations are auto-generated. List them to see what needs to be reviewed.
curl "https://api.coval.dev/v1/review-annotations?filter=project_id%3D%22<your_project_id>%22" \
  -H "X-API-Key: <your_api_key>"
Filter annotations by status to find pending work:
curl "https://api.coval.dev/v1/review-annotations?filter=project_id%3D%22<your_project_id>%22%20AND%20completion_status%3D%22PENDING%22" \
  -H "X-API-Key: <your_api_key>"
| Parameter | Description |
| --------- | ----------- |
| filter | AIP-160 filter expression. Supports simulation_output_id, metric_id, assignee, status, completion_status, project_id |
| page_size | Results per page (1–100, default 50) |
| page_token | Pagination token from previous response |
| order_by | Sort field with optional - prefix for descending. Valid: create_time, update_time, assignee, priority |
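Hand-encoding AIP-160 filters (as in the curl examples above) is error-prone; letting a URL library do the percent-encoding is safer. A small sketch, with a hypothetical helper name:

```python
from urllib.parse import quote, urlencode

def annotation_list_url(project_id, completion_status=None, page_size=50,
                        base="https://api.coval.dev"):
    """Build a GET /v1/review-annotations URL with a percent-encoded
    AIP-160 filter expression."""
    clauses = [f'project_id="{project_id}"']
    if completion_status:
        clauses.append(f'completion_status="{completion_status}"')
    params = {"filter": " AND ".join(clauses), "page_size": page_size}
    # quote_via=quote encodes spaces as %20 and '=' as %3D, matching the
    # curl examples, instead of urlencode's default '+' for spaces.
    return f"{base}/v1/review-annotations?" + urlencode(params, quote_via=quote)

url = annotation_list_url("proj-123", completion_status="PENDING")
```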
3. Submit ground-truth values

Update each annotation with the reviewer’s ground-truth assessment. Providing a ground-truth value automatically sets completion_status to COMPLETED.

For numeric metrics (e.g., latency, numerical scores):
curl -X PATCH https://api.coval.dev/v1/review-annotations/<your_annotation_id> \
  -H "X-API-Key: <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "ground_truth_float_value": 0.85,
    "reviewer_notes": "Agent responded accurately but with slight delay"
  }'
For string/categorical metrics (e.g., binary pass/fail, sentiment):
curl -X PATCH https://api.coval.dev/v1/review-annotations/<your_annotation_id> \
  -H "X-API-Key: <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "ground_truth_string_value": "PASS",
    "reviewer_notes": "Correct greeting and resolution"
  }'
| Field | Type | Description |
| ----- | ---- | ----------- |
| ground_truth_float_value | number | Ground-truth numeric value (auto-completes annotation) |
| ground_truth_string_value | string | Ground-truth string value (auto-completes annotation) |
| ground_truth_subvalues_by_timestamp | array | Ground-truth subvalues keyed by timestamp (for audio region or per-segment metrics) |
| reviewer_notes | string | Free-text reviewer notes |
| assignee | string | Reassign to a different reviewer |
| priority | string | PRIORITY_PRIMARY or PRIORITY_STANDARD |
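When submitting labels programmatically, the main branch point is whether the metric takes a numeric or a string value, which determines the field name. A sketch of a PATCH-body builder (the helper and its type dispatch are assumptions for illustration):

```python
import json

def ground_truth_patch(value, notes=None):
    """Build a PATCH body for an annotation: numeric values go in
    ground_truth_float_value, strings in ground_truth_string_value."""
    is_numeric = isinstance(value, (int, float)) and not isinstance(value, bool)
    key = "ground_truth_float_value" if is_numeric else "ground_truth_string_value"
    body = {key: value}
    if notes:
        body["reviewer_notes"] = notes
    return json.dumps(body)

float_body = ground_truth_patch(0.85, "Agent responded accurately but with slight delay")
string_body = ground_truth_patch("PASS", "Correct greeting and resolution")
```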
4. Track project progress

Re-fetch the project and its annotations to check completion status.
# Get project details
curl https://api.coval.dev/v1/review-projects/<your_project_id> \
  -H "X-API-Key: <your_api_key>"

# Count completed annotations
curl "https://api.coval.dev/v1/review-annotations?filter=project_id%3D%22<your_project_id>%22%20AND%20completion_status%3D%22COMPLETED%22&page_size=1" \
  -H "X-API-Key: <your_api_key>"
5. Use results to improve metrics

Once annotations are complete, navigate to your metric in the Coval Dashboard to see agreement scores between human labels and AI-generated values. Use the metrics studio to draft and test improved metric prompts.
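As a rough local check before opening the dashboard, you can compare human labels against AI-generated values yourself. Exact-match agreement is the simplest proxy; the dashboard's actual agreement score may be computed differently:

```python
def agreement_rate(pairs):
    """Fraction of (human, ai) label pairs that match exactly."""
    if not pairs:
        return 0.0
    return sum(human == ai for human, ai in pairs) / len(pairs)

rate = agreement_rate(
    [("PASS", "PASS"), ("PASS", "FAIL"), ("FAIL", "FAIL"), ("PASS", "PASS")]
)
```

A low rate suggests the metric prompt disagrees with human judgment and is a candidate for revision in the metrics studio.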

Managing Review Projects

List Projects

curl "https://api.coval.dev/v1/review-projects?order_by=-create_time&page_size=10" \
  -H "X-API-Key: <your_api_key>"

Update a Project

Add new assignees or link additional simulations:
curl -X PATCH https://api.coval.dev/v1/review-projects/<your_project_id> \
  -H "X-API-Key: <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "assignees": ["alice@company.com", "bob@company.com", "charlie@company.com"],
    "linked_simulation_ids": ["sim-output-001", "sim-output-002", "sim-output-003"]
  }'

Delete a Project

curl -X DELETE https://api.coval.dev/v1/review-projects/<your_project_id> \
  -H "X-API-Key: <your_api_key>"

Managing Review Annotations

Create a Standalone Annotation

You can create annotations outside of a project for ad-hoc reviews:
curl -X POST https://api.coval.dev/v1/review-annotations \
  -H "X-API-Key: <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "simulation_output_id": "sim-output-abc123",
    "metric_id": "metric-accuracy-001",
    "assignee": "reviewer@company.com"
  }'

Get a Single Annotation

curl https://api.coval.dev/v1/review-annotations/<your_annotation_id> \
  -H "X-API-Key: <your_api_key>"

Delete an Annotation

curl -X DELETE https://api.coval.dev/v1/review-annotations/<your_annotation_id> \
  -H "X-API-Key: <your_api_key>"
Reviewers can also complete their assignments directly in the Human Review platform.