Overview
The Coval Reviews API lets you programmatically create review projects, assign reviewers, and submit ground-truth annotations. This is useful for integrating human review into CI/CD pipelines, bulk-labeling workflows, or custom review dashboards.

All requests require an X-API-Key header. See the API Keys guide for setup.

Key Concepts
- Review Projects group simulations, metrics, and assignees together. Creating a project auto-generates annotations for every (simulation, metric, assignee) combination.
- Review Annotations are individual review tasks. Each annotation links a simulation output to a metric and an assignee. Providing a ground-truth value auto-completes the annotation.
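The combination rule above means annotation count grows multiplicatively. A minimal sketch (with hypothetical IDs) of how many annotations a project generates:

```python
from itertools import product

# Hypothetical IDs for illustration only.
simulation_ids = ["sim_a", "sim_b"]
metric_ids = ["met_latency", "met_sentiment"]
assignees = ["reviewer@example.com"]

# One annotation is auto-generated per (simulation, metric, assignee) tuple.
annotations = list(product(simulation_ids, metric_ids, assignees))
print(len(annotations))  # 2 simulations x 2 metrics x 1 assignee = 4
```

Keep this in mind when linking large simulation sets: every added metric or assignee multiplies the review workload.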
Step-by-Step: Create and Complete a Review Project
Create a review project
Link your simulations, metrics, and assignees into a project. This auto-generates one annotation per (simulation, metric, assignee) combination.
Finding your IDs: Retrieve metric IDs via GET /v1/metrics and simulation IDs via GET /v1/simulations. Both endpoints return an id field for each resource.

| Field | Type | Required | Description |
|---|---|---|---|
| display_name | string | Yes | Human-readable project name |
| assignees | string[] | Yes | Reviewer email addresses (at least one) |
| linked_simulation_ids | string[] | Yes | Simulation output IDs to review |
| linked_metric_ids | string[] | Yes | Metric IDs to evaluate |
| description | string | No | Optional project description |
| project_type | string | No | PROJECT_COLLABORATIVE (shared queue) or PROJECT_INDIVIDUAL (per-reviewer queues). Defaults to PROJECT_INDIVIDUAL |
| notifications | boolean | No | Enable email notifications for assignees. Defaults to true |
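A request with these fields might look like the following sketch. The endpoint path (`/v1/review-projects`) and base URL are assumptions for illustration; the IDs are placeholders, and only the field names come from the table above.

```python
import json
import urllib.request

API_KEY = "your-api-key"  # see the API Keys guide
# NOTE: endpoint path is assumed for illustration; check the API reference.
URL = "https://api.coval.dev/v1/review-projects"

payload = {
    "display_name": "Checkout-flow regression review",
    "assignees": ["reviewer@example.com"],
    "linked_simulation_ids": ["sim_123"],     # from GET /v1/simulations
    "linked_metric_ids": ["met_456"],         # from GET /v1/metrics
    "project_type": "PROJECT_COLLABORATIVE",  # shared queue; default is PROJECT_INDIVIDUAL
    "notifications": True,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment with a real key and endpoint
```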
List annotations for the project
After creating a project, annotations are auto-generated. List them to see what needs to be reviewed. Filter annotations by status to find pending work:
| Parameter | Description |
|---|---|
| filter | AIP-160 filter expression. Supports simulation_output_id, metric_id, assignee, status, completion_status, project_id |
| page_size | Results per page (1–100, default 50) |
| page_token | Pagination token from previous response |
| order_by | Sort field with optional - prefix for descending. Valid: create_time, update_time, assignee, priority |
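Building a filtered list request from these parameters can be sketched as below. The annotations endpoint path is an assumption; the filter expression, page_size, and order_by values follow the table above.

```python
import urllib.parse

# NOTE: endpoint path is assumed for illustration; check the API reference.
BASE = "https://api.coval.dev/v1/review-annotations"

params = {
    # AIP-160 filter: pending annotations in one project (hypothetical project ID)
    "filter": 'project_id = "proj_789" AND completion_status != "COMPLETED"',
    "page_size": 50,
    "order_by": "-create_time",  # "-" prefix = descending (newest first)
}
url = BASE + "?" + urllib.parse.urlencode(params)
# To paginate, repeat the request with page_token set to the
# next_page_token returned by the previous response.
```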
Submit ground-truth values
Update each annotation with the reviewer’s ground-truth assessment. Providing a ground-truth value automatically sets completion_status to COMPLETED.

For string/categorical metrics (e.g., binary pass/fail, sentiment), set ground_truth_string_value; for numeric metrics (e.g., latency, numerical scores), set ground_truth_float_value.

| Field | Type | Description |
|---|---|---|
| ground_truth_float_value | number | Ground-truth numeric value (auto-completes annotation) |
| ground_truth_string_value | string | Ground-truth string value (auto-completes annotation) |
| ground_truth_subvalues_by_timestamp | array | Ground-truth subvalues keyed by timestamp (for audio region or per-segment metrics) |
| reviewer_notes | string | Free-text reviewer notes |
| assignee | string | Reassign to a different reviewer |
| priority | string | PRIORITY_PRIMARY or PRIORITY_STANDARD |
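The two update shapes (string vs. numeric ground truth) can be sketched as follows. The endpoint path, HTTP method (PATCH), and annotation ID are assumptions for illustration; the field names match the table above.

```python
import json
import urllib.request

API_KEY = "your-api-key"
annotation_id = "ann_123"  # hypothetical annotation ID from the list response
# NOTE: endpoint path and PATCH method are assumed; check the API reference.
url = f"https://api.coval.dev/v1/review-annotations/{annotation_id}"

# String/categorical metric: providing a ground-truth value auto-completes the annotation.
string_update = {
    "ground_truth_string_value": "PASS",
    "reviewer_notes": "Agent resolved the issue on the first turn.",
}

# Numeric metric: same auto-completion behavior, via the float field.
float_update = {
    "ground_truth_float_value": 1.8,  # e.g., latency in seconds
    "priority": "PRIORITY_PRIMARY",
}

req = urllib.request.Request(
    url,
    data=json.dumps(string_update).encode(),
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="PATCH",
)
# urllib.request.urlopen(req)  # uncomment with a real key and endpoint
```

Send exactly one of the ground-truth fields per annotation, matching the metric's value type.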
Use results to improve metrics
Once annotations are complete, navigate to your metric in the Coval Dashboard to see agreement scores between human labels and AI-generated values. Use the metrics studio to draft and test improved metric prompts.

