
Review Projects

List Review Projects

coval review-projects list [OPTIONS]
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --page-size | number | 50 | Results per page |
| --order-by | string | | Sort order (e.g., -create_time) |
Output columns: ID, NAME, TYPE, ASSIGNEES, SIMULATIONS, METRICS, CREATED
# List all review projects
coval review-projects list

# Sort by most recent
coval review-projects list --order-by "-create_time"

# JSON output
coval review-projects list --format json

Get Review Project

coval review-projects get <project_id>
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| project_id | string | Yes | The review project ID |
Returns full project details as JSON including assignees, linked simulations, and linked metrics.
coval review-projects get 01HXYZ1234567890ABCDEF

Create Review Project

coval review-projects create [OPTIONS]
| Option | Type | Required | Description |
| --- | --- | --- | --- |
| --name | string | Yes | Display name for the project |
| --assignees | string | Yes | Comma-separated reviewer email addresses |
| --simulation-ids | string | Yes | Comma-separated simulation output IDs |
| --metric-ids | string | Yes | Comma-separated metric IDs |
| --description | string | No | Project description |
| --type | string | No | collaborative or individual (default: individual) |
| --notifications | boolean | No | Enable email notifications (default: true) |
Creating a project auto-generates review annotations for every (simulation, metric, assignee) combination.
Finding your IDs: Run coval metrics list to get metric IDs and coval simulations list to get simulation IDs.
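Once you have IDs in JSON form, they can be joined into the comma-separated strings the create flags expect. This is a sketch assuming jq is installed and that --format json emits an array of objects with an id field; the sample data and field name are illustrative, so check your real output.

```shell
# Hypothetical sample of what `coval metrics list --format json` might emit;
# the real schema may differ.
metrics_json='[{"id":"metric-accuracy"},{"id":"metric-latency"}]'

# Join the IDs into the comma-separated form that --metric-ids expects.
metric_ids=$(printf '%s' "$metrics_json" | jq -r '[.[].id] | join(",")')
echo "$metric_ids"   # metric-accuracy,metric-latency
```

In practice you would pipe the live command into the same filter, e.g. coval metrics list --format json | jq -r '[.[].id] | join(",")'.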
# Create a collaborative review project
coval review-projects create \
  --name "Q1 Voice Agent Review" \
  --assignees "alice@company.com,bob@company.com" \
  --simulation-ids "sim-output-001,sim-output-002" \
  --metric-ids "metric-accuracy,metric-latency" \
  --type collaborative

# Create with description and notifications disabled
coval review-projects create \
  --name "Internal Audit" \
  --assignees "reviewer@company.com" \
  --simulation-ids "sim-output-003" \
  --metric-ids "metric-accuracy" \
  --description "Spot-check accuracy labels" \
  --notifications false
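As a quick sanity check on the auto-generation rule above: the collaborative example links 2 simulations, 2 metrics, and 2 assignees, so creating it generates 2 x 2 x 2 = 8 review annotations. A one-liner to estimate the annotation count before creating a large project:

```shell
# Annotations generated = simulations x metrics x assignees.
sims=2; metrics=2; assignees=2
echo $((sims * metrics * assignees))   # prints 8
```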

Update Review Project

coval review-projects update <project_id> [OPTIONS]
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| project_id | string | Yes | The project ID to update |

| Option | Type | Description |
| --- | --- | --- |
| --name | string | Updated display name |
| --assignees | string | Updated comma-separated reviewer emails |
| --simulation-ids | string | Updated comma-separated simulation IDs |
| --metric-ids | string | Updated comma-separated metric IDs |
| --description | string | Updated description |
| --notifications | boolean | Updated notification setting |
# Add a new assignee
coval review-projects update 01HXYZ1234567890ABCDEF \
  --assignees "alice@company.com,bob@company.com,charlie@company.com"

# Update project name
coval review-projects update 01HXYZ1234567890ABCDEF \
  --name "Q1 Voice Agent Review - Updated"

Delete Review Project

coval review-projects delete <project_id>
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| project_id | string | Yes | The project ID to delete |
coval review-projects delete 01HXYZ1234567890ABCDEF

Review Annotations

List Review Annotations

coval review-annotations list [OPTIONS]
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --filter | string | | Filter expression (e.g., project_id="abc") |
| --page-size | number | 50 | Results per page |
| --order-by | string | | Sort order (e.g., -create_time) |

Supported filter fields: simulation_output_id, metric_id, assignee, status (ACTIVE/ARCHIVED), completion_status (PENDING/COMPLETED), project_id
Output columns: ID, SIMULATION, METRIC, ASSIGNEE, STATUS, PRIORITY
# List all annotations
coval review-annotations list

# Filter by project
coval review-annotations list --filter 'project_id="01HXYZ1234567890ABCDEF"'

# Filter pending annotations for a specific assignee
coval review-annotations list \
  --filter 'completion_status="PENDING" AND assignee="alice@company.com"'

# JSON output
coval review-annotations list --format json
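The tabular output also works with standard text tools. The sketch below counts annotation rows in a captured listing; the sample is made-up data, but the columns follow the output columns documented above.

```shell
# Hypothetical captured output of a filtered `coval review-annotations list`;
# real IDs and values will differ.
sample='ID     SIMULATION  METRIC      ASSIGNEE           STATUS  PRIORITY
ann-1  sim-1       metric-acc  alice@company.com  ACTIVE  PRIORITY_STANDARD
ann-2  sim-2       metric-acc  alice@company.com  ACTIVE  PRIORITY_PRIMARY'

# Skip the header row, then count the remaining annotation rows.
printf '%s\n' "$sample" | tail -n +2 | wc -l | tr -d ' '   # prints 2
```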

Get Review Annotation

coval review-annotations get <annotation_id>
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| annotation_id | string | Yes | The annotation ID |
Returns full annotation details as JSON including ground-truth values, reviewer notes, and completion status.
coval review-annotations get abc123def456ghi789jklm

Create Review Annotation

coval review-annotations create [OPTIONS]
| Option | Type | Required | Description |
| --- | --- | --- | --- |
| --simulation-id | string | Yes | Simulation output ID to link |
| --metric-id | string | Yes | Metric ID to link |
| --assignee | string | Yes | Reviewer email address |
| --ground-truth-float | number | No | Ground-truth numeric value (auto-completes) |
| --ground-truth-string | string | No | Ground-truth string value (auto-completes) |
| --notes | string | No | Reviewer notes |
| --priority | string | No | primary or standard (default: standard) |
# Create a basic annotation
coval review-annotations create \
  --simulation-id sim-output-abc123 \
  --metric-id metric-accuracy-001 \
  --assignee reviewer@company.com

# Create with ground truth (auto-completes)
coval review-annotations create \
  --simulation-id sim-output-abc123 \
  --metric-id metric-accuracy-001 \
  --assignee reviewer@company.com \
  --ground-truth-float 0.95 \
  --notes "Verified correct response"

Update Review Annotation

coval review-annotations update <annotation_id> [OPTIONS]
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| annotation_id | string | Yes | The annotation ID to update |

| Option | Type | Description |
| --- | --- | --- |
| --ground-truth-float | number | Ground-truth numeric value (auto-completes) |
| --ground-truth-string | string | Ground-truth string value (auto-completes) |
| --notes | string | Reviewer notes |
| --assignee | string | Reassign to a different reviewer |
| --priority | string | primary or standard |
# Submit a ground-truth value
coval review-annotations update abc123def456ghi789jklm \
  --ground-truth-float 0.85 \
  --notes "Agent responded accurately but with slight delay"

# Reassign an annotation
coval review-annotations update abc123def456ghi789jklm \
  --assignee new-reviewer@company.com
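Updates can also be scripted for batches. The loop below is a dry-run sketch: it only prints the commands it would run, and the annotation IDs are placeholders. Remove the leading echo to perform the reassignment for real.

```shell
# Placeholder IDs; in practice, gather them from a filtered list command.
for id in ann-001 ann-002 ann-003; do
  echo coval review-annotations update "$id" --assignee new-reviewer@company.com
done
```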

Delete Review Annotation

coval review-annotations delete <annotation_id>
| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| annotation_id | string | Yes | The annotation ID to delete |
coval review-annotations delete abc123def456ghi789jklm

Completion Statuses

| Status | Description |
| --- | --- |
| PENDING | Annotation has not been reviewed yet |
| COMPLETED | Ground-truth value has been submitted |

Annotation Priorities

| Priority | Description |
| --- | --- |
| PRIORITY_PRIMARY | High-priority annotation; surfaces first in reviewer queues |
| PRIORITY_STANDARD | Default priority |

Project Types

| Type | Description |
| --- | --- |
| collaborative | All reviewers share a single queue with one annotation per simulation-metric pair |
| individual | Each reviewer gets their own private queue and annotations |
Use collaborative projects when building ground-truth datasets. Use individual projects when measuring inter-annotator agreement.