The /onboard skill guides you through setting up your first Coval evaluation step by step. Your AI coding agent asks questions about your use case, then creates all the resources and launches the evaluation using the Coval CLI.

Quick Start

# 1. Install Coval skills
npx skills add coval-ai/coval-external-skills

# 2. Open your AI coding agent (Claude Code, Cursor, etc.)

# 3. Run the onboarding skill
/onboard
The skill handles everything from there — including installing the CLI and authenticating if you haven’t already.
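If you'd rather verify the prerequisites yourself first, a quick pre-flight check works from any POSIX shell (the `coval` binary name matches the CLI commands used later on this page):

```shell
# Check whether the Coval CLI is already on PATH; the /onboard skill
# installs it for you if this reports "not found".
if command -v coval >/dev/null 2>&1; then
  echo "Coval CLI found at: $(command -v coval)"
else
  echo "Coval CLI not found; /onboard will install it"
fi
```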

What Gets Created

The onboarding flow creates a complete evaluation setup:
| Resource | What It Is |
| --- | --- |
| Agent | Your AI agent connected to Coval (voice, chat, SMS, or WebSocket) |
| Persona | A simulated caller with voice, language, and behavior settings |
| Test Set | 3 test cases: happy path, edge case, and compliance scenario |
| Metrics | Use-case-specific metrics plus built-in audio and conversation metrics |
| Run Template | Reusable configuration bundling everything above |
| Evaluation Run | Your first evaluation, launched and monitored |

The Flow

The skill walks through six phases:

1. Setup: Checks whether the Coval CLI is installed and you're authenticated, and guides installation if needed. Detects any existing resources so you don't duplicate work.

2. Connect Agent: Asks for your agent type (voice, chat, SMS, or WebSocket) and connection details (phone number or endpoint URL).

3. Discover Use Case: Asks what your agent does (customer support, insurance, healthcare, sales, etc.) and what language it speaks, then creates a persona tailored to your vertical.

4. Build Test Cases: Generates 3 test cases based on your use case: a happy path, an edge case, and a compliance scenario. Each includes expected behaviors your agent should follow.

5. Select Metrics: Recommends metrics based on your use case and agent type, including custom LLM judge metrics, audio quality metrics (for voice), and built-in metrics like latency and sentiment.

6. Launch and Review: Bundles everything into a reusable template, launches the evaluation, watches progress, and presents results with scores per test case.

Supported Verticals

The skill includes templates for these use cases, with pre-built personas, test cases, and metrics for each:
| Vertical | Persona | Custom Metric |
| --- | --- | --- |
| Customer Support | Jordan | Issue Resolution |
| Scheduling & Booking | Taylor | Booking Accuracy |
| Sales | Morgan | Sales Accuracy |
| Insurance Claims | Sarah | Identity Verification |
| Healthcare Intake | Michael | HIPAA Compliance |
| Restaurant Orders | Alex | Order Accuracy |
| Debt Collection | Chris | Regulatory Compliance |
| IT Helpdesk | Pat | Ticket Resolution |
If your use case doesn’t match a vertical, the skill uses a general-purpose template and adapts based on your description.

After Onboarding

Once your first evaluation completes, you can:
  • Add more test cases: coval test-cases create --test-set-id {id} --input "..."
  • Schedule recurring runs: coval scheduled-runs create --template-id {id} --schedule "cron(0 9 * * MON)"
  • Listen to recordings: coval simulations audio {sim_id} -o recording.wav
  • Iterate on metrics: Adjust prompts based on what you learned from results
  • View in dashboard: Visit app.coval.dev to see full results with transcripts
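The `--schedule` value in the scheduled-runs command above wraps a five-field cron expression in `cron(...)`. As a sketch (assuming the conventional field order of minute, hour, day of month, month, day of week), the fields of `0 9 * * MON` decode like this:

```shell
schedule="0 9 * * MON"   # i.e. every Monday at 09:00
set -f                   # disable globbing so the "*" fields survive word-splitting
set -- $schedule         # split the expression into its five fields
printf 'minute=%s hour=%s day=%s month=%s weekday=%s\n' "$1" "$2" "$3" "$4" "$5"
# prints: minute=0 hour=9 day=* month=* weekday=MON
```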

Requirements

  • An AI coding agent that supports skills (Claude Code, Cursor, Windsurf, Codex, etc.)
  • An AI agent to evaluate (voice or chat, accessible via phone number or endpoint)
  • A Coval account (sign up at coval.dev)