Ready to start evaluating your AI agent? This quick start guide will get you running your first simulation in minutes.

Prerequisites

  • Your AI agent (voice or chat) must be accessible via phone number or API endpoint
  • A Coval account

1. Connect Your Agent

Connect your agent to Coval’s simulation platform:
  1. Go to Agents in your dashboard
  2. Click “Add New Agent”
  3. Provide your agent’s connection details:
    • Phone number (for voice agents)
    • API endpoint URL (for chat agents)
  4. Configure additional settings like language preferences
Use the Inbound Voice connection type for agents that receive calls (such as customer support lines).
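For chat agents, the API endpoint you register is just an HTTP route that accepts a user message and returns your agent's reply. The exact request/response schema Coval expects is not documented here, so the sketch below uses an assumed payload shape (`message` in, `reply` out) purely for illustration:

```python
# Minimal sketch of a chat-agent handler you might expose behind an API
# endpoint. The {"message": ...} / {"reply": ...} payload shape is an
# assumption for illustration, not Coval's documented schema.

def handle_chat(request_json: dict) -> dict:
    """Take an incoming user message and return the agent's reply."""
    user_message = request_json.get("message", "")
    if not user_message:
        return {"reply": "Sorry, I didn't catch that. Could you repeat?"}
    # A real agent would call its model or backend here; this stub echoes.
    return {"reply": f"You said: {user_message}. How can I help further?"}
```

In practice you would wire a handler like this into whatever web framework your agent already uses, then register the resulting URL as the agent's endpoint.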

2. (Optional) Create a Persona

Define how your simulated users will behave:
  1. Navigate to Personas
  2. Click “Create New Persona”
  3. Configure basic settings:
    • User Type: Customer, Patient, Employee, or Custom
    • Voice & Language: Select from available options
    • Behavior: Set interruption sensitivity and response patterns
Coval offers a set of built-in Personas with different voice & background noise settings.
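Conceptually, a persona is just a small bundle of settings. The sketch below models one as a plain dict; the field names mirror the dashboard options above but are illustrative only, not Coval's actual configuration format:

```python
# Hypothetical persona configuration, sketched as a plain dict. Field
# names mirror the dashboard settings but are assumptions, not Coval's
# actual schema.

persona = {
    "user_type": "Customer",            # Customer, Patient, Employee, or Custom
    "voice": "en-US-female-1",          # assumed voice identifier
    "language": "en-US",
    "interruption_sensitivity": "low",  # how readily the simulated user interrupts
    "response_pattern": "concise",      # e.g. concise vs. rambling answers
}

def validate_persona(p: dict) -> bool:
    """Check the persona carries the basic settings described above."""
    required = {"user_type", "voice", "language", "interruption_sensitivity"}
    return required.issubset(p)
```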

3. Create Your Test Set

Tell Coval what your simulated users should do:
  1. Go to Test Sets
  2. Click “Create New Test Set”
  3. Add test cases using Scenarios:
    • Simple: "Call to get a refund"
    • Complex: "Ask for PTO from March 21-22, then change to March 20-22, provide email as emily@gmail.com"
Be specific in your scenarios: the more detail you provide, the more accurately the simulated users will follow instructions. For more detail, check our Test Set Guide.

4. Choose Your Metrics

Select how to evaluate your agent’s performance. Recommended starter metrics:
  • Call Resolution Success (LLM Judge)
  • Latency (Audio metric)
  • Interruptions (Audio metric)
Navigate to Metrics to create custom metrics or use built-in options.
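An LLM-judge metric like Call Resolution Success typically works by handing the transcript and a pass/fail criterion to a language model and parsing its verdict. Here is a rough sketch of that pattern; the prompt wording and the PASS/FAIL convention are assumptions, and the model call itself is left out:

```python
# Sketch of the LLM-judge pattern behind a metric like Call Resolution
# Success. Prompt wording and the PASS/FAIL convention are assumptions;
# the actual model call is stubbed out.

def build_judge_prompt(transcript: str, criterion: str) -> str:
    """Compose the prompt an LLM judge would score the transcript against."""
    return (
        "You are evaluating a conversation between a user and an AI agent.\n"
        f"Criterion: {criterion}\n"
        f"Transcript:\n{transcript}\n"
        "Answer PASS if the criterion is met, otherwise FAIL."
    )

def parse_verdict(model_output: str) -> bool:
    """Map the judge model's raw text answer to a boolean pass/fail."""
    return model_output.strip().upper().startswith("PASS")
```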

5. Create a Template

Bundle everything together for easy reuse:
  1. Go to Templates and click “Create New Template”
  2. Configure:
    • Test Set: Your scenarios
    • Agent: Your connected agent
    • Persona: Your simulated user behavior
    • Metrics: Your evaluation criteria
    • Iterations: Number of conversations to run (start with 1-3)
    • Concurrency: Parallel simulations (start with 1-2)
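Seen as data, a template is just the bundle of the pieces configured above. The sketch below models one as a dict with a sanity check on the recommended starter ranges; the field names and values are illustrative, not Coval's schema:

```python
# Hypothetical template bundle mirroring the fields listed above.
# Names and values are illustrative, not Coval's actual schema.

template = {
    "name": "refund-flow-smoke-test",
    "test_set": "refund-scenarios",
    "agent": "support-line-agent",
    "persona": "impatient-customer",
    "metrics": ["call_resolution_success", "latency", "interruptions"],
    "iterations": 2,    # conversations per scenario; guide suggests 1-3
    "concurrency": 1,   # parallel simulations; guide suggests 1-2
}

def within_starter_ranges(t: dict) -> bool:
    """Check iterations and concurrency against the guide's starter ranges."""
    return 1 <= t["iterations"] <= 3 and 1 <= t["concurrency"] <= 2
```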

6. Launch Your First Evaluation

  1. Click “Launch Evaluation”
  2. Select “Use Template” and choose your template
  3. Click “Launch”
  4. Monitor progress in real-time

7. Analyze Results

Once complete, review your evaluation:
  • Overview: Aggregated performance metrics
  • Individual Conversations: Detailed analysis of each simulation
  • Transcript Review: See exactly what was said and where issues occurred
  • Metric Explanations: Understand why each metric passed or failed

Next Steps

Advanced Metrics

Create custom LLM judge metrics for your specific use cases

CI/CD Integration

Automate evaluations with GitHub Actions

Continuous Monitoring

Set up alerts and recurring evaluations

Troubleshooting

Agent not connecting?
  • Verify your phone number or endpoint URL
  • Check if your agent accepts inbound calls/requests
  • Ensure proper authentication if using API endpoints
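Before digging deeper, it is worth confirming the endpoint URL is even well-formed. The sketch below validates the URL's shape locally with the standard library; actually probing the endpoint over the network (e.g. with `urllib.request`) is left out so the check works offline:

```python
# Quick local sanity check on an endpoint URL. Only validates the URL's
# shape; a live network probe is deliberately omitted.

from urllib.parse import urlparse

def looks_like_valid_endpoint(url: str) -> bool:
    """True if the URL has an http(s) scheme and a host component."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

A common mistake this catches is pasting a bare host like `api.example.com/chat` without the `https://` scheme.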
Simulations not running as expected?
  • Make test case scenarios more specific
  • Adjust persona settings for desired behavior
  • Check agent and persona compatibility
Need help? Contact our support team or check our detailed guides for more advanced configuration options.