Prerequisites
- Your AI agent (voice or chat) must be accessible via phone number or API endpoint
- A Coval account
1. Connect Your Agent
Connect your agent to Coval’s simulation platform:
- Go to Agents in your dashboard
- Click “Add New Agent”
- Provide your agent’s connection details:
  - Phone number (for voice agents)
  - API endpoint URL (for chat agents)
- Configure additional settings like language preferences
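For chat agents, Coval needs an HTTP endpoint it can send simulated-user messages to. The sketch below shows one way the server side of such an endpoint could look; the request/response field names (`message`, `reply`) are illustrative assumptions, not Coval’s actual schema — check the connection docs for the exact contract your endpoint must implement.

```python
import json

# Hypothetical request/response shape -- the field names "message" and
# "reply" are our own placeholders, not Coval's documented schema.
def handle_turn(payload: dict) -> dict:
    """Handle one simulated-user message and return the agent's reply."""
    user_message = payload.get("message", "")
    # Your real agent logic (LLM call, dialogue manager, etc.) goes here.
    if "refund" in user_message.lower():
        reply = "I can help with that refund. Could you share your order number?"
    else:
        reply = "Hello! How can I help you today?"
    return {"reply": reply}

# Wrapping handle_turn in any HTTP framework (Flask, FastAPI, etc.)
# gives you an endpoint URL to register in the Agents dashboard.
print(json.dumps(handle_turn({"message": "I want a refund"})))
```

The same handler works for manual testing: call it directly with a sample payload before exposing it over HTTP.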
2. (Optional) Create a Persona
Define how your simulated users will behave:
- Navigate to Personas
- Click “Create New Persona”
- Configure basic settings:
  - User Type: Customer, Patient, Employee, or Custom
  - Voice & Language: Select from available options
  - Behavior: Set interruption sensitivity and response patterns
Coval offers a set of built-in Personas with different voice & background noise settings.
3. Create Your Test Set
Tell Coval what your simulated users should do:
- Go to Test Sets
- Click “Create New Test Set”
- Add test cases using Scenarios:
  - Simple: "Call to get a refund"
  - Complex: "Ask for PTO from March 21-22, then change to March 20-22, provide email as emily@gmail.com"
Be specific in your scenarios: the more detail you provide, the more accurately our simulated users will follow instructions. For more detail, check our Test Set Guide.
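It can help to draft scenarios as structured data before entering them in the Test Sets UI, so you can review them for specificity in one place. This is purely an authoring convenience — the field names below are our own, not Coval’s schema.

```python
# Illustrative only: drafting test cases as plain data before pasting
# them into Coval's Test Set UI. "name" and "scenario" are our own
# field names, not Coval's schema.
test_cases = [
    {
        "name": "simple_refund",
        "scenario": "Call to get a refund",
    },
    {
        "name": "pto_change",
        "scenario": (
            "Ask for PTO from March 21-22, then change to March 20-22, "
            "provide email as emily@gmail.com"
        ),
    },
]

# Quick sanity check: very short scenarios tend to produce vague
# simulations, so flag one-liners for another editing pass.
for case in test_cases:
    word_count = len(case["scenario"].split())
    print(f'{case["name"]}: {word_count} words')
```

A review pass like this catches underspecified scenarios (e.g. missing dates, names, or expected outcomes) before you spend simulation runs on them.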
4. Choose Your Metrics
Select how to evaluate your agent’s performance. Recommended starter metrics:
- Call Resolution Success (LLM Judge)
- Latency (Audio metric)
- Interruptions (Audio metric)
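Conceptually, an LLM Judge metric like Call Resolution Success grades each transcript against a rubric. The sketch below illustrates the shape of such a check, with a simple phrase heuristic standing in for the grading-model call — it is not Coval’s implementation.

```python
RUBRIC = "Did the agent fully resolve the caller's request?"

def judge_call_resolution(transcript: str) -> bool:
    """Stand-in for an LLM judge. In a real metric, the transcript and
    rubric would be sent to a grading model; here a phrase heuristic
    marks the call resolved if a confirmation appears."""
    confirmations = ("refund has been issued", "request is confirmed", "all set")
    return any(phrase in transcript.lower() for phrase in confirmations)

transcript = (
    "User: I'd like a refund for order 1042.\n"
    "Agent: Done -- your refund has been issued."
)
print(judge_call_resolution(transcript))  # True: a confirmation phrase is present
```

The audio metrics (Latency, Interruptions) are computed from the call recording itself rather than from the transcript, so they need no rubric.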
5. Create a Template
Bundle everything together for easy reuse:
- Go to Templates → “Create New Template”
- Configure:
  - Test Set: Your scenarios
  - Agent: Your connected agent
  - Persona: Your simulated user behavior
  - Metrics: Your evaluation criteria
  - Iterations: Number of conversations to run (start with 1-3)
  - Concurrency: Parallel simulations (start with 1-2)
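To see how Iterations and Concurrency interact, here is a toy model: N iterations of a simulated conversation run, with at most C in flight at once. The function names are illustrative, not Coval internals.

```python
import asyncio

async def run_simulation(i: int) -> str:
    # Placeholder for one simulated conversation against your agent.
    await asyncio.sleep(0.01)
    return f"conversation {i} complete"

async def run_evaluation(iterations: int, concurrency: int) -> list:
    # Concurrency caps how many simulations run in parallel;
    # Iterations is the total number of conversations.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(i: int) -> str:
        async with sem:
            return await run_simulation(i)

    return await asyncio.gather(*(bounded(i) for i in range(iterations)))

results = asyncio.run(run_evaluation(iterations=3, concurrency=2))
print(results)
```

Starting with low values (1-3 iterations, 1-2 concurrent) keeps your first runs cheap and easy to debug; raise them once results look stable.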
6. Launch Your First Evaluation
- Click “Launch Evaluation”
- Select “Use Template” and choose your template
- Click “Launch”
- Monitor progress in real-time
7. Analyze Results
Once complete, review your evaluation:
- Overview: Aggregated performance metrics
- Individual Conversations: Detailed analysis of each simulation
- Transcript Review: See exactly what was said and where issues occurred
- Metric Explanations: Understand why each metric passed or failed
Next Steps
Advanced Metrics
Create custom LLM judge metrics for your specific use cases
CI/CD Integration
Automate evaluations with GitHub Actions
Continuous Monitoring
Set up alerts and recurring evaluations
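As a taste of CI/CD gating, a script along these lines could fail a pipeline when evaluation scores drop below a threshold. The results payload here is hypothetical — in a real workflow you would pull scores from Coval’s API or CLI output rather than hard-coding them.

```python
import sys

# Hypothetical results payload; in a real pipeline these scores would
# come from Coval's API or CLI, not be hard-coded.
results = {"call_resolution_success": 0.92, "latency_pass_rate": 0.95}

THRESHOLD = 0.9  # minimum acceptable pass rate per metric

failures = [name for name, score in results.items() if score < THRESHOLD]
if failures:
    print(f"Evaluation below threshold: {failures}")
    sys.exit(1)  # nonzero exit fails the CI job
print("All metrics passed")
```

Run as a post-evaluation step in GitHub Actions, the nonzero exit code blocks the merge when agent quality regresses.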
Troubleshooting
Agent not connecting?
- Verify your phone number or endpoint URL
- Check if your agent accepts inbound calls/requests
- Ensure proper authentication if using API endpoints
Simulations not behaving as expected?
- Make test case scenarios more specific
- Adjust persona settings for desired behavior
- Check agent and persona compatibility

