Quick Guide: Setting Up GitHub Actions

Set up Manual Workflow

Step 1: Create the Workflow File

  1. Navigate to .github/workflows/ in your repository.

  2. Create a new file named run_coval_evaluation.yml.

  3. Paste the following content:

run_coval_evaluation.yml
name: Run Coval Evaluation
on:
  workflow_dispatch:
    inputs:
      organization_id:
        description: "Your organization ID."
        required: true
        type: string
      dataset_id:
        description: "The dataset ID to evaluate."
        required: true
        type: string
      created_by:
        description: "Identifier for who triggered the workflow."
        required: false
        type: string
      test_set_name:
        description: "Name of the Test Set to evaluate."
        required: true
        type: string
      config:
        description: "Configuration JSON as a string."
        required: true
        type: string
jobs:
  run-coval-evaluation:
    uses: coval-ai/coval-github-actions/.github/workflows/run_eval.yml@main
    with:
      organization_id: ${{ github.event.inputs.organization_id }}
      dataset_id: ${{ github.event.inputs.dataset_id }}
      created_by: ${{ github.event.inputs.created_by }}
      test_set_name: ${{ github.event.inputs.test_set_name }}
      config: ${{ github.event.inputs.config }}
    secrets:
      COVAL_API_KEY: ${{ secrets.COVAL_API_KEY }}

Learn more about how to set up the config settings in the monitoring section.

Step 2: Add Your Coval API Key

  1. Go to your repository’s Settings > Secrets and variables > Actions.

  2. Click New repository secret.

  3. Name it COVAL_API_KEY and paste your Coval API key.

Step 3: Run the Workflow

  1. Navigate to the Actions tab in your repository.

  2. Select Run Coval Evaluation.

  3. Click Run workflow.

  4. Enter the required inputs:

  • organization_id: Your Coval organization ID.

  • dataset_id: The dataset ID for evaluation.

  • test_set_name: Name of the Test Set to evaluate.

  • config: Configuration parameters in JSON string format.

  • created_by: (Optional) Identifier for who triggered the workflow.

  5. Click Run workflow to start the evaluation and check the results.
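The manual trigger above can also be scripted against GitHub's REST API, which accepts a `workflow_dispatch` request with a `ref` and an `inputs` object. Below is a minimal sketch of assembling that request body in Python; the helper name and the example input values are illustrative, not part of the Coval workflow itself. Note that the `config` input travels as a JSON string, so a nested config object must be serialized before it is placed in `inputs`.

```python
import json

def build_dispatch_payload(ref, organization_id, dataset_id,
                           test_set_name, config, created_by=None):
    """Assemble the JSON body for a workflow_dispatch REST call.

    All workflow inputs are strings (matching the input types declared
    in run_coval_evaluation.yml); `config` is a dict that gets
    serialized to a JSON string here.
    """
    inputs = {
        "organization_id": organization_id,
        "dataset_id": dataset_id,
        "test_set_name": test_set_name,
        "config": json.dumps(config),  # nested JSON is passed as a string
    }
    if created_by is not None:
        inputs["created_by"] = created_by  # optional input
    return {"ref": ref, "inputs": inputs}

# Example values only; substitute your own IDs.
payload = build_dispatch_payload(
    ref="main",
    organization_id="org_123",
    dataset_id="ds_456",
    test_set_name="smoke-tests",
    config={"model": {"type": "MODEL_TYPE_VOICE"}},
)
print(json.dumps(payload, indent=2))
```

This body would be POSTed to `/repos/{owner}/{repo}/actions/workflows/run_coval_evaluation.yml/dispatches` with an authenticated client; using the Actions UI as described above achieves the same thing.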

Set up Automatic GitHub Action

run_coval_evaluation.yml
name: Run Coval Evaluation

on:
  pull_request:
    branches:
      - main

jobs:
  run-coval-evaluation:
    uses: coval-ai/coval-github-actions/.github/workflows/run_eval.yml@main
    with:
      organization_id: <YOUR_ORGANIZATION_ID>
      dataset_id: <YOUR_DATASET_ID>
      created_by: <USER>
      test_set_name: <TEST-SET-NAME>
      config: |
        {
          "model": {
            "type": "MODEL_TYPE_VOICE",
            "config": {
              "phone_number": "<YOUR PHONE NUMBER>"
            }
          },
          "metrics": {
            "metric_type_yesnoquestions_hallucination": {
              "submetrics": [],
              "custom_questions": [
                {
                  "tag": "successful_call",
                  "question": "Did the agent successfully complete the objective of the call?"
                }
              ]
            }
          }
        }
    secrets:
      COVAL_API_KEY: ${{ secrets.COVAL_API_KEY }}

This workflow runs an evaluation automatically on every pull request targeting main. As in the previous section, follow steps 1 and 2 to create the workflow file and add your API key; the workflow will then trigger on its own.

Important Notes

  • Configuration JSON: Ensure your config input is a valid JSON string.

  • Secrets: Your COVAL_API_KEY is securely stored and accessed via ${{ secrets.COVAL_API_KEY }}.

  • External Workflow: The action uses a predefined workflow from coval-ai/coval-github-actions.
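Because the config input must be valid JSON, a quick way to catch mistakes before committing is to parse it locally. The sketch below checks the same config shape used in the automatic-workflow example above; the phone number is a placeholder, and the top-level key check is just an illustrative sanity test.

```python
import json

config_str = """
{
  "model": {
    "type": "MODEL_TYPE_VOICE",
    "config": {
      "phone_number": "+15551234567"
    }
  },
  "metrics": {
    "metric_type_yesnoquestions_hallucination": {
      "submetrics": [],
      "custom_questions": [
        {
          "tag": "successful_call",
          "question": "Did the agent successfully complete the objective of the call?"
        }
      ]
    }
  }
}
"""

# json.loads raises json.JSONDecodeError on invalid input,
# which is exactly the failure you want to see before pushing.
config = json.loads(config_str)
assert "model" in config and "metrics" in config
print("config OK:", config["model"]["type"])
```

Running this locally (or in a pre-commit hook) fails fast on a stray comma or unquoted value, instead of failing later inside the Actions run.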

Need Help?

If you have any questions or need assistance, please contact our support team.