Overview
OpenAI Endpoint connections integrate with any OpenAI-compatible chat completions API, enabling text-based conversational testing.

Configuration Requirements
Chat Endpoint
- Field: chat_endpoint
- Type: String (required)
- Purpose: URL endpoint for OpenAI-compatible chat completions API
- Format: Valid HTTPS URL
- Example: https://api.openai.com/v1/chat/completions
Authentication Token
- Field: auth_token
- Type: String (required)
- Purpose: API key or authentication token for the endpoint
- Format: String (maximum 4KB length)
- Security: Sensitive field - stored encrypted
- Example: sk-proj-abc123def456...
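Because the token is sensitive, a common practice is to read it from an environment variable rather than hard-coding it in source or config files. A minimal sketch; the variable name `OPENAI_API_KEY` is an assumption, and the 4KB check mirrors the length limit documented above:

```python
import os

def load_auth_token(var_name="OPENAI_API_KEY"):
    """Read the API token from an environment variable so it is never
    hard-coded. The variable name is an assumption; use whatever your
    deployment defines."""
    token = os.environ.get(var_name)
    if token is None:
        raise RuntimeError(f"{var_name} is not set")
    if len(token.encode("utf-8")) > 4096:  # documented limit: maximum 4KB
        raise ValueError("auth_token exceeds the 4KB limit")
    return token
```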
Model
- Field: model
- Type: String (optional)
- Default: "gpt-4o"
- Purpose: Specify which OpenAI model to use for completions
- GPT-5 Series: gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-chat
- GPT-4.1 Family: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano
- Reasoning Models: o1, o1-mini, o1-pro, o3-mini, o3-pro, o3, o4-mini
- Current Generation: gpt-4o, gpt-4o-mini, gpt-4-turbo
- Legacy Models: gpt-4, gpt-3.5-turbo
Max Tokens
- Field: max_tokens
- Type: Number (optional)
- Purpose: Maximum number of tokens to generate in responses
- Range: Positive integer ≤ 100,000
- Example: 1000
System Prompt
- Field: system_prompt
- Type: String (optional)
- Purpose: System-level instructions defining agent behavior and constraints
- Use Cases: Role definition, behavior guidelines, response formatting
- Example: "You are a helpful customer service assistant. Always be polite and provide accurate information."
Temperature
- Field: temperature
- Type: Number (optional)
- Default: 1.0
- Purpose: Controls randomness in AI responses
- Range: 0.0 (deterministic) to 2.0 (very random)
- Example: 0.7
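Putting the fields above together, a complete connection configuration might look like the following sketch. The field names come from this reference; the endpoint and token values are placeholders, not working credentials:

```python
# Illustrative connection configuration using the documented fields.
connection_config = {
    "chat_endpoint": "https://api.openai.com/v1/chat/completions",  # required
    "auth_token": "sk-proj-abc123def456",      # required, maximum 4KB
    "model": "gpt-4o",                         # optional, defaults to "gpt-4o"
    "max_tokens": 1000,                        # optional, positive integer <= 100,000
    "system_prompt": "You are a helpful customer service assistant.",
    "temperature": 0.7,                        # optional, 0.0-2.0, defaults to 1.0
}
```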
Setup Instructions
- Enter the complete URL for your OpenAI-compatible API
- Enter a valid API key from your provider
- Select a model and set the token limit and temperature
- Define a system prompt to shape agent behavior
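Once configured, the connection fields map directly onto a standard chat completions request. A stdlib-only sketch of how the request could be assembled; `build_chat_request` is a hypothetical helper, and the payload shape follows the common chat completions format:

```python
import json
import urllib.request

def build_chat_request(config, user_message):
    """Build an OpenAI-compatible chat completions request from a
    connection configuration dict using the fields documented above."""
    messages = []
    if config.get("system_prompt"):
        messages.append({"role": "system", "content": config["system_prompt"]})
    messages.append({"role": "user", "content": user_message})

    payload = {"model": config.get("model", "gpt-4o"), "messages": messages}
    if "max_tokens" in config:
        payload["max_tokens"] = config["max_tokens"]
    if "temperature" in config:
        payload["temperature"] = config["temperature"]

    return urllib.request.Request(
        config["chat_endpoint"],
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {config['auth_token']}",
        },
        method="POST",
    )
```

The returned request can be sent with `urllib.request.urlopen(...)` or any HTTP client of your choice.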
Troubleshooting
Common Issues:
- Authentication Failures: Verify API key validity and permissions
- Invalid Endpoint: Confirm URL format and API compatibility
- Model Not Available: Check model name and provider support
- Rate Limit Errors: Monitor API usage and implement backoff strategies
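For rate limit errors, the backoff strategy mentioned above can be sketched as exponential backoff with jitter. `RateLimitError` here is a hypothetical stand-in for whatever your HTTP client raises on a 429 response:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the exception your HTTP client raises on HTTP 429."""

def call_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a zero-argument request callable, sleeping exponentially
    longer (with jitter) after each rate-limit error."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Delays of roughly base_delay * 1, 2, 4, ... plus random jitter.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
```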

