10-Minute API Quickstart
Use ACP as an API in your applications. This guide walks you through authentication, your first request, response interpretation, and practical use cases.
Step 1: Get your API key
ACP uses OpenRouter to access multiple LLM providers through a single key. OpenRouter gives you access to models from OpenAI, Anthropic, Google, Meta, and others.
- Sign up at openrouter.ai
- Navigate to openrouter.ai/keys and create a new API key
- Export the key in your terminal session
```shell
# OpenRouter provides access to multiple LLMs through one key
# Sign up at: https://openrouter.ai/keys
export OPENROUTER_API_KEY="your_key_here"
```

Keep your key safe
Never commit your API key to version control or share it publicly. Use environment variables or a secrets manager in production.
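To keep the key out of your source code, read it from the environment at startup and fail fast if it is missing. A minimal Python sketch (the function name `load_openrouter_key` is illustrative, not part of any ACP SDK):

```python
import os


def load_openrouter_key() -> str:
    """Read the OpenRouter API key from the environment.

    Raising early gives a clear error instead of sending
    requests that will fail with an authentication error.
    """
    key = os.environ.get("OPENROUTER_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENROUTER_API_KEY is not set; export it before running."
        )
    return key
```

In production, the same pattern works with a secrets manager: swap the `os.environ` lookup for a call to your secrets backend and keep the rest of the code unchanged.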
Step 2: Make your first request
The consensus endpoint accepts a query and a list of models. It runs iterative consensus rounds until the D-score drops below the threshold or the maximum number of iterations is reached.
```shell
curl -X POST https://your-worker.workers.dev/consensus-iterative \
  -H "Content-Type: application/json" \
  -H "x-openrouter-key: $OPENROUTER_API_KEY" \
  -d '{
    "query": "What is 2+2?",
    "models": ["openai/gpt-5.4", "anthropic/claude-sonnet-4-6"],
    "max_iterations": 7
  }'
```

Request parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | Yes | The question or prompt to reach consensus on. |
| `models` | string[] | Yes | Array of OpenRouter model identifiers. Minimum 2. |
| `max_iterations` | number | No | Maximum convergence rounds (default: 7). Lower values are faster but may not reach consensus. |
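If you are building the request body programmatically, it can help to validate the documented constraints before sending. A small sketch, assuming only the parameters in the table above (the helper name `build_consensus_request` is hypothetical):

```python
import json


def build_consensus_request(
    query: str, models: list[str], max_iterations: int = 7
) -> str:
    """Build the JSON body for the consensus endpoint,
    enforcing the documented constraints client-side."""
    if len(models) < 2:
        raise ValueError("consensus requires at least 2 models")
    if max_iterations < 1:
        raise ValueError("max_iterations must be at least 1")
    return json.dumps(
        {
            "query": query,
            "models": models,
            "max_iterations": max_iterations,
        }
    )
```

Failing locally on a single-model list saves a round trip and surfaces the mistake where it was made.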
Step 3: Understand the response
A successful consensus response contains the final answer, convergence metrics, and a full iteration history showing how each model responded at every round.
```json
{
  "query": "What is 2+2?",
  "final_answer": "4",
  "final_D": 0.0,
  "consensus_reached": true,
  "iterations_used": 1,
  "iteration_history": [
    {
      "iteration": 1,
      "D": 0.0,
      "responses": [
        { "model": "gpt-5.4", "content": "2+2 equals 4" },
        { "model": "claude-sonnet-4-6", "content": "The sum of 2 and 2 is 4" }
      ]
    }
  ]
}
```

Response fields
| Field | Type | Description |
|---|---|---|
| `final_D` | number | Divergence score between 0 and 1. A value of 0 means all models produced semantically identical answers. Values below 0.1 indicate strong consensus. |
| `consensus_reached` | boolean | True when the D-score drops below the consensus threshold (0.1 by default). |
| `final_answer` | string | The synthesized consensus answer, derived from all participating models. |
| `axioms_used` | string[] | The axiom levels that were referenced to ground the consensus (when available). |
| `iterations_used` | number | Total number of convergence rounds. Fewer iterations generally mean higher initial agreement. |
| `iteration_history` | object[] | Full log of each iteration: the D-score at that round and every model's response. |
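In application code you will usually reduce this structure to the few fields you act on. A minimal parsing sketch (the helper name `summarize_consensus` is illustrative; the field names match the response above):

```python
import json


def summarize_consensus(raw: str) -> str:
    """Condense a consensus response into a one-line summary
    using the documented top-level fields."""
    resp = json.loads(raw)
    status = "reached" if resp["consensus_reached"] else "not reached"
    return (
        f"consensus {status} in {resp['iterations_used']} iteration(s), "
        f"final_D={resp['final_D']}: {resp['final_answer']}"
    )
```

Running it on the sample response above yields `consensus reached in 1 iteration(s), final_D=0.0: 4`.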
Interpreting D-score
A final_D of 0.0 means perfect agreement. For most practical applications, any value below 0.1 represents strong consensus. Values between 0.1 and 0.3 indicate partial agreement with minor differences. Values above 0.5 suggest significant disagreement between models.
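These bands translate directly into code when you need to gate downstream behavior on consensus strength. A sketch of that mapping (note: the text does not name the 0.3 to 0.5 band, so the "weak agreement" label there is an assumption):

```python
def interpret_d_score(d: float) -> str:
    """Map a final_D value to the qualitative bands described above."""
    if not 0.0 <= d <= 1.0:
        raise ValueError("D-score must lie in [0, 1]")
    if d < 0.1:
        return "strong consensus"
    if d <= 0.3:
        return "partial agreement"
    if d <= 0.5:
        return "weak agreement"  # band not named in the text; label assumed
    return "significant disagreement"
```

A typical use is to accept the answer automatically below 0.1 and route anything above that to a human or to a retry with more models.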
Example use cases
Below are ready-to-use examples for common integration patterns. Copy any of these and replace the API key and worker URL with your own.
Fact-checking
Use three or more models to verify factual claims. The consensus result is typically more reliable than any single model's answer, because a hallucination from one model tends to be contradicted, and thereby corrected, by the others.
```shell
curl -X POST https://your-worker.workers.dev/consensus-iterative \
  -H "Content-Type: application/json" \
  -H "x-openrouter-key: $OPENROUTER_API_KEY" \
  -d '{
    "query": "What is the speed of light in vacuum?",
    "models": [
      "openai/gpt-5.4",
      "anthropic/claude-sonnet-4-6",
      "google/gemini-3.1-pro"
    ]
  }'
```

Code review
Submit code snippets for multi-model review. Each model independently analyzes the code, and ACP synthesizes their findings into a unified assessment.
```shell
curl -X POST https://your-worker.workers.dev/consensus-iterative \
  -H "Content-Type: application/json" \
  -H "x-openrouter-key: $OPENROUTER_API_KEY" \
  -d '{
    "query": "Is this Python code correct?\ndef factorial(n):\n if n == 0: return 1\n return n * factorial(n-1)",
    "models": ["openai/gpt-5.4", "anthropic/claude-sonnet-4-6"]
  }'
```

Content moderation
Use consensus to make more balanced moderation decisions. Multiple models evaluate content independently, reducing the bias that can come from relying on a single model's judgment.
```shell
curl -X POST https://your-worker.workers.dev/consensus-iterative \
  -H "Content-Type: application/json" \
  -H "x-openrouter-key: $OPENROUTER_API_KEY" \
  -d '{
    "query": "Evaluate this user comment for policy violations: [comment text here]",
    "models": [
      "openai/gpt-5.4",
      "anthropic/claude-sonnet-4-6",
      "google/gemini-3.1-pro"
    ],
    "max_iterations": 3
  }'
```

Choosing models
The models you choose affect cost, speed, and consensus quality. Here are recommended combinations for different scenarios:
| Scenario | Models | Cost / query |
|---|---|---|
| Fast prototyping | gpt-5.4-mini + claude-haiku-4-5 | ~$0.02 |
| Production fact-checking | gpt-5.4 + claude-sonnet-4-6 + gemini-3.1-pro | ~$0.10 |
| High-stakes decisions | gpt-5.4 + claude-opus-4-6 + gemini-3.1-pro | ~$0.25 |
| Maximum breadth | 4+ models from different providers | ~$0.15-$0.40 |
Cross-provider diversity
For best results, choose models from different providers. ACP's axiom grounding is most effective when models have different training data and architectures, since the axioms serve as shared anchor points that transcend individual model biases.
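Because OpenRouter model identifiers follow a `provider/model` pattern (as in the examples above), you can check diversity mechanically before sending a request. A small sketch (the helper name `provider_diversity` is hypothetical):

```python
def provider_diversity(models: list[str]) -> int:
    """Count distinct providers in a list of OpenRouter
    model identifiers of the form "provider/model"."""
    return len({m.split("/", 1)[0] for m in models})
```

For example, `provider_diversity(["openai/gpt-5.4", "anthropic/claude-sonnet-4-6", "google/gemini-3.1-pro"])` returns 3, while two models from the same provider count only once; warning when the count falls below 2 is a cheap guard against accidentally homogeneous model lists.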
Next steps
- Full Setup (30 min) -- run ACP locally with the complete frontend, Python engine, and Worker API.
- Worker API Reference -- full documentation of all endpoints, parameters, and response formats.
- Authentication -- detailed guide to API keys, rate limits, and security best practices.
- Code Examples -- Python, JavaScript, and cURL examples for common integration patterns.