Configuration
Complete reference for all Axiom Protocol plugin configuration parameters.
Full Configuration Example
```json
{
  "workerUrl": "https://your-worker.workers.dev",
  "apiKey": "sk-or-v1-...",
  "defaultModels": [
    "openai/gpt-5.4-mini",
    "anthropic/claude-haiku-4-5",
    "google/gemini-2.5-flash"
  ],
  "maxIterations": 7,
  "structure": "fugue",
  "dThreshold": 0.05
}
```

Parameters Reference
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| workerUrl | string | Yes | — | Axiom Protocol consensus worker endpoint. Deploy your own from ACP-PROJECT. |
| apiKey | string | Yes | env: OPENROUTER_API_KEY | OpenRouter API key. Can be set via environment variable or passed directly. |
| defaultModels | string[] | No | gpt-5.4-mini, claude-haiku-4-5, gemini-2.5-flash | Default model identifiers. Minimum 2 required for consensus. |
| maxIterations | number | No | 7 | Maximum convergence iterations (1–7). At 7 iterations, ~3.4% of the original disagreement remains. |
| structure | enum | No | fugue | Consensus structure: fugue (layered analysis), sonata (thesis→antithesis→synthesis), or concert (leader + ensemble). |
| dThreshold | number | No | 0.05 | D-score threshold (0–1). Consensus is reached when D drops below this value. |
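The table above can be expressed as a TypeScript shape. This is an illustrative sketch, not the plugin's actual type definitions; the interface name is hypothetical, but the fields, types, and optionality follow the table.

```typescript
// Illustrative configuration shape; the interface name is hypothetical.
type ConsensusStructure = "fugue" | "sonata" | "concert";

interface AxiomPluginConfig {
  workerUrl: string;              // required: consensus worker endpoint
  apiKey: string;                 // required: OpenRouter key (or via OPENROUTER_API_KEY)
  defaultModels?: string[];       // optional: at least 2 models needed for consensus
  maxIterations?: number;         // optional: 1-7, default 7
  structure?: ConsensusStructure; // optional: default "fugue"
  dThreshold?: number;            // optional: 0-1, default 0.05
}

const cfg: AxiomPluginConfig = {
  workerUrl: "https://your-worker.workers.dev",
  apiKey: "sk-or-v1-...",
};
```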
Worker URL
The workerUrl points to your Axiom Protocol consensus worker. The plugin sends POST requests to {workerUrl}/consensus-iterative.
Deploy the worker from the ACP-PROJECT repository. The worker runs on Cloudflare Workers and handles the iterative consensus algorithm.
Trailing slashes
The plugin automatically strips trailing slashes from the worker URL. Both https://worker.dev and https://worker.dev/ will work correctly.
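The normalization described above can be sketched in one line; the function name is illustrative, not the plugin's actual internals.

```typescript
// Illustrative sketch of the trailing-slash handling; the name is hypothetical.
function normalizeWorkerUrl(url: string): string {
  // Strip any trailing slashes so both URL forms resolve to the same endpoint
  return url.replace(/\/+$/, "");
}

// Both inputs produce the same consensus endpoint:
const endpoint = normalizeWorkerUrl("https://worker.dev/") + "/consensus-iterative";
```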
API Key
The plugin requires an OpenRouter API key to access LLM models. You can provide it in two ways:
- Environment variable (recommended): set OPENROUTER_API_KEY in your environment
- Direct config: pass the key in the apiKey field of the plugin configuration
The key is sent in the x-openrouter-key HTTP header with each consensus request.
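A sketch of the two resolution paths and the header described above. The helper names are illustrative, and the precedence (direct config first, then the environment variable) is an assumption; the document states both sources but not their order.

```typescript
// Illustrative key resolution: direct config first, then the environment
// variable (precedence is an assumption, not documented behaviour).
function resolveApiKey(
  configKey?: string,
  env: Record<string, string | undefined> = process.env
): string {
  const key = configKey ?? env["OPENROUTER_API_KEY"];
  if (!key) throw new Error("OpenRouter API key missing");
  return key;
}

// The resolved key travels in the x-openrouter-key header of each request:
function consensusHeaders(apiKey: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    "x-openrouter-key": apiKey,
  };
}
```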
Default Models
The defaultModels array specifies which LLMs participate in consensus by default. You need at least 2 models. For best results, use models from different providers to maximize diversity of reasoning.
```json
[
  "openai/gpt-5.4-mini",
  "anthropic/claude-haiku-4-5",
  "google/gemini-2.5-flash"
]
```

Cross-provider diversity
Models from different providers (OpenAI, Anthropic, Google) have different training data and architectures, making axiom grounding more effective.
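The two constraints above — at least 2 models, ideally from more than one provider — can be checked with a small helper. This is an illustrative sketch, not part of the plugin's API; it relies only on the provider/model identifier format shown in the examples.

```typescript
// Illustrative validation of a defaultModels list (not the plugin's API).
function checkModels(models: string[]): { ok: boolean; crossProvider: boolean } {
  // Provider is the segment before "/" in identifiers like "openai/gpt-5.4-mini"
  const providers = new Set(models.map((m) => m.split("/")[0]));
  return {
    ok: models.length >= 2,            // consensus needs at least 2 models
    crossProvider: providers.size > 1, // diversity across providers
  };
}
```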
Max Iterations
Controls how many convergence rounds the algorithm performs. Each iteration divides the remaining disagreement (D-score) by the golden ratio (φ ≈ 1.618):
- 1 iteration: ~61.8% of original disagreement remains
- 3 iterations: ~23.6% remaining
- 5 iterations: ~9.0% remaining
- 7 iterations (default): ~3.4% remaining
More iterations yield stronger consensus but increase latency and API costs.
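The decay figures above follow directly from the φ relationship: after n iterations, roughly 1/φⁿ of the original disagreement remains. A quick check, with an illustrative function name:

```typescript
// Remaining disagreement after n iterations, per the golden-ratio decay above.
const PHI = (1 + Math.sqrt(5)) / 2; // golden ratio, ≈ 1.618

function remainingDisagreement(iterations: number): number {
  return 1 / Math.pow(PHI, iterations);
}

// remainingDisagreement(1) ≈ 0.618, (3) ≈ 0.236, (5) ≈ 0.090, (7) ≈ 0.034
```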
Consensus Structure
| Structure | Pattern | Best For |
|---|---|---|
| fugue | Models build on each other in layers | Complex questions, technical analysis, deep research |
| sonata | Thesis → antithesis → synthesis | Debates, ethical dilemmas, competing approaches |
| concert | One model leads, others refine | Creative tasks, brainstorming, content generation |
D-Score Threshold
The dThreshold determines when consensus is considered reached. When the D-score drops below this value, the algorithm stops iterating and returns the result.
- 0.05 (default): strong consensus required
- 0.20: high confidence, faster results
- 0.40: moderate agreement acceptable
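The threshold interacts with maxIterations: whichever stopping condition is hit first ends the run. A sketch of that rule, assuming the D-score falls by a factor of φ each round as described earlier; the function name and the initial D-score of 1.0 are illustrative assumptions.

```typescript
// Illustrative stopping rule: iterate until D drops below the threshold
// or maxIterations is reached, assuming D shrinks by φ per round.
const PHI = (1 + Math.sqrt(5)) / 2;

function iterationsToConsensus(
  dThreshold: number,
  maxIterations: number,
  initialD = 1.0 // assumed starting disagreement
): number {
  let d = initialD;
  let iterations = 0;
  while (d >= dThreshold && iterations < maxIterations) {
    d /= PHI; // each round divides remaining disagreement by φ
    iterations++;
  }
  return iterations;
}
```

With the defaults (0.05, 7) the run uses all 7 rounds; a looser 0.20 threshold stops after 4.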
Timeout
The client uses a default timeout of 120 seconds (120,000ms) per consensus request. This is configurable via the timeoutMs parameter in the client configuration. The timeout uses AbortController for clean cancellation.
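A minimal sketch of that timeout pattern, assuming a fetch-based client; the function name and request body are illustrative, not the plugin's actual implementation. Only the 120-second default, the POST method, and the use of AbortController come from the document.

```typescript
// Illustrative fetch-with-timeout using AbortController, per the behaviour above.
const DEFAULT_TIMEOUT_MS = 120_000; // 120 seconds, the documented client default

async function postWithTimeout(
  url: string,
  body: unknown,
  timeoutMs: number = DEFAULT_TIMEOUT_MS
) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
      signal: controller.signal, // aborting the signal cancels the request cleanly
    });
  } finally {
    clearTimeout(timer); // avoid a dangling timer once the request settles
  }
}
```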