SDKs & Client Examples
ACP can be integrated from any language that supports HTTP. This page covers the Python SDK (with two usage patterns), JavaScript/TypeScript fetch examples, and cURL commands for quick testing.
| Language | Method | Best For |
|---|---|---|
| Python | VectorizeClient | Calling the Worker API from Python applications |
| Python | ConsensusEngine | Direct engine access without a running server |
| JavaScript / TypeScript | fetch | Browser and Node.js integrations |
| cURL | CLI | Quick testing and shell scripts |
Python: VectorizeClient
The VectorizeClient is a thin async HTTP client that wraps the Worker API. It handles authentication and serialization, and provides typed methods for each endpoint.
```python
from src.core.vectorize_client import VectorizeClient
import asyncio


async def main():
    # Initialize client
    client = VectorizeClient(
        worker_url="https://your-worker.workers.dev",
        openrouter_key="your_openrouter_key"
    )

    # Check semantic cache before running consensus
    result = await client.check_cache("What is 2+2?")
    if result.get("hit"):
        print(f"Cached answer: {result['result']}")
        return

    # Search axioms by semantic similarity
    axioms = await client.search_axioms(
        "machine learning",
        top_k=5,
        level_filter=[5, 6, 7]
    )
    for axiom in axioms:
        print(f"[L{axiom['level']}] {axiom['statement']}")

    # Run consensus via the Worker API
    response = await client.request(
        "POST",
        "/consensus-iterative",
        json={
            "query": "What is the capital of France?",
            "models": [
                "openai/gpt-5.4-mini",
                "anthropic/claude-haiku-4-5"
            ]
        }
    )
    print(f"Answer: {response['final_answer']}")
    print(f"Confidence: {response['confidence']}")
    print(f"Iterations: {response['iterations']}")


asyncio.run(main())
```
VectorizeClient Methods
| Method | Description |
|---|---|
| `check_cache(query)` | Check semantic cache for a similar query |
| `search_axioms(query, top_k, level_filter)` | Search axioms by semantic similarity with optional level filtering |
| `request(method, path, json)` | Make an authenticated HTTP request to any Worker endpoint |
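Together these methods support a cache-first pattern: check the semantic cache, and fall back to a consensus run only on a miss. A minimal sketch, assuming the methods behave as described above (the `answer` helper itself is hypothetical, not part of the SDK):

```python
async def answer(client, query: str, models: list[str]) -> str:
    """Return a cached answer if available, otherwise run consensus.

    `client` is a VectorizeClient; this wrapper only combines the
    documented check_cache() and request() methods.
    """
    cached = await client.check_cache(query)
    if cached.get("hit"):
        # Cache hit: reuse the stored result and skip the consensus run
        return cached["result"]

    # Cache miss: run iterative consensus through the Worker API
    response = await client.request(
        "POST",
        "/consensus-iterative",
        json={"query": query, "models": models},
    )
    return response["final_answer"]
```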
Python: Direct Engine Usage
For local development or embedding ACP into a larger Python application, you can use the ConsensusEngine directly. This bypasses the HTTP layer entirely and gives you full access to the consensus internals.
```python
from src.engine import ConsensusEngine, ConsensusConfig
from src.llm.openrouter_llm import OpenRouterLLM
import asyncio
import os


async def main():
    # Initialize engine
    engine = ConsensusEngine()
    api_key = os.getenv("OPENROUTER_API_KEY")

    # Add models
    engine.add_model(OpenRouterLLM(
        name="gpt4",
        model="openai/gpt-5.4-mini",
        api_key=api_key
    ))
    engine.add_model(OpenRouterLLM(
        name="claude",
        model="anthropic/claude-haiku-4-5",
        api_key=api_key
    ))

    # Configure consensus parameters
    config = ConsensusConfig(
        max_iterations=5,
        structure="sonata"  # "fugue" for 4+ models, "concert" for creative
    )

    # Run consensus
    result = await engine.run(
        query="What is the capital of France?",
        config=config
    )

    # Access results
    print(f"Answer: {result.final_answer}")
    print(f"D-score: {result.final_D:.3f}")
    print(f"H-total: {result.final_H_total:.3f}")
    print(f"Consensus: {result.consensus_reached}")
    print(f"Iterations: {result.iterations_used}")


asyncio.run(main())
```
Run from project root
Python examples that import from src/ must be run from the ACP-PROJECT root directory so that module paths resolve correctly:
```shell
cd /path/to/ACP-PROJECT && python your_script.py
```
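Alternatively, a script can prepend the project root to sys.path before importing from src/, so it runs from any working directory. A convenience sketch, not an official pattern; the path shown is a placeholder for your own checkout:

```python
import sys
from pathlib import Path

# Placeholder location of your ACP-PROJECT checkout; adjust as needed.
PROJECT_ROOT = Path("/path/to/ACP-PROJECT")

# Prepend the project root so `from src... import ...` resolves
# regardless of the current working directory.
sys.path.insert(0, str(PROJECT_ROOT))
```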
Musical Structures
The structure parameter selects a consensus algorithm optimized for different scenarios:
| Structure | Models | Best For |
|---|---|---|
"fugue" | 4+ models | Deep analysis with rigorous multi-voice convergence |
"sonata" | 2-3 models | Conflict resolution between opposing perspectives |
"concert" | 2+ models | Creative tasks where diverse output is valued |
JavaScript / TypeScript
Use the standard fetch API to call ACP from browsers or Node.js. No additional dependencies are needed.
Running Consensus
```javascript
const response = await fetch(
  "https://your-worker.workers.dev/consensus-iterative",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": "your_key",
    },
    body: JSON.stringify({
      query: "What is 2+2?",
      models: [
        "openai/gpt-5.4-mini",
        "anthropic/claude-haiku-4-5",
      ],
    }),
  }
);

const data = await response.json();
if (!response.ok) {
  console.error(`Error ${data.status}: ${data.message}`);
} else {
  console.log(`Answer: ${data.final_answer}`);
  console.log(`Confidence: ${data.confidence}`);
  console.log(`D-score: ${data.metrics.final_D}`);
}
```
Searching Axioms
```javascript
const params = new URLSearchParams({
  q: "neural networks",
  topK: "5",
  level: "5,6,7",
  minScore: "0.7",
});

const response = await fetch(
  `https://your-worker.workers.dev/axioms/search?${params}`,
  {
    headers: { "x-api-key": "your_key" },
  }
);

const { axioms, count } = await response.json();
for (const axiom of axioms) {
  console.log(`[L${axiom.level}] ${axiom.axiom} (relevance: ${axiom.relevance})`);
}
```
Checking the Cache
```javascript
const response = await fetch(
  "https://your-worker.workers.dev/cache/check",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": "your_key",
    },
    body: JSON.stringify({ query: "What is two plus two?" }),
  }
);

const cache = await response.json();
if (cache.hit) {
  console.log(`Cache hit (similarity: ${cache.similarity})`);
  console.log(`Answer: ${cache.result.final_answer}`);
} else {
  console.log("Cache miss -- run consensus instead");
}
```
cURL
cURL is the quickest way to test ACP endpoints from the command line.
Run Consensus
```shell
curl -X POST https://your-worker.workers.dev/consensus-iterative \
  -H "Content-Type: application/json" \
  -H "x-api-key: your_key" \
  -d '{
    "query": "What is 2+2?",
    "models": ["openai/gpt-5.4-mini", "anthropic/claude-haiku-4-5"]
  }'
```
Health Check
```shell
curl https://your-worker.workers.dev/health | jq .
```
Search Axioms
curl "https://your-worker.workers.dev/axioms/search?q=TCP+IP&level=6&topK=3" \
-H "x-api-key: your_key" | jq .Check Cache
```shell
curl -X POST https://your-worker.workers.dev/cache/check \
  -H "Content-Type: application/json" \
  -H "x-api-key: your_key" \
  -d '{"query": "What is two plus two?"}'
```
Python API (Synchronous)
```shell
curl -X POST http://localhost:8000/consensus/sync \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is 2+2?",
    "models": [
      {"provider": "openrouter", "name": "gpt4", "model": "openai/gpt-5.4-mini"},
      {"provider": "openrouter", "name": "claude", "model": "anthropic/claude-haiku-4-5"}
    ],
    "max_iterations": 5
  }'
```
Pipe to jq
Append `| jq .` to any cURL command to pretty-print the JSON response. Install jq from stedolan.github.io/jq.