Ecosystem
ACP is not a single application -- it is a distributed system built from three interconnected repositories, each serving a distinct role in the consensus pipeline. Together they form a complete platform for axiom-grounded, multi-model AI consensus.
This section covers the architecture of the ecosystem, the role and contents of each repository, the end-to-end data flow from user query to consensus result, and the infrastructure that ties everything together.
Three-Repository Architecture
The ACP ecosystem is intentionally split into three repositories to enforce separation of concerns. The execution layer, the data layer, and the instruction layer can each be versioned, updated, and deployed independently.
| Repository | Role | Contents | Size |
|---|---|---|---|
| ACP-PROJECT | Execution Layer | Frontend, Worker API, Python Engine | ~50 MB, 350+ files, 15,000+ LOC |
| ACP-DATASETS | Data Layer | Verified axioms across 7 levels | JSON-structured, oracle-verified |
| ACP-PROMPTS | Instruction Layer | 15 system prompts for AI model behavior | ~500 KB, 15 files, ~8,500 tokens |
```
USER QUERY
"What is the fastest sorting algorithm?"
                           |
                           v
+------------------------------------------------------+
|            ACP-PROJECT (Execution Layer)             |
|  +------------+  +--------------+  +--------------+  |
|  |  Frontend  |  |  Worker API  |  | Python Engine|  |
|  |  Next.js   |->|  Cloudflare  |<>|  Consensus   |  |
|  +------------+  +--------------+  +--------------+  |
+--------------------------+---------------------------+
                           |
                    +------+------+
                    v             v
         +-----------------+   +------------------+
         |   ACP-PROMPTS   |   |   ACP-DATASETS   |
         | (Instructions)  |   |      (Data)      |
         |                 |   |                  |
         | - Agent prompts |   | - Axiom datasets |
         | - Tool prompts  |   | - 7 levels       |
         | - Workflows     |   | - Oracles        |
         | - Reminders     |   | - Metrics        |
         +-----------------+   +------------------+
                  |                     |
                  +----------+----------+
                             v
                  +----------------+
                  |CONSENSUS RESULT|
                  |    + PROOF     |
                  +----------------+
```

Complete Data Flow
A consensus request flows through every layer of the ecosystem. The following trace shows the complete lifecycle of a query, from the moment a user types it in the Playground to the final verified result.
Step 1: User Query
The user enters a query in the Playground UI, selects the models to participate (e.g., GPT-4, Claude, Gemini), chooses a musical structure (Fugue, Sonata, or Concert), and clicks "Run Consensus."
Step 2: Worker API
The frontend sends a POST /consensus-iterative request to the Cloudflare Worker. The Worker validates the request and begins orchestrating the consensus process.
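The exact request schema is not documented here; as a minimal sketch only, the payload the frontend assembles might look like the following, where the field names (`query`, `models`, `structure`) are assumptions rather than the actual API contract:

```python
import json

def build_consensus_request(query, models, structure="fugue"):
    """Assemble a POST /consensus-iterative body (field names are assumptions)."""
    return {
        "query": query,
        "models": models,        # e.g. ["gpt-4", "claude", "gemini"]
        "structure": structure,  # "fugue", "sonata", or "concert"
    }

payload = build_consensus_request(
    "What is the fastest sorting algorithm?",
    ["gpt-4", "claude", "gemini"],
)
body = json.dumps(payload)  # sent as the JSON request body
```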
Step 3: Prompt Loading (ACP-PROMPTS)
The Worker fetches the appropriate system prompt (e.g., axiom-spiral-orchestrator.md) from the ACP-PROMPTS repository. The prompt defines how the AI model should approach consensus, including iteration behavior and axiom acknowledgment rules.
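The real loader is JavaScript (workers/cloudflare-worker/src/prompts/loader.js); purely as an illustration of the "GitHub raw URL" mechanism, a URL builder could be sketched like this in Python, where `<org>` is a placeholder for the GitHub organization and the path layout is an assumption:

```python
def prompt_url(name, repo="ACP-PROMPTS", branch="main"):
    """Build a GitHub raw URL for a prompt file (path layout is an assumption)."""
    base = "https://raw.githubusercontent.com"
    return f"{base}/<org>/{repo}/{branch}/{name}.md"  # <org> is a placeholder

url = prompt_url("axiom-spiral-orchestrator")
```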
Step 4: Axiom Retrieval (ACP-DATASETS)
The Worker generates an embedding for the user query and searches Cloudflare Vectorize for the most relevant axioms. For the query "fastest sorting algorithm," it might return axioms like acp-comp-quicksort-avg-v1 (relevance: 0.89) and acp-comp-timsort-python-v1 (relevance: 0.85).
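Vectorize's internals aren't reproduced here, but the ranking step amounts to cosine similarity between the query embedding and stored axiom embeddings. A self-contained sketch with made-up 3-dimensional vectors and the axiom IDs from above:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings (illustrative values, not real Vectorize data)
axioms = {
    "acp-comp-quicksort-avg-v1":  [0.9, 0.3, 0.1],
    "acp-comp-timsort-python-v1": [0.8, 0.5, 0.2],
    "acp-phys-unrelated-v1":      [0.1, 0.1, 0.9],
}
query_vec = [0.85, 0.4, 0.1]  # embedding of "fastest sorting algorithm"

# Rank axioms by relevance to the query
ranked = sorted(axioms, key=lambda k: cosine(query_vec, axioms[k]), reverse=True)
```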
Step 5: Consensus Iteration
The models respond iteratively. In each iteration, models receive the previous responses plus relevant axioms. The D-score is recalculated after each round. Disagreement decreases following D(n) = D(0) / φ^n until the threshold is reached (typically D < 0.05).
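The decay law above can be checked numerically. A small sketch of the ideal φ-decay and the number of iterations needed to fall below the consensus threshold (values are illustrative, not from a real run):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, ~1.618

def d_score(d0, n):
    """Ideal disagreement after n iterations: D(n) = D(0) / phi^n."""
    return d0 / PHI ** n

def iterations_to_threshold(d0, threshold=0.05):
    """Count iterations until D falls below the consensus threshold."""
    n = 0
    d = d0
    while d >= threshold:
        d /= PHI
        n += 1
    return n
```

Under pure φ-decay, an initial disagreement of D(0) = 0.35 takes five iterations to drop below 0.05; real runs can converge faster, since each round also injects axiom evidence.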
Step 6: Result
The final consensus answer is returned with full metadata: D-score convergence path, axioms used, iteration count, model positions, and oracle verification proof. The UI renders the convergence graph, musical intervals, and per-model responses.
Convergence Example
For the query "What is the fastest sorting algorithm?", a typical run converges in 3 iterations: D = 0.35 (iteration 1), D = 0.18 (iteration 2), D = 0.08 (iteration 3 -- consensus reached). The final answer cites QuickSort for average-case O(n log n) and TimSort as Python's standard library implementation, grounded in Level 4 computable axioms.
Integration Points
The three repositories are connected through four primary integration paths. Each path uses a specific mechanism for loading data at runtime.
| Integration | Mechanism | Source |
|---|---|---|
| Worker to ACP-PROMPTS | HTTP fetch from GitHub raw URLs | workers/cloudflare-worker/src/prompts/loader.js |
| Worker to ACP-DATASETS | Vectorize semantic search + KV cache | workers/cloudflare-worker/src/index.js |
| Python Engine to ACP-PROMPTS | File system read from adjacent directory | src/core/prompts.py |
| Python Engine to ACP-DATASETS | HTTP to Worker Vectorize endpoint | src/core/vectorize_client.py |
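As one example, the Python engine's "file system read from adjacent directory" path could be sketched as follows. The function name and directory layout are assumptions (the real implementation lives in src/core/prompts.py); it presumes ACP-PROMPTS is cloned next to the project root, as the "same parent directory" setup below suggests:

```python
from pathlib import Path

def load_prompt(name, project_root="."):
    """Read <name>.md from a sibling ACP-PROMPTS checkout (layout assumed)."""
    prompts_dir = Path(project_root).resolve().parent / "ACP-PROMPTS"
    return (prompts_dir / f"{name}.md").read_text(encoding="utf-8")
```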
Which Repository to Use
| Goal | Repository | Action |
|---|---|---|
| Use ACP for consensus | ACP-PROJECT | Clone and install dependencies |
| Contribute a new axiom | ACP-DATASETS | Fork, create JSON file, submit PR |
| Modify AI model behavior | ACP-PROMPTS | Edit .md files; changes are picked up automatically |
| Build a custom system on ACP | All three | Clone all repos into the same parent directory |
Explore the Ecosystem
Dive into each component for detailed architecture, file structure, and integration guidance.
ACP-PROJECT
The execution layer -- Next.js frontend, Cloudflare Worker API, and Python consensus engine. Orchestrates queries, runs the phi-spiral, and delivers results.
ACP-DATASETS
Verified axioms across 7 hierarchical levels. JSON-structured, oracle-verified, and vectorized for semantic search via Cloudflare Vectorize.
ACP-PROMPTS
15 specialized system prompts -- agent prompts, tool descriptions, workflows, and system reminders -- that define how AI models behave during consensus.
Infrastructure
Cloudflare Workers edge API, Vectorize embeddings, KV caching, Python FastAPI backend, deployment architecture, and CI/CD pipeline.