# ACP-PROMPTS
ACP-PROMPTS is the instruction layer of the ACP ecosystem. It contains 15 specialized system prompts that define how AI models behave during consensus -- from how they interpret axioms to how they structure their responses, verify claims through oracles, and converge toward agreement.
## Role in the Ecosystem
System prompts are the invisible hand that guides model behavior during consensus. Without prompts, models would produce unstructured, free-form responses that are difficult to compare and score. The prompts in ACP-PROMPTS instruct models to acknowledge axioms, structure responses consistently, and participate in the iterative φ-spiral convergence process.
Prompts are loaded at runtime by both the Cloudflare Worker (via HTTP fetch from GitHub raw URLs) and the Python engine (via file system read from the adjacent ACP-PROMPTS directory). This separation means prompt changes take effect immediately without redeploying the execution layer.
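Both loaders perform the same metadata-stripping step before injecting a prompt. A minimal sketch in Python, assuming the metadata header is a leading HTML comment (the field names in the example header are illustrative, not the real schema):

```python
import re

# Illustrative prompt file: a leading HTML comment header, then the prompt
# body. The header fields here are assumptions for demonstration only.
EXAMPLE_PROMPT = """<!--
id: axiom-spiral-orchestrator
category: agent-prompts
-->
You are the consensus orchestrator."""

def strip_metadata(text: str) -> str:
    """Remove a leading HTML comment header, as both loaders do."""
    return re.sub(r"\A\s*<!--.*?-->\s*", "", text, flags=re.DOTALL)

prompt_body = strip_metadata(EXAMPLE_PROMPT)
```

Because the prompts travel as plain Markdown, the same stripping logic works identically whether the file arrives over HTTP or from disk.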
## Repository Structure

```
ACP-PROMPTS/
└── system-prompts/
    ├── agent-prompts/          # 6 agent prompts
    │   ├── axiom-spiral-orchestrator.md
    │   ├── consensus-calculator.md
    │   ├── collective-coding-agent.md
    │   ├── conclave-mode-agent.md
    │   └── omega-truth-arbiter.md
    ├── tool-descriptions/      # 4 tool descriptions
    │   ├── axiom-lookup-tool.md
    │   ├── oracle-verification-tool.md
    │   ├── semantic-cache-tool.md
    │   └── vectorize-search-tool.md
    ├── workflows/              # 2 workflows
    │   ├── consensus-workflow.md
    │   └── omega-verification-workflow.md
    └── system-reminders/      # 2 reminders
        ├── oracle-verification-reminder.md
        └── dataset-reference-reminder.md
```

| Metric | Value |
|---|---|
| Total Prompts | 15 |
| Total Tokens | ~8,500 |
| Categories | 4 (agent-prompts, tool-descriptions, workflows, system-reminders) |
| Format | Markdown with HTML metadata headers |
## Prompt Categories

### Agent Prompts (6 prompts)
Agent prompts define the core behavior of AI models during consensus. Each prompt is a specialized system instruction that shapes how a model reasons about queries, processes axioms, and participates in multi-model convergence.
| Prompt | Purpose |
|---|---|
| axiom-spiral-orchestrator.md | Primary consensus orchestrator. Instructs models to iterate through axiom levels, acknowledge self-referential truths, and converge via the φ-spiral. |
| consensus-calculator.md | D-score and metrics calculation agent. Defines how to compute divergence, harmony, and pairwise similarity between model responses. |
| collective-coding-agent.md | Code-specific consensus agent. Instructs models on collaborative code generation, review, and refactoring with AST-based comparison. |
| conclave-mode-agent.md | Isolation-mode agent for Conclave Mode. Ensures models form independent positions without seeing other responses. |
| omega-truth-arbiter.md | Final verification agent. Reviews consensus results against axioms and oracle outputs to produce a verified truth judgment. |
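As a rough illustration of what consensus-calculator.md asks for, here is a toy D-score computation. The pairwise-similarity measure below (word-set Jaccard overlap) is a stand-in assumption -- the real calculator presumably uses embedding-based semantic similarity:

```python
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Stand-in similarity: Jaccard overlap of word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def d_score(responses: list[str]) -> float:
    """Mean pairwise divergence (1 - similarity) across all response pairs."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 0.0
    return sum(1 - similarity(a, b) for a, b in pairs) / len(pairs)
```

Under this toy measure, identical responses score 0 (full harmony) and fully disjoint responses score 1.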
#### Axiom Spiral Orchestrator
The axiom-spiral-orchestrator.md prompt is the most critical prompt in the system. It contains the instructions that drive the core consensus loop: present the query, collect responses, calculate D-score, inject relevant axioms, prompt for refinement, and repeat until convergence. Every consensus run uses this prompt.
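A compressed sketch of that loop, with toy stand-ins for the model calls, divergence scoring, and axiom lookup (every name here is hypothetical, not the real orchestrator API):

```python
def divergence(responses):
    """Toy D-score: fraction of responses that differ from the first."""
    return sum(r != responses[0] for r in responses) / max(len(responses) - 1, 1)

def lookup_axioms(query):
    """Hypothetical stand-in for the axiom-lookup tool."""
    return ["A1: example axiom"]

def run_consensus(query, models, threshold=0.0, max_rounds=5):
    context, d = query, 1.0
    for round_no in range(1, max_rounds + 1):
        responses = [m(context) for m in models]   # collect responses
        d = divergence(responses)                  # calculate D-score
        if d <= threshold:                         # converged?
            return {"answer": responses[0], "d_score": d, "rounds": round_no}
        # inject relevant axioms and prior answers, then prompt for refinement
        context = f"{query}\nAxioms: {lookup_axioms(query)}\nPrior: {responses}"
    return {"answer": None, "d_score": d, "rounds": max_rounds}

result = run_consensus("2 + 2?", [lambda q: "4", lambda q: "4"])
```

Two models that already agree converge in a single round; disagreeing models would loop with an axiom-enriched context until the threshold or round limit is hit.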
### Tool Descriptions (4 prompts)
Tool description prompts define external capabilities that models can invoke during consensus. They describe the interface, parameters, and expected behavior of each tool.
| Prompt | Purpose |
|---|---|
| axiom-lookup-tool.md | Describes how to look up specific axioms by ID, level, or domain from the ACP-DATASETS repository. |
| oracle-verification-tool.md | Defines the interface for invoking external oracle verification services (Wolfram Alpha, hash calculators, etc.). |
| semantic-cache-tool.md | Describes the semantic cache for checking whether a similar query has already been resolved. |
| vectorize-search-tool.md | Defines parameters for semantic search over the axiom Vectorize index, including topK, level filtering, and score thresholds. |
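For example, a call to the Vectorize search tool might build a request like the following. The field names mirror the parameters described above (topK, level filtering, score threshold), but the exact schema is an assumption:

```python
def build_vectorize_query(text, top_k=5, level=None, min_score=0.7):
    """Hypothetical request shape for the axiom Vectorize search tool."""
    query = {"text": text, "topK": top_k, "minScore": min_score}
    if level is not None:
        # Restrict results to a single axiom level, per the tool description.
        query["filter"] = {"level": level}
    return query

request = build_vectorize_query("self-reference axiom", top_k=3, level=2)
```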
### Workflows (2 prompts)
Workflow prompts define multi-step procedures that chain together multiple agents and tools. They specify the order of operations, decision points, and success criteria.
| Prompt | Purpose |
|---|---|
| consensus-workflow.md | End-to-end consensus workflow: query parsing, model selection, prompt injection, iterative convergence, and result synthesis. |
| omega-verification-workflow.md | Post-consensus verification workflow: cross-reference consensus answer against oracle outputs and axiom proofs. |
### System Reminders (2 prompts)
System reminders are injected into the model context at specific points during consensus to reinforce constraints and ensure consistent behavior across iterations.
| Prompt | Purpose |
|---|---|
| oracle-verification-reminder.md | Reminds models to verify claims against oracle sources before asserting factual statements. |
| dataset-reference-reminder.md | Reminds models to reference and cite specific axioms from ACP-DATASETS when grounding their responses. |
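Mechanically, injecting a reminder can be as simple as appending a system message to the context before each iteration. A sketch, with a hypothetical message structure:

```python
ORACLE_REMINDER = (
    "Verify claims against oracle sources before asserting factual statements."
)

def inject_reminders(messages, reminders):
    """Append system reminders so constraints are restated every iteration,
    even as the conversation context grows."""
    return messages + [{"role": "system", "content": r} for r in reminders]

history = [{"role": "user", "content": "Is 2**10 equal to 1024?"}]
context = inject_reminders(history, [ORACLE_REMINDER])
```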
## Integration with the Consensus Engine
Prompts are loaded at runtime through two integration paths, depending on whether the Worker or Python engine is executing the consensus.
### Worker Integration (JavaScript)
The Cloudflare Worker fetches prompts via HTTP from GitHub raw URLs. Metadata headers (HTML comments at the top of each file) are automatically stripped before injecting the prompt into the model context.
```javascript
import { getConsensusPrompt } from '../prompts/loader.js';

// Load prompt from ACP-PROMPTS repository
const systemPrompt = await getConsensusPrompt('general');

// The loader fetches from GitHub, strips metadata, and caches.
// Falls back to a built-in default if the repository is unavailable.
```

### Python Engine Integration
The Python engine reads prompts directly from the file system, expecting the ACP-PROMPTS repository to be cloned into the same parent directory as ACP-PROJECT. Metadata is stripped using the same logic.
```python
from core.prompts import get_consensus_prompt

# Load prompt from adjacent ACP-PROMPTS directory
system_prompt = get_consensus_prompt('general')

# Reads from ../ACP-PROMPTS/system-prompts/...
# Strips HTML metadata headers automatically
# Automatic fallback if file not found
```

```
ACP-PROMPTS Repository (GitHub)
├── system-prompts/
│   ├── agent-prompts/*.md
│   ├── tool-descriptions/*.md
│   ├── workflows/*.md
│   └── system-reminders/*.md
│
├─── HTTP raw URL ──────> Cloudflare Worker
│       (fetch + strip metadata + cache)
│
└─── File system read ──> Python Engine
        (read + strip metadata)
```

## Conclave Mode
Conclave Mode is a specialized consensus mode enabled by the conclave-mode-agent.md prompt. The term comes from the Latin "cum clave" (with a key) -- referring to the papal conclave where cardinals are isolated until a decision is reached.
In Conclave Mode, each model receives the query in complete isolation. No model can see another model's response until all responses are collected. This prevents conformism and groupthink, ensuring that consensus is genuine rather than the result of one model copying another.
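The difference between the two modes can be sketched as follows; the model callables and context format are toy stand-ins:

```python
def conclave_round(query, models):
    """Conclave Mode: every model answers the raw query in isolation."""
    return [m(query) for m in models]

def standard_round(query, models):
    """Standard mode: each model sees the answers collected so far."""
    context, responses = query, []
    for m in models:
        responses.append(m(context))
        context = f"{query}\nPrevious answers: {responses}"
    return responses

# A "follower" model that copies whenever it can see prior answers.
models = [
    lambda ctx: "A",
    lambda ctx: "A" if "Previous answers" in ctx else "B",
]
```

Running both rounds on the same models shows the point: in standard mode the follower conforms (`["A", "A"]`), while in Conclave Mode the isolation exposes its genuine dissent (`["A", "B"]`).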
| Property | Standard Mode | Conclave Mode |
|---|---|---|
| Model interaction | Sequential -- models see previous responses | Isolated -- no access to other responses |
| Influence | Models can adjust based on others | Independent position formation |
| Consensus meaning | Iterative convergence | Independent agreement |
| Use case | General queries, iterative refinement | High-stakes decisions, architecture reviews, security audits |
| D-score interpretation | Measures convergence over iterations | Measures independent agreement |
### When to Use Conclave Mode
Use Conclave Mode for decisions where independent judgment matters: security-sensitive code review, architectural decisions, technology stack selection, risk assessment, and any scenario where you need to distinguish true consensus from conformism. A low D-score in Conclave Mode is a strong signal -- it means multiple models independently arrived at the same conclusion.
## Prompt Versioning and Updates
Prompts are versioned through Git, enabling full history tracking, A/B testing, and rollback. The ACP-PROMPTS repository includes a CHANGELOG with token counts for each prompt, making it easy to track how prompt changes affect token usage and model behavior.
| Capability | Description |
|---|---|
| Git versioning | Full commit history for every prompt change |
| A/B testing | Deploy different prompt versions and compare consensus quality |
| Centralized management | All prompts in one repository, referenced by both Worker and Python |
| Automatic pickup | Worker fetches latest from GitHub; Python reads from file system |
| Cache invalidation | Prompt cache is cleared automatically when changes are detected |
To update a prompt, edit the corresponding .md file in the ACP-PROMPTS repository, commit, and push. The Worker fetches the updated version on the next request once its cache TTL expires; the Python engine reads the new file immediately.
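On the Worker side, TTL-based refresh might look like the following sketch. The TTL value and fetch function are assumptions, and the sketch is in Python for consistency with the other examples even though the real Worker is JavaScript:

```python
import time

class PromptCache:
    """Serve a cached prompt until its TTL expires, then re-fetch."""

    def __init__(self, fetch_fn, ttl_seconds=300):
        self.fetch_fn = fetch_fn        # e.g. an HTTP fetch from GitHub raw
        self.ttl = ttl_seconds
        self._store = {}                # name -> (prompt, fetched_at)

    def get(self, name):
        entry = self._store.get(name)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]             # cache hit: still fresh
        prompt = self.fetch_fn(name)    # miss or stale: re-fetch
        self._store[name] = (prompt, time.monotonic())
        return prompt

calls = []
cache = PromptCache(lambda name: calls.append(name) or f"prompt:{name}",
                    ttl_seconds=60)
first = cache.get("general")
second = cache.get("general")   # served from cache, no second fetch
```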