ACP-PROMPTS

ACP-PROMPTS is the instruction layer of the ACP ecosystem. It contains 13 specialized system prompts that define how AI models behave during consensus -- from how they interpret axioms to how they structure their responses, verify claims through oracles, and converge toward agreement.

Role in the Ecosystem

System prompts are the invisible hand that guides model behavior during consensus. Without prompts, models would produce unstructured, free-form responses that are difficult to compare and score. The prompts in ACP-PROMPTS instruct models to acknowledge axioms, structure responses consistently, and participate in the iterative φ-spiral convergence process.

Prompts are loaded at runtime by both the Cloudflare Worker (via HTTP fetch from GitHub raw URLs) and the Python engine (via a file-system read from the adjacent ACP-PROMPTS directory). This separation means prompt changes take effect without redeploying the execution layer -- immediately for the Python engine, and on the next fetch for the Worker.

Repository Structure

ACP-PROMPTS directory layout
ACP-PROMPTS/
└── system-prompts/
    ├── agent-prompts/          # 5 agent prompts
    │   ├── axiom-spiral-orchestrator.md
    │   ├── consensus-calculator.md
    │   ├── collective-coding-agent.md
    │   ├── conclave-mode-agent.md
    │   └── omega-truth-arbiter.md
    ├── tool-descriptions/      # 4 tool descriptions
    │   ├── axiom-lookup-tool.md
    │   ├── oracle-verification-tool.md
    │   ├── semantic-cache-tool.md
    │   └── vectorize-search-tool.md
    ├── workflows/              # 2 workflows
    │   ├── consensus-workflow.md
    │   └── omega-verification-workflow.md
    └── system-reminders/       # 2 reminders
        ├── oracle-verification-reminder.md
        └── dataset-reference-reminder.md
| Metric | Value |
| ------ | ----- |
| Total Prompts | 13 |
| Total Tokens | ~8,500 |
| Categories | 4 (agent-prompts, tool-descriptions, workflows, system-reminders) |
| Format | Markdown with HTML metadata headers |

Prompt Categories

Agent Prompts (5 prompts)

Agent prompts define the core behavior of AI models during consensus. Each prompt is a specialized system instruction that shapes how a model reasons about queries, processes axioms, and participates in multi-model convergence.

| Prompt | Purpose |
| ------ | ------- |
| axiom-spiral-orchestrator.md | Primary consensus orchestrator. Instructs models to iterate through axiom levels, acknowledge self-referential truths, and converge via the φ-spiral. |
| consensus-calculator.md | D-score and metrics calculation agent. Defines how to compute divergence, harmony, and pairwise similarity between model responses. |
| collective-coding-agent.md | Code-specific consensus agent. Instructs models on collaborative code generation, review, and refactoring with AST-based comparison. |
| conclave-mode-agent.md | Isolation-mode agent for Conclave Mode. Ensures models form independent positions without seeing other responses. |
| omega-truth-arbiter.md | Final verification agent. Reviews consensus results against axioms and oracle outputs to produce a verified truth judgment. |
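The divergence calculation that consensus-calculator.md governs can be pictured with a toy metric. The sketch below assumes the D-score is a mean pairwise distance and uses token-level Jaccard similarity as a stand-in for the real similarity measure -- both are illustrative assumptions, not the actual formula from the prompt:

```python
from itertools import combinations

def jaccard_similarity(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two responses (toy measure)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def d_score(responses: list[str]) -> float:
    """Divergence as mean pairwise distance (1 - similarity).
    0.0 means all responses agree; 1.0 means fully divergent."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 0.0
    return sum(1 - jaccard_similarity(a, b) for a, b in pairs) / len(pairs)
```

Identical responses score 0.0, fully disjoint responses score 1.0, and anything in between measures how far the models still are from agreement.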

Axiom Spiral Orchestrator

The axiom-spiral-orchestrator.md prompt is the most critical prompt in the system. It contains the instructions that drive the core consensus loop: present the query, collect responses, calculate D-score, inject relevant axioms, prompt for refinement, and repeat until convergence. Every consensus run uses this prompt.
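The loop described above can be sketched as follows. `run_consensus` and the helpers it receives (`d_score`, `lookup_axioms`, `refine_prompt`) are hypothetical names for illustration, not the engine's actual API:

```python
def run_consensus(query, models, d_score, lookup_axioms, refine_prompt,
                  d_threshold=0.1, max_iterations=5):
    """Sketch of the φ-spiral loop: present the query, collect responses,
    calculate the D-score, inject axioms, refine, repeat until convergence."""
    context = query
    divergence = 1.0
    for iteration in range(1, max_iterations + 1):
        responses = [model(context) for model in models]   # collect responses
        divergence = d_score(responses)                    # calculate D-score
        if divergence <= d_threshold:                      # converged
            return responses, divergence, iteration
        axioms = lookup_axioms(query, responses)           # inject relevant axioms
        context = refine_prompt(query, responses, axioms)  # prompt for refinement
    return responses, divergence, max_iterations
```

The convergence threshold and iteration cap are tunable; the orchestrator prompt defines what counts as "relevant axioms" at each spiral level.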

Tool Descriptions (4 prompts)

Tool description prompts define external capabilities that models can invoke during consensus. They describe the interface, parameters, and expected behavior of each tool.

| Prompt | Purpose |
| ------ | ------- |
| axiom-lookup-tool.md | Describes how to look up specific axioms by ID, level, or domain from the ACP-DATASETS repository. |
| oracle-verification-tool.md | Defines the interface for invoking external oracle verification services (Wolfram Alpha, hash calculators, etc.). |
| semantic-cache-tool.md | Describes the semantic cache for checking whether a similar query has already been resolved. |
| vectorize-search-tool.md | Defines parameters for semantic search over the axiom Vectorize index, including topK, level filtering, and score thresholds. |
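The vectorize-search-tool parameters (topK, level filtering, score thresholds) might be assembled along these lines. The field names below are illustrative guesses, not the tool's actual schema:

```python
def build_vectorize_query(text, top_k=5, level=None, min_score=0.7):
    """Assemble a search payload for the axiom Vectorize index.
    Field names are illustrative, not the real tool schema."""
    payload = {"query": text, "topK": top_k, "minScore": min_score}
    if level is not None:
        payload["filter"] = {"level": level}  # restrict results to one axiom level
    return payload

def apply_score_threshold(matches, min_score):
    """Keep only matches at or above the similarity threshold."""
    return [m for m in matches if m["score"] >= min_score]
```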

Workflows (2 prompts)

Workflow prompts define multi-step procedures that chain together multiple agents and tools. They specify the order of operations, decision points, and success criteria.

| Prompt | Purpose |
| ------ | ------- |
| consensus-workflow.md | End-to-end consensus workflow: query parsing, model selection, prompt injection, iterative convergence, and result synthesis. |
| omega-verification-workflow.md | Post-consensus verification workflow: cross-reference consensus answer against oracle outputs and axiom proofs. |

System Reminders (2 prompts)

System reminders are injected into the model context at specific points during consensus to reinforce constraints and ensure consistent behavior across iterations.

| Prompt | Purpose |
| ------ | ------- |
| oracle-verification-reminder.md | Reminds models to verify claims against oracle sources before asserting factual statements. |
| dataset-reference-reminder.md | Reminds models to reference and cite specific axioms from ACP-DATASETS when grounding their responses. |
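Reminder injection can be pictured as appending a short system message at set points in the conversation. A minimal sketch, with the message shape and cadence assumed rather than taken from the actual engine:

```python
def inject_reminder(messages, reminder_text, every_n=3):
    """Insert a system reminder after every N conversation turns."""
    out = []
    for i, msg in enumerate(messages, start=1):
        out.append(msg)
        if i % every_n == 0:
            out.append({"role": "system", "content": reminder_text})
    return out
```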

Integration with the Consensus Engine

Prompts are loaded at runtime through two integration paths, depending on whether the Worker or Python engine is executing the consensus.

Worker Integration (JavaScript)

The Cloudflare Worker fetches prompts via HTTP from GitHub raw URLs. Metadata headers (HTML comments at the top of each file) are automatically stripped before injecting the prompt into the model context.

Prompt loading in the Worker

```javascript
import { getConsensusPrompt } from '../prompts/loader.js';

// Load prompt from ACP-PROMPTS repository
const systemPrompt = await getConsensusPrompt('general');

// The loader fetches from GitHub, strips metadata, and caches.
// Falls back to a built-in default if the repository is unavailable.
```

Python Engine Integration

The Python engine reads prompts directly from the file system, expecting the ACP-PROMPTS repository to be cloned into the same parent directory as ACP-PROJECT. Metadata is stripped using the same logic.

Prompt loading in the Python engine
```python
from core.prompts import get_consensus_prompt

# Load prompt from adjacent ACP-PROMPTS directory
system_prompt = get_consensus_prompt('general')

# Reads from ../ACP-PROMPTS/system-prompts/...
# Strips HTML metadata headers automatically
# Falls back to a built-in default if the file is not found
```

Prompt Loading Flow
  ACP-PROMPTS Repository (GitHub)
  ├── system-prompts/
  │   ├── agent-prompts/*.md
  │   ├── tool-descriptions/*.md
  │   ├── workflows/*.md
  │   └── system-reminders/*.md
  │
  ├─── HTTP raw URL ──────> Cloudflare Worker
  │                          (fetch + strip metadata + cache)
  │
  └─── File system read ──> Python Engine
                             (read + strip metadata)
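The metadata-stripping step shared by both paths can be sketched as removing a leading HTML comment block. This assumes the metadata header is a single `<!-- ... -->` comment at the top of the file; the actual stripping logic may differ:

```python
import re

def strip_metadata(prompt_text: str) -> str:
    """Remove a leading HTML comment header, if present."""
    return re.sub(r"\A\s*<!--.*?-->\s*", "", prompt_text, flags=re.DOTALL)
```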

Conclave Mode

Conclave Mode is a specialized consensus mode enabled by the conclave-mode-agent.md prompt. The term comes from the Latin "cum clave" (with a key) -- referring to the papal conclave where cardinals are isolated until a decision is reached.

In Conclave Mode, each model receives the query in complete isolation. No model can see another model's response until all responses are collected. This prevents conformism and groupthink, ensuring that consensus is genuine rather than the result of one model copying another.

| Property | Standard Mode | Conclave Mode |
| -------- | ------------- | ------------- |
| Model interaction | Sequential -- models see previous responses | Isolated -- no access to other responses |
| Influence | Models can adjust based on others | Independent position formation |
| Consensus meaning | Iterative convergence | Independent agreement |
| Use case | General queries, iterative refinement | High-stakes decisions, architecture reviews, security audits |
| D-score interpretation | Measures convergence over iterations | Measures independent agreement |
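The isolation property amounts to each model seeing only the original query, never another model's answer. A minimal sketch (function names hypothetical):

```python
def conclave_round(query, models):
    """Collect one response per model, each seeing only the query.
    No model's answer ever enters another model's context."""
    return {name: model(query) for name, model in models.items()}

def independent_agreement(responses):
    """Fraction of models that gave the modal (most common) answer."""
    from collections import Counter
    counts = Counter(responses.values())
    return counts.most_common(1)[0][1] / len(responses)
```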

When to Use Conclave Mode

Use Conclave Mode for decisions where independent judgment matters: security-sensitive code review, architectural decisions, technology stack selection, risk assessment, and any scenario where you need to distinguish true consensus from conformism. A low D-score in Conclave Mode is a strong signal -- it means multiple models independently arrived at the same conclusion.

Prompt Versioning and Updates

Prompts are versioned through Git, enabling full history tracking, A/B testing, and rollback. The ACP-PROMPTS repository includes a CHANGELOG with token counts for each prompt, making it easy to track how prompt changes affect token usage and model behavior.

| Capability | Description |
| ---------- | ----------- |
| Git versioning | Full commit history for every prompt change |
| A/B testing | Deploy different prompt versions and compare consensus quality |
| Centralized management | All prompts in one repository, referenced by both Worker and Python |
| Automatic pickup | Worker fetches latest from GitHub; Python reads from file system |
| Cache invalidation | Prompt cache is cleared automatically when changes are detected |

To update a prompt, edit the corresponding .md file in the ACP-PROMPTS repository, commit, and push. The Worker will fetch the updated version on the next request (with cache TTL), and the Python engine will read the new file immediately.