Architecture
·
Agentic AI
·
March 2026
From deliberation
to action
Le Corum deliberates. MCP distributes. Execution is pluggable. Why the layer that comes before an agent acts is the most important layer nobody built — until now.
Live today
Five-AI deliberation
→
Live today
Structured action plan
→
Live today
12 MCP tools
→
Phase 4 — Roadmap
Integrated execution
8 min read
The problem nobody talks about
Every week, thousands of new AI agents are published. Knowledge retrieval pipelines. Automation templates. Specialized models fine-tuned for legal analysis, financial modeling, market research. The infrastructure for AI-powered execution has never been richer — or more overwhelming.
But there is a question that precedes every agent invocation, and it is one that no agent can answer for itself: should this action be taken at all?
The AI industry has solved for execution speed. It has not solved for decision quality. Agents that act without deliberation are single points of cognitive failure — they carry no validation, no contradiction, no adversarial pressure-testing of the premise they are executing against. They are fast. And sometimes, precisely because they are fast, they are wrong.
The missing layer in the agentic AI stack is not another agent. It is the validated decision that comes before the first agent acts — the layer where five independent perspectives examine the premise, challenge each other, and produce a structured consensus that execution can safely follow.
That is the layer MyCorum.ai was built to be.
The agent explosion — and why selection is the real problem
3K+
MCP servers published on Smithery.ai today
50K+
Agents and prompts on LangChain Hub
1.2M
Models available on Hugging Face
100K+
Agents projected by end of 2026
The scarcity problem in AI has inverted. The constraint is no longer access to capable models. It is the impossibility of evaluating, selecting, and trusting the right agent for a given decision — when there are a hundred thousand of them, each claiming to be the best tool for your use case.
A professional facing this landscape in 2026 is in the same position as a user in 1995 facing the internet without a search engine. The resource exists. The intelligence to navigate it does not.
MyCorum.ai's deliberation layer addresses this directly. Before an agent is invoked, the Corum can evaluate the premise: is this the right action? Is this the right agent for this action? Are there risks or gaps in the plan that need to surface before execution begins?
That is not a feature. It is a structural guarantee that no single-agent system can offer — because a single-agent system cannot contradict itself.
What the Corum produces today — and what it means for execution
MyCorum.ai's deliberation pipeline already exists in production. Five language models — GPT-5.2, Claude Sonnet 4.6, Gemini 3 Flash, Grok-4, Mistral Large — are dispatched in parallel on every deliberation. They do not share answers before forming their own. They do not converge by default. The system actively suppresses consensus when it appears too quickly — temperature shifts, anonymous presentation of prior analysis, automatic devil's advocate injection if agreement exceeds 90%.
At the end of every deliberation, the synthesis round does not produce narrative text. It produces a structured JSON action plan with a guaranteed schema.
{
"analysis": "Complete multi-perspective narrative",
"recommendation": GO | PIVOT | NO GO,
"decision_matrix": [
{ "dimension": "...", "verdict": "...", "insight": "..." }
],
"next_steps": [
{ "action": "...", "owner": "...", "deadline": "..." }
],
"information_gaps": [
{ "gap": "...", "impact": "...", "critical": true }
],
"confidence_score": 8.5,
"confidence_justification": "..."
}
This is not incidental to the execution story. It is the foundation of it. A structured plan with assigned owners, deadlines, and criticality flags is a plan that external systems can read, route, and act upon. It was designed to be consumed — not just read.
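As an illustration, a downstream consumer needs nothing more than a JSON library to read the plan. The field names below come from the schema above; the sample values and the routing logic are illustrative only, not the production consumer.

```python
import json

# Parse a Corum action plan. Field names follow the schema above;
# the values here are sample data for illustration.
plan = json.loads("""{
  "analysis": "Complete multi-perspective narrative",
  "recommendation": "GO",
  "decision_matrix": [
    {"dimension": "Market timing", "verdict": "Favorable", "insight": "..."}
  ],
  "next_steps": [
    {"action": "Draft launch brief", "owner": "PM", "deadline": "2026-04-01"}
  ],
  "information_gaps": [
    {"gap": "Churn data unavailable", "impact": "Revenue forecast", "critical": true}
  ],
  "confidence_score": 8.5,
  "confidence_justification": "..."
}""")

# Surface critical unknowns before any step is routed to an executor.
critical = [g for g in plan["information_gaps"] if g["critical"]]
if critical:
    print(f"{len(critical)} critical gap(s) to resolve before execution")

# Route each step: owner and deadline are first-class fields.
for step in plan["next_steps"]:
    print(f"{step['deadline']} | {step['owner']}: {step['action']}")
```

Because owners, deadlines, and criticality flags are guaranteed fields rather than free text, this routing loop never needs to parse prose.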
The MCP bridge — open and live
MyCorum.ai runs a production MCP server — a full implementation of the open Model Context Protocol standard — accessible from any MCP-compatible AI client, or from any custom client built on the protocol.
The server exposes 12 tools across five operational categories:
Knowledge — 3 tools
HITL + Discovery — 3 tools
The server runs on Railway as a separate service, with dual transport: stdio for local MCP clients and streamable-HTTP for remote clients and custom integrations. Authentication is handled via Clerk JWT — the same auth layer that protects the rest of the platform.
A Claude agent in Cowork can today call deliberate(), wait for the structured synthesis, call get_action_plan(), and then use its own tools — Slack, Jira, email, code execution — to carry out the next steps the Corum has validated. Le Corum does not execute. But it provides the validated blueprint that execution follows.
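The sequence that agent follows can be sketched in a few lines. The tool names deliberate and get_action_plan come from the text above; the stub functions below are hypothetical stand-ins for the actual MCP calls over stdio or streamable-HTTP, kept local so the shape of the loop is visible.

```python
# Hypothetical stand-ins for MCP tool calls; in a real client these
# would go over the wire to the Corum's MCP server.
def deliberate(question: str) -> None:
    """Stand-in for the deliberate() MCP tool."""
    print(f"Deliberating: {question}")

def get_action_plan() -> dict:
    """Stand-in for the get_action_plan() MCP tool."""
    return {
        "recommendation": "GO",
        "next_steps": [
            {"action": "Notify the team", "owner": "agent", "deadline": "today"}
        ],
    }

def execute(step: dict) -> None:
    """The agent's own tooling (Slack, Jira, email, ...) acts here."""
    print(f"Executing: {step['action']}")

# Deliberate first; execute only steps the Corum has validated.
deliberate("Should we announce the feature this week?")
plan = get_action_plan()
if plan["recommendation"] == "GO":
    for step in plan["next_steps"]:
        execute(step)
```

The division of labor is the point: the Corum owns the decision, the agent owns the tools, and the structured plan is the contract between them.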
The architecture — what is live, what we're building
Clarity on this matters. Le Corum is not a marketing layer over a simple API call. And it does not yet do everything we intend it to do. Here is the honest state of the stack.
01
Deliberation Engine — 5 AI systems, adaptive orchestration, anti-convergence
Fan-out async dispatch to 5 models via asyncio.gather(). Adaptive round sequencing (up to 6 rounds for The Dream Team). Semantic entropy monitoring. Temperature drift. Devil's advocate injection. Anonymous round presentation. HITL pauses when confidence is low or critical unknowns surface.
Live in production
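The fan-out-plus-anti-convergence loop described above can be sketched with `asyncio.gather()`. The five model names and the 90% agreement threshold come from the text; the agreement metric below is a toy stand-in for the production semantic-entropy monitoring, and the model call is a placeholder.

```python
import asyncio
from collections import Counter

MODELS = ["GPT-5.2", "Claude Sonnet 4.6", "Gemini 3 Flash", "Grok-4", "Mistral Large"]

async def ask(model: str, question: str, temperature: float) -> str:
    # Placeholder for a real model call.
    await asyncio.sleep(0)
    return f"{model} answer at T={temperature}"

async def deliberation_round(question: str, temperature: float) -> list[str]:
    # Independent, parallel dispatch: no model sees another's answer.
    return await asyncio.gather(*(ask(m, question, temperature) for m in MODELS))

def agreement(answers: list[str]) -> float:
    # Toy metric: share of answers matching the most common one.
    # Production uses semantic-entropy monitoring instead.
    return Counter(answers).most_common(1)[0][1] / len(answers)

async def run() -> list[str]:
    temperature = 0.7
    answers = await deliberation_round("Should we launch?", temperature)
    if agreement(answers) > 0.9:
        # Suppress premature consensus: drift temperature and inject
        # a devil's advocate into the next round.
        temperature += 0.2
        answers.append("Devil's advocate: strongest objection to the consensus")
    return answers

answers = asyncio.run(run())
```

The structural idea survives the simplification: dispatch is parallel and blind, and agreement above the threshold triggers suppression rather than early exit.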
02
5-Layer Knowledge Retrieval — memories, documents, trusted sources, external search, researcher
Five knowledge sources gathered in parallel and routed through a Context Router — each persona receives a filtered, role-specific subset of the available knowledge. The Architect sees cost and ROI signals. The Counsel sees risk and ethics. The synthesis is grounded in sources, not in model hallucination.
Live in production
03
Structured Output + MCP Bridge — JSON action plan, 12 tools, dual transport
Every deliberation terminates in a guaranteed JSON schema: recommendation, decision matrix, next steps with owner and deadline, information gaps, confidence score. Exposed via 12 MCP tools accessible from any MCP-compatible client. The bridge between deliberation and the external execution ecosystem is open today.
Live in production
04
Integrated Execution Layer — tool executors, human gates, agent registry, transaction rollback
Native execution of next_steps from within the MyCorum.ai pipeline. Tool executors for API calls, code execution, communications. Human gate pauses before irreversible actions. Agent Registry for benchmarked external agent selection. Saga-pattern transaction layer with compensating actions. This is what we are building — deliberately, with the care that execution-layer software demands.
Phase 4 — In development
Why single-agent execution without deliberation fails at scale
Without deliberation
An agent that acts on a flawed premise
Single cognitive perspective: One model's interpretation of the question shapes all downstream execution. No contradiction. No second opinion. No adversarial check.
No confidence calibration: The agent acts with equal conviction whether the decision is obvious or genuinely uncertain. Uncertainty is invisible to execution.
No information gap awareness: The agent does not know what it does not know. Critical unknowns are not surfaced — they are executed around.
No human gate: Irreversible actions — sent emails, modified records, committed code — happen at machine speed, without a human checkpoint before the point of no return.
With Corum deliberation
Execution that follows a validated consensus
Five independent perspectives: The premise is stress-tested before any action is planned. The Contrarian looks for the strongest objection. The synthesis integrates the challenge, not just the agreement.
Calibrated confidence score: The action plan carries a confidence score and its justification. Low confidence triggers HITL pause before execution proceeds.
Critical gaps made explicit: Information gaps with impact assessment and criticality flags are part of the structured output. The execution system knows what the Corum does not know.
Human gate native: The HITL mechanism is not a manual override — it is a first-class primitive. The pipeline pauses, waits for human context, and resumes only when the gap is filled.
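Taken together, these guarantees reduce to a gate an execution system can evaluate before acting. A minimal sketch, assuming the schema fields from earlier; the 7.0 confidence threshold is illustrative, not the pipeline's actual trigger condition.

```python
def needs_human_gate(plan: dict, min_confidence: float = 7.0) -> bool:
    """Pause execution on low confidence or any critical unknown.

    The threshold is a hypothetical default for illustration.
    """
    if plan["confidence_score"] < min_confidence:
        return True
    return any(gap["critical"] for gap in plan.get("information_gaps", []))

# Sample plan: confidence is high, but a critical gap remains open.
plan = {
    "confidence_score": 8.5,
    "information_gaps": [
        {"gap": "Churn data unavailable", "impact": "Forecast", "critical": True}
    ],
}
paused = needs_human_gate(plan)
```

Note that a high confidence score alone does not clear the gate: a single critical unknown is sufficient to pause, which is exactly the asymmetry the comparison above argues for.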
What this means for the 100,000-agent problem
By end of 2026, the number of available AI agents, MCP servers, knowledge retrieval pipelines, and specialized automation tools will be measured in hundreds of thousands. The selection problem — which agent, for which task, with what level of trust — will be the dominant friction point for every professional trying to operationalize AI.
The answer is not a better directory. It is not a smarter single orchestrator that picks for you. The answer is a deliberation layer that evaluates the selection itself — that considers the agent's capabilities, its risk profile, its alignment with the validated plan, and produces a reasoned recommendation on which tools to invoke and in what sequence.
This is the direction Phase 4 points toward. Not just executing the plan — but deliberating on how the plan should be executed, and by what. Le Corum as the governance layer above an open agent ecosystem, rather than a proprietary orchestrator locked to a single vendor's tools.
Closed execution platforms select agents from within their own ecosystem — they cannot be neutral arbiters of a market they participate in. MyCorum.ai's deliberation is model-agnostic, vendor-neutral, and built on an open standard. Le Corum cannot favor any model, because its architecture requires all of them to challenge each other. That structural neutrality is what makes it a viable governance layer for an open agent ecosystem.
The state of play — what you can do right now
The deliberation layer is live. The MCP bridge is open. If you are building on MCP today — whether with any MCP-compatible AI client, or a custom integration — the Corum is available as a native deliberation endpoint.
- Before a strategic decision — Run a Standard or Expert deliberation. Receive a structured GO / PIVOT / NO GO with a full decision matrix and confidence-scored reasoning. Use the action plan as the validated brief your execution follows.
- Inside an agent workflow — Call deliberate() via MCP before your agent's first action. Let the Corum surface the information gaps and critical assumptions before your automation runs. Use get_action_plan() to feed structured next steps to your workflow engine.
- With your own knowledge base — Upload your documents, connect your trusted sources, let the knowledge retrieval layer ground the deliberation in your specific context. Le Corum's five personas each receive a role-filtered subset of your knowledge — not a single undifferentiated dump.
- With human-in-the-loop — For decisions where the confidence score is low or critical unknowns are flagged, the pipeline pauses. You provide context. The deliberation resumes with your input integrated into the remaining rounds.
The integrated execution layer — where the Corum not only produces the plan but orchestrates the agents that carry it out — is what we are building next. Deliberately. With the human gates, the trust scoring, the rollback mechanisms, and the agent registry that execution-layer infrastructure demands.
We will not rush it. The deliberation layer exists precisely because moving fast without validation is the failure mode we were designed to prevent. The same discipline applies to how we build the execution layer itself.
The deliberation layer
is live and open.
Connect via MCP. Launch a deliberation. Retrieve a structured action plan. The bridge between consensus and execution is already built — use it from any MCP-compatible client today.
Connect via MCP →
Before the agent acts,
the Corum decides.
Five perspectives. Structured output. Open standard. Available now from any MCP-compatible client — or directly in your browser.