The Confidence Score Explained

Le Corum produces a confidence score from 0 to 10 with every deliberation. It is not a measure of how certain the AI sounds. It is a measure of how well the analysis holds up.

Why AI confidence is normally invisible

When a generative AI system produces a response, it presents that response with consistent fluency regardless of how well-grounded the underlying analysis is. The tone is the same whether the answer is drawn from extensive verified data or extrapolated from limited information. You cannot tell the difference from the output alone.

This is not a bug — it is a design feature of systems optimized for fluent, helpful responses. But for decisions where the stakes are high and the cost of a wrong answer is real, invisible confidence is a liability.

Le Corum makes confidence visible, specific, and calibrated.

What the score measures

The confidence score from 0 to 10 reflects three distinct factors, combined into a single calibrated number:

01
Agreement across the five minds
How much did the five independent analytical perspectives converge after full deliberation? High agreement on a well-examined question is a genuine signal of robustness. Agreement that arrived too quickly — before adversarial challenge — triggers anti-convergence mechanisms that lower the score.
02
Verified vs estimated claims
What proportion of the factual claims in the synthesis are backed by verified institutional sources — as opposed to reasoned estimates? Every claim in a Corum Synthesis is labelled: [VERIFIED], [ESTIMATED], [CONTEXT], or [ANALYSIS]. The ratio of verified to estimated claims contributes directly to the score.
03
Depth and quality of reasoning chains
Are the conclusions well-supported by the analytical work that preceded them? Shallow reasoning chains — assertions without supporting logic — penalize the score. Deliberations that surface and resolve genuine tensions in the data produce higher-quality reasoning chains and higher scores.
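The three factors above can be sketched as a simple scoring function. This is an illustrative model only: the equal weights, the 0.0–1.0 input scales, and the flat penalty for premature convergence are assumptions for clarity, not Le Corum's actual calibration.

```python
from dataclasses import dataclass


@dataclass
class DeliberationSignals:
    agreement: float        # convergence across the five minds, 0.0-1.0
    verified_ratio: float   # [VERIFIED] claims / ([VERIFIED] + [ESTIMATED])
    reasoning_depth: float  # quality of the reasoning chains, 0.0-1.0
    premature_convergence: bool  # agreement arrived before adversarial challenge


def confidence_score(s: DeliberationSignals) -> float:
    """Combine the three factors into a single 0-10 score.

    Equal weighting and a 20% penalty for premature convergence
    are illustrative choices, not the production formula.
    """
    raw = (s.agreement + s.verified_ratio + s.reasoning_depth) / 3
    if s.premature_convergence:
        raw *= 0.8  # anti-convergence mechanism lowers the score
    return round(raw * 10, 1)
```

For example, strong agreement (0.9) with a moderate verified ratio (0.7) and deep reasoning (0.8) would score 8.0 under these assumed weights, while the same signals with premature convergence would drop to 6.4.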

How to read the score

8.0–10
Solid analysis
The five minds converged on verified factual grounds. Certain assumptions may merit validation before acting, but the recommendation is well-grounded.
6.5–7.9
Founded analysis
The analysis is well-reasoned, but some hypotheses need external validation before the recommendation can be acted on confidently.
5.0–6.4
Conditional analysis
Insufficient data for high-confidence conclusions. The recommendation is valid but conditional. Gather the identified information gaps before acting.
< 5.0
Limited confidence
Le Corum recommends gathering more information before deciding. Acting on this synthesis carries measurable risk from unresolved unknowns.
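The reading bands above amount to a threshold lookup. A minimal sketch, using the thresholds exactly as given in the table (the function name and labels are ours):

```python
def interpret(score: float) -> str:
    """Map a 0-10 confidence score to its reading band."""
    if score >= 8.0:
        return "Solid analysis"
    if score >= 6.5:
        return "Founded analysis"
    if score >= 5.0:
        return "Conditional analysis"
    return "Limited confidence"
```

A score of 6.1, as discussed below, falls in the conditional band: valid but gated on closing identified information gaps.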

Calibrated uncertainty as a professional tool

A confidence score of 6.1/10 is not a failure. It is precise information. It tells a decision-maker: the analysis is founded, but two information gaps need to be closed before this recommendation should be acted on. It identifies exactly which gaps those are. It tells you what evidence would raise the score.

Compare this to a generative AI response on the same question — fluent, comprehensive, and equally confident whether the analysis is grounded or extrapolated. The decision-maker has no way to know which they are looking at.

Calibrated uncertainty is not a weakness. It is the most professionally responsible output an analytical system can produce for a decision that carries real consequences.

MyCorum.ai — Disagree to decide.