Health insurers use AI to make coverage decisions.
We show whether the clinical reasoning holds up.

Structured clinical reasoning audits for AI-driven insurance decisions. Built for the regulations that are already here.

Texas SB 815 is in effect. Colorado AI Act takes effect June 2026. Your AI needs to explain itself.

[Hero graphic: an interactive reasoning graph. Pleuritic Chest Pain is a shared finding of the confusable pair Pulmonary Embolism and Pneumonia; CT Pulmonary Angiogram is the investigation that discriminates between them.]

The black box era is over

300,000 claims denied in 2 months

Cigna's algorithm spent 1.2 seconds per decision. Class action ongoing.

81% of appealed denials overturned

The system is designed to deny first. Most denials can't withstand clinical scrutiny.

5 US states have enacted AI explainability laws

California, Texas, Maryland, Nebraska, Arizona. These are enacted laws, not proposals.

Audit every AI decision against structured clinical reasoning

CliniReason takes a coverage decision - diagnosis, proposed treatment, approval or denial - and audits it against a clinical reasoning graph currently covering all 429 conditions required for UK medical licensing, with structured differential diagnosis knowledge.

The output: a human-readable explanation of whether the clinical logic holds up, traceable to specific medical evidence. The kind of explanation that Texas SB 815 now requires in writing.

audit_report_892.log
CLINICAL REASONING AUDIT
─────────────────────────────────────────────────────
Decision: DENIED
Diagnosis: Community-Acquired Pneumonia
Proposed: IV Antibiotics (Inpatient)
AI Rationale: "Outpatient treatment sufficient"
── CliniReason Analysis ──────────────────────────
FINDING: Consolidation on CXR
FINDING: WBC > 15,000
FINDING: Respiratory rate ≥ 30
FINDING: Confusion (new onset)
CURB-65 Score: 2/5 (confusion, respiratory rate) → Hospital-based care indicated
VERDICT: Denial is clinically indefensible.
A CURB-65 score of 2, alongside radiographic consolidation and marked leukocytosis, meets the accepted threshold for hospital-based care. Outpatient treatment at this severity carries significant mortality risk.
Reasoning chain: 4 steps, fully traceable
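
For teams integrating via the API, submitting a decision for audit could look like the sketch below. This is a minimal illustration only: the endpoint URL, request fields, and response shape are assumptions made for the example, not the published API.

example_audit_request.py
# Minimal illustration of submitting a coverage decision for audit.
# The endpoint URL, request fields, and response shape are assumptions
# made for this example, not CliniReason's published API.
import requests

decision = {
    "decision": "DENIED",
    "diagnosis": "Community-Acquired Pneumonia",
    "proposed_treatment": "IV Antibiotics (Inpatient)",
    "ai_rationale": "Outpatient treatment sufficient",
    "findings": [
        "Consolidation on CXR",
        "WBC > 15,000",
        "Respiratory rate >= 30",
        "Confusion (new onset)",
    ],
}

resp = requests.post(
    "https://api.clinireason.example/v1/audits",   # hypothetical endpoint
    json=decision,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=10,
)
resp.raise_for_status()
report = resp.json()

print(report["verdict"])                 # e.g. "Denial is clinically indefensible."
for step in report["reasoning_chain"]:   # hypothetical response fields
    print(step["statement"], "->", step["evidence"])

The same call pattern drops into an existing prior-auth pipeline as a post-decision hook.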

Not another LLM. A clinical reasoning graph.

LLMs read medical records and guess. Our graph knows.

CliniReason is built on a structured knowledge graph: 429 medical conditions (all conditions required for UK medical licensing, expanding to the entire ICD-11), connected to clinical findings, investigations, and management pathways by millions of encoded relationships. Every condition is linked to its confusable pairs: the conditions that look almost identical but require different treatment.

When an AI system makes a clinical decision, we don't ask another AI if it looks right. We trace the reasoning against structured medical knowledge where every step is auditable.

Confusable Pair Detection

The graph encodes which conditions are commonly confused (PE vs pneumonia, appendicitis vs ectopic pregnancy) and which specific tests discriminate between them. If an AI denial ignores a discriminating investigation, we flag it.
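
As a rough sketch of the idea (the data model below is invented for illustration, not the production graph schema), the check reduces to: look up the confusable pairs for the diagnosed condition, then flag any pair whose discriminating investigations the decision never considered.

example_confusable_check.py
# Illustrative sketch of confusable-pair detection. The data structures and
# example entries are invented for this illustration, not the production schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfusablePair:
    condition: str
    confused_with: str
    discriminating_tests: tuple[str, ...]

PAIRS = [
    ConfusablePair("Pneumonia", "Pulmonary Embolism",
                   ("CT Pulmonary Angiogram", "D-dimer")),
    ConfusablePair("Appendicitis", "Ectopic Pregnancy",
                   ("Serum beta-hCG", "Pelvic ultrasound")),
]

def missed_discriminators(diagnosis: str, tests_considered: set[str]) -> list[str]:
    """Flag confusable pairs where no discriminating investigation was considered."""
    flags = []
    for pair in PAIRS:
        if pair.condition == diagnosis and not tests_considered & set(pair.discriminating_tests):
            flags.append(
                f"{diagnosis} vs {pair.confused_with}: none of "
                f"{', '.join(pair.discriminating_tests)} considered"
            )
    return flags

# A pneumonia denial that never looked at PE rule-out investigations gets flagged.
print(missed_discriminators("Pneumonia", {"Chest X-ray", "Blood cultures"}))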

Structured Reasoning Chains

Every audit produces a step-by-step reasoning chain: finding → differential → investigation → discrimination → conclusion. Each step is traceable to published clinical evidence. No hallucination. No black box.
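
In code terms, a chain might be represented roughly as below. The step types follow the finding → differential → investigation → discrimination → conclusion pattern described above; the field names and evidence labels are illustrative assumptions, not the exact audit schema.

example_reasoning_chain.py
# Illustrative representation of a traceable reasoning chain. Field names and
# evidence labels are assumptions for the example, not the exact audit schema.
from dataclasses import dataclass

@dataclass
class Step:
    kind: str       # finding | differential | investigation | discrimination | conclusion
    statement: str
    evidence: str   # pointer to the supporting record or published clinical evidence

chain = [
    Step("finding", "Consolidation on CXR, WBC > 15,000, RR >= 30, new confusion",
         "case record"),
    Step("differential", "Community-acquired pneumonia vs pulmonary embolism",
         "shared finding: pleuritic chest pain"),
    Step("investigation", "CXR demonstrates lobar consolidation",
         "imaging report"),
    Step("discrimination", "Consolidation with leukocytosis favours pneumonia over PE",
         "confusable-pair edge in graph"),
    Step("conclusion", "CURB-65 = 2; hospital-based care indicated, denial unsupported",
         "CURB-65 severity criteria (Lim et al., Thorax 2003)"),
]

for i, step in enumerate(chain, 1):
    print(f"{i}. [{step.kind}] {step.statement}  (evidence: {step.evidence})")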

Regulatory-Ready Output

Texas SB 815 requires a "plain-language explanation of how AI influenced the decision." Our audit reports are designed to meet this requirement out of the box. Compliance by construction, not retrofit.

The compliance clock is ticking

Jan 2025 - California SB 1120: No AI-only medical necessity decisions
Sep 2025 - Texas SB 815: Written AI explanation required (NOW IN EFFECT)
Oct 2025 - Maryland HB 820: AI systems must be auditable
Jan 2026 - CMS-0057: Prior auth reporting requirements begin
Mar 2026 - First CMS public reporting deadline (WE ARE HERE)
Jun 2026 - Colorado AI Act: Explain adverse AI decisions
Jul 2026 - Arizona HB 2175: Medical director review required
Jan 2027 - CMS-0057 Phase 2: FHIR Prior Auth API mandatory
Sep 2027 - nH Predict trial date
Five states have already passed laws. Nearly half have adopted NAIC guidance. The first CMS reporting deadline lands this month. The question isn't whether you'll need explainable AI - it's whether you'll have it in time.

From audit to engine

Today, CliniReason audits your existing AI's decisions.

Tomorrow, we replace the black box entirely. The same clinical reasoning graph that audits decisions can make them with explainability built in from the ground up. One system for clinical reasoning, compliance, and audit. No bolt-on explainability layer needed.

NOW: Your AI makes decisions. CliniReason audits them.
NEXT: CliniReason is the clinical reasoning engine.
Full clinical reasoning engine - in development

Built on a comprehensive clinical reasoning graph

CliniReason started as a clinical simulation engine modeling complex diagnostic cases. To power realistic clinical scenarios, we built a structured clinical reasoning graph encoding how conditions relate, which ones get confused, and what distinguishes them.

It quickly became clear: this graph isn't just useful for simulation. It's the structured clinical reasoning layer that the entire healthcare AI industry is missing.

The entire technical stack - graph database, backend API, and evaluation infrastructure - is built for enterprise scale and seamless integration.

429 conditions structured. Currently covers all conditions required for UK medical licensing. Expanding to cover the entire ICD-11.
Millions of clinical relationships encoded.
API-First: Integrates directly into your existing AI pipeline with sub-second latency.

Get early access

We're onboarding the first health plans for clinical reasoning audits. Join the waitlist to be first in line.

We'll reach out when we're ready for your team. No spam.