
Explainability Research

Four methods for making explanations of black-box AI decisions deterministic, auditable, and reproducible.

Independent research conducted in 2026 on personal time and equipment. Each method addresses a gap in existing explainability approaches — where current techniques rely on randomness, these don't. Designed for regulated industries where audit trails matter.

01

Interaction Mapping

When a bank denies a loan, regulators don't ask about one variable. They ask about combinations.

Detects synergy and redundancy between feature groups through combinatorial perturbation. Single-feature methods miss when two inputs are harmless alone but catastrophic together — this catches it.
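The core of the method can be sketched as a second finite difference: perturb each feature alone, then both together, and check whether the joint effect equals the sum of the individual effects. This is a minimal illustration with an invented toy model, not the full implementation:

```python
def interaction_score(model, x, baseline, i, j):
    """Synergy/redundancy between features i and j via combinatorial
    perturbation. The second finite difference is exactly zero when the
    two features contribute additively; any nonzero value means an
    interaction a single-feature method would miss."""
    def perturb(features):
        z = list(x)
        for k in features:
            z[k] = baseline[k]  # replace the feature with its baseline value
        return model(z)
    return model(list(x)) - perturb([i]) - perturb([j]) + perturb([i, j])


# Toy model: each input is harmless alone, catastrophic only together.
model = lambda z: 1.0 if z[0] > 0 and z[1] > 0 else 0.0
x, baseline = [1, 1, 0], [0, 0, 0]
score = interaction_score(model, x, baseline, 0, 1)  # nonzero: interaction found
```

For a purely additive model the same score comes out as zero, which is what makes it usable as a yes/no interaction detector across feature pairs.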

02

Drift Tracking

A model that was fair in January can be discriminatory by March. Most teams don't find out until it's too late.

Measures how a model's reasoning shifts over time using divergence analysis on sequential decisions. Surfaces silent behavioral changes before they become compliance failures.
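One simple form of that divergence analysis: compare the distribution of decisions across two time windows with a symmetric KL divergence. The window contents and labels below are invented for illustration, assuming a classifier whose outputs can be bucketed:

```python
import math
from collections import Counter

def decision_drift(window_a, window_b, labels, eps=1e-9):
    """Symmetric KL divergence between the decision distributions of two
    time windows. A rising value flags a silent behavioral shift even
    though no code or weights visibly changed."""
    def dist(window):
        counts = Counter(window)
        return [counts.get(l, 0) / len(window) + eps for l in labels]
    p, q = dist(window_a), dist(window_b)
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b))
    return 0.5 * (kl(p, q) + kl(q, p))


# Invented example: approval rate quietly slides between January and March.
january = ["approve"] * 80 + ["deny"] * 20
march   = ["approve"] * 55 + ["deny"] * 45
baseline_drift = decision_drift(january, january, ["approve", "deny"])  # no shift
observed_drift = decision_drift(january, march, ["approve", "deny"])    # clear shift
```

Tracking this number per cohort (not just globally) is what turns it into a fairness monitor: drift concentrated in one protected group is the "fair in January, discriminatory by March" failure mode.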

03

Counterfactual Pathing

The first question a regulator asks after an adverse decision: what would it have taken to get a different outcome?

Produces the minimal ordered sequence of input changes that would flip a decision, answering exactly that question in concrete, actionable terms.
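A minimal sketch of the idea, assuming a scalar scoring model with an approval threshold. The greedy loop applies one feature change at a time, always the change that moves the score furthest, with ties broken by sorted feature name so the path is fully deterministic. The credit-scoring model and feature names are invented for illustration:

```python
def counterfactual_path(score, threshold, x, candidates):
    """Greedy ordered sequence of single-feature edits that pushes
    score(x) over `threshold`. `candidates` maps each feature to one
    plausible alternative value. Deterministic: sorted feature order,
    strict improvement required, no sampling."""
    current, path = dict(x), []
    while score(current) < threshold:
        best_feat, best_score = None, score(current)
        for feat in sorted(candidates):
            if current[feat] == candidates[feat]:
                continue  # this edit has already been applied
            trial = {**current, feat: candidates[feat]}
            if score(trial) > best_score:
                best_feat, best_score = feat, score(trial)
        if best_feat is None:
            return None  # no single remaining edit improves the score
        current[best_feat] = candidates[best_feat]
        path.append((best_feat, candidates[best_feat]))
    return path


# Invented toy credit model; approval requires a score of at least 0.6.
score = lambda a: (0.4 * a["income"] / 100_000
                   + 0.4 * a["credit"] / 850
                   + 0.2 * (1 - a["debt_ratio"]))
applicant  = {"income": 40_000, "credit": 600, "debt_ratio": 0.5}
candidates = {"income": 60_000, "credit": 700, "debt_ratio": 0.3}
path = counterfactual_path(score, 0.6, applicant, candidates)
```

The path, not just the destination, is the point: "raise income to 60k" is an answer an applicant can act on, and the ordering tells them which change matters first.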

04

Deterministic Reproducibility

Run SHAP twice on the same input. You'll get two different explanations. Try explaining that in court.

Formal guarantees that the same input always produces the same explanation — attribution values, interaction effects, and counterfactual paths. No stochastic sampling. Run it a thousand times, get the same answer a thousand times.
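What that looks like in the simplest case: an attribution computed by exhaustive enumeration rather than random sampling, so repeated runs are byte-identical by construction. A toy leave-one-out attributor, standing in for the full method:

```python
def attribute(model, x, baseline):
    """Exhaustive leave-one-out attribution. No random sampling anywhere,
    so the same input always yields the identical explanation."""
    f_x = model(x)
    out = {}
    for i in range(len(x)):
        z = list(x)
        z[i] = baseline[i]          # knock out feature i
        out[i] = f_x - model(z)     # its exact contribution vs. baseline
    return out


# Toy model with an interaction term between features 1 and 2.
model = lambda z: 2 * z[0] + z[1] * z[2]
x, baseline = [1.0, 3.0, 4.0], [0.0, 0.0, 0.0]

# Run it a thousand times, get the same answer a thousand times.
runs = [attribute(model, x, baseline) for _ in range(1000)]
assert all(r == runs[0] for r in runs)
```

Sampling-based explainers trade exactness for speed and pay for it with run-to-run variance; the claim here is that for audit-grade explanations, that trade is the wrong one.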

See how these methods come together in the AI Framework →