# CauReL: Dynamic Counterfactual Learning for Precision Drug Repurposing in Alzheimer's Disease

*A hybrid architecture combining deep counterfactual regression with interpretable causal explanations for personalized treatment recommendations*
## Two Key Innovations
### Counterfactual-level Interpretability
Generate patient-specific explanations by predicting outcomes under both treatment (Y₁) and control (Y₀) conditions, identifying clinical factors driving individual treatment effects through causal contrasts rather than correlations.
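The counterfactual contrast can be sketched in a few lines. Here `mu0`/`mu1` stand in for fitted outcome heads (predicted outcome without and with the drug), and the feature names and coefficients are invented purely for illustration:

```python
# Counterfactual contrast for one patient, with toy outcome heads.
# mu0/mu1 are hypothetical fitted models; features/coefficients are made up.
mu0 = lambda p: 0.30 * p["amyloid"] - 0.10 * p["apoe4"]   # outcome under control
mu1 = lambda p: 0.30 * p["amyloid"] - 0.25 * p["apoe4"]   # outcome under treatment

patient = {"amyloid": 0.8, "apoe4": 1.0}
ite = mu1(patient) - mu0(patient)   # individual treatment effect: Y1_hat - Y0_hat

# Directional importance: how the causal contrast moves when a clinical
# factor is toggled -- a counterfactual statement, not a correlation.
flipped = dict(patient, apoe4=0.0)
delta = (mu1(flipped) - mu0(flipped)) - ite
```

In this toy setup the patient's APOE4 status, not the correlated amyloid level, is what drives the predicted benefit.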
### Hybrid Deep Learning + Interpretable Trees
A deep representation network handles confounding in observational data, while uplift trees provide human-readable decision rules for clinically actionable subgroup recommendations.
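A minimal illustration of the uplift-tree idea, reduced to a single split on synthetic data: search for the covariate threshold that best separates high-benefit from low-benefit patients. (The real model uses a deep CFR representation and full trees; this sketch only shows the splitting criterion.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort: one covariate drives who benefits (lower outcome = benefit)
x = rng.normal(size=400)
t = rng.integers(0, 2, size=400)                       # treatment indicator
y = -0.15 * t * (x > 0) + 0.05 * rng.normal(size=400)  # benefit only when x > 0

def uplift(mask):
    """Mean treated-minus-control outcome within a subgroup."""
    return y[mask & (t == 1)].mean() - y[mask & (t == 0)].mean()

# One-level uplift "tree": pick the threshold that maximizes the difference
# in uplift between the two child subgroups.
best = max(np.quantile(x, np.linspace(0.1, 0.9, 17)),
           key=lambda s: abs(uplift(x > s) - uplift(x <= s)))
rule = f"if x > {best:.2f}: recommend treatment"
```

The resulting `rule` is the kind of human-readable output the uplift trees are meant to produce, here recovering a threshold near the true cutoff of 0.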
## Key Features
- **Counterfactual Outcome Prediction**: estimates Y₀ and Y₁ for individual treatment effect estimation
- **Directional Feature Importance**: causal attributions rather than correlation-based SHAP values
- **CFR-based Uplift Trees**: deep counterfactual regression combined with interpretable decision rules
- **Multiple IPM Balancing**: linear MMD, RBF MMD, and Wasserstein distances
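The IPM balancing terms penalize distributional distance between treated and control representations. Below is a sketch of the linear- and RBF-kernel MMD estimates (illustrative only, not the project's actual implementation):

```python
import numpy as np

def mmd_linear(phi_t, phi_c):
    """Linear-kernel MMD^2: squared distance between group mean embeddings."""
    d = phi_t.mean(axis=0) - phi_c.mean(axis=0)
    return float(d @ d)

def mmd_rbf(phi_t, phi_c, sigma=1.0):
    """RBF-kernel MMD^2 (biased estimate)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return float(k(phi_t, phi_t).mean() + k(phi_c, phi_c).mean()
                 - 2 * k(phi_t, phi_c).mean())

rng = np.random.default_rng(2)
balanced = rng.normal(size=(100, 4))          # representations with matched groups
shifted = rng.normal(size=(100, 4)) + 1.0     # confounded: mean-shifted group
```

Minimizing such a term during training pushes the representation toward one where treated and control distributions overlap, which is what makes the counterfactual predictions trustworthy.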
## Quick Start
### Installation
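This excerpt does not include install commands. Assuming a standard Python package layout, a typical editable install from a clone of the repository would be:

```bash
# From the repository root (assumes a standard setup.py/pyproject.toml layout)
pip install -e .
```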
### Usage
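The public API is not shown in this excerpt, so the snippet below uses a hypothetical stand-in class (simple per-arm linear heads) to illustrate the intended fit-then-predict-ITE workflow; the actual CauReL model replaces these heads with the CFR network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data (features, treatment, outcome)
X = rng.normal(size=(300, 3))
T = rng.integers(0, 2, size=300)
Y = 0.2 * X[:, 0] - 0.08 * T + 0.05 * rng.normal(size=300)

class TwoHeadModel:
    """Minimal two-head outcome model standing in for the CFR network."""
    def fit(self, X, T, Y):
        A = np.c_[X, np.ones(len(X))]
        self.w0 = np.linalg.lstsq(A[T == 0], Y[T == 0], rcond=None)[0]
        self.w1 = np.linalg.lstsq(A[T == 1], Y[T == 1], rcond=None)[0]
        return self
    def predict_ite(self, X):
        A = np.c_[X, np.ones(len(X))]
        return A @ self.w1 - A @ self.w0   # Y1_hat - Y0_hat per patient

model = TwoHeadModel().fit(X, T, Y)
ite = model.predict_ite(X)                  # one treatment effect per patient
```

The per-patient `ite` values are then read against the clinical interpretation table below.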
## Clinical Interpretation
A negative ITE indicates benefit: the predicted outcome under treatment (Y₁) is lower than under control (Y₀).

| ITE Range | Interpretation | Recommendation |
|---|---|---|
| < -0.10 | Strong benefit | Strongly recommend treatment |
| -0.10 to -0.05 | Moderate benefit | Recommend treatment |
| -0.05 to 0 | Small benefit | Consider treatment |
| 0 to 0.05 | Minimal effect | Discuss alternatives |
| > 0.05 | Potential harm | Avoid treatment |
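The tiers above translate directly into a small lookup helper (the function name is hypothetical, not part of the package API):

```python
def recommend(ite: float) -> str:
    """Map an estimated ITE to the guidance tiers in the table above."""
    if ite < -0.10:
        return "Strongly recommend treatment"
    if ite < -0.05:
        return "Recommend treatment"
    if ite < 0:
        return "Consider treatment"
    if ite <= 0.05:
        return "Discuss alternatives"
    return "Avoid treatment"
```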
Copyright 2025-Present, University of Florida
