
Decision Logic Documentation

Overview

FraudSimulator-AI implements a multi-stage decision intelligence system for insurance fraud detection. The system answers a single executive decision question:

"Should this insurance claim be investigated or allowed — and what evidence supports that decision?"

Decision Contract

Input

Structured claim data including:

  • Claim metadata (ID, type, amount)
  • Claimant history
  • Policy information
  • Document data
  • Temporal patterns
  • Entity relationships

Output

Binary decision with evidence:

{
  "decision": "investigate | allow",
  "fraud_score": 0.0-1.0,
  "risk_band": "low | medium | high",
  "evidence": ["list of fraud indicators"],
  "confidence": 0.0-1.0,
  "audit_id": "unique identifier",
  "timestamp": "ISO 8601 timestamp"
}
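
For illustration, a populated result under this contract might look like the following Python literal (all values are hypothetical):

```python
# Hypothetical example of a decision produced under the contract above.
example_decision = {
    "decision": "investigate",
    "fraud_score": 0.72,
    "risk_band": "high",
    "evidence": [
        "High claim frequency: 4 claims in the last 90 days",
        "Amount deviation: 3.1x the claimant's historical average",
    ],
    "confidence": 0.81,
    "audit_id": "a3f9c2e1-7b4d-4e8a-9c01-5d6f2b8e4a10",
    "timestamp": "2025-01-15T09:42:00Z",
}
```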

Decision Pipeline

Stage 1: Feature Engineering

Extract and normalize features from raw claim data (see the sketch after this list):

  • Amount features: Claim amount, deviation from average
  • Frequency features: Claim count, time between claims
  • Temporal features: Days since policy inception, claim timing
  • Document features: Document completeness, consistency scores
  • Entity features: Linked entities, relationship networks
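
A minimal sketch of this stage, assuming claims arrive as plain dictionaries. The input field names (`amount`, `claimant_avg_amount`, `prior_claims_90d`, `days_since_inception`, `documents_present`, `documents_required`, `linked_suspicious_entities`) and the normalization constants are illustrative, not the engine's actual schema:

```python
def extract_features(claim: dict) -> dict:
    """Normalize raw claim fields into [0, 1] features (illustrative schema)."""
    amount = float(claim.get("amount", 0.0))
    hist_avg = float(claim.get("claimant_avg_amount", 0.0)) or 1.0

    return {
        # Amount: relative deviation from the claimant's historical average
        "amount_deviation": min(abs(amount - hist_avg) / hist_avg, 1.0),
        # Frequency: prior claims in a rolling 90-day window, capped at 5
        "claim_frequency": min(claim.get("prior_claims_90d", 0) / 5.0, 1.0),
        # Temporal: claim filed within 30 days of policy inception
        "early_claim": 1.0 if claim.get("days_since_inception", 365) < 30 else 0.0,
        # Documents: share of required documents actually present
        "document_completeness": min(
            claim.get("documents_present", 0)
            / max(claim.get("documents_required", 1), 1),
            1.0,
        ),
        # Entities: links to flagged entities, capped at 3
        "entity_links": min(claim.get("linked_suspicious_entities", 0) / 3.0, 1.0),
    }
```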

Stage 2: Multi-Agent Analysis

Pattern Analysis Agent

Identifies fraud patterns; a scoring sketch follows the list:

  • High Frequency: The claimant has submitted multiple claims within a short period
  • Amount Deviation: The claim amount deviates significantly from the claimant's historical average
  • Early Claim: Claim filed shortly after policy inception (< 30 days)
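
Building on the Stage 1 feature sketch, the pattern agent could expose the three indicators above as a simple mapping; the function and key names are assumptions for illustration:

```python
def pattern_scores(features: dict) -> dict:
    """Score the three documented patterns on a 0-1 scale (illustrative)."""
    return {
        # High Frequency: multiple claims in a short period
        "frequency": features["claim_frequency"],
        # Amount Deviation: distance from the historical average
        "amount_deviation": features["amount_deviation"],
        # Early Claim: filed within 30 days of policy inception
        "temporal": features["early_claim"],
    }
```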

Anomaly Detection Agent

Detects statistical anomalies; a matching sketch follows the list:

  • Document Anomalies: Missing or inconsistent documentation
  • Entity Linkage: Connections to known suspicious entities
  • Behavioral Anomalies: Unusual claim submission patterns
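
A matching sketch for the anomaly agent, again against the illustrative Stage 1 features; the `behavioral_anomaly` input is a hypothetical placeholder for submission-pattern checks:

```python
def anomaly_scores(features: dict) -> dict:
    """Score the three documented anomaly types on a 0-1 scale (illustrative)."""
    return {
        # Document Anomalies: missing or inconsistent documentation
        "document": 1.0 - features["document_completeness"],
        # Entity Linkage: connections to known suspicious entities
        "entity": features["entity_links"],
        # Behavioral Anomalies: hypothetical placeholder score
        "behavioral": features.get("behavioral_anomaly", 0.0),
    }
```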

Risk Scoring Agent

Calculates a weighted fraud risk score:

fraud_score = (pattern_score × 0.6) + (anomaly_score × 0.4)

where:
  pattern_score = (frequency × 0.4) + (amount_deviation × 0.3) + (temporal × 0.3)
  anomaly_score = (document × 0.4) + (entity × 0.4) + (behavioral × 0.2)
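
The weighted combination translates directly into code. The weights come from the formula above; the dictionary keys follow the agent sketches in the previous subsections:

```python
def fraud_score(pattern: dict, anomaly: dict) -> float:
    """Combine agent outputs with the documented weights."""
    pattern_score = (
        pattern["frequency"] * 0.4
        + pattern["amount_deviation"] * 0.3
        + pattern["temporal"] * 0.3
    )
    anomaly_score = (
        anomaly["document"] * 0.4
        + anomaly["entity"] * 0.4
        + anomaly["behavioral"] * 0.2
    )
    return pattern_score * 0.6 + anomaly_score * 0.4
```

Note that with all pattern indicators at 1.0 and all anomaly indicators at 0.0, the result is 0.6, just below the 0.65 investigate threshold applied in Stage 3.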

Stage 3: Decision Threshold

Apply the decision threshold to the fraud score (see the sketch below):

  • fraud_score ≥ 0.65: Recommend "investigate"
  • fraud_score < 0.65: Recommend "allow"
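
In code, this stage is a single comparison against the documented 0.65 cut-off:

```python
INVESTIGATE_THRESHOLD = 0.65  # documented decision threshold

def decide(score: float) -> str:
    """Map a fraud score onto the binary decision."""
    return "investigate" if score >= INVESTIGATE_THRESHOLD else "allow"
```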

Stage 4: Risk Banding

Classify the risk level (see the sketch below):

  • High Risk: fraud_score ≥ 0.7
  • Medium Risk: 0.4 ≤ fraud_score < 0.7
  • Low Risk: fraud_score < 0.4
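
The banding rules map onto a small helper, using the thresholds listed above:

```python
def risk_band(score: float) -> str:
    """Classify a fraud score into the documented risk bands."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"
```

Because the investigate threshold is 0.65 while the high-risk band starts at 0.7, a score between 0.65 and 0.7 is recommended for investigation but banded as medium risk.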

Stage 5: Explainability Generation

Build the evidence list from activated indicators (sketched below):

  • List all indicators with a score > 0.1
  • Provide human-readable descriptions
  • Include indicator weights
  • Calculate decision confidence
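
A sketch of the evidence-building step, assuming each indicator arrives with a numeric value and a weight; the description table is illustrative wording, not the engine's actual text:

```python
INDICATOR_DESCRIPTIONS = {
    # Illustrative wording; the engine's actual descriptions may differ.
    "frequency": "Multiple claims submitted within a short period",
    "amount_deviation": "Claim amount deviates from the claimant's historical average",
    "temporal": "Claim filed shortly after policy inception",
    "document": "Missing or inconsistent documentation",
    "entity": "Links to known suspicious entities",
    "behavioral": "Unusual claim submission behavior",
}

def build_evidence(indicators: dict, weights: dict) -> list:
    """List every indicator whose score exceeds the 0.1 activation threshold."""
    return [
        {
            "indicator": name,
            "description": INDICATOR_DESCRIPTIONS.get(name, name),
            "value": round(value, 3),
            "weight": weights.get(name),
        }
        for name, value in indicators.items()
        if value > 0.1
    ]
```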

Stage 6: Governance & Audit

Create the audit trail (sketched below):

  • Generate unique audit ID
  • Log timestamp (UTC)
  • Record claim ID
  • Store decision and evidence
  • Track model version
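
One way to assemble such an audit record with only the standard library; field names beyond those in the decision contract (`claim_id`, `model_version`) are assumptions:

```python
import uuid
from datetime import datetime, timezone

MODEL_VERSION = "1.0.0"

def audit_record(claim_id: str, decision: dict) -> dict:
    """Assemble one audit-trail entry for a scored claim."""
    return {
        "audit_id": str(uuid.uuid4()),                        # unique audit ID
        "timestamp": datetime.now(timezone.utc).isoformat(),  # UTC, ISO 8601
        "claim_id": claim_id,
        "decision": decision["decision"],
        "evidence": decision["evidence"],
        "model_version": MODEL_VERSION,                       # reproducibility / drift checks
    }
```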

Decision Confidence

Confidence is calculated based on indicator consistency:

variance = Σ(indicator_value - 0.5)² / n_indicators
confidence = 1.0 - (variance × 0.5)
confidence = max(confidence, 0.5)  // minimum 50% confidence
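
Implemented as written (deviation measured from 0.5), the calculation becomes:

```python
def decision_confidence(indicator_values: list) -> float:
    """Confidence from indicator consistency, following the documented formula."""
    if not indicator_values:
        return 0.5
    variance = sum((v - 0.5) ** 2 for v in indicator_values) / len(indicator_values)
    confidence = 1.0 - variance * 0.5
    return max(confidence, 0.5)  # minimum 50% confidence
```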

Higher confidence indicates:

  • Indicators are aligned (all high or all low)
  • Clear fraud pattern or clear legitimate pattern
  • Less ambiguity in decision

Lower confidence indicates:

  • Mixed signals from different indicators
  • Borderline case requiring human review
  • Potential for false positive/negative

Human-in-the-Loop Integration

The system is designed for human oversight; a routing sketch follows the list:

  1. High-confidence "investigate": Immediate escalation to fraud investigation team
  2. Low-confidence "investigate": Flag for senior adjuster review
  3. High-confidence "allow": Auto-approve with audit trail
  4. Low-confidence "allow": Route to standard claims processing with monitoring
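
A sketch of the routing implied by these four cases; the 0.75 confidence cut-off separating "high" from "low" confidence is an assumption, since the document does not specify one:

```python
HIGH_CONFIDENCE = 0.75  # assumed cut-off; not specified in this document

def route(decision: str, confidence: float) -> str:
    """Map (decision, confidence) onto the documented handling paths."""
    if decision == "investigate":
        if confidence >= HIGH_CONFIDENCE:
            return "escalate_to_fraud_investigation_team"
        return "senior_adjuster_review"
    if confidence >= HIGH_CONFIDENCE:
        return "auto_approve_with_audit_trail"
    return "standard_processing_with_monitoring"
```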

Model Versioning

Current version: 1.0.0

All decisions are tagged with model version for:

  • Reproducibility
  • A/B testing
  • Regulatory compliance
  • Drift detection

Regulatory Alignment

Decision logic complies with:

  • IFRS 17: Insurance contract accounting standards
  • AML Requirements: Anti-money laundering detection
  • Explainability Standards: All decisions are explainable and auditable
  • Bias Monitoring: Regular review of decision patterns across demographics

Performance Metrics

Target metrics:

  • Precision: ≥ 75% (minimize false positives)
  • Recall: ≥ 80% (catch majority of fraud)
  • F1 Score: ≥ 0.77
  • Decision Time: < 2 seconds per claim
  • Explainability Coverage: 100% (all decisions explained)

Continuous Improvement

Decision logic is updated based on:

  • Fraud investigation outcomes
  • False positive/negative analysis
  • Emerging fraud patterns
  • Regulatory changes
  • Stakeholder feedback