Abdelrahman Almatrooshi

# config

Centralised configuration for FocusGuard. Every training script, pipeline, and evaluation tool reads from this package rather than hardcoding values. This ensures reproducibility across experiments and consistent behaviour between training and deployment.

## Files

| File | Purpose |
| --- | --- |
| `default.yaml` | Single source of truth for all hyperparameters, thresholds, clipping bounds, and app settings |
| `__init__.py` | YAML loader with dotted-key access (`get("pipeline.mlp_threshold")`), ClearML flattener, and project-wide constants |
| `clearml_enrich.py` | ClearML experiment-tracking helpers: environment tags, config/requirements upload, model metadata |
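The ClearML flattener mentioned above converts the nested YAML mapping into dotted keys so it can be logged as a flat hyperparameter dict. A minimal sketch (the function name and exact behaviour are assumptions, not the package's actual implementation):

```python
def flatten(cfg: dict, prefix: str = "") -> dict:
    """Flatten a nested config into dotted keys,
    e.g. {'mlp': {'lr': 0.001}} -> {'mlp.lr': 0.001}."""
    flat = {}
    for key, value in cfg.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested sections, extending the dotted prefix.
            flat.update(flatten(value, dotted))
        else:
            flat[dotted] = value
    return flat
```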

## Usage

```python
from config import get

lr = get("mlp.lr")                         # 0.001
threshold = get("pipeline.mlp_threshold")  # 0.23
clip_yaw = get("data.clip.yaw")            # [-45, 45]
```

Override the default path by setting `FOCUSGUARD_CONFIG` to point at a different YAML file.
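The dotted-key lookup behind `get` can be sketched as follows. The inline `_CONFIG` dict and the `default` fallback parameter are illustrative assumptions; the real loader parses `default.yaml` (or the file named by `FOCUSGUARD_CONFIG`) with PyYAML:

```python
# Stand-in for the parsed YAML; the package would build this with yaml.safe_load.
_CONFIG = {
    "mlp": {"lr": 0.001},
    "pipeline": {"mlp_threshold": 0.23},
    "data": {"clip": {"yaw": [-45, 45]}},
}

def get(key: str, default=None):
    """Resolve a dotted key like 'pipeline.mlp_threshold' against the nested config."""
    node = _CONFIG
    for part in key.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node
```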

## Key sections in `default.yaml`

| Section | What it controls |
| --- | --- |
| `app` | DB path, inference size (640x480), 4 workers, default model |
| `l2cs_boost` | Boost-mode fusion weights (35% base / 65% L2CS), fused threshold 0.52 |
| `mlp` | 30 epochs, batch 32, lr 0.001, hidden sizes [64, 32] |
| `xgboost` | 600 trees, depth 8, lr 0.1489, regularisation from a 40-trial Optuna sweep |
| `data.clip` | Physiological clipping: yaw ±45°, pitch ±30°, EAR [0, 0.85], MAR [0, 1.0] |
| `pipeline.geometric` | max_angle 22°, face/eye weights 0.7/0.3, asymmetric EMA (alpha_up=0.55, alpha_down=0.45) |
| `pipeline` | Production thresholds: MLP 0.23, XGBoost 0.28, hybrid 0.35 (all derived from LOPO Youden's J) |
| `evaluation` | Seed 42, weight-search ranges for geometric alpha and hybrid w_mlp |
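The asymmetric EMA in `pipeline.geometric` smooths the geometric score with a larger coefficient on rises than on falls, so the signal reacts faster when attention drops are detected than when it recovers. A minimal sketch under that assumption, using the configured alpha_up=0.55 / alpha_down=0.45:

```python
def asymmetric_ema(scores, alpha_up=0.55, alpha_down=0.45):
    """Exponential moving average with direction-dependent smoothing:
    a rising input is weighted by alpha_up, a falling input by alpha_down."""
    smoothed = []
    prev = None
    for s in scores:
        if prev is None:
            prev = s  # seed the filter with the first observation
        elif s >= prev:
            prev = alpha_up * s + (1 - alpha_up) * prev
        else:
            prev = alpha_down * s + (1 - alpha_down) * prev
        smoothed.append(prev)
    return smoothed
```

Because alpha_up > alpha_down, a step up is tracked slightly more aggressively than a step down of the same size.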

## ClearML enrichment

`clearml_enrich.py` provides reusable helpers called by all training and evaluation scripts:

- `enrich_task(task, role)` adds tags (Python version, OS, torch/CUDA, git SHA)
- `upload_repro_artifacts(task)` pins the exact YAML config and `requirements.txt`
- `attach_output_metrics(model, metrics)` surfaces headline metrics on registered model cards
- `task_done_summary(task, summary)` sets a human-readable task comment
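The tag-gathering step of `enrich_task` can be sketched as below. The helper name and tag formats are assumptions; the torch/CUDA and git lookups are wrapped so the sketch runs even where those are unavailable:

```python
import platform
import subprocess

def gather_env_tags(role: str) -> list:
    """Collect environment tags of the kind enrich_task attaches:
    role, Python version, OS, optional torch/CUDA versions, and git SHA."""
    tags = [role, f"py{platform.python_version()}", platform.system()]
    try:
        import torch  # optional: record torch/CUDA versions when installed
        tags.append(f"torch{torch.__version__}")
        if torch.cuda.is_available():
            tags.append(f"cuda{torch.version.cuda}")
    except ImportError:
        pass
    try:
        sha = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True
        ).strip()
        tags.append(f"git:{sha}")
    except (OSError, subprocess.CalledProcessError):
        pass  # not inside a git checkout, or git not installed
    return tags
```

A real `enrich_task` would then hand these to the ClearML task, e.g. via `task.add_tags(...)`.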