# Training Details

## Training Data

- Open-access chest X-ray datasets (e.g., NIH ChestX-ray14, CheXpert).
- Data preprocessing: normalization, resizing, augmentation.

## Training Procedure

- **Stage 1**: EfficientNet-B0 for coarse classification (normal vs. abnormal).
- **Stage 2**: EfficientNet-B2 for fine-grained multi-label disease classification.
- **Grad-CAM** integrated for visual interpretability.

## Training Hyperparameters

- Mixed precision (fp16)
- Optimizer: AdamW
- Learning rate scheduler: CosineAnnealing
- Loss: Weighted BCE with logits

---

# Evaluation

## Testing Data

- Evaluated on public benchmark datasets (CheXpert, NIH ChestX-ray14).

## Metrics

- AUROC (per-class and mean)
- F1-score
- Sensitivity/Specificity

## Results

- Mean AUROC ≈ **0.85–0.90** (depending on dataset and task)
- Grad-CAM heatmaps align with radiologically relevant regions

## Model Examination

- Grad-CAM visualizations available for each prediction
- Two-stage pipeline mirrors clinical workflow

---

# Environmental Impact

- **Hardware Type**: NVIDIA Tesla V100 (cloud GPU)
- **Hours Used**: ~60 GPU hours
- **Cloud Provider**: Google Cloud
- **Compute Region**: US-central
- **Carbon Emitted**: Estimated ~25 kg CO2eq

---

# Technical Specifications

## Model Architecture

- Stage 1: EfficientNet-B0
- Stage 2: EfficientNet-B2
- Hierarchical classification pipeline
- Grad-CAM interpretability module

## Compute Infrastructure

- Hardware: NVIDIA V100 (16 GB)
- Software: PyTorch, Hugging Face Transformers, CUDA 11.8

---

# Citation

## BibTeX

```bibtex
@article{indabax2025cxrnet,
  title={Hierarchical CXR-Net: A Two-Stage Framework for Efficient and Interpretable Chest X-Ray Diagnosis},
  author={Ssempeebwa, Phillip and IndabaX Uganda AI Research Lab},
  year={2025},
  journal={Digital Health Africa 2025 Poster Proceedings}
}
```
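---

# Appendix: Illustrative Code Sketches

The sketches below illustrate the techniques named in this card. They are minimal, hedged examples written against common PyTorch/torchvision/scikit-learn APIs; file paths, hyperparameter values, label sets, and helper names are placeholders, not the released implementation.

First, a minimal sketch of the preprocessing listed under "Training Data" (resizing, augmentation, normalization) using torchvision transforms. The input resolution, the specific augmentations, and the ImageNet normalization statistics are assumptions.

```python
# Sketch of the preprocessing pipeline: resize, light augmentation, normalize.
# Resolution, augmentations, and normalization statistics are assumed, not
# taken from the card.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),                    # resize to the backbone's input size
    transforms.RandomHorizontalFlip(p=0.5),           # light augmentation
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
])
```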
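Next, a minimal sketch of the two-stage hierarchical pipeline described under "Training Procedure": Stage 1 (EfficientNet-B0) screens normal vs. abnormal, and Stage 2 (EfficientNet-B2) runs multi-label disease classification only on images flagged as abnormal. The `DISEASES` label list and the 0.5 threshold are placeholders.

```python
# Sketch of the two-stage inference pipeline with torchvision EfficientNet
# backbones. Checkpoints are not loaded here; weights, label set, and the
# abnormality threshold are placeholders.
import torch
from torchvision.models import efficientnet_b0, efficientnet_b2

DISEASES = ["Atelectasis", "Cardiomegaly", "Effusion"]  # placeholder label set

stage1 = efficientnet_b0(weights=None, num_classes=2)               # normal vs. abnormal
stage2 = efficientnet_b2(weights=None, num_classes=len(DISEASES))   # multi-label head
stage1.eval()
stage2.eval()

@torch.no_grad()
def predict(x: torch.Tensor, abnormal_threshold: float = 0.5) -> dict:
    """x: a batch of preprocessed chest X-rays with shape (N, 3, H, W)."""
    p_abnormal = torch.softmax(stage1(x), dim=1)[:, 1]      # Stage 1 screening score
    out = {"p_abnormal": p_abnormal, "diseases": None}
    flagged = p_abnormal >= abnormal_threshold
    if flagged.any():                                       # Stage 2 only for flagged cases
        out["diseases"] = torch.sigmoid(stage2(x[flagged])) # per-disease probabilities
    return out

result = predict(torch.randn(4, 3, 224, 224))
print(result["p_abnormal"].shape)  # torch.Size([4])
```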
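The following sketch combines the listed training hyperparameters (AdamW, cosine annealing, weighted BCE with logits, fp16 mixed precision) into one Stage 2 training step. The learning rate, weight decay, epoch count, and `pos_weight` values are assumptions, and the `DataLoader` is supplied by the caller.

```python
# Sketch of the training setup: AdamW + CosineAnnealingLR + weighted
# BCEWithLogitsLoss + fp16 autocast/GradScaler. All numeric values are
# placeholders, not the card's actual hyperparameters.
import torch
from torch import nn
from torchvision.models import efficientnet_b2

device = "cuda" if torch.cuda.is_available() else "cpu"
num_classes = 14                                    # e.g., the ChestX-ray14 label set
model = efficientnet_b2(weights=None, num_classes=num_classes).to(device)

pos_weight = torch.ones(num_classes, device=device) * 3.0    # placeholder class weights
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)      # weighted BCE with logits
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-4)
epochs = 10
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
scaler = torch.cuda.amp.GradScaler(enabled=device == "cuda")  # fp16 loss scaling

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:                   # labels: multi-hot, shape (N, num_classes)
        images, labels = images.to(device), labels.to(device).float()
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast(enabled=device == "cuda"):
            loss = criterion(model(images), labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()                                # one cosine step per epoch
```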
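A minimal sketch of the evaluation metrics listed above (per-class and mean AUROC, F1, sensitivity/specificity), computed with scikit-learn from multi-label ground truth and predicted probabilities. The 0.5 decision threshold is a placeholder, and the sketch assumes every class has both positive and negative examples in the evaluation set.

```python
# Sketch of the evaluation metrics: AUROC (per-class and mean), macro F1,
# and mean sensitivity/specificity at a placeholder threshold of 0.5.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix

def evaluate(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5) -> dict:
    """y_true: (N, C) multi-hot labels; y_prob: (N, C) sigmoid probabilities."""
    y_pred = (y_prob >= threshold).astype(int)
    # AUROC assumes each class column contains both positives and negatives.
    per_class_auroc = [roc_auc_score(y_true[:, c], y_prob[:, c])
                       for c in range(y_true.shape[1])]
    metrics = {
        "auroc_per_class": per_class_auroc,
        "auroc_mean": float(np.mean(per_class_auroc)),
        "f1_macro": f1_score(y_true, y_pred, average="macro", zero_division=0),
    }
    sens, spec = [], []
    for c in range(y_true.shape[1]):
        tn, fp, fn, tp = confusion_matrix(y_true[:, c], y_pred[:, c], labels=[0, 1]).ravel()
        sens.append(tp / (tp + fn) if tp + fn else 0.0)  # sensitivity = recall of positives
        spec.append(tn / (tn + fp) if tn + fp else 0.0)  # specificity = recall of negatives
    metrics["sensitivity_mean"] = float(np.mean(sens))
    metrics["specificity_mean"] = float(np.mean(spec))
    return metrics
```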
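Finally, a generic hand-rolled Grad-CAM sketch for the interpretability module described under "Model Examination": forward and backward hooks capture the activations and gradients of the last convolutional block of the Stage 2 backbone and produce one heatmap per image. This is a standard Grad-CAM implementation for illustration, not the card's released code; the target layer and class index are assumptions.

```python
# Sketch of Grad-CAM over the Stage 2 backbone: hook the last conv block,
# backpropagate one disease logit, and weight feature maps by their pooled
# gradients to get a heatmap aligned with the input image.
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_b2

model = efficientnet_b2(weights=None, num_classes=14).eval()
target_layer = model.features[-1]    # assumed target: last conv block of the backbone

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

def grad_cam(x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a (N, H, W) heatmap in [0, 1] for one disease logit."""
    logits = model(x)
    model.zero_grad()
    logits[:, class_idx].sum().backward()                        # gradient of the chosen logit
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled grads
    cam = F.relu((weights * activations["value"]).sum(dim=1))    # weighted sum of feature maps
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze(1)
    cam = cam - cam.amin(dim=(1, 2), keepdim=True)               # normalize per image to [0, 1]
    return cam / cam.amax(dim=(1, 2), keepdim=True).clamp(min=1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
print(heatmap.shape)  # torch.Size([1, 224, 224])
```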