Upload README.md with huggingface_hub
> toolkit for Green SOTA image classification research.

## Why EDEN?

As deep learning models scale exponentially, the carbon footprint of training has reached unsustainable levels. Project EDEN introduces the **EAG (Energy-to-Accuracy Gradient)** as the primary KPI — shifting the paradigm from chasing raw accuracy to optimising *Green SOTA*.

## Profiling Environment

| Component | Specification |
|-----------|---------------|
| **RAM** | 63.66 GB System RAM |
| **OS** | Windows 10 |
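
For reproducibility, the table's fields can also be captured programmatically; a short sketch using the standard `platform` module plus the third-party `psutil` package (an assumption on my part, not a dependency listed by this repo):

```python
import platform
import psutil  # third-party: pip install psutil

# Report total system RAM and OS in the same format as the table above.
ram_gb = psutil.virtual_memory().total / 1024**3
print(f"| **RAM** | {ram_gb:.2f} GB System RAM |")
print(f"| **OS** | {platform.system()} {platform.release()} |")
```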

---

## 📊 Collection Overview

### Energy vs Accuracy — All Models
*SOTA Optimized (green) · Baseline (grey) · EDEN Classic (blue)*

![Energy vs Accuracy](assets/energy_vs_accuracy.png)

### EAG Leaderboard — Ranked by Green Efficiency
![EAG Leaderboard](assets/eag_leaderboard.png)

### CO₂ Emissions — Baseline vs EDEN Classic
![CO2 Comparison](assets/co2_comparison.png)

---

## The E2AM Algorithm

### Phase 1 — Zero-Overhead Initialization

The dataset is pre-loaded into **pinned System RAM** before training — this eliminates the disk I/O power spikes that would otherwise inflate energy readings and distort EAG comparisons between architectures.
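
A minimal sketch of this pre-loading step, assuming a stock torchvision CIFAR-100 pipeline (the paths and batch size here are illustrative, not taken from the EDEN scripts):

```python
import torch
from torchvision import datasets, transforms

# Materialise the whole of CIFAR-100 as tensors in page-locked (pinned)
# system RAM. After this point the training loop never touches the disk,
# so the energy meter sees compute only, not I/O spikes.
train_set = datasets.CIFAR100(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
images = torch.stack([img for img, _ in train_set]).pin_memory()
labels = torch.tensor(train_set.targets).pin_memory()

# Pinned memory also enables asynchronous host-to-GPU copies.
if torch.cuda.is_available():
    batch = images[:256].to("cuda", non_blocking=True)
```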

### Phase 2 — Two-Stage Energy-Aware Training

1. **Frozen Head Training** — Only the classification head trains for `E_unfreeze` epochs *(steps 1–5 are sketched in code after this list)*.
2. **Progressive Unfreezing** — All layers unlock at `E_unfreeze`; the learning rate is decayed (`×0.1`).
3. **Gradient Accumulation** — Gradients are accumulated over N micro-batches, simulating large batch sizes without VRAM spikes.
4. **AMP (Automatic Mixed Precision)** — `torch.cuda.amp.autocast()` halves bandwidth per backward pass.
5. **Sparse L1 Penalty** — `L_total = CrossEntropy + λ·Σ|W_trainable|`
6. **EAG Early-Exit** — Training terminates if `EAG < γ_EAG` for 3 consecutive epochs, preventing wasted compute.
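
A minimal sketch of steps 1–5, assuming a stand-in backbone and illustrative hyper-parameter values (`E_UNFREEZE`, `ACCUM_STEPS` and `LAMBDA_L1` are placeholders; the real values live in the training scripts). The early-exit test of step 6 is sketched under the EAG section below.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

class TinyNet(nn.Module):
    """Stand-in backbone + head; EDEN uses EfficientNet/MobileViTv3/ConvNeXt."""
    def __init__(self, n_classes=100):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyNet().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
E_UNFREEZE, ACCUM_STEPS, LAMBDA_L1 = 5, 4, 1e-5  # illustrative values

# Dummy batches stand in for the pinned CIFAR-100 tensors of Phase 1.
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(256, 3, 32, 32),
                                   torch.randint(0, 100, (256,))),
    batch_size=32)

# Step 1: freeze everything except the classification head.
for p in model.backbone.parameters():
    p.requires_grad = False

for epoch in range(10):
    # Step 2: unlock all layers at E_unfreeze and decay the LR by 10x.
    if epoch == E_UNFREEZE:
        for p in model.backbone.parameters():
            p.requires_grad = True
        for group in optimizer.param_groups:
            group["lr"] *= 0.1

    for step, (x, y) in enumerate(loader):
        # Step 4: mixed-precision forward pass (no-op on CPU).
        with torch.cuda.amp.autocast(enabled=(device == "cuda")):
            loss = criterion(model(x.to(device)), y.to(device))
        # Step 5: sparse L1 penalty over the trainable weights only.
        loss = loss + LAMBDA_L1 * sum(p.abs().sum()
                                      for p in model.parameters()
                                      if p.requires_grad)
        # Step 3: accumulate gradients over ACCUM_STEPS micro-batches.
        scaler.scale(loss / ACCUM_STEPS).backward()
        if (step + 1) % ACCUM_STEPS == 0:
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad()
```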

### Phase 3 — Hardware-Aware Deployment *(Post-Training)*

- **Saliency-Energy Pruning** — Low-saliency weights are pruned.
- **INT8 Quantization** — Weights converted for edge-deployment readiness.
- **Dynamic Depth Routing** — Simple images bypass the middle 50 % of layers via residual skip connections, slashing inference energy.
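
Of these three, INT8 quantization is the simplest to illustrate with stock PyTorch. A minimal post-training sketch using dynamic quantization, which is one readily available route to INT8 weights rather than necessarily the routine used by the EDEN scripts:

```python
import torch
import torch.nn as nn

# A trained classifier (stand-in for a full EDEN model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512),
                      nn.ReLU(), nn.Linear(512, 100))

# Convert Linear weights to INT8; activations are quantized on the
# fly at inference time. Runs on CPU, as expected for edge targets.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

logits = quantized(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 100])
```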

## EAG — The Expert KPI

```
EAG = ΔAccuracy / ΔJoules
```

A higher EAG = more learning per unit of carbon footprint.
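
The KPI and the Phase 2 early-exit rule (step 6) fit in a few lines; the function names and the example numbers below are illustrative:

```python
def eag(acc_prev, acc_curr, joules_prev, joules_curr):
    """Energy-to-Accuracy Gradient between two epochs: dAccuracy / dJoules."""
    d_joules = joules_curr - joules_prev
    return (acc_curr - acc_prev) / d_joules if d_joules > 0 else 0.0

def should_stop(eag_history, gamma_eag, patience=3):
    """EAG early-exit: True once EAG < gamma_eag for `patience` straight epochs."""
    return len(eag_history) >= patience and all(
        e < gamma_eag for e in eag_history[-patience:])

# Accuracy gains flatten while energy keeps accruing -> stop.
history = [0.0020, 0.0004, 0.0003, 0.0002]
print(should_stop(history, gamma_eag=0.0005))  # True
```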

## Scripts in This Repository

- `eden_chart_push.py`
- `eden_check_hf.py`
- `eden_fix_missing_repos.py`
- `eden_hf_upload.py`
- `eden_upload_fast.py`
- `eden_upload_weights.py`
- `test1\Algo_CIFAR_100_EfficientNet.py`
- `test1\Algo_CIFAR_100_MobileViTv3.py`
- `test1\Algo_CIFAR_100_convneXt.py`

## Citation

```
title = {Project EDEN: Energy-Driven Evolution of Networks},
author = {EDEN Research Team},
year = {2025},
note = {Hugging Face: Shanmuk4622},
url = {https://huggingface.co/Shanmuk4622}
}
```