Abdelrahman Almatrooshi committed on
Commit
ab8ff76
·
1 Parent(s): 9ab271b

chore: Dockerfile, docs, deps, and comment tweaks


- Update Dockerfile, README, requirements, .gitignore
- Small comment/docstring adjustments in main, justify_thresholds, xgboost scripts

.gitignore CHANGED
@@ -42,7 +42,10 @@ htmlcov/
 # Project specific
 focus_guard.db
 test_focus_guard.db
+<<<<<<< HEAD
+=======
 static/
+>>>>>>> feature/integration2.0
 __pycache__/
 docs/
 docs
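The hunk above commits unresolved merge-conflict markers (`<<<<<<< HEAD` … `>>>>>>> feature/integration2.0`) into `.gitignore`, and the same markers recur in the Dockerfile, README, and requirements diffs below. As a minimal stand-alone sketch (not part of this repo) of how to catch leftover markers before committing:

```python
import re
from pathlib import Path

# A "=======" alone is common in normal text, so require the marker to be
# the whole line; "<<<<<<<" and ">>>>>>>" must be followed by a label.
_MARKER = re.compile(r"^(<{7} |>{7} |={7}$|\|{7} )")

def find_conflict_markers(text: str) -> list[int]:
    """Return 1-based line numbers that look like leftover conflict markers."""
    return [i for i, line in enumerate(text.splitlines(), start=1)
            if _MARKER.match(line)]

def scan_tree(root: str) -> dict[str, list[int]]:
    """Scan every file under `root`; skip binaries that fail to decode."""
    hits = {}
    for p in Path(root).rglob("*"):
        if p.is_file():
            try:
                lines = find_conflict_markers(p.read_text(encoding="utf-8"))
            except (UnicodeDecodeError, OSError):
                continue
            if lines:
                hits[str(p)] = lines
    return hits
```

Running a check like this (or `git diff --check`) in CI would have flagged every file in this commit.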
Dockerfile CHANGED
@@ -2,12 +2,20 @@ FROM python:3.10-slim
 
 RUN useradd -m -u 1000 user
 ENV HOME=/home/user PATH=/home/user/.local/bin:$PATH
+<<<<<<< HEAD
+=======
+
+>>>>>>> feature/integration2.0
 ENV PYTHONUNBUFFERED=1
 
 WORKDIR /app
 
 RUN apt-get update && apt-get install -y --no-install-recommends \
+<<<<<<< HEAD
     libglib2.0-0 libsm6 libxrender1 libxext6 libxcb1 libgl1 libgles2 libegl1 libgomp1 \
+=======
+    libglib2.0-0 libsm6 libxrender1 libxext6 libxcb1 libgl1 libgomp1 \
+>>>>>>> feature/integration2.0
     ffmpeg libavcodec-dev libavformat-dev libavutil-dev libswscale-dev \
     libavdevice-dev libopus-dev libvpx-dev libsrtp2-dev \
     build-essential nodejs npm git \
@@ -16,7 +24,9 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
 RUN pip install --no-cache-dir torch torchvision --index-url https://download.pytorch.org/whl/cpu
 
 COPY requirements.txt ./
-RUN pip install --no-cache-dir -r requirements.txt
+<<<<<<< HEAD
+RUN pip install --no-cache-dir torch torchvision --index-url https://download.pytorch.org/whl/cpu \
+ && pip install --no-cache-dir -r requirements.txt
 
 COPY . .
 
@@ -26,10 +36,24 @@ RUN npm install && npm run build && mkdir -p /app/static && cp -R dist/* /app/static/
 ENV FOCUSGUARD_CACHE_DIR=/app/.cache/focusguard
 RUN python -c "from models.face_mesh import _ensure_model; _ensure_model()"
 RUN python download_l2cs_weights.py || echo "[WARN] L2CS weights not downloaded — will run without gaze model"
+=======
+RUN pip install --no-cache-dir -r requirements.txt
+
+COPY . .
+
+RUN npm install && npm run build && mkdir -p /app/static && cp -R dist/* /app/static/
+
+ENV FOCUSGUARD_CACHE_DIR=/app/.cache/focusguard
+RUN python -c "from models.face_mesh import _ensure_model; _ensure_model()"
+>>>>>>> feature/integration2.0
 
 RUN mkdir -p /app/data && chown -R user:user /app
 
 USER user
 EXPOSE 7860
 
+<<<<<<< HEAD
 CMD ["bash", "start.sh"]
+=======
+CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860", "--log-level", "info"]
+>>>>>>> feature/integration2.0
README.md CHANGED
@@ -1,15 +1,21 @@
+<<<<<<< HEAD
 ---
 title: FocusGuard
 sdk: docker
 app_port: 7860
 ---
 
+=======
+>>>>>>> feature/integration2.0
 # FocusGuard
 
 Webcam-based focus detection: MediaPipe face mesh -> 17 features (EAR, gaze, head pose, PERCLOS, etc.) -> MLP or XGBoost for focused/unfocused. React + FastAPI app with WebSocket video.
 
+<<<<<<< HEAD
+=======
 **Repository:** Add your repo link here (e.g. `https://github.com/your-org/FocusGuard`).
 
+>>>>>>> feature/integration2.0
 ## Project layout
 
 ```
@@ -35,10 +41,13 @@ Webcam-based focus detection: MediaPipe face mesh -> 17 features (EAR, gaze, hea
 └── package.json
 ```
 
+<<<<<<< HEAD
+=======
 ## Config
 
 Hyperparameters and app settings live in `config/default.yaml` (learning rates, batch size, thresholds, L2CS weights, etc.). Override with env `FOCUSGUARD_CONFIG` pointing to another YAML.
 
+>>>>>>> feature/integration2.0
 ## Setup
 
 ```bash
@@ -86,6 +95,8 @@ python -m models.mlp.train
 python -m models.xgboost.train
 ```
 
+<<<<<<< HEAD
+=======
 ### ClearML experiment tracking
 
 All training and evaluation config (from `config/default.yaml`) is exposed as ClearML task parameters. Enable logging with `USE_CLEARML=1`; optionally run on a **remote GPU agent** instead of locally:
@@ -104,12 +115,16 @@ clearml-agent daemon --queue gpu
 
 Logged to ClearML: **parameters** (full flattened config), **scalars** (loss, accuracy, F1, ROC-AUC, per-class precision/recall/F1, dataset sizes and class counts), **artifacts** (best checkpoint, training log JSON), and **plots** (confusion matrix, ROC curves in evaluation).
 
+>>>>>>> feature/integration2.0
 ## Data
 
 9 participants, 144,793 samples, 10 features, binary labels. Collect with `python -m models.collect_features --name <name>`. Data lives in `data/collected_<name>/`.
 
+<<<<<<< HEAD
+=======
 **Train/val/test split:** All pooled training and evaluation use the same split for reproducibility. The test set is held out before any preprocessing; `StandardScaler` is fit on the training set only, then applied to val and test. Split ratios and random seed come from `config/default.yaml` (`data.split_ratios`, `mlp.seed`) via `data_preparation.prepare_dataset.get_default_split_config()`. MLP train, XGBoost train, eval_accuracy scripts, and benchmarks all use this single source so reported test accuracy is on the same held-out set.
 
+>>>>>>> feature/integration2.0
 ## Models
 
 | Model | What it uses | Best for |
@@ -127,6 +142,8 @@ Logged to ClearML: **parameters** (full flattened config), **scalars** (loss, ac
 | XGBoost (600 trees, depth 8) | 95.87% | 0.959 | 0.991 |
 | MLP (64->32) | 92.92% | 0.929 | 0.971 |
 
+<<<<<<< HEAD
+=======
 ## Model numbers (LOPO, 9 participants)
 
 | Model | LOPO AUC | Best threshold (Youden's J) | F1 @ best threshold | F1 @ 0.50 |
@@ -163,6 +180,7 @@ Latest quick feature-selection run (`python -m evaluation.feature_importance --q
 Top-5 XGBoost gain features: `s_face`, `ear_right`, `head_deviation`, `ear_avg`, `perclos`.
 For full leave-one-feature-out ablation, run `python -m evaluation.feature_importance` (slower).
 
+>>>>>>> feature/integration2.0
 ## L2CS Gaze Tracking
 
 L2CS-Net predicts where your eyes are looking, not just where your head is pointed. This catches the scenario where your head faces the screen but your eyes wander.
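The README's **Train/val/test split** paragraph describes a specific hygiene rule: carve out the test set before any preprocessing, then fit the scaler on the training rows only. A minimal pure-Python sketch of that principle (the repo itself uses scikit-learn's `StandardScaler` and `get_default_split_config()`; the names below are illustrative only):

```python
import random

def split_indices(n, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle indices once with a fixed seed, then cut into train/val/test.
    The test slice is fixed before any preprocessing happens."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def fit_scaler(rows):
    """Per-feature mean/std computed from the *training* rows only."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    var = [sum((r[j] - mean[j]) ** 2 for r in rows) / n for j in range(d)]
    std = [v ** 0.5 or 1.0 for v in var]  # guard constant features
    return mean, std

def transform(rows, mean, std):
    """Apply the train-fit statistics to any split (train, val, or test)."""
    return [[(r[j] - mean[j]) / std[j] for j in range(len(r))] for r in rows]
```

Fitting the scaler on pooled data would leak test-set statistics into training, which is exactly what the single shared split config is meant to prevent.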
evaluation/justify_thresholds.py CHANGED
@@ -80,7 +80,6 @@ def _plot_roc(fpr, tpr, auc, opt_thresh, opt_idx, title, path, clearml_title=Non
     ax.legend(loc="lower right")
     fig.tight_layout()
 
-    # Log to ClearML before closing the figure
     if _logger and clearml_title:
         _logger.report_matplotlib_figure(
             title=clearml_title, series="ROC", figure=fig, iteration=0
@@ -158,7 +157,6 @@ def analyse_model_thresholds(results):
         print(f" {label}: AUC={auc:.4f}, optimal threshold={opt_t:.3f} "
               f"(F1={f1_opt:.4f}), F1@0.50={f1_50:.4f}")
 
-        # Log scalars to ClearML
         if _logger:
             _logger.report_single_value(f"{label} Optimal Threshold", opt_t)
             _logger.report_single_value(f"{label} AUC", auc)
@@ -212,7 +210,6 @@ def run_geo_weight_search():
             ha="center", va="bottom", fontsize=8)
     fig.tight_layout()
 
-    # Log to ClearML before closing
    if _logger:
        _logger.report_matplotlib_figure(
            title="Geo Weight Search", series="F1 vs Alpha", figure=fig, iteration=0
@@ -226,7 +223,6 @@ def run_geo_weight_search():
     print(f" Best alpha (face weight) = {best_alpha:.1f}, "
           f"mean LOPO F1 = {mean_f1[best_alpha]:.4f}")
 
-    # Log scalars to ClearML
     if _logger:
         _logger.report_single_value("Geo Best Alpha", best_alpha)
         for i, a in enumerate(sorted(alphas)):
@@ -302,7 +298,6 @@ def run_hybrid_weight_search(lopo_results):
             ha="center", va="bottom", fontsize=8)
     fig.tight_layout()
 
-    # Log to ClearML before closing
     if _logger:
         _logger.report_matplotlib_figure(
             title="Hybrid Weight Search", series="F1 vs w_mlp", figure=fig, iteration=0
@@ -315,7 +310,6 @@ def run_hybrid_weight_search(lopo_results):
 
     print(f" Best w_mlp = {best_w:.1f}, mean LOPO F1 = {mean_f1[best_w]:.4f}")
 
-    # Log scalars to ClearML
     if _logger:
         _logger.report_single_value("Hybrid Best w_mlp", best_w)
         for i, w in enumerate(sorted(w_mlps)):
@@ -366,7 +360,6 @@ def plot_distributions():
     ax.legend(fontsize=8)
     fig_ear.tight_layout()
 
-    # Log to ClearML before closing
     if _logger:
         _logger.report_matplotlib_figure(
             title="EAR Distribution", series="by class", figure=fig_ear, iteration=0
@@ -388,7 +381,6 @@ def plot_distributions():
     ax.legend(fontsize=8)
     fig_mar.tight_layout()
 
-    # Log to ClearML before closing
     if _logger:
         _logger.report_matplotlib_figure(
             title="MAR Distribution", series="by class", figure=fig_mar, iteration=0
@@ -550,7 +542,6 @@ def main():
 
     write_report(model_stats, geo_f1, best_alpha, hybrid_f1, best_w, dist_stats)
 
-    # Close ClearML task
     if _task:
         from config.clearml_enrich import task_done_summary
 
main.py CHANGED
@@ -110,7 +110,6 @@ async def lifespan(app):
 
 app = FastAPI(title="Focus Guard API", lifespan=lifespan)
 
-# Add CORS middleware
 app.add_middleware(
     CORSMiddleware,
     allow_origins=["*"],
@@ -119,7 +118,6 @@ app.add_middleware(
     allow_headers=["*"],
 )
 
-# Global variables
 pcs = set()
 _cached_model_name = get("app.default_model") or "mlp"
 _l2cs_boost_enabled = False
@@ -198,7 +196,6 @@ class VideoTransformTrack(VideoStreamTrack):
             "model": model_name,
         }
 
-        # Draw face mesh + HUD on the video frame
         h_f, w_f = img.shape[:2]
         lm = out.get("landmarks")
         eye_gaze_enabled = _l2cs_boost_enabled or model_name == "l2cs"
@@ -302,7 +299,6 @@ _BOOST_VETO = 0.38  # L2CS below this -> forced not-focused
 
 
 def _process_frame_with_l2cs_boost(base_pipeline, frame, base_model_name):
-    # run base model
     with _pipeline_locks[base_model_name]:
         base_out = base_pipeline.process_frame(frame)
 
@@ -311,7 +307,6 @@ def _process_frame_with_l2cs_boost(base_pipeline, frame, base_model_name):
         base_out["boost_active"] = False
         return base_out
 
-    # run L2CS
     with _pipeline_locks["l2cs"]:
         l2cs_out = l2cs_pipe.process_frame(frame)
 
@@ -804,7 +799,7 @@ async def import_sessions(sessions: List[dict]):
 async def clear_history():
     try:
         async with aiosqlite.connect(db_path) as db:
-            # Delete events first (foreign key good practice)
+            # events reference sessions via FK
             await db.execute("DELETE FROM focus_events")
             await db.execute("DELETE FROM focus_sessions")
             await db.commit()
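The last hunk's comment change points at a real constraint: `focus_events` rows reference `focus_sessions`, so with foreign-key enforcement on, children must be deleted before parents (absent `ON DELETE CASCADE`). A synchronous `sqlite3` stand-in for the async `aiosqlite` handler, with a hypothetical mini-schema (the real column names are not shown in this diff):

```python
import sqlite3

def clear_history(conn: sqlite3.Connection) -> None:
    # Same delete order as the endpoint: child table first, then parent.
    conn.execute("DELETE FROM focus_events")
    conn.execute("DELETE FROM focus_sessions")
    conn.commit()

def demo() -> int:
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite has FKs off by default
    conn.execute("CREATE TABLE focus_sessions (id INTEGER PRIMARY KEY)")
    conn.execute("CREATE TABLE focus_events ("
                 "id INTEGER PRIMARY KEY, "
                 "session_id INTEGER REFERENCES focus_sessions(id))")
    conn.execute("INSERT INTO focus_sessions VALUES (1)")
    conn.execute("INSERT INTO focus_events VALUES (1, 1)")
    clear_history(conn)
    remaining = conn.execute("SELECT COUNT(*) FROM focus_sessions").fetchone()[0]
    conn.close()
    return remaining
```

Reversing the two `DELETE` statements with `PRAGMA foreign_keys = ON` would raise a FOREIGN KEY constraint error while the child row still exists.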
models/xgboost/add_accuracy.py CHANGED
@@ -16,7 +16,6 @@ X_val, y_val = splits["X_val"], splits["y_val"]
 csv_path = 'models/xgboost/sweep_results_all_40.csv'
 df = pd.read_csv(csv_path)
 
-# We will calculate accuracy for each row
 accuracies = []
 
 print(f"Re-evaluating {len(df)} configurations for accuracy. This will take a few minutes...")
@@ -34,7 +33,6 @@ for idx, row in df.iterrows():
         "eval_metric": "logloss"
     }
 
-    # Train the exact same model quickly
     model = XGBClassifier(**params)
     model.fit(X_train, y_train)
 
@@ -46,12 +44,10 @@ for idx, row in df.iterrows():
     if (idx + 1) % 5 == 0:
         print(f"Processed {idx + 1}/{len(df)} trials...")
 
-# Add accuracy column and save back to CSV
 df.insert(2, 'val_accuracy', accuracies)
 df.to_csv(csv_path, index=False)
 
 print(f"\nDone! Updated {csv_path} with 'val_accuracy'.")
-# Display the top 5 by accuracy now just to see
 top5_acc = df.nlargest(5, 'val_accuracy')[['task_id', 'val_accuracy', 'val_f1', 'val_loss']]
 print("\nTop 5 Trials by Accuracy:")
 print(top5_acc.to_string(index=False))
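`add_accuracy.py` re-trains each sweep configuration and splices a new `val_accuracy` column into the results CSV via `df.insert(2, ...)`. The same column-splice can be sketched without pandas, using only the standard library (`insert_column` is a hypothetical helper, not project code):

```python
import csv
import io

def insert_column(csv_text: str, position: int, name: str, values) -> str:
    """Insert a computed column at `position` in a CSV's header and rows."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    if len(values) != len(body):
        raise ValueError("one value per data row required")
    header.insert(position, name)
    for row, v in zip(body, values):
        row.insert(position, str(v))
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows([header] + body)
    return out.getvalue()
```

As in the script, inserting at a fixed position keeps the new metric next to the existing `val_f1` column instead of appending it at the end.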
models/xgboost/sweep_local.py CHANGED
@@ -25,6 +25,7 @@ DATA_SPLITS, SEED = get_default_split_config()
 
 # ── Search Space ──────────────────────────────────────────────────────────────
 def objective(trial):
+    # 1. Sample hyperparameters
     # 1. Sample hyperparameters
     params = {
         "n_estimators": trial.suggest_categorical("n_estimators", [100, 200, 400, 600]),
requirements.txt CHANGED
@@ -16,11 +16,17 @@ httpx>=0.27.0
 aiosqlite>=0.19.0
 psutil>=5.9.0
 pydantic>=2.0.0
+<<<<<<< HEAD
+=======
 PyYAML>=6.0
+>>>>>>> feature/integration2.0
 xgboost>=2.0.0
 clearml>=2.0.2
 pytest>=9.0.0
 pytest-cov>=5.0.0
 face_detection @ git+https://github.com/elliottzheng/face-detection
 gdown>=5.0.0
+<<<<<<< HEAD
 huggingface_hub
+=======
+>>>>>>> feature/integration2.0