tianhaowang committed · Commit 6719953 · 1 Parent(s): 4384295
Development/Plan/ui-self-explainability-plan-2025-09-29.md ADDED
@@ -0,0 +1,91 @@
# Implementation Plan: UI Self-Explainability Enhancements
Date: 2025-09-29
Author: Codex (AI Assistant)

## Objective
Reorganize the experiment configuration UI into clearly labeled specification blocks and introduce speech recognition task support, ensuring every control has explicit guidance and task-aware options for datasets, base models, metrics, and scaling targets.

## Background & Research
- The current `gr.Blocks` layout in `app.py` renders all controls in a single column with limited labeling, making the flow ambiguous to new users.
- Task-aware behavior exists only for metrics and candidate datasets; the classes and mappings live in `app.py` and load from `catalog/candidates.json`.
- Speech recognition is not represented yet. Adding it requires augmenting the catalog and defining base model and benchmark presets directly in the UI layer.
- Gradio supports semantic grouping via `gr.Group`, Markdown headings, and inline helper text, which can deliver the requested “block” presentation without architectural changes (see the sketch below).
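For example, one labeled block is simply a `gr.Group` that wraps a Markdown heading, an inline instruction, and the related controls. A minimal sketch (the component labels follow the spec in this plan; the surrounding `gr.Blocks` scaffolding is illustrative only):

```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Group():
        # Heading plus helper text make the block self-explanatory.
        gr.Markdown("### Training task specifications")
        gr.Markdown("If you have any existing training data, please upload")
        d0_files = gr.File(label="Upload D₀ (.csv/.jsonl/.zip)", file_count="multiple")
        d0_id = gr.Textbox(label="Hub dataset id (user/dataset)")
```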

## Technical Approach
### Architecture Overview
- Extend task metadata dictionaries in `app.py` to cover speech recognition metrics, base models, and benchmark datasets.
- Load speech recognition candidate datasets via the existing catalog mechanism by appending new entries to `catalog/candidates.json`.
- Restructure `build_interface()` to group related inputs under three labeled sections using `gr.Group` (or nested `gr.Column`) and helper `gr.Markdown` text.
- Enhance the `on_task_change()` callback or introduce a new orchestrator to update metrics, candidate datasets, base models, benchmark choices, and the scaling label simultaneously when the task changes.
- Adjust submission wiring to pass through new benchmark selections without introducing silent defaults.

### Step-by-Step Implementation
1. **Catalog Update**: Append the two speech recognition training datasets to `catalog/candidates.json`, ensuring each entry includes `task: "speech_recognition"` and minimal column metadata where applicable.
2. **Task Metadata Maps**: In `app.py`, define new constants for
   - `TASK_MODEL_CHOICES`
   - `TASK_BENCHMARK_CHOICES`
   - Update `TASK_METRIC_CHOICES` / `TASK_METRIC_DEFAULT` to include speech recognition with `"loss"` and `"Word Error Rate (WER)"` (determine the default explicitly).
3. **UI Block Layout**: Within `build_interface()`, wrap training, evaluation, and scaling controls in dedicated groups:
   - Add Markdown headings (e.g., `gr.Markdown("### Training task specifications")`).
   - Place the instruction sentences (training/test uploads) right above their respective upload widgets.
   - Rename component labels per the new spec (e.g., `label="Task type"`, `label="Available external datasets for you to choose"`).
4. **New Benchmark Selector**: Add a `gr.CheckboxGroup` (or `gr.Dropdown` if a single choice is desired) for public benchmarks under the evaluation block, defaulting to empty. Ensure its choices update with the task.
5. **Dynamic Task Handling**: Expand `on_task_change()` (or replace it with a new handler) to update:
   - Metric choices + defaults
   - Candidate dataset choices
   - Base model dropdown options
   - Benchmark selector options
   - Scaling number label (`gr.update(label=...)`) to append “(hours)” for speech recognition.
6. **Submission Flow Adjustments**: Modify callback wiring so benchmark selections feed into `submit_with_feedback()` / `submit_experiments()` (see the validation sketch after this list):
   - Ensure mutually exclusive handling between manual test upload/id and benchmark pick (e.g., raise if both are provided to avoid a hidden fallback).
   - When a benchmark dataset is chosen, pass its identifier as the test dataset source.
7. **Validation Hooks**: Update or add unit coverage under `tests/` (likely `tests/test_app.py` or a new module) to exercise `metrics_for_task`, the base model mapping, and the new task-change logic, focusing on the speech recognition branch.
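The mutual-exclusion rule in step 6 could be centralized in a small helper along the lines of the sketch below (the helper name `_resolve_test_source`, its placement, and the error wording are illustrative, not existing code):

```python
from typing import Any, List, Optional, Tuple


def _resolve_test_source(
    test_files: Optional[List[Any]],
    test_id: str,
    public_benchmarks: Optional[List[str]],
) -> Tuple[Optional[List[Any]], str]:
    """Pick exactly one test-set source; never fall back silently."""
    manual = bool(test_files) or bool(test_id.strip())
    benchmarks = list(public_benchmarks or [])
    if manual and benchmarks:
        raise ValueError(
            "Provide either a manual test set (upload or dataset id) or a "
            "public benchmark, not both."
        )
    if benchmarks:
        # The benchmark identifier becomes the test dataset source.
        # (How to handle multiple selections is left open by this plan.)
        return None, benchmarks[0]
    return test_files, test_id
```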

### Sample Code
```python
# app.py
TASK_MODEL_CHOICES = {
    "classification": [DEFAULT_MODEL],
    "qa": [DEFAULT_MODEL],
    "pretraining": [DEFAULT_MODEL],
    "speech_recognition": [
        "anton-l/emformer-base-librispeech",
        "train from scratch",
    ],
}

TASK_BENCHMARK_CHOICES = {
    "speech_recognition": [
        "sanchit-gandhi/tedlium-data.test",
        "openslr/librispeech_asr.test.clean",
    ],
    # other tasks populate if/when needed
}

def on_task_change(selected_task: str):
    metric_choices, metric_defaults = metrics_for_task(selected_task)
    return (
        gr.update(choices=metric_choices, value=metric_defaults),
        gr.update(choices=candidate_choices_for_task(selected_task), value=[]),
        gr.update(choices=TASK_MODEL_CHOICES[selected_task], value=None),
        gr.update(choices=TASK_BENCHMARK_CHOICES.get(selected_task, []), value=[]),
        gr.update(label=_target_label_for_task(selected_task)),
    )
```
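For reference, the handler above would be wired to all five affected components when the task selection changes; `public_benchmarks` is the new selector from step 4, while the other variable names already exist in `build_interface()`:

```python
task.change(
    fn=on_task_change,
    inputs=task,
    outputs=[metrics, dk, model, public_benchmarks, target_size],
)
```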

## Dependencies
- `gradio` for UI layout modifications.
- Existing `utils` helpers for candidate dataset loading and submission validation.
- Hugging Face Hub access for dataset identifiers (no new runtime dependencies).

## Risk Assessment
- **UI Regression**: Reworking the layout may inadvertently detach components from callbacks; thorough manual verification is required.
- **State Synchronization**: Updating multiple components on task change increases the chance of inconsistent state if any mapping is missing; mitigate by validating that the dictionaries cover all tasks during initialization (see the sketch below).
- **Benchmark/Test Conflicts**: Introducing public benchmark selection alongside manual uploads could create ambiguous submission behavior; enforce validation to avoid silent precedence.
- **Future Task Expansion**: Hard-coded mappings will need maintenance; consider extracting them to structured config if the task set grows.
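The initialization-time check mentioned under State Synchronization could be as small as the sketch below (the helper name is illustrative; `TASK_BENCHMARK_CHOICES` may stay sparse because it is read with `.get(..., [])`):

```python
def _validate_task_mappings() -> None:
    """Fail fast at import time if any task is missing from a metadata map."""
    tasks = set(TASK_METRIC_CHOICES)
    for name, mapping in {
        "TASK_METRIC_DEFAULT": TASK_METRIC_DEFAULT,
        "TASK_MODEL_CHOICES": TASK_MODEL_CHOICES,
    }.items():
        missing = tasks - set(mapping)
        if missing:
            raise RuntimeError(f"{name} is missing tasks: {sorted(missing)}")


_validate_task_mappings()
```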

## Success Criteria
- The rendered UI shows three clearly labeled sections with explanatory text for uploads.
- Selecting “speech recognition” updates the task type options, available external datasets, base models, metrics, benchmark datasets, and the scaling label (with “(hours)”).
- Submission logic honors speech recognition selections without relying on fallback defaults, raising explicit errors for conflicting inputs.
- Automated tests cover the new task metadata paths and pass successfully (see the sketch below).
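A starting point for the coverage in step 7 might look like the sketch below, assuming `app` imports cleanly in the test environment (the module path `tests/test_app.py` and the test names are placeholders, and whether the handler receives the task value or its display label depends on the final wiring):

```python
# tests/test_app.py (sketch)
import app


def test_speech_recognition_metrics():
    choices, defaults = app.metrics_for_task("speech_recognition")
    assert "Word Error Rate (WER)" in choices
    # The plan leaves the explicit default to be decided; it must be a valid choice.
    assert set(defaults) <= set(choices)


def test_task_change_returns_updates_for_all_components():
    # Metrics, candidate datasets, base models, benchmarks, scaling label.
    updates = app.on_task_change("speech_recognition")
    assert len(updates) == 5
```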
app.py CHANGED
@@ -82,18 +82,83 @@ def environment_diagnostics() -> Tuple[Dict[str, Any], Dict[str, Any]]:
  DEFAULT_MODEL = "meta-llama/Llama-3.1-8B-Instruct"
  DEFAULT_SIZES = [5000, 10000, 20000]

+ TASK_OPTIONS: List[Tuple[str, str]] = [
+     ("classification", "classification"),
+     ("qa", "qa"),
+     ("pretraining", "language model pretraining"),
+     ("speech_recognition", "speech recognition"),
+ ]
+
+ TASK_LABEL_TO_VALUE: Dict[str, str] = {label: value for value, label in TASK_OPTIONS}
+ TASK_VALUE_TO_LABEL: Dict[str, str] = {value: label for value, label in TASK_OPTIONS}
+
  TASK_METRIC_CHOICES: Dict[str, List[str]] = {
      "classification": ["loss", "f1", "exact_match"],
      "qa": ["loss", "f1", "exact_match"],
      "pretraining": ["loss", "perplexity"],
+     "speech_recognition": ["loss", "Word Error Rate (WER)"],
  }

  TASK_METRIC_DEFAULT: Dict[str, List[str]] = {
      "classification": ["f1"],
      "qa": ["f1"],
      "pretraining": ["perplexity"],
+     "speech_recognition": ["Word Error Rate (WER)"],
  }

+ TASK_MODEL_CHOICES: Dict[str, List[str]] = {
+     "classification": [DEFAULT_MODEL],
+     "qa": [DEFAULT_MODEL],
+     "pretraining": [DEFAULT_MODEL],
+     "speech_recognition": [
+         "anton-l/emformer-base-librispeech",
+         "train from scratch",
+     ],
+ }
+
+ TASK_BENCHMARK_CHOICES: Dict[str, List[str]] = {
+     "speech_recognition": [
+         "sanchit-gandhi/tedlium-data.test",
+         "openslr/librispeech_asr.test.clean",
+     ]
+ }
+
+
+ def _task_value_from_label(label: str) -> str:
+     try:
+         return TASK_LABEL_TO_VALUE[label]
+     except KeyError as exc:
+         raise ValueError(f"Unsupported task label '{label}'.") from exc
+
+
+ def _task_label_from_value(value: str) -> str:
+     try:
+         return TASK_VALUE_TO_LABEL[value]
+     except KeyError as exc:
+         raise ValueError(f"Unsupported task '{value}'.") from exc
+
+
+ def _normalize_task_value(task: str) -> str:
+     if task in TASK_VALUE_TO_LABEL:
+         return task
+     return _task_value_from_label(task)
+
+
+ def _model_choices_for_task(task: str) -> List[str]:
+     try:
+         choices = TASK_MODEL_CHOICES[task]
+     except KeyError as exc:
+         raise ValueError(f"Unsupported task '{task}'.") from exc
+     if not choices:
+         raise ValueError(f"No base models configured for task '{task}'.")
+     return choices
+
+
+ def _target_label_for_task(task: str) -> str:
+     if task == "speech_recognition":
+         return "Target dataset size for full-scale training (hours)"
+     return "Target dataset size for full-scale training"
+

  def _coerce_int_list(values: Iterable[Any] | None) -> List[int]:
      if values is None:
@@ -127,11 +192,20 @@ def metrics_for_task(task: str) -> Tuple[List[str], List[str]]:
      return choices, defaults


- def on_task_change(selected_task: str) -> Tuple[Dict[str, Any], Dict[str, Any]]:
-     metric_choices, metric_defaults = metrics_for_task(selected_task)
+ def on_task_change(
+     selected_task_label: str,
+ ) -> Tuple[Dict[str, Any], Dict[str, Any], Dict[str, Any], Dict[str, Any], Dict[str, Any]]:
+     task_value = _task_value_from_label(selected_task_label)
+     metric_choices, metric_defaults = metrics_for_task(task_value)
+     candidate_choices = candidate_choices_for_task(task_value)
+     model_choices = _model_choices_for_task(task_value)
+     benchmark_choices = TASK_BENCHMARK_CHOICES.get(task_value, [])
      return (
          gr.update(choices=metric_choices, value=metric_defaults),
-         gr.update(choices=candidate_choices_for_task(selected_task), value=[]),
+         gr.update(choices=candidate_choices, value=[]),
+         gr.update(choices=model_choices, value=model_choices[0]),
+         gr.update(choices=benchmark_choices, value=[]),
+         gr.update(label=_target_label_for_task(task_value)),
      )


@@ -146,12 +220,16 @@ def submit_experiments(
      target_size: float,
      test_files: Optional[List[Any]],
      test_id: str,
+     public_benchmarks: Optional[List[str]] = None,
      profile: Optional[gr.OAuthProfile] = None,
      oauth: Optional[gr.OAuthToken] = None,
  ) -> List[Dict[str, Any]]:
      if CONFIG_ERROR:
          raise RuntimeError(f"Configuration error: {CONFIG_ERROR}")
      assert CONFIG is not None
+     task_value = _normalize_task_value(task)
+     task_label = _task_label_from_value(task_value)
+     selected_public_benchmarks = list(public_benchmarks or [])
      try:
          CONFIG.require_service_token()
      except ConfigError as exc:
@@ -160,13 +238,13 @@ def submit_experiments(
              "in the Space settings before retrying."
          ) from exc

-     metric_choices, _ = metrics_for_task(task)
+     metric_choices, _ = metrics_for_task(task_value)
      if not metrics:
          raise ValueError("Select at least one metric for the chosen task.")
      invalid_metrics = [metric for metric in metrics if metric not in metric_choices]
      if invalid_metrics:
          invalid = ", ".join(invalid_metrics)
-         raise ValueError(f"Unsupported metric(s) for task '{task}': {invalid}.")
+         raise ValueError(f"Unsupported metric(s) for task '{task_label}': {invalid}.")
      selected_metrics = list(metrics)

      selected_sizes = _coerce_int_list(sizes)
@@ -222,7 +300,7 @@ def submit_experiments(
          "--model",
          model,
          "--task",
-         task,
+         task_value,
          "--d0",
          d0_repo,
          "--dk",
@@ -262,6 +340,7 @@ def submit_experiments(
                  "url": getattr(job, "url", ""),
                  "status": job.status,
                  "artifacts": "",
+                 "benchmarks": selected_public_benchmarks,
              }
          )
      return jobs
@@ -277,21 +356,24 @@ def submit_with_feedback(
      dk_list: List[str],
      sizes: List[Any],
      target_size: float,
-     test_files: Optional[List[Any]],
-     test_id: str,
+     test_files: Optional[List[Any]] = None,
+     test_id: str = "",
+     public_benchmarks: Optional[List[str]] = None,
      profile: Optional[gr.OAuthProfile] = None,
      oauth: Optional[gr.OAuthToken] = None,
  ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
+     task_value = _normalize_task_value(task)
      try:
          jobs = submit_experiments(
              d0_files=d0_files,
              d0_id=d0_id,
-             task=task,
+             task=task_value,
              model=model,
              metrics=metrics,
              dk_list=dk_list,
              sizes=sizes,
              target_size=target_size,
+             public_benchmarks=public_benchmarks,
              test_files=test_files,
              test_id=test_id,
              profile=profile,
@@ -354,34 +436,63 @@ def build_interface() -> gr.Blocks:
              visible=True,
          )
          status_banner = gr.Markdown("", visible=False)
-         with gr.Row():
-             d0_files = gr.File(label="Upload D₀ (.csv/.jsonl/.zip)", file_count="multiple")
-             d0_id = gr.Textbox(label="Hub dataset id (user/dataset)")
-         with gr.Row():
-             test_files = gr.File(label="Optional test set upload", file_count="multiple")
-             test_id = gr.Textbox(label="Test dataset id (user/dataset[:split])")
-         task = gr.Radio(
-             choices=["classification", "qa", "pretraining"],
-             value="classification",
-             label="Task",
-         )
-         model = gr.Dropdown(choices=[DEFAULT_MODEL], value=DEFAULT_MODEL, label="Model")
-         metric_choices, metric_defaults = metrics_for_task("classification")
-         metrics = gr.CheckboxGroup(
-             choices=metric_choices,
-             value=metric_defaults,
-             label="Metrics",
-         )
-         dk = gr.CheckboxGroup(
-             choices=candidate_choices_for_task("classification"),
-             label="Candidate datasets",
-         )
-         sizes = gr.CheckboxGroup(
-             choices=[str(size) for size in DEFAULT_SIZES],
-             value=[str(DEFAULT_SIZES[0]), str(DEFAULT_SIZES[1])],
-             label="Mixture sizes",
-         )
-         target_size = gr.Number(value=200000, label="Target size for prediction")
+
+         initial_task_value = "classification"
+         initial_task_label = _task_label_from_value(initial_task_value)
+         metric_choices, metric_defaults = metrics_for_task(initial_task_value)
+         candidate_choices = candidate_choices_for_task(initial_task_value)
+         model_choices = _model_choices_for_task(initial_task_value)
+         benchmark_choices = TASK_BENCHMARK_CHOICES.get(initial_task_value, [])
+
+         with gr.Group():
+             gr.Markdown("### Training task specifications")
+             task = gr.Radio(
+                 choices=[label for _, label in TASK_OPTIONS],
+                 value=initial_task_label,
+                 label="Task type",
+             )
+             gr.Markdown("If you have any existing training data, please upload")
+             with gr.Row():
+                 d0_files = gr.File(label="Upload D₀ (.csv/.jsonl/.zip)", file_count="multiple")
+                 d0_id = gr.Textbox(label="Hub dataset id (user/dataset)")
+             dk = gr.CheckboxGroup(
+                 choices=candidate_choices,
+                 label="Available external datasets for you to choose",
+             )
+             model = gr.Dropdown(
+                 choices=model_choices,
+                 value=model_choices[0],
+                 label="Base model",
+             )
+             sizes = gr.CheckboxGroup(
+                 choices=[str(size) for size in DEFAULT_SIZES],
+                 value=[str(DEFAULT_SIZES[0]), str(DEFAULT_SIZES[1])],
+                 label="Mixture sizes",
+             )
+
+         with gr.Group():
+             gr.Markdown("### Evaluation specifications")
+             metrics = gr.CheckboxGroup(
+                 choices=metric_choices,
+                 value=metric_defaults,
+                 label="Eval Metric",
+             )
+             gr.Markdown("If you have any existing benchmark dataset, please upload")
+             with gr.Row():
+                 test_files = gr.File(label="Optional test set upload", file_count="multiple")
+                 test_id = gr.Textbox(label="Test dataset id (user/dataset[:split])")
+             public_benchmarks = gr.CheckboxGroup(
+                 choices=benchmark_choices,
+                 value=[],
+                 label="Available public benchmark datasets",
+             )
+
+         with gr.Group():
+             gr.Markdown("### Scaling prediction specifications")
+             target_size = gr.Number(
+                 value=200000,
+                 label=_target_label_for_task(initial_task_value),
+             )

          run_btn = gr.Button("Run experiments")
          refresh_btn = gr.Button("Refresh status")
@@ -393,7 +504,11 @@
              wrap=True,
          )

-         task.change(fn=on_task_change, inputs=task, outputs=[metrics, dk])
+         task.change(
+             fn=on_task_change,
+             inputs=task,
+             outputs=[metrics, dk, model, public_benchmarks, target_size],
+         )

          run_btn.click(
              fn=submit_with_feedback,
@@ -409,6 +524,7 @@
                  target_size,
                  test_files,
                  test_id,
+                 public_benchmarks,
              ],
              outputs=[jobs_state, status_banner],
          )
catalog/candidates.json CHANGED
@@ -28,5 +28,19 @@
      "license": "cc-by-4.0",
      "size_hint": "365M",
      "columns": {"text": "text"}
+   },
+   {
+     "id": "sanchit-gandhi/tedlium-data.train",
+     "task": "speech_recognition",
+     "license": "unknown",
+     "size_hint": "unknown",
+     "columns": {}
+   },
+   {
+     "id": "openslr/librispeech_asr.train.clean.100",
+     "task": "speech_recognition",
+     "license": "unknown",
+     "size_hint": "100 hours",
+     "columns": {}
    }
  ]
utils/__pycache__/config.cpython-310.pyc CHANGED
Binary files a/utils/__pycache__/config.cpython-310.pyc and b/utils/__pycache__/config.cpython-310.pyc differ