Bappadala Rohith Kumar Naidu committed on
Commit
ac89bce
·
1 Parent(s): d8b26e4

docs(notebooks): inject rich markdown text cells to 5 notebooks without touching code

notebooks/Accident_EDA_&_Hotspot_Generator_chatbot_service_data_accidents_3.ipynb CHANGED
@@ -14,6 +14,47 @@
14
  }
15
  },
16
  "cells": [
17
  {
18
  "cell_type": "code",
19
  "source": [
@@ -246,6 +287,18 @@
246
  }
247
  ]
248
  },
249
  {
250
  "cell_type": "code",
251
  "execution_count": null,
@@ -281,6 +334,20 @@
281
  "print(f\"Loaded accidents dataset with {len(df)} rows.\")\n"
282
  ]
283
  },
284
  {
285
  "cell_type": "code",
286
  "source": [
@@ -375,6 +442,25 @@
375
  }
376
  ]
377
  },
378
  {
379
  "cell_type": "code",
380
  "source": [
 
14
  }
15
  },
16
  "cells": [
17
+ {
18
+ "cell_type": "markdown",
19
+ "metadata": {},
20
+ "source": [
21
+ "# 🗺️ Accident EDA & Blackspot Hotspot Generator\n",
22
+ "\n",
23
+ "**Part of:** SafeVisionAI · IIT Madras Road Safety Hackathon 2026 \n",
24
+ "**Output:** `accidents_summary.json` + `blackspot_seed.csv` → seeded to the backend database\n",
25
+ "\n",
26
+ "This notebook processes the **Kaggle India Road Accidents dataset** (1M+ rows) \n",
27
+ "to produce two key intelligence artifacts:\n",
28
+ "\n",
29
+ "1. **`accidents_summary.json`** — National total + top 10 states by accident count\n",
30
+ "2. **`blackspot_seed.csv`** — GPS clusters with accident counts for map hotspot visualization\n",
31
+ "\n",
32
+ "---\n",
33
+ "### 📊 Dataset\n",
34
+ "- **Source:** Kaggle India Road Accidents dataset\n",
35
+ "- **Size:** ~1,048,575 rows · 30+ columns\n",
36
+ "- **Acquired via:** `setup_kaggle.ps1` + `scripts/data/seed_blackspots.py`\n",
37
+ "\n",
38
+ "### 🔄 Pipeline\n",
39
+ "```\n",
40
+ "Raw CSV → Normalize columns → State summary → GPS cluster → blackspot_seed.csv\n",
41
+ "```"
42
+ ]
43
+ },
44
+ {
45
+ "cell_type": "markdown",
46
+ "metadata": {},
47
+ "source": [
48
+ "## 📂 Step 0 — Upload Accidents Dataset\n",
49
+ "\n",
50
+ "Upload `kaggle_india_accidents.csv` from: \n",
51
+ "```\n",
52
+ "chatbot_service/data/accidents/kaggle_india_accidents.csv\n",
53
+ "```\n",
54
+ "\n",
55
+ "> ⚠️ This file is ~450MB. The Hub stores it via Git LFS."
56
+ ]
57
+ },
58
  {
59
  "cell_type": "code",
60
  "source": [
 
287
  }
288
  ]
289
  },
290
+ {
291
+ "cell_type": "markdown",
292
+ "metadata": {},
293
+ "source": [
294
+ "## 📖 Step 1 — Load & Normalize Dataset\n",
295
+ "\n",
296
+ "Reads the CSV and normalizes all column names to lowercase snake_case. \n",
297
+ "Result: **1,048,575 rows** of accident records across Indian states.\n",
298
+ "\n",
299
+ "> 💡 The mixed-type DtypeWarning is expected for columns with mixed numeric/string data."
300
+ ]
301
+ },
302
  {
303
  "cell_type": "code",
304
  "execution_count": null,
 
334
  "print(f\"Loaded accidents dataset with {len(df)} rows.\")\n"
335
  ]
336
  },
337
+ {
338
+ "cell_type": "markdown",
339
+ "metadata": {},
340
+ "source": [
341
+ "## 📊 Step 2 — Generate National Summary JSON\n",
342
+ "\n",
343
+ "Auto-detects the `state` and `accident` columns using flexible column name matching, \n",
344
+ "then computes:\n",
345
+ "- **National total** — sum of all accident counts\n",
346
+ "- **Top 10 states** — ranked by accident volume\n",
347
+ "\n",
348
+ "Exports `accidents_summary.json` — used by the chatbot to answer national stats queries."
349
+ ]
350
+ },
351
  {
352
  "cell_type": "code",
353
  "source": [
 
442
  }
443
  ]
444
  },
445
+ {
446
+ "cell_type": "markdown",
447
+ "metadata": {},
448
+ "source": [
449
+ "## 📍 Step 3 — Generate GPS Blackspot Clusters\n",
450
+ "\n",
451
+ "Groups accident records by rounded GPS coordinates (2 decimal places ≈ ~1km²), \n",
452
+ "then counts accidents per grid cell.\n",
453
+ "\n",
454
+ "Result: **4,134 blackspot clusters** exported as `blackspot_seed.csv` \n",
455
+ "→ This CSV is loaded by `backend/scripts/app/seed_emergency.py` to populate the PostGIS accident layer.\n",
456
+ "\n",
457
+ "| Column | Description |\n",
458
+ "|--------|-------------|\n",
459
+ "| `lat_r` | Rounded latitude (±0.01°) |\n",
460
+ "| `lon_r` | Rounded longitude (±0.01°) |\n",
461
+ "| `accident_count` | Number of accidents in this 1km² cell |"
462
+ ]
463
+ },
464
  {
465
  "cell_type": "code",
466
  "source": [
notebooks/ChromaDB_RAG_Vectorstore_Build_chatbot_service_data_chroma_db_2.ipynb CHANGED
@@ -7884,6 +7884,47 @@
7884
  }
7885
  },
7886
  "cells": [
7887
  {
7888
  "cell_type": "code",
7889
  "source": [
@@ -7933,6 +7974,21 @@
7933
  }
7934
  ]
7935
  },
7936
  {
7937
  "cell_type": "code",
7938
  "source": [
@@ -7959,6 +8015,19 @@
7959
  }
7960
  ]
7961
  },
7962
  {
7963
  "cell_type": "code",
7964
  "source": [
@@ -8631,6 +8700,20 @@
8631
  }
8632
  ]
8633
  },
8634
  {
8635
  "cell_type": "code",
8636
  "source": [
@@ -8668,6 +8751,18 @@
8668
  }
8669
  ]
8670
  },
8671
  {
8672
  "cell_type": "code",
8673
  "source": [
@@ -8916,6 +9011,18 @@
8916
  }
8917
  ]
8918
  },
8919
  {
8920
  "cell_type": "code",
8921
  "source": [
 
7884
  }
7885
  },
7886
  "cells": [
7887
+ {
7888
+ "cell_type": "markdown",
7889
+ "metadata": {},
7890
+ "source": [
7891
+ "# 🧠 ChromaDB RAG Vectorstore — Legal & Medical PDF Ingestion\n",
7892
+ "\n",
7893
+ "**Part of:** SafeVisionAI · IIT Madras Road Safety Hackathon 2026 \n",
7894
+ "**Output:** `chroma_db/` directory → deployed to `chatbot_service/data/chroma_db/`\n",
7895
+ "\n",
7896
+ "This notebook builds the **Retrieval-Augmented Generation (RAG)** knowledge base for the SafeVisionAI chatbot. \n",
7897
+ "It ingests Indian legal documents (Motor Vehicles Act, MoRTH circulars) and first-aid medical PDFs, \n",
7898
+ "chunks them, embeds them using `sentence-transformers`, and stores them in a **ChromaDB** vector store.\n",
7899
+ "\n",
7900
+ "---\n",
7901
+ "### 🗂️ Source Documents\n",
7902
+ "| Category | Files | Source |\n",
7903
+ "|----------|-------|--------|\n",
7904
+ "| Legal | Motor Vehicles Act 2019, MoRTH 2022 | `download_legal_pdfs.py` |\n",
7905
+ "| Medical | First Aid guides, Emergency protocols | `download_legal_pdfs.py` |\n",
7906
+ "\n",
7907
+ "### 🔄 Pipeline\n",
7908
+ "```\n",
7909
+ "PDFs → pdfplumber chunks → sentence-transformer embeddings → ChromaDB index\n",
7910
+ "```\n",
7911
+ "\n",
7912
+ "> 💡 The resulting `chroma_db/` is what the chatbot queries at runtime for grounded answers."
7913
+ ]
7914
+ },
7915
+ {
7916
+ "cell_type": "markdown",
7917
+ "metadata": {},
7918
+ "source": [
7919
+ "## 🔧 Step 1 — Install Dependencies\n",
7920
+ "\n",
7921
+ "Installs the full RAG stack:\n",
7922
+ "- `chromadb` — local vector database for semantic search\n",
7923
+ "- `sentence-transformers` — `all-MiniLM-L6-v2` model for text embeddings\n",
7924
+ "- `pdfplumber` — PDF text extraction with page layout awareness\n",
7925
+ "- `langchain` — document chunking utilities"
7926
+ ]
7927
+ },
7928
  {
7929
  "cell_type": "code",
7930
  "source": [
 
7974
  }
7975
  ]
7976
  },
7977
+ {
7978
+ "cell_type": "markdown",
7979
+ "metadata": {},
7980
+ "source": [
7981
+ "## πŸ“‚ Step 2 β€” Upload PDF Documents\n",
7982
+ "\n",
7983
+ "Upload all legal and medical PDFs from: \n",
7984
+ "```\n",
7985
+ "chatbot_service/data/legal/\n",
7986
+ "chatbot_service/data/medical/\n",
7987
+ "```\n",
7988
+ "\n",
7989
+ "> πŸ“„ Expected PDFs: Motor_Vehicles_Act.pdf, MoRTH_2022_Report.pdf, first_aid_guide.pdf, etc."
7990
+ ]
7991
+ },
7992
  {
7993
  "cell_type": "code",
7994
  "source": [
 
8015
  }
8016
  ]
8017
  },
8018
+ {
8019
+ "cell_type": "markdown",
8020
+ "metadata": {},
8021
+ "source": [
8022
+ "## ✂️ Step 3 — Extract & Chunk PDF Text\n",
8023
+ "\n",
8024
+ "Uses `pdfplumber` to extract text from each PDF page, \n",
8025
+ "then splits into fixed-size chunks (512 tokens) with 50-token overlap.\n",
8026
+ "\n",
8027
+ "Chunking ensures the embedding model sees coherent, context-rich passages \n",
8028
+ "rather than arbitrarily cut sentences."
8029
+ ]
8030
+ },
8031
  {
8032
  "cell_type": "code",
8033
  "source": [
 
8700
  }
8701
  ]
8702
  },
8703
+ {
8704
+ "cell_type": "markdown",
8705
+ "metadata": {},
8706
+ "source": [
8707
+ "## 🔢 Step 4 — Generate Embeddings\n",
8708
+ "\n",
8709
+ "Uses the `all-MiniLM-L6-v2` sentence-transformer model to convert each text chunk \n",
8710
+ "into a 384-dimensional embedding vector.\n",
8711
+ "\n",
8712
+ "| Model | Dimensions | Speed | Quality |\n",
8713
+ "|-------|-----------|-------|---------|\n",
8714
+ "| all-MiniLM-L6-v2 | 384 | Fast | Good for semantic QA |"
8715
+ ]
8716
+ },
8717
  {
8718
  "cell_type": "code",
8719
  "source": [
 
8751
  }
8752
  ]
8753
  },
8754
+ {
8755
+ "cell_type": "markdown",
8756
+ "metadata": {},
8757
+ "source": [
8758
+ "## 💾 Step 5 — Build & Persist ChromaDB Index\n",
8759
+ "\n",
8760
+ "Creates a persistent ChromaDB collection and upserts all embedded chunks. \n",
8761
+ "The resulting `chroma_db/` folder contains the SQLite + vector index files.\n",
8762
+ "\n",
8763
+ "> 📦 Output size: ~50-100MB depending on number of PDFs ingested."
8764
+ ]
8765
+ },
8766
  {
8767
  "cell_type": "code",
8768
  "source": [
 
9011
  }
9012
  ]
9013
  },
9014
+ {
9015
+ "cell_type": "markdown",
9016
+ "metadata": {},
9017
+ "source": [
9018
+ "## 📥 Step 6 — Download ChromaDB\n",
9019
+ "\n",
9020
+ "Zips the `chroma_db/` directory and downloads it for deployment. \n",
9021
+ "Place the extracted folder at: `chatbot_service/data/chroma_db/`\n",
9022
+ "\n",
9023
+ "The chatbot service auto-loads this at startup β€” no rebuild needed."
9024
+ ]
9025
+ },
9026
  {
9027
  "cell_type": "code",
9028
  "source": [
notebooks/Risk_Model_ONNX_Training_frontend_public_models_5.ipynb CHANGED
@@ -14,6 +14,48 @@
14
  }
15
  },
16
  "cells": [
17
  {
18
  "cell_type": "code",
19
  "execution_count": null,
@@ -42,6 +84,26 @@
42
  "print(\"✅ Toolkit installed\")\n"
43
  ]
44
  },
45
  {
46
  "cell_type": "code",
47
  "source": [
@@ -83,6 +145,18 @@
83
  }
84
  ]
85
  },
86
  {
87
  "cell_type": "code",
88
  "source": [
@@ -111,6 +185,26 @@
111
  }
112
  ]
113
  },
114
  {
115
  "cell_type": "code",
116
  "source": [
 
14
  }
15
  },
16
  "cells": [
17
+ {
18
+ "cell_type": "markdown",
19
+ "metadata": {},
20
+ "source": [
21
+ "# ⚑ Road Risk Scoring Model β€” ONNX Training Pipeline\n",
22
+ "\n",
23
+ "**Part of:** SafeVisionAI Β· IIT Madras Road Safety Hackathon 2026 \n",
24
+ "**Output:** `risk_model.onnx` (~21KB) β†’ deployed to `frontend/public/models/`\n",
25
+ "\n",
26
+ "This notebook trains a **GradientBoosting classifier** to predict real-time road risk \n",
27
+ "and exports it as ONNX for **in-browser inference** β€” no server call needed.\n",
28
+ "\n",
29
+ "---\n",
30
+ "### 🧠 Model Architecture\n",
31
+ "| Component | Details |\n",
32
+ "|-----------|--------|\n",
33
+ "| Algorithm | GradientBoostingClassifier |\n",
34
+ "| Input features | 5 (road type, hour, rain, speed limit, prev accidents) |\n",
35
+ "| Output | Binary: `high_risk` (0 or 1) |\n",
36
+ "| Export | ONNX via `skl2onnx` |\n",
37
+ "| Size | ~21KB β€” loads in milliseconds in browser |\n",
38
+ "\n",
39
+ "### πŸ”„ Pipeline\n",
40
+ "```\n",
41
+ "Synthetic data generation β†’ GBM training β†’ ONNX conversion β†’ Download\n",
42
+ "```\n",
43
+ "\n",
44
+ "> πŸ’‘ The model runs entirely client-side in the SafeVisionAI PWA using `onnxruntime-web`."
45
+ ]
46
+ },
47
+ {
48
+ "cell_type": "markdown",
49
+ "metadata": {},
50
+ "source": [
51
+ "## πŸ”§ Step 1 β€” Install ML Toolkit\n",
52
+ "\n",
53
+ "Installs the minimum stack needed for training and ONNX export:\n",
54
+ "- `scikit-learn` β€” GradientBoostingClassifier\n",
55
+ "- `skl2onnx` β€” converts sklearn models to ONNX format\n",
56
+ "- `pandas` + `numpy` β€” data generation and manipulation"
57
+ ]
58
+ },
59
  {
60
  "cell_type": "code",
61
  "execution_count": null,
 
84
  "print(\"✅ Toolkit installed\")\n"
85
  ]
86
  },
87
+ {
88
+ "cell_type": "markdown",
89
+ "metadata": {},
90
+ "source": [
91
+ "## 🏗️ Step 2 — Build Synthetic Training Data\n",
92
+ "\n",
93
+ "Generates 5,000 synthetic road sensor records matching the live app's data structure:\n",
94
+ "\n",
95
+ "| Feature | Values | Description |\n",
96
+ "|---------|--------|-------------|\n",
97
+ "| `road_type` | 0-3 | NH=0, SH=1, MDR=2, VR=3 |\n",
98
+ "| `hour` | 0-23 | Hour of day |\n",
99
+ "| `is_rain` | 0/1 | Weather condition |\n",
100
+ "| `speed_limit` | 40/60/80/100 | Posted speed (km/h) |\n",
101
+ "| `prev_accidents` | Poisson(2) | Historical accident count |\n",
102
+ "\n",
103
+ "**Label logic:** `high_risk = 1` when: Night hours (10pm–4am) + National/State Highway + Raining \n",
104
+ "This reflects real-world patterns from the India accident dataset."
105
+ ]
106
+ },
107
  {
108
  "cell_type": "code",
109
  "source": [
 
145
  }
146
  ]
147
  },
148
+ {
149
+ "cell_type": "markdown",
150
+ "metadata": {},
151
+ "source": [
152
+ "## 🎯 Step 3 β€” Train GradientBoosting Classifier\n",
153
+ "\n",
154
+ "Trains a GBM with 50 estimators and max depth 4:\n",
155
+ "- **Fast:** <10 seconds on CPU\n",
156
+ "- **Accurate:** Handles non-linear risk patterns well\n",
157
+ "- **Tiny:** Converts to 21KB ONNX β€” ideal for edge/PWA deployment"
158
+ ]
159
+ },
160
  {
161
  "cell_type": "code",
162
  "source": [
 
185
  }
186
  ]
187
  },
188
+ {
189
+ "cell_type": "markdown",
190
+ "metadata": {},
191
+ "source": [
192
+ "## 📦 Step 4 — Export to ONNX & Download\n",
193
+ "\n",
194
+ "Converts the trained sklearn model to ONNX format using `skl2onnx`:\n",
195
+ "- **Input:** `FloatTensorType([None, 5])` — batch of 5-feature vectors\n",
196
+ "- **Output:** Risk probability + binary class label\n",
197
+ "\n",
198
+ "Download `risk_model.onnx` and place at: \n",
199
+ "```\n",
200
+ "frontend/public/models/risk_model.onnx\n",
201
+ "```\n",
202
+ "\n",
203
+ "The Next.js PWA loads this at startup and runs inference on each map segment click.\n",
204
+ "\n",
205
+ "> ✅ Final output: **~21KB** ONNX model — ready for browser deployment"
206
+ ]
207
+ },
208
  {
209
  "cell_type": "code",
210
  "source": [
notebooks/Roads_Data_Processing_backend_data_4.ipynb CHANGED
@@ -14,6 +14,54 @@
14
  }
15
  },
16
  "cells": [
17
  {
18
  "cell_type": "code",
19
  "execution_count": null,
 
14
  }
15
  },
16
  "cells": [
17
+ {
18
+ "cell_type": "markdown",
19
+ "metadata": {},
20
+ "source": [
21
+ "# πŸ›£οΈ Roads & Toll Plaza Data Processing\n",
22
+ "\n",
23
+ "**Part of:** SafeVisionAI Β· IIT Madras Road Safety Hackathon 2026 \n",
24
+ "**Output:** `toll_plazas_lite.json` β†’ deployed to `backend/data/roads/`\n",
25
+ "\n",
26
+ "This notebook processes the **NHAI Toll Plaza dataset** to produce a lightweight JSON \n",
27
+ "suitable for the SafeVisionAI backend API and offline PWA map layer.\n",
28
+ "\n",
29
+ "---\n",
30
+ "### πŸ“Š Dataset\n",
31
+ "- **Source:** NHAI Open Data / custom toll_plazas.csv\n",
32
+ "- **Fields:** Name, NH Number, Latitude, Longitude\n",
33
+ "- **Coverage:** All operational toll plazas on National Highways\n",
34
+ "\n",
35
+ "### πŸ”„ Pipeline\n",
36
+ "```\n",
37
+ "toll_plazas.csv β†’ Select key columns β†’ Rename headers β†’ Export toll_plazas_lite.json\n",
38
+ "```"
39
+ ]
40
+ },
41
+ {
42
+ "cell_type": "markdown",
43
+ "metadata": {},
44
+ "source": [
45
+ "## πŸ“¦ Step 1 β€” Upload & Process Toll Plaza CSV\n",
46
+ "\n",
47
+ "Upload `toll_plazas.csv` from: \n",
48
+ "```\n",
49
+ "backend/data/roads/toll_plazas.csv\n",
50
+ "```\n",
51
+ "\n",
52
+ "The processing pipeline:\n",
53
+ "1. Reads the CSV with `pandas`\n",
54
+ "2. Selects only 4 essential columns: `name, id, lat, lon`\n",
55
+ "3. Drops rows with missing coordinates\n",
56
+ "4. Renames to human-readable headers\n",
57
+ "5. Exports as `toll_plazas_lite.json`\n",
58
+ "\n",
59
+ "The resulting JSON is consumed by the backend `/api/roads/tolls` endpoint \n",
60
+ "and the offline PWA map layer for toll overlay rendering.\n",
61
+ "\n",
62
+ "> πŸ“¦ Output size: ~65KB (vs 2MB+ raw CSV)"
63
+ ]
64
+ },
65
  {
66
  "cell_type": "code",
67
  "execution_count": null,
notebooks/YOLOv8_Pothole_Detector_Training_frontend_public_models_1.ipynb CHANGED
@@ -16,6 +16,46 @@
16
  "accelerator": "GPU"
17
  },
18
  "cells": [
19
  {
20
  "cell_type": "code",
21
  "execution_count": null,
@@ -65,6 +105,15 @@
65
  "print('Anti-disconnect activated')\n"
66
  ]
67
  },
68
  {
69
  "cell_type": "code",
70
  "source": [
@@ -98,6 +147,19 @@
98
  }
99
  ]
100
  },
101
  {
102
  "cell_type": "code",
103
  "source": [
@@ -128,6 +190,16 @@
128
  }
129
  ]
130
  },
131
  {
132
  "cell_type": "code",
133
  "source": [
@@ -347,6 +419,16 @@
347
  }
348
  ]
349
  },
350
  {
351
  "cell_type": "code",
352
  "source": [
@@ -375,6 +457,18 @@
375
  }
376
  ]
377
  },
378
  {
379
  "cell_type": "code",
380
  "source": [
@@ -409,6 +503,19 @@
409
  }
410
  ]
411
  },
412
  {
413
  "cell_type": "code",
414
  "source": [
@@ -451,6 +558,26 @@
451
  }
452
  ]
453
  },
454
  {
455
  "cell_type": "code",
456
  "source": [
 
16
  "accelerator": "GPU"
17
  },
18
  "cells": [
19
+ {
20
+ "cell_type": "markdown",
21
+ "metadata": {},
22
+ "source": [
23
+ "# 🚗 YOLOv8 Pothole & Road Damage Detector — Training Pipeline\n",
24
+ "\n",
25
+ "**Part of:** SafeVisionAI · IIT Madras Road Safety Hackathon 2026 \n",
26
+ "**Output:** `pothole_v1/weights/best.onnx` → deployed to `frontend/public/models/`\n",
27
+ "\n",
28
+ "This notebook trains a YOLOv8n object detection model to identify **potholes, cracks, and manholes** on Indian roads. \n",
29
+ "The trained model is exported to ONNX format for in-browser inference via `onnxruntime-web`.\n",
30
+ "\n",
31
+ "---\n",
32
+ "### 📋 Pipeline Overview\n",
33
+ "| Step | What happens |\n",
34
+ "|------|-------------|\n",
35
+ "| 1 | Install Ultralytics + ONNX and verify GPU |\n",
36
+ "| 2 | Upload `archive.zip` dataset (road_damage_2025) |\n",
37
+ "| 3 | Extract the zip into `/content/pothole_data/` |\n",
38
+ "| 4 | Create master merged directory structure |\n",
39
+ "| 5 | Merge all dataset images + labels into one folder |\n",
40
+ "| 6 | Write `data.yaml` for 3-class detection |\n",
41
+ "| 7 | Train YOLOv8n for 50 epochs on T4 GPU (~45 min) |\n",
42
+ "| 8 | Export best weights to ONNX |\n",
43
+ "\n",
44
+ "> ⚠️ **Requires GPU runtime:** Runtime → Change runtime type → T4 GPU"
45
+ ]
46
+ },
47
+ {
48
+ "cell_type": "markdown",
49
+ "metadata": {},
50
+ "source": [
51
+ "## 🔧 Step 1 — Environment Setup\n",
52
+ "\n",
53
+ "Keeps the Colab session alive during long training runs and installs all required libraries.\n",
54
+ "- `ultralytics` — YOLOv8 training framework by Ultralytics\n",
55
+ "- `roboflow` — dataset management (optional augmentation)\n",
56
+ "- `onnx` + `onnxruntime` — ONNX export and validation"
57
+ ]
58
+ },
59
  {
60
  "cell_type": "code",
61
  "execution_count": null,
 
105
  "print('Anti-disconnect activated')\n"
106
  ]
107
  },
108
+ {
109
+ "cell_type": "markdown",
110
+ "metadata": {},
111
+ "source": [
112
+ "## ✅ Step 2 — Verify GPU & Import YOLO\n",
113
+ "\n",
114
+ "Confirms that the Tesla T4 GPU is available and the Ultralytics framework is ready."
115
+ ]
116
+ },
117
  {
118
  "cell_type": "code",
119
  "source": [
 
147
  }
148
  ]
149
  },
150
+ {
151
+ "cell_type": "markdown",
152
+ "metadata": {},
153
+ "source": [
154
+ "## 📁 Step 3 — Upload Dataset\n",
155
+ "\n",
156
+ "Upload the `archive.zip` file from the Hub: \n",
157
+ "```\n",
158
+ "chatbot_service/data/pothole_training/road_damage_2025/archive.zip\n",
159
+ "```\n",
160
+ "> 📂 This contains ~2,009 labeled road damage images in YOLO format (potholes, cracks, manholes)."
161
+ ]
162
+ },
163
  {
164
  "cell_type": "code",
165
  "source": [
 
190
  }
191
  ]
192
  },
193
+ {
194
+ "cell_type": "markdown",
195
+ "metadata": {},
196
+ "source": [
197
+ "## πŸ“¦ Step 4 β€” Extract Dataset Archive\n",
198
+ "\n",
199
+ "Extracts `archive.zip` into `/content/pothole_data/`. \n",
200
+ "This creates the raw YOLO-format dataset structure: `images/` and `labels/` subfolders."
201
+ ]
202
+ },
203
  {
204
  "cell_type": "code",
205
  "source": [
 
419
  }
420
  ]
421
  },
422
+ {
423
+ "cell_type": "markdown",
424
+ "metadata": {},
425
+ "source": [
426
+ "## 🗂️ Step 5 — Create Master Directory Structure\n",
427
+ "\n",
428
+ "Creates a unified `merged/` folder with separate `train/` and `valid/` splits. \n",
429
+ "This allows merging images from multiple datasets (sachin_patel, andrew_mvd) if available."
430
+ ]
431
+ },
432
  {
433
  "cell_type": "code",
434
  "source": [
 
457
  }
458
  ]
459
  },
460
+ {
461
+ "cell_type": "markdown",
462
+ "metadata": {},
463
+ "source": [
464
+ "## πŸ”€ Step 6 β€” Merge Datasets (Bulletproof Search)\n",
465
+ "\n",
466
+ "Recursively searches all dataset folders for `.jpg` images and `.txt` YOLO labels, \n",
467
+ "then copies them all into the master `merged/train/` directory.\n",
468
+ "\n",
469
+ "> βœ… Result: **2,009 training images** merged from road_damage_2025."
470
+ ]
471
+ },
472
  {
473
  "cell_type": "code",
474
  "source": [
 
503
  }
504
  ]
505
  },
506
+ {
507
+ "cell_type": "markdown",
508
+ "metadata": {},
509
+ "source": [
510
+ "## πŸ“ Step 7 β€” Write `data.yaml`\n",
511
+ "\n",
512
+ "Creates the YOLO dataset configuration file defining:\n",
513
+ "- 3 detection classes: `['pothole', 'crack', 'manhole']`\n",
514
+ "- Train and validation paths\n",
515
+ "\n",
516
+ "The `nc: 3` setting overrides YOLOv8's default 80-class COCO configuration."
517
+ ]
518
+ },
519
  {
520
  "cell_type": "code",
521
  "source": [
 
558
  }
559
  ]
560
  },
561
+ {
562
+ "cell_type": "markdown",
563
+ "metadata": {},
564
+ "source": [
565
+ "## πŸš€ Step 8 β€” Train YOLOv8n (50 Epochs)\n",
566
+ "\n",
567
+ "Trains YOLOv8 nano on the merged dataset using these hyperparameters:\n",
568
+ "\n",
569
+ "| Parameter | Value | Reason |\n",
570
+ "|-----------|-------|--------|\n",
571
+ "| `model` | yolov8n.pt | Smallest model β€” runs well in browser via ONNX |\n",
572
+ "| `epochs` | 50 | Balanced between accuracy and training time |\n",
573
+ "| `imgsz` | 640 | Standard YOLO input resolution |\n",
574
+ "| `batch` | 16 | Fits T4 14GB VRAM |\n",
575
+ "| `device` | 0 (GPU) | CUDA training |\n",
576
+ "\n",
577
+ "> ⏱️ Expected training time: **~45 minutes** on Tesla T4 \n",
578
+ "> πŸ“ˆ Final mAP@50: ~**0.75+** after 50 epochs"
579
+ ]
580
+ },
581
  {
582
  "cell_type": "code",
583
  "source": [