Commit · d44b5cc
Parent(s): de2e404
Better readme
Browse files
- README.md +35 -22
- readme_assets/blender_view_cropped.png +0 -0
- readme_assets/bottle_1.png +0 -0
- readme_assets/bottle_2.png +0 -0

README.md CHANGED
Repository: https://huggingface.co/datasets/Leandro4002/LEANDRONE_V2
Download zip: https://public.saraivam.ch/static/LEANDRONE_V2.zip

## Description

The **LEANDRONE_V2** dataset is a collection of 240 labelled images trying to mimic the photos taken by the front camera of a Bitcraze AI deck 1.1 mounted on a Crazyflie 2.1 nanodrone. The camera model is a Himax HM01B0.
<img src="readme_assets/crazyflie_nanodrone_2.1.png" width="300"/>
<img src="readme_assets/bitcraze_aideck1.1.png" width="300"/>

This dataset is aimed at creating a machine learning model for an autonomous line-following drone. A secondary objective was to count the number of bottles along the track, but it has been abandoned and the number of bottles is not indicated in the labels. The track is approximately 3.5 m x 1.8 m.
The images are monochrome, 324x244 (the camera's specs say 320x320, but in reality it is 324x244). The images are generated ~15-30 cm from the ground.
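
As a quick sanity check on those dimensions (a minimal sketch assuming Pillow is installed; `render/00_22.png` is the sample image shown further down):

```python
from PIL import Image

# Open one rendered frame and inspect its dimensions and channel mode
img = Image.open("render/00_22.png")
print(img.size, img.mode)  # expected: (324, 244) and a single-channel mode such as "L"
```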

Each iteration has random variations of the line, the number of bottles and the bottle positions. For each iteration, there are images from the "drone" going around the track. There are 6 iterations of 40 images each, which gives a total of 240 images. Every even iteration the track is done clockwise, and every odd iteration it is done anti-clockwise. In an image name, the first 2 digits are the iteration number and the next 2 digits are the image number, separated by a '_'.
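
That naming scheme can be decoded with a few lines of Python (an illustrative sketch; `parse_name` is not part of the dataset tooling):

```python
import re

def parse_name(filename: str) -> dict:
    """Decode an 'II_NN.png' dataset file name into its parts."""
    m = re.fullmatch(r"(\d{2})_(\d{2})\.png", filename)
    if m is None:
        raise ValueError(f"unexpected file name: {filename}")
    iteration, image = int(m.group(1)), int(m.group(2))
    return {
        "iteration": iteration,
        "image": image,
        # even iterations run clockwise, odd ones anti-clockwise
        "direction": "clockwise" if iteration % 2 == 0 else "anti-clockwise",
    }

print(parse_name("00_22.png"))
# {'iteration': 0, 'image': 22, 'direction': 'clockwise'}
```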

## File structure
```
|---render
|     "Black and white images, named from '00_00.png' to '05_39.png'"
|
|---labels.csv
      "csv list containing 2 values per row: the name of the image and the angle where the drone should turn next.
      A negative angle indicates a left turn and a positive angle indicates a right turn."
```

## Example of sample
<img src="render/00_22.png"/>

Extract from labels.csv:
```csv
image;angle
00_00.png;-0.04881754187845043
00_01.png;0.04906715233908746
00_02.png;0.08485470738964143
00_03.png;0.08043066958386594
```
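
A minimal sketch of loading these labels, assuming only the semicolon-separated layout shown above:

```python
import csv

# Read the semicolon-separated labels into a dict: image name -> steering angle
labels = {}
with open("labels.csv", newline="") as f:
    for row in csv.DictReader(f, delimiter=";"):
        labels[row["image"]] = float(row["angle"])

print(labels["00_00.png"])  # -0.04881754187845043, i.e. a slight left turn
```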

## Light sensitivity

The camera of the AI-deck sets its light sensitivity at the start, so the images captured by the camera will be different depending on the camera's exposure to light when starting the drone.
Here is a comparison when the drone is started with the camera covered and with the camera looking at a light source:
<img src="readme_assets/comparison_light_sensitivity.gif" width=450/>
The images in this dataset are generated for a camera that has been covered at the start.

## Comparison generated vs actual

Here is a comparison between the images generated for the dataset and actual photos taken by the drone:
<img src="readme_assets/comparison_dataset_actual.gif"/>
Small light artifacts may appear on the dataset's images.

## Counting bottles

There has been an attempt to count the bottles around the track using the yolov8n model. This model assigns an id to an object when it is confident enough to have identified it correctly. To test the model, we applied it to the images of one iteration of the track, converted into a video. In a video containing 7 bottles, the model could detect only 2:
<img src="readme_assets/bottle_1.png"/><img src="readme_assets/bottle_2.png"/>
This is probably because the model is trained on bottles with standard shapes, while the dataset sometimes contains energy drinks and thermos flasks, which disrupted it. Also, the image resolution isn't very high, the images are monochrome only, and the 3D model's rendering engine can't handle transparency and simply displays gray. As a result, a lot of detail is lost and the model is no longer able to detect bottles accurately.
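
For reference, the kind of tracking pass described above could look roughly like this with the ultralytics package (a sketch, not the exact script used; the video file name is illustrative):

```python
from ultralytics import YOLO

# Run the yolov8n tracker on one iteration rendered as a video
# ("iteration_00.mp4" is an illustrative name, not a file shipped with the dataset).
model = YOLO("yolov8n.pt")
results = model.track(source="iteration_00.mp4", persist=True, classes=[39])  # 39 = COCO "bottle"

# Count the distinct track ids the model was confident enough to assign
ids = set()
for r in results:
    if r.boxes.id is not None:
        ids.update(int(i) for i in r.boxes.id)
print(f"distinct bottles tracked: {len(ids)}")
```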

## How it has been made

A 3d model of the room has been made with photogrammetry using Meshroom 2023.3.0:
<img src="readme_assets/photogrammetry.png" width=600/>

Then, we import this model into Blender 4.1 and move the camera along the track, computing the angle between the camera and the next points in order to obtain the label and the generated image from the camera's point of view at each step:
<img src="readme_assets/blender_view_cropped.png" width=600/>
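
The sign convention of labels.csv suggests the label is simply a signed heading error. Here is a plain-Python sketch of that computation with made-up coordinates and an assumed 2D top-down frame (the actual Blender script is not included in the dataset):

```python
import math

def steering_angle(cam_xy, cam_heading, next_xy):
    """Signed angle (radians) from the camera's heading to the next track point.
    Negative means turn left, positive means turn right, matching labels.csv."""
    target = math.atan2(next_xy[1] - cam_xy[1], next_xy[0] - cam_xy[0])
    # wrap the difference into [-pi, pi)
    return (cam_heading - target + math.pi) % (2 * math.pi) - math.pi

# Camera at the origin facing +x, with the next point slightly to its right:
print(steering_angle((0.0, 0.0), 0.0, (1.0, -0.05)))  # ~ +0.05 -> slight right turn
```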

## Author

Made in Blender 4.1.1 by Leandro SARAIVA MAIA

readme_assets/blender_view_cropped.png ADDED
readme_assets/bottle_1.png ADDED
readme_assets/bottle_2.png ADDED