Commit a9c28d9 (verified) by John6666, parent 1f03703: Update README.md

Files changed (1): README.md (+146 −146). The updated README.md is reproduced below; the only substantive change is the front-matter field `pinned`, which switches from `true` to `false`.
---
title: "Map It Anywhere (MIA): Empowering Bird's Eye View Mapping using Large-scale Public Data"
emoji: 🌍
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
app_port: 7860
---
<p align="center">
  <h1 align="center">Map It Anywhere (MIA): Empowering Bird's Eye View Mapping using Large-scale Public Data</h1>
  <p align="center">
    <a href="https://cherieho.com/"><strong>Cherie Ho*</strong></a>
    <a href="https://www.linkedin.com/in/tonyjzou/"><strong>Jiaye (Tony) Zou*</strong></a>
    <a href="https://www.linkedin.com/in/omaralama/"><strong>Omar Alama*</strong></a>
    <br>
    <a href="https://smj007.github.io/"><strong>Sai Mitheran Jagadesh Kumar</strong></a>
    <a href="https://github.com/chychiang"><strong>Benjamin Chiang</strong></a>
    <a href="https://www.linkedin.com/in/taneesh-gupta/"><strong>Taneesh Gupta</strong></a>
    <a href="https://sairlab.org/team/chenw/"><strong>Chen Wang</strong></a>
    <br>
    <a href="https://nik-v9.github.io/"><strong>Nikhil Keetha</strong></a>
    <a href="https://www.cs.cmu.edu/~./katia/"><strong>Katia Sycara</strong></a>
    <a href="https://theairlab.org/team/sebastian/"><strong>Sebastian Scherer</strong></a>
    <br>
  </p>
</p>

![Map It Anywhere (MIA)](/assets/mia_pull_fig.png "Map It Anywhere (MIA)")

## Table of Contents
- [Using the MIA Data Engine](#using-the-mia-data-engine)
- [Downloading the MIA dataset](#downloading-the-mia-dataset)
- [Training](#training)
- [Reproduction](#reproduction)
- [License](#license)
- [Acknowledgement](#acknowledgement)


## Using the MIA data engine

### 0. Setting up the environment
0. Install Docker by following the instructions on the [Docker website](https://www.docker.com/get-started/).
1. Build the Docker image from `mia/Dockerfile` by running:

        docker build -t mia:release mia

2. Launch the container while mounting this repository into the container file system (see the example below):

        docker run -v <PATH_TO_THIS_REPO>:/home/MapItAnywhere --network=bridge -it mia:release
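
   For instance, assuming your shell's current working directory is the repository root, the mount can be written with `$(pwd)` (an illustrative variant of the command above, not an extra required step):

        docker run -v $(pwd):/home/MapItAnywhere --network=bridge -it mia:release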

### 1. Getting FPVs

The first stage of the MIA data engine fetches the first-person-view (FPV) images.
First, if you want to pull your own locations, copy the example configuration from `mia/conf/example.yaml` and edit its cities list to specify the cities you want. Feel free to explore the other well-documented FPV options in the configuration file.

Once configuration is done, simply run the following from inside your Docker container with the working directory set to this repository:

    python3.9 -m mia.fpv.get_fpv --cfg mia/conf/<YOUR_CONFIG>.yaml
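
For example, if your edited copy of the example configuration is saved as `mia/conf/my_cities.yaml` (a hypothetical file name used only for illustration), the two steps look like this:

    cp mia/conf/example.yaml mia/conf/my_cities.yaml   # copy the example config, then edit its cities list
    python3.9 -m mia.fpv.get_fpv --cfg mia/conf/my_cities.yaml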

That's it! The engine will now automatically fetch, filter, and process your FPV images. You may see a few errors stating that some images could not be fetched due to permission limitations; that is normal, and the engine will continue.

Once all your locations have been downloaded, you will see that parquet files, `images`, and `raw_images` have been populated in your `dataset_dir` for each location. You can now move on to getting BEVs.

### 2. Getting BEVs
Once you have the FPV parquet dataframes downloaded, you are ready to fetch and generate the BEV semantic maps.

Edit the documented bev options in your configuration file to suit your use case. The defaults are tuned to what we used to produce the MIA datasets, and you can use them as is.

Once configuration is done, simply run the following from inside your Docker container with the working directory set to this repository:

    python3.9 -m mia.bev.get_bev

The data engine will now fetch, process, and save the semantic masks.

You now have FPV-BEV pairs with associated metadata and camera parameters!

**Note**: to get satellite imagery for comparison, you must first download it by toggling the `store_sat` option in the configuration.

### 3. (Optional) Visualize your data
You can visualize a few samples using the tool `mia/misc_tools/vis_samples.py`.

From inside the container, with the working directory set to this repository, run:

    python3.9 -m mia.misc_tools.vis_samples --dataset_dir /home/mia_dataset_release --locations <LOCATION_OF_INTEREST>
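
For example, to inspect the Pittsburgh samples (assuming `pittsburgh` is one of the locations present under `/home/mia_dataset_release`):

    python3.9 -m mia.misc_tools.vis_samples --dataset_dir /home/mia_dataset_release --locations pittsburgh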

If successful, the script will generate a PDF called `compare.pdf` in the corresponding location directory (e.g. `pittsburgh`). Upon opening it, you should see the metadata, FPVs, and BEVs of a few samples of the dataset.


## Downloading the MIA dataset
Refer to [mia/dataset.md](mia/dataset.md) for instructions.

## Training

### Pre-train with MIA Dataset
To pretrain using our paper configuration, simply run:

    python -m mapper.mapper data.split=<PATH TO SPLIT FILE> data.data_dir=<PATH TO MIA DATASET>

### Finetune with NuScenes Dataset
To finetune using the NuScenes dataset with our paper configuration, run:

    python -m mapper.mapper -cn mapper_nuscenes training.checkpoint=<PATH TO PRETRAINED MODEL> data.data_dir=<PATH TO NUSCENES DATA> data.map_dir=<PATH TO GENERATED NUSCENES MAP>

## Reproduction
#### Dataset Setup
**MIA**: Follow the download instructions in [Downloading the MIA dataset](#downloading-the-mia-dataset).

**NuScenes**: Follow the data generation instructions in [Mono-Semantic-Maps](https://github.com/tom-roddick/mono-semantic-maps?tab=readme-ov-file#nuscenes). To match the newest available information, we use v1.3 of the NuScenes map expansion pack.

**KITTI360-BEV**: Follow the KITTI360-BEV dataset instructions in [SkyEye](https://github.com/robot-learning-freiburg/SkyEye?tab=readme-ov-file#skyeye-datasets).

#### Inference
To generate MIA dataset prediction results (on the test split), use:

    python -m mapper.mapper data.split=<PATH TO SPLIT FILE> data.data_dir=<PATH TO MIA DATASET> training.checkpoint=<TRAINED WEIGHTS> training.eval=true

*To restrict evaluation to specific locations, add `data.scenes` to the arguments. For example, for the held-out cities use `data.scenes="[pittsburgh, houston]"`.*
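
Putting the two together, a full evaluation command on the held-out cities would look like this (the angle-bracket values are placeholders, as above):

    python -m mapper.mapper data.split=<PATH TO SPLIT FILE> data.data_dir=<PATH TO MIA DATASET> training.checkpoint=<TRAINED WEIGHTS> training.eval=true data.scenes="[pittsburgh, houston]"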

To generate NuScenes dataset prediction results (on the validation split), use:

    python -m mapper.mapper -cn mapper_nuscenes training.checkpoint=<PATH TO PRETRAINED MODEL> data.data_dir=<PATH TO NUSCENES DATA> data.map_dir=<PATH TO GENERATED NUSCENES MAP> training.eval=true

To generate KITTI360-BEV dataset prediction results (on the validation split), use:

    python -m mapper.mapper -cn mapper_kitti training.checkpoint=<PATH TO PRETRAINED MODEL> data.seam_root_dir=<PATH TO SEAM ROOT> data.dataset_root_dir=<PATH TO KITTI DATASET> training.eval=true


## License
The FPVs were curated and processed from Mapillary and carry the same CC BY-SA license. This covers all image files, parquet dataframes, and dump.json. The BEVs were curated and processed from OpenStreetMap and carry the same Open Data Commons Open Database License (ODbL). This covers all semantic masks and flood masks. The rest of the data is licensed under the CC BY-SA license.

Code is licensed under the CC BY-SA license.

## Acknowledgement
We thank the authors of the following repositories for their open-source code:
- [OrienterNet](https://github.com/facebookresearch/OrienterNet)
- [Map Machine](https://github.com/enzet/map-machine)
- [Mono-Semantic-Maps](https://github.com/tom-roddick/mono-semantic-maps)
- [Translating Images Into Maps](https://github.com/avishkarsaha/translating-images-into-maps)
- [SkyEye](https://github.com/robot-learning-freiburg/SkyEye)