Cherie Ho
committed on
Commit · 66b84aa
Parent(s): 5855c27
README updates: docker file, add another dataset download link.
- README.md +1 -1
- mia/dataset.md +2 -0
README.md
CHANGED
@@ -42,7 +42,7 @@
 0. Install docker by following the instructions on their [website](https://www.docker.com/get-started/)
 1. Build the docker image `mia/Dockerfile` by running:
 
-docker build -t mia:release mia
+docker build -t mia:release mia
 2. Launch the container while mounting this repository to the container file system.
 
 docker run -v <PATH_TO_THIS_REPO>:/home/MapItAnywhere --network=bridge -it mia:release
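
For reference, a minimal end-to-end sketch of the Docker workflow from the README hunk above, assuming Docker is already installed and `<PATH_TO_THIS_REPO>` is replaced with the path to your local clone:

```sh
# Build the image from the Dockerfile in the mia/ directory.
docker build -t mia:release mia

# Launch an interactive container with the repository mounted inside it.
# <PATH_TO_THIS_REPO> is a placeholder for your local MapItAnywhere checkout.
docker run -v <PATH_TO_THIS_REPO>:/home/MapItAnywhere --network=bridge -it mia:release
```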
mia/dataset.md
CHANGED
@@ -16,6 +16,8 @@
 The Map It Anywhere (MIA) dataset contains large-scale map-prediction-ready data curated from public datasets.
 Specifically, the dataset empowers Bird's Eye View (BEV) map prediction given First Person View (FPV) RGB images, by providing diversity in location and cameras beyond current datasets. The dataset contains 1.2 million high-quality first-person-view (FPV) and bird's-eye-view (BEV) map pairs covering 470 square kilometers, which to the best of our knowledge provides 6x more coverage than the closest publicly available map prediction dataset, thereby facilitating future map prediction research on generalizability and robustness. The dataset is curated using our MIA data engine [code](https://github.com/MapItAnywhere/MapItAnywhere) to sample from six urban-centered locations: New York, Chicago, Houston, Los Angeles, Pittsburgh, and San Francisco.
 
+Dataset download links are available [here](https://cmu.box.com/s/6tnlvikg1rcsai0ve7t8kgdx9ago9x9q). Please refer to the [Getting Started](#getting-started) page for how to use the dataset.
+
 ## Data
 ### Dataset Structure
 
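
Once a copy of the dataset has been downloaded from the link added above, a hypothetical variant of the README's `docker run` command could also expose it to the container. This is a sketch only: `<PATH_TO_DOWNLOADED_DATASET>` and the in-container path `/home/MapItAnywhere/data` are assumed placeholders, not paths documented in this commit.

```sh
# Sketch: mount both the repository and a locally downloaded MIA dataset copy.
# <PATH_TO_THIS_REPO> and <PATH_TO_DOWNLOADED_DATASET> are local placeholders;
# /home/MapItAnywhere/data is an assumed mount point inside the container.
docker run \
  -v <PATH_TO_THIS_REPO>:/home/MapItAnywhere \
  -v <PATH_TO_DOWNLOADED_DATASET>:/home/MapItAnywhere/data \
  --network=bridge -it mia:release
```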