<img src="map_bubbles.png" alt="many small air bubbles containing colorful maps arising with light rays under the ocean (AI-generated image)" width="256"/>
MapPool is a dataset of 75 million potential maps and textual captions. It has been derived from [CommonPool](https://www.datacomp.ai/), a dataset of 12 billion text-image pairs from the Internet. The images have been encoded by a vision transformer and classified into maps and non-maps by a support vector machine. This approach outperforms previous models and yields a validation accuracy of 98.5%. MapPool may help to train data-intensive architectures in order to establish vision and language foundation models specialized in maps. Analyzing the dataset and exploring its embedding space offer great potential for future work.
## How is the data structured?
    multiprocessing.freeze_support()
    main()
```
As the Internet is constantly changing, about two thirds of the original images (= 48 million) are still downloadable. Storing them in their original formats requires 6 TB of space, whereas 128x128 px thumbnails in the WebP format at 60% quality need only 100 GB. Downloading the images took 40 hours with 24 CPUs, 30 GB of RAM, and 40 MB/s of network traffic on average.
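The thumbnailing step could look roughly like this; the helper below is an illustrative sketch using Pillow, with an assumed function name, not the script actually used for MapPool:

```python
from io import BytesIO

from PIL import Image  # pip install Pillow


def make_thumbnail(data: bytes, size: int = 128, quality: int = 60) -> bytes:
    """Shrink an image to fit into size x size pixels and re-encode it as WebP."""
    img = Image.open(BytesIO(data)).convert("RGB")
    img.thumbnail((size, size))  # in-place, preserves the aspect ratio
    out = BytesIO()
    img.save(out, format="WEBP", quality=quality)
    return out.getvalue()
```

At these settings, each thumbnail averages a little over 1 KB, which is how 48 million images compress from terabytes down to the order of 100 GB.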
## How was this dataset created?
MapPool has been created by classifying the image embeddings included in [CommonPool](https://huggingface.co/datasets/mlfoundations/datacomp_xlarge), which have been generated by two pre-trained vision transformers (ViTs). The [L/14 model](https://github.com/mlfoundations/open_clip), which has more parameters and outputs 768-dimensional embeddings, has been chosen since it achieves higher classification accuracies. In this work, different map classifiers (Table 1) from [scikit-learn](https://scikit-learn.org/) with the [Intel Extension](https://intel.github.io/scikit-learn-intelex) have been trained on the embeddings of 1,860 maps and 1,860 non-maps, and evaluated on 1,240 maps and 1,240 non-maps ([Schnürer et al. 2021](https://doi.org/10.1080/00087041.2020.1738112)). Only simple classification models have been considered: they are efficient, and meaningful embeddings had already been created by the vision transformer.
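A minimal sketch of this setup with scikit-learn, using random vectors as stand-ins for the real 768-dimensional CLIP embeddings (the sample counts follow the text; the data and hyperparameters are purely illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for ViT-L/14 image embeddings: the two classes are separated
# by a small mean shift so the toy problem is learnable.
maps = rng.normal(0.2, 1.0, size=(1860, 768))
non_maps = rng.normal(-0.2, 1.0, size=(1860, 768))

X = np.vstack([maps, non_maps])
y = np.array([1] * 1860 + [0] * 1860)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.4, random_state=0, stratify=y
)

clf = SVC()  # RBF-kernel Support Vector Machine
clf.fit(X_train, y_train)
print(f"Validation accuracy: {clf.score(X_val, y_val):.3f}")
```

With real embeddings, only the data-loading step changes: the classifier is trained directly on the precomputed vectors, which is why even simple models suffice.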
| Model                                                    | Accuracy |
|----------------------------------------------------------|----------|
A qualitative inspection of the detected maps looks promising; however, the actual accuracy is unknown. The false-negative rate in particular is hard to estimate due to the high number of non-maps among the CommonPool images. Mixtures of natural images and maps (e.g., a map printed on a bag, a map in a park) have not been examined further.
Textual embeddings have not yet been considered in the separation process. The training dataset for the map classifier has a large visual variety, including pictorial maps and 3D maps as well as sketches and paintings. However, the textual descriptions may be biased, since the training dataset originates from a single source.
## What are future research directions?
A detailed analysis of the content and metadata of the maps in MapPool, potentially resulting in a search engine, is the subject of future work. Additionally, the visual and textual embedding space may be explored to refine the map classifier and to detect duplicates among the images. It can also be examined whether training on map-only images leads to better results for cartographic tasks, for instance generating maps from textual prompts, than training on a mixture of maps and other images.
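Duplicate detection in the embedding space could, for example, start from pairwise cosine similarity; the following is an illustrative sketch, not part of MapPool:

```python
import numpy as np


def near_duplicates(embeddings: np.ndarray, threshold: float = 0.95) -> list[tuple[int, int]]:
    """Return index pairs whose embeddings have a cosine similarity above threshold."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T                         # pairwise cosine similarities
    i, j = np.where(np.triu(sim, k=1) > threshold)  # upper triangle: each pair once
    return list(zip(i.tolist(), j.tolist()))


# Toy 2D "embeddings": the first two vectors point in nearly the same direction.
emb = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
print(near_duplicates(emb))  # → [(0, 1)]
```

For 75 million images, the full similarity matrix would not fit in memory; an approximate nearest-neighbour index (e.g., FAISS) would be the practical route at that scale.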
Feel free to contact [me](https://people.epfl.ch/raimund.schnurer) if you would like to collaborate!
## Disclaimer
The creator is not responsible for the content of linked external websites and accepts no liability for any damage their content may cause.
## License
The dataset is published under the Creative Commons Attribution 4.0 license. Please respect the copyright of the original images when making use of MapPool.
## Citation
```
@inproceedings{Schnürer_MapPool_2024,
  title={MapPool - Bubbling up an extremely large corpus of maps for AI},
  author={Schnürer, Raimund},
  year={2024}
}
```