
MapPool - Bubbling up an extremely large corpus of maps for AI

many small air bubbles containing colorful maps arising with light rays under the ocean (AI-generated image)

MapPool is a dataset of 75 million potential maps and their textual captions. It has been derived from CommonPool, a dataset of 12 billion text-image pairs from the Internet. The images have been encoded by a vision transformer and classified into maps and non-maps by a support vector machine. This approach outperforms previous models and yields a validation accuracy of 98.5%. The MapPool dataset may help to train data-intensive architectures in order to establish vision and language foundation models specialized in maps. The analysis of the dataset and the exploration of the embedding space offer large potential for future work.

How is the data structured?

| Key | Meaning |
|---|---|
| uid | Unique identifier |
| url | Link to the image |
| text | Textual description of the image |
| original_width / original_height | Dimensions of the image |
| sha256 | Hash of the image (to verify that a downloaded image matches the one recorded in the dataset) |
| l14_img | Embedding of the image (768 dimensions) |
| l14_txt | Embedding of the textual description (768 dimensions) |
| clip_l14_similarity_score | Similarity between the image and text (higher values indicate higher similarity) |
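The sha256 column can be used to check that an image fetched from `url` still matches the one recorded in the dataset. A minimal sketch (the helper name is hypothetical):

```python
import hashlib

def matches_recorded_hash(image_bytes: bytes, recorded_sha256: str) -> bool:
    """Return True if the image bytes hash to the recorded sha256 value."""
    return hashlib.sha256(image_bytes).hexdigest() == recorded_sha256

# In practice, pass the bytes downloaded from the row's `url`
# together with the row's `sha256` value.
```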

How can this repository be downloaded?

Simply use Git (or TortoiseGit):

git clone https://huggingface.co/datasets/sraimund/MapPool/

Alternatively, use the Hugging Face Hub API:

import json
import os
from huggingface_hub import hf_hub_download

download_folder = "<your-download-folder>"
repo_id = "sraimund/MapPool"

# this file is given at the root of this repository
with open("file_list.json") as f:
    file_list = json.load(f)

for part, files in file_list.items():
    for file in files:
        file_path = f"{download_folder}/{part}/{file}.parquet"

        if os.path.exists(file_path):
            continue

        hf_hub_download(repo_type="dataset",
                        repo_id=repo_id,
                        filename=f"{part}/{file}.parquet",
                        local_dir=download_folder)

About 225 GB of space is required. The amount doubles when using Git, since the files are duplicated in the .git folder.

How can the parquet files be read?

You can read parquet files with pandas:

import pandas as pd

df = pd.read_parquet("<file_name>.parquet")

Additionally, either the pyarrow or the fastparquet library is required.
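The clip_l14_similarity_score column should correspond to the cosine similarity between the image and text embeddings stored in the same row. Assuming the embeddings are stored as 768-dimensional vectors, the score can be recomputed as a sketch:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage with one row of a MapPool parquet file:
# row = df.iloc[0]
# score = cosine_similarity(np.array(row["l14_img"]), np.array(row["l14_txt"]))
```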

How can the map images be downloaded?

You can download the map images with img2dataset.

from img2dataset import download

download(
    thread_count=64,
    url_list="<file_name>.parquet",
    output_folder="<folder_path>",
    resize_mode="no",
    output_format="files",
    input_format="parquet",
    url_col="url",
    caption_col="text",
    verify_hash=("sha256", "sha256"),
)

For Windows users:

import multiprocessing as mp
from img2dataset import download

# a small patch is also needed: https://github.com/rom1504/img2dataset/issues/347
def main():
    download(...)

if __name__ == "__main__":
    mp.freeze_support()
    main()

As the Internet is constantly changing, only about two thirds of the original images (= 48 million) are still downloadable. About 6 TB of space is required to store them in their original formats, and 100 GB is needed when creating 128x128px thumbnails in the WebP format with 60% quality. Downloading the images took 40 hours with 24 CPUs, 30 GB RAM, and 40 MB/s of network traffic on average.
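The thumbnail setup described above can be expressed as an img2dataset configuration. This is a sketch only: the resizing and encoding parameter names are taken from img2dataset's documented options and should be checked against the installed version.

```python
from img2dataset import download

# Sketch: create 128x128 WebP thumbnails at ~60% quality
# instead of storing the original images.
download(
    url_list="<file_name>.parquet",
    input_format="parquet",
    url_col="url",
    caption_col="text",
    output_folder="<thumbnail_folder>",
    output_format="files",
    resize_mode="center_crop",
    image_size=128,
    encode_format="webp",
    encode_quality=60,
)
```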

How was this dataset created?

MapPool has been created by classifying the image embeddings included in CommonPool, which have been generated by two pre-trained vision transformers (ViTs). The L/14 model, which has more parameters and outputs 768-dimensional embeddings, has been chosen since it achieved higher classification accuracies. In this work, different map classifiers (Table 1) from scikit-learn with the Intel Extension have been trained on the embeddings of 1,860 maps and 1,860 non-maps, and have been evaluated on 1,240 maps and 1,240 non-maps (Schnürer et al. 2021). Only simple classification models have been considered, both for efficiency and because the vision transformer has already produced meaningful embeddings.

| Model | Accuracy (%) |
|---|---|
| Xception / InceptionResNetV2 (= baseline) | 96.7 |
| ViT-L/14 + L2 distance to averaged embeddings | 96.7 |
| ViT-L/14 + Logistic Regression | 97.9 |
| ViT-L/14 + Multilayer Perceptron (3x256 units) | 98.2 |
| ViT-L/14 + Support Vector Machine (polynomial, degree 3) | 98.5 |

With the Support Vector Machine, 500,000 image embeddings could be classified within 10 seconds. Downloading, classifying the whole dataset, and uploading the results took about 50 hours with 10 CPUs, 120GB RAM, and 500MB/s of network traffic on average.
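As an illustration, the simplest embedding classifier in Table 1 (L2 distance to averaged embeddings) can be sketched in a few lines of NumPy. The data here is synthetic; the actual model was trained on ViT-L/14 embeddings of labeled maps and non-maps.

```python
import numpy as np

class CentroidClassifier:
    """Assigns the label of the nearest class centroid (L2 distance)."""

    def fit(self, embeddings: np.ndarray, labels: np.ndarray):
        # one averaged embedding per class
        self.centroids = {c: embeddings[labels == c].mean(axis=0)
                          for c in np.unique(labels)}
        return self

    def predict(self, embeddings: np.ndarray) -> np.ndarray:
        classes = list(self.centroids)
        # distance of every embedding to every class centroid
        dists = np.stack([np.linalg.norm(embeddings - self.centroids[c], axis=1)
                          for c in classes], axis=1)
        return np.array(classes)[dists.argmin(axis=1)]

# Synthetic 768-dimensional "map" and "non-map" embeddings for illustration
rng = np.random.default_rng(0)
maps = rng.normal(loc=1.0, size=(100, 768))
non_maps = rng.normal(loc=-1.0, size=(100, 768))
X = np.vstack([maps, non_maps])
y = np.array([1] * 100 + [0] * 100)

clf = CentroidClassifier().fit(X, y)
```

The SVM that was actually used replaces the centroid rule with a polynomial decision boundary, but operates on the same 768-dimensional embeddings.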

Is the inference model available?

Yes, try it out and download it here: https://huggingface.co/spaces/sraimund/MapPool

What are the limitations?

A qualitative inspection of the detected maps looks promising; however, the actual accuracy on CommonPool is unknown. The false negative rate in particular is hard to estimate due to the high number of non-maps among the CommonPool images. Mixtures of natural images and maps (e.g., a map printed on a bag, a map in a park) have not been examined further.

Textual embeddings have not been considered in the separation process so far. The training dataset for the map classifier has a large visual variety, covering pictorial maps and 3D maps as well as sketches and paintings. However, the textual descriptions may be biased, since the training dataset originates from only one source.

What are future research directions?

A detailed analysis of the content and metadata of maps in MapPool, potentially resulting in a search engine, is the subject of future work. Additionally, the visual and textual embedding space may be explored to refine the map classifier and to detect duplicates among the images. It can be examined whether training with map-only images leads to better results for cartographic tasks, for instance generating maps based on textual prompts, than with a mixture of maps and other images.

Feel free to contact me if you would like to collaborate!

Disclaimer

The creator is not responsible for the content of linked external websites and does not guarantee against any damage that the content of these websites may cause.

License

The dataset is published under the Creative Commons Attribution 4.0 license. Please respect the copyright of the original images when making use of MapPool.

Citation

@inproceedings{Schnürer_MapPool_2024,
  title={MapPool - Bubbling up an extremely large corpus of maps for AI},
  author={Schnürer, Raimund},
  year={2024}
}