---
license: mit
---
# 🗿 Megalith-10m

## What is Megalith-10m?

Megalith-10m is a dataset of ~10 million links to Flickr images that were categorized as "photo" and carry one of the following license designations:
- No known copyright restrictions (Flickr commons), or
- United States Government Work, or
- Public Domain Dedication (CC0), or
- Public Domain Mark
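For reference, the Flickr API represents these licenses with numeric IDs (per its `flickr.photos.licenses.getInfo` endpoint; verify against the live API before relying on them). A minimal sketch of the mapping:

```python
# Flickr license IDs for the four license categories Megalith-10m accepts.
# (IDs per flickr.photos.licenses.getInfo; double-check against the live API.)
MEGALITH_LICENSE_IDS = {
    7: "No known copyright restrictions (Flickr commons)",
    8: "United States Government Work",
    9: "Public Domain Dedication (CC0)",
    10: "Public Domain Mark",
}

def is_megalith_license(license_id: int) -> bool:
    """Return True if a Flickr license ID is in Megalith-10m's allowed set."""
    return license_id in MEGALITH_LICENSE_IDS
```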
## What's the intended use of Megalith-10m?

Megalith-10m is intended to contain only links to wholesome, unedited, uncopyrighted photographs - the sort of images that we humans see when we walk around outside. I collected Megalith-10m for the purpose of training neural networks, but you're welcome to use it for whatever you want. Of course, I recommend conducting your own independent analysis of content and copyright status before using Megalith-linked images in Serious Projects.
## Where can I get text captions for Megalith-10m?

- DrawThings.ai have uploaded megalith-10m-sharecap (captions made with ShareCaptioner)
- AI Picasso have uploaded megalith-10m-florence2 (captions made with Florence 2)
- CaptionEmporium have uploaded flickr-megalith-10m-internvl2-multi-caption (captions made with InternVL2-8B, plus shorter single-sentence captions made by summarizing the InternVL2/Florence2/ShareCaptioner results with Llama3.1-8B)
- DrawThings.ai is working on further captioning with MoonDream2
## How can I efficiently download the images referenced by Megalith-10m?
- DrawThings.ai has archived the images linked by Megalith-10m here: https://huggingface.co/datasets/drawthingsai/megalith-10m
- If you want to download Megalith-10m images directly from Flickr, I posted a sample downloading command you can use with img2dataset
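As an illustration (not the exact command the author posted), img2dataset can consume a file of URLs via its Python entry point. A minimal sketch, assuming a local `megalith.parquet` with a `url` column - both the filename and column name here are hypothetical, and the tuning values are illustrative:

```python
# Settings for an img2dataset run (values illustrative; tune for your machine).
DOWNLOAD_SETTINGS = {
    "url_list": "megalith.parquet",   # hypothetical local file of Megalith-10m links
    "input_format": "parquet",
    "url_col": "url",                 # hypothetical column name
    "output_folder": "megalith-images",
    "output_format": "webdataset",    # tar shards, convenient for training loops
    "image_size": 256,                # matches the dataset's 256x256 minimum
    "resize_mode": "keep_ratio",
    "processes_count": 8,
    "thread_count": 32,
}

if __name__ == "__main__":
    # Requires `pip install img2dataset`; this actually downloads images.
    from img2dataset import download
    download(**DOWNLOAD_SETTINGS)
```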
## How was Megalith-10m collected?

I used the Flickr API to query for photos matching some basic criteria (SFW photo with CC0 / public domain license info), which gave me around 12 million links. I then used various filtering strategies to exclude ~2 million image links which didn't appear to point to wholesome, public-domain, minimally-edited photos. These filtering strategies included:
- Account-level filtering, based on
  - Manual adjudication for the top 5000 most prolific accounts
  - Repeated-watermark detection
- Photo-level filtering, based on
  - Image metadata
    - Mention of copyright restrictions in the EXIF tags
    - Mention of copyright restrictions in the text description
  - Image content
    - Duplicate detection
    - CLIP-assisted checking for
      - Clearly non-photo images (illustrations, screenshots, 3d renders, etc.)
      - Clearly non-wholesome images (violence, nudity, etc.)
    - Minimum-resolution enforcement (at least 256x256 pixels)
  - Manual spot-checking of some images and metadata
## What content does Megalith-10m contain?
The demo notebook shows a random sample of 100 images being loaded from the links in Megalith-10m.
Based on this random sample, I would estimate the following dataset statistics:
- 5-7% of images may have minor edits or annotations (timestamps, color grading, borders, etc.)
- 1-2% of images may be copyright-constrained (watermarks or text descriptions cast doubt on the license metadata)
- 1-2% of images may be non-wholesome (guns, suggestive poses, etc.)
- 1-2% of images may be non-photos (paintings, screenshots, etc.)
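Since these estimates come from a 100-image sample, they carry noticeable sampling error. One standard way to quantify it (not part of the original analysis) is a Wilson score interval for a binomial proportion:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# e.g. 6 flagged images out of the 100 sampled:
lo, hi = wilson_interval(6, 100)
# The true rate could plausibly be anywhere from ~2.8% to ~12.5%.
```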
## Is 10 million images really enough to teach a neural network about the visual world?
For the parts of the visual world that are well-represented in Megalith-10m, definitely! Projects like CommonCanvas, Mitsua Diffusion, and Matryoshka Diffusion have shown that you can train usable generative models on similarly-sized image datasets. Of course, many parts of the world aren't well-represented in Megalith-10m, so you'd need additional data to learn about those.
## What have people done with Megalith-10m?
- AI Picasso have successfully trained a full text-to-image model CommonArt β on Megalith-10m (and other open datasets).
- I've successfully trained small text-to-image models on Megalith-10m for my own education.
- Megalith-10m was among the datasets used to train Janus, DeepSeek's autoregressive model for multimodal understanding and generation.