---
license: apache-2.0
---
This dataset contains `<title, encoded_image>` pairs from Medium articles. It was processed from the *Medium Articles Dataset (128k): Metadata + Images* dataset on Kaggle.
The original images were processed in the following way (a code sketch of this pipeline follows the list):

- Given an image of size `(w, h)`, we cropped a square of size `(n, n)` from the center of the image, where `n = min(w, h)`.
- The resulting `(n, n)` image was resized to `(256, 256)`.
- The resulting `(256, 256)` image was encoded into image tokens via the `dalle-mini/vqgan_imagenet_f16_16384` model.
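
For reference, here is a minimal sketch of that pipeline in Python. The crop-and-resize step uses Pillow; the encoding step is left as hedged comments because it depends on the `vqgan_jax` package from the dalle-mini project, and the exact `VQModel.encode` call shown is an assumption. The helper name `center_crop_resize` and the input file name are illustrative.

```python
from PIL import Image


def center_crop_resize(image: Image.Image, size: int = 256) -> Image.Image:
    """Center-crop the largest square, then resize to (size, size)."""
    w, h = image.size
    n = min(w, h)  # side length of the largest centered square
    left, top = (w - n) // 2, (h - n) // 2
    square = image.crop((left, top, left + n, top + n))
    return square.resize((size, size), Image.LANCZOS)


# Example input file; substitute any Medium article image.
img = center_crop_resize(Image.open("article_image.jpg").convert("RGB"))

# Encoding step (assumption: the vqgan_jax package from the dalle-mini
# repository; the encode call below reflects our reading of its API):
# import numpy as np
# from vqgan_jax.modeling_flax_vqgan import VQModel
# vqgan = VQModel.from_pretrained("dalle-mini/vqgan_imagenet_f16_16384")
# pixels = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
# _, encoded_image = vqgan.encode(pixels)  # token ids: 16x16 = 256 per image
```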
Note that this dataset contains ~128k entries, which is too small for training a text-to-image model end to end; it is better suited to operations on a pre-trained model such as dalle-mini (fine-tuning, prompt tuning, etc.).
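
A hypothetical loading snippet with the `datasets` library is shown below. The repo id is a placeholder, while the `title` and `encoded_image` fields are the pair described above:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("user/medium-titles-encoded-images", split="train")

example = ds[0]
print(example["title"])               # article title, usable as a text prompt
print(len(example["encoded_image"]))  # sequence of VQGAN image-token ids
```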