
This dataset is open access and available only for non-commercial use (except for portions of the dataset explicitly labeled with a cc-by-sa license). A "license" field paired with each dataset entry/sample specifies the Creative Commons license for that entry/sample.

These Creative Commons licenses specify that:

  1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a cc-by-sa license). If you would like to ask about commercial uses of this dataset, please email us.
  2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material.

In addition to the above terms implied by the Creative Commons licenses, by clicking "Access Repository" below you agree:

  1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.
  2. That your contact information (email address and username) can be shared with the model authors as well.



Dataset Summary

Bloom is free, open-source software, along with an associated website (Bloom Library), app, and services, developed by SIL International. Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.

This version of the Bloom Library data is developed specifically for the image captioning task. It includes data from 351 languages across 31 language families. There is a mean of 32 stories and 319 image-caption pairs per language.

Note: If you speak one of these languages and can help provide feedback or corrections, please let us know!

Note: Although this data was used in the training of the BLOOM model, this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉


Of the 500+ languages hosted on Bloom Library, 351 languages are available in this dataset. Here are the corresponding ISO 639-3 codes:

aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul

Dataset Statistics

Some of the languages included in the dataset have only one or a handful of "stories." These are not split between training, validation, and test. For languages with a higher number of available stories, we include the following statistics:

ISO 639-3 stories image-caption pairs
ahk 101 907
awa 163 1200
bam 4 86
ben 251 2235
bho 173 1172
boz 5 102
bzi 66 497
cak 67 817
ceb 418 2953
cgc 197 1638
chd 1 84
dty 172 1310
eng 2633 28618
fas 129 631
fra 403 5278
hat 260 2411
hau 256 1865
hbb 27 273
ind 259 2177
jra 139 1423
kak 195 1416
kan 21 168
kek 36 621
kir 382 4026
kjb 102 984
kor 132 2773
mai 180 1211
mam 134 1317
mhx 98 945
mya 38 421
myk 34 341
nep 200 1507
new 177 1225
por 163 3101
quc 99 817
rus 353 3933
sdk 11 153
snk 35 356
spa 528 6111
stk 7 113
tgl 0 0
tha 285 3023
thl 185 1464
tpi 201 2162

Dataset Structure

Data Instances

The examples look like this for Hausa:

from datasets import load_dataset

# Specify the language code (e.g., "hau" for Hausa).
iso639_3_letter_code = "hau"

dataset = load_dataset("sil-ai/bloom-captioning", iso639_3_letter_code, 
                       use_auth_token=True, download_mode='force_redownload')

# An entry in the dataset consists of an image caption along with 
# a link to the corresponding image (and various pieces of metadata).

An entry in the dataset looks like this:

{'image_id': '5e7e2ab6-493f-4430-a635-695fbff76cf0',
 'image_url': '',
 'caption': 'Lokacinan almajiran suna tuƙa jirgin ruwansu, amma can cikin dare sun kai tsakiyar tafkin kaɗai. Suna tuƙi da wahala saboda iska tana busawa da ƙarfi gaba da su.',
 'story_id': 'cd17125d-66c6-467c-b6c3-7463929faff9',
 'album_id': 'a3074fc4-b88f-4769-a6de-dc952fdb35f0',
 'original_bloom_language_tag': 'ha',
 'index_in_story': 0}

To download all of the images locally into an images directory, you can do something similar to the following:

import io
import os
import urllib.error
import urllib.request
import uuid
from concurrent.futures import ThreadPoolExecutor
from functools import partial

from PIL import Image
from datasets.utils.file_utils import get_datasets_user_agent

USER_AGENT = get_datasets_user_agent()

def fetch_single_image(image_url, timeout=None, retries=0):
    request = urllib.request.Request(image_url, headers={"user-agent": USER_AGENT})
    for _ in range(retries + 1):
        try:
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image_bytes = io.BytesIO(
            break
        except urllib.error.URLError:
            continue
    else:
        return None  # all attempts failed

    os.makedirs("images", exist_ok=True)
    image_id = str(uuid.uuid4())
    image_path = "images/" + image_id + ".jpg"
    if 'png' in image_url:
        # Flatten the transparency onto a white background before saving as JPEG.
        png ='RGBA')
        png.load()  # required for png.split()
        background ="RGB", png.size, (255, 255, 255))
        background.paste(png, mask=png.split()[3])  # 3 is the alpha channel, 'JPEG', quality=80)
    else:
        image =
        image.convert('RGB').save(image_path, 'JPEG', quality=80)
    return image_path

def fetch_images(batch, num_threads, timeout=None, retries=3):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image_path"] = list(, batch["image_url"]))
    return batch

num_threads = 20
dataset =, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})

Data Fields

The metadata fields below are available:

  • image_id: a unique ID for the image
  • image_url: a link for downloading the image
  • caption: a caption corresponding to the image
  • story_id: a unique ID for the corresponding story in which the caption appears
  • album_id: a unique ID for the corresponding album in which the image appears
  • original_bloom_language_tag: the original language identification from the Bloom library
  • index_in_story: an index corresponding to the order of the image-caption pair in the corresponding story
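
Because each entry carries a story_id and an index_in_story, the ordered sequence of captions for a story can be reconstructed by grouping and sorting. A minimal sketch, using illustrative entries with the same field names as the dataset (the sample values and the helper name are ours, not real dataset content):

```python
from collections import defaultdict

def group_captions_by_story(entries):
    """Group caption entries by story_id, ordered by index_in_story."""
    stories = defaultdict(list)
    for entry in entries:
        stories[entry["story_id"]].append((entry["index_in_story"], entry["caption"]))
    # Sort each story's captions by their index and drop the index.
    return {story_id: [caption for _, caption in sorted(pairs)]
            for story_id, pairs in stories.items()}

# Illustrative entries (not real dataset content).
entries = [
    {"story_id": "s1", "index_in_story": 1, "caption": "second caption"},
    {"story_id": "s1", "index_in_story": 0, "caption": "first caption"},
    {"story_id": "s2", "index_in_story": 0, "caption": "only caption"},
]
print(group_captions_by_story(entries))
# → {'s1': ['first caption', 'second caption'], 's2': ['only caption']}
```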

Data Splits

All languages include a train, validation, and test split. However, for languages with a small number of stories, some of these splits may be empty. In such cases, we recommend using any available data for testing only or for zero-shot experiments.

NOTE: The captions for the test split are currently hidden due to an ongoing shared task competition. They have been replaced by a placeholder <hidden> token.
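
When working with the data locally, it can be useful to skip entries whose caption is the placeholder. A minimal sketch (the field names follow the card; the helper name and sample entries are ours):

```python
HIDDEN_TOKEN = "<hidden>"

def drop_hidden_captions(examples):
    """Remove entries whose caption is the shared-task placeholder."""
    return [ex for ex in examples if ex["caption"] != HIDDEN_TOKEN]

# Illustrative entries (not real dataset content).
examples = [
    {"image_id": "a", "caption": "<hidden>"},
    {"image_id": "b", "caption": "A real caption."},
]
print(drop_hidden_captions(examples))
# → [{'image_id': 'b', 'caption': 'A real caption.'}]
```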


Changelog

  • 25 October 2022 - Initial release
  • 25 October 2022 - Update to include licenses on each data item.