Task Categories: image-to-text
Languages: English
Multilinguality: monolingual
Size Categories: 1M<n<10M
Language Creators: found
Annotations Creators: found
Source Datasets: original
Licenses: unknown

Dataset Card for SBU Captioned Photo Dataset

Dataset Summary

The SBU Captioned Photo Dataset is a collection of 1 million images from Flickr paired with user-generated captions.

Dataset Preprocessing

This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:

from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib.request

import PIL.Image

from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent


USER_AGENT = get_datasets_user_agent()


def fetch_single_image(image_url, timeout=None, retries=0):
    # Try the download up to `retries + 1` times; return None if every attempt fails.
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Download all images in the batch concurrently and store them in a new "image" column.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("sbu_captions")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
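
Some of the Flickr URLs are no longer reachable, in which case fetch_single_image returns None. A minimal sketch for dropping such rows afterwards, assuming the "image" column created by the code above:

# Keep only the rows whose image was downloaded successfully.
dset = dset.filter(lambda example: example["image"] is not None)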

Supported Tasks and Leaderboards

  • image-to-text: This dataset can be used to train a model for image captioning, where the goal is to predict a caption given an image (see the sketch below).
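
As a rough illustration only (the checkpoint below is an assumption, not a model trained on this dataset), a pretrained captioning model can be run on one of the dataset's image URLs with the transformers image-to-text pipeline:

from transformers import pipeline

# Hypothetical checkpoint; any image-to-text model works here.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
# The pipeline accepts an image URL (or a PIL image) and returns
# a list of dicts with a "generated_text" key.
print(captioner("http://static.flickr.com/2723/4385058960_b0f291553e.jpg"))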

Languages

All captions are in English.

Dataset Structure

Data Instances

Each instance in the SBU Captioned Photo Dataset represents a single image with a caption and a user_id:

{
  'image_url': 'http://static.flickr.com/2723/4385058960_b0f291553e.jpg',
  'user_id': '47889917@N08',
  'caption': 'A wooden chair in the living room'
}
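
A minimal sketch for inspecting such a record without downloading the full archive (assuming the source archive is reachable; streaming mode fetches rows lazily):

from datasets import load_dataset

# Stream the training split and print its first example.
dset = load_dataset("sbu_captions", split="train", streaming=True)
print(next(iter(dset)))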

Data Fields

  • image_url: Static URL for downloading the image associated with the post.
  • caption: Textual description of the image.
  • user_id: Flickr ID of the caption's author.

Data Splits

All the data is contained in a single training split of 1 million instances.

Dataset Creation

Curation Rationale

From the paper:

One contribution is our technique for the automatic collection of this new dataset – performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results.

Source Data

The source images come from Flickr.

Initial Data Collection and Normalization

From the paper:

One key contribution of our paper is a novel web-scale database of photographs with associated descriptive text. To enable effective captioning of novel images, this database must be good in two ways: 1) It must be large so that image based matches to a query are reasonably similar, 2) The captions associated with the data base photographs must be visually relevant so that transferring captions between pictures is useful. To achieve the first requirement we query Flickr using a huge number of pairs of query terms (objects, attributes, actions, stuff, and scenes). This produces a very large, but noisy initial set of photographs with associated text.

Who are the source language producers?

The Flickr users.

Annotations

Annotation process

Text descriptions associated with the images are inherited as annotations/captions.

Who are the annotators?

The Flickr users.

Personal and Sensitive Information

Considerations for Using the Data

Social Impact of Dataset

Discussion of Biases

Other Known Limitations

Additional Information

Dataset Curators

Vicente Ordonez, Girish Kulkarni and Tamara L. Berg.

Licensing Information

Not specified.

Citation Information

@inproceedings{NIPS2011_5dd9db5e,
 author = {Ordonez, Vicente and Kulkarni, Girish and Berg, Tamara},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {J. Shawe-Taylor and R. Zemel and P. Bartlett and F. Pereira and K.Q. Weinberger},
 pages = {},
 publisher = {Curran Associates, Inc.},
 title = {Im2Text: Describing Images Using 1 Million Captioned Photographs},
 url = {https://proceedings.neurips.cc/paper/2011/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf},
 volume = {24},
 year = {2011}
}

Contributions

Thanks to @thomasw21 for adding this dataset.