---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: caption
      list: string
    - name: sentids
      list: string
    - name: split
      dtype: string
    - name: img_id
      dtype: string
    - name: filename
      dtype: string
  splits:
    - name: train
      num_bytes: 4044387988
      num_examples: 29000
    - name: test
      num_bytes: 142155397
      num_examples: 1000
    - name: validation
      num_bytes: 140557396.192
      num_examples: 1014
  download_size: 4306311970
  dataset_size: 4327100781.192
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
task_categories:
  - text-generation
  - image-to-text
  - text-to-image
language:
  - pt
pretty_name: Flickr30K Portuguese Translated
size_categories:
  - 10K<n<100K
---

# 🎉 Flickr30K Translated for Portuguese Image Captioning

## 💾 Dataset Summary

Flickr30K Portuguese Translated is a multimodal dataset for Portuguese image captioning. It comprises 31,014 images, each paired with five descriptive captions written by human annotators. The original English captions were translated into Portuguese using the Google Translator API.

The dataset is one of the results of the work available at: https://github.com/laicsiifes/ved-transformer-caption-ptbr.
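
The translation pipeline itself is not distributed with the dataset. For illustration only, here is a minimal sketch of such a step, assuming the deep-translator package as a stand-in for the Google Translator API used by the authors:

```python
# Illustrative sketch of the caption translation step (not the authors' code).
# deep-translator's GoogleTranslator is an assumed stand-in for the Google Translator API.
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source='en', target='pt')

english_caption = 'A black dog carries a green toy in his mouth as he walks through the grass.'
print(translator.translate(english_caption))
# expected to yield something like the first caption shown in the Data Instances section
```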

## 🧑‍💻 How to Get Started with the Dataset

```python
from datasets import load_dataset

dataset = load_dataset('laicsiifes/flickr30k-pt-br')
```
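
Once loaded, the splits behave like regular 🤗 Datasets splits. For example, to inspect the first training example (its fields are described under Data Fields below):

```python
# Inspect the first training example; fields are documented in the Data Fields section
example = dataset['train'][0]

print(example['caption'])                      # list of 5 Portuguese captions
print(example['img_id'], example['filename'])  # image identifier and file name
example['image'].show()                        # the image is decoded as a PIL.Image.Image
```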

## ✍️ Languages

The image descriptions in the dataset are in Portuguese.

## 🧱 Dataset Structure

### 📝 Data Instances

An example looks like the following:

```python
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333>,
  'caption': [
    'Um cachorro preto carrega um brinquedo verde na boca enquanto caminha pela grama.',
    'Um cachorro preto molhado carrega um brinquedo verde pela grama.',
    'Um cachorro preto carregando algo pela grama.',
    'Um cachorro na grama com um item azul na boca.',
    'Um cachorro preto tem um brinquedo azul na boca.'
  ],
  'sentids': ['450', '451', '452', '453', '454'],
  'split': 'train',
  'img_id': '90',
  'filename': '1026685415.jpg'
}
```

### 🗃️ Data Fields

The data instances have the following fields (a short usage sketch follows the list):

- `image`: a `PIL.Image.Image` object containing the image.
- `caption`: a list of `str` containing the 5 captions related to the image.
- `sentids`: a list of `str` containing the 5 ordered identification numbers, one per caption.
- `split`: a `str` indicating the data split; it stores the values `train`, `val`, or `test`.
- `img_id`: a `str` containing the image identification number.
- `filename`: a `str` containing the name of the image file.
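
For illustration, the sketch below pairs each caption with its `sentids` entry and writes the decoded image back to disk:

```python
from datasets import load_dataset

# Load only the test split to keep the example small
ds = load_dataset('laicsiifes/flickr30k-pt-br', split='test')

example = ds[0]
for sid, cap in zip(example['sentids'], example['caption']):
    print(sid, cap)                         # each caption with its identification number

example['image'].save(example['filename'])  # 'image' decodes to a PIL.Image.Image
```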

### ✂️ Data Splits

The dataset is partitioned using the Karpathy splitting approach for image captioning (Karpathy and Fei-Fei, 2015).

| Split      | Samples | Average Caption Length (Words) |
|:-----------|--------:|-------------------------------:|
| Train      | 29,000  | 12.1 ± 5.1                     |
| Validation | 1,014   | 12.3 ± 5.3                     |
| Test       | 1,000   | 12.2 ± 5.4                     |
| Total      | 31,014  | 12.1 ± 5.2                     |
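
These statistics can be recomputed from the released splits. A minimal sketch, assuming caption length is counted in whitespace-separated words:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset('laicsiifes/flickr30k-pt-br')

for split in ('train', 'validation', 'test'):
    captions = ds[split]['caption']  # reading one column avoids decoding the images
    lengths = [len(c.split()) for caps in captions for c in caps]
    print(f'{split}: {len(ds[split])} samples, '
          f'{np.mean(lengths):.1f} ± {np.std(lengths):.1f} words')
```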

## 📋 BibTeX entry and citation info

```bibtex
@inproceedings{bromonschenkel2024comparative,
  title        = "A Comparative Evaluation of Transformer-Based Vision
                  Encoder-Decoder Models for Brazilian Portuguese Image Captioning",
  author       = "Bromonschenkel, Gabriel and Oliveira, Hil{\'a}rio and
                  Paix{\~a}o, Thiago M.",
  booktitle    = "Proceedings...",
  organization = "Conference on Graphics, Patterns and Images, 37. (SIBGRAPI)",
  year         = "2024"
}
```