
Dataset Card for CAT ManyNames

Dataset Summary

CAT ManyNames is the Catalan version of the ManyNames dataset suitable for training Language & Vision models in the task of object naming. The corpus consists of more than 23K images and their corresponding annotations.

The human-annotated test set has been built to evaluate the quality of the CAT ManyNames dataset. Its corpus consists of 1,072 images and their corresponding annotations (ca. 10 annotations per image).

Supported Tasks and Leaderboards

Object naming; the dataset is suitable for training and evaluating Language & Vision models.


Languages

The dataset is in Catalan (ca).

Dataset Structure

Data Instances

An instance pairs an image with the names given to the depicted object and their counts:

responses: {"guepard": 27, "lleopard": 3, "animal": 2, "gat": 2, "tigre": 2}
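Because nested cells are stored as Python-dict literals, they can be parsed safely with `ast.literal_eval`; a minimal sketch using the instance above:

```python
import ast

# A "responses" cell as stored in the TSV: a Python-dict literal
# mapping each produced name to the number of annotators who gave it.
raw = '{"guepard": 27, "lleopard": 3, "animal": 2, "gat": 2, "tigre": 2}'
responses = ast.literal_eval(raw)

# Most frequent name and total response count for this object.
top_name = max(responses, key=responses.get)
total = sum(responses.values())
print(top_name, total)  # guepard 36
```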

Data Fields

Both CAT ManyNames and its human-annotated test set are provided as tab-separated text files (.tsv). The first row contains the column labels. Nested data is stored as Python dictionaries (i.e., "{key: value}"). The columns are labelled as follows (the most important columns are listed first):

  • responses: Correct responses and their counts
  • topname: The most frequent name of the object in the largest cluster
  • domain: The MN domain of the object
  • incorrect (not available for the human-annotated test set): Incorrect responses and their counts
  • singletons (not available for the human-annotated test set): All responses which were given only once and are not synonyms or hypernyms of the top name (these are included in responses)
  • total_responses: Sum count of correct responses
  • split: Use of the image in training vs. test vs. validation
  • vg_object_id: The VisualGenome id of the object
  • vg_image_id: The VisualGenome id of the image
  • topname_agreement (only available for the test split): The number of responses for the top name divided by the number of total responses
  • jaccard_similarity (only available for the test split): Jaccard similarity index of the responses column in CAT ManyNames and its human-annotated test set
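The derived columns above can be reproduced from the raw fields. The sketch below uses a tiny inline sample with hypothetical values in place of the released TSV (with the real data you would read the file itself with `pd.read_csv(path, sep="\t")`), computes topname_agreement as defined above, and implements the Jaccard similarity between two sets of names:

```python
import ast
import io

import pandas as pd

# Hypothetical one-row sample standing in for the released TSV.
sample = (
    "vg_image_id\tresponses\ttopname\ttotal_responses\n"
    '1\t{"guepard": 27, "lleopard": 3}\tguepard\t30\n'
)
df = pd.read_csv(io.StringIO(sample), sep="\t")

# Nested columns are stored as Python-dict literals, so parse them explicitly.
df["responses"] = df["responses"].apply(ast.literal_eval)

# topname_agreement: responses for the top name / total responses.
df["topname_agreement"] = df.apply(
    lambda r: r["responses"][r["topname"]] / r["total_responses"], axis=1
)

def jaccard(a, b):
    """Jaccard similarity between two sets of names."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# E.g. comparing the machine-translated responses with a hypothetical
# set of human-elicited names for the same image.
human = {"guepard", "lleopard", "felí"}
sim = jaccard(df.loc[0, "responses"], human)
print(df.loc[0, "topname_agreement"], sim)
```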

Data Splits

  • Train: 21,503 images
  • Val: 1,110 images
  • Test: 1,072 images

Dataset Creation

Curation Rationale

We created this corpus to contribute to the development of multimodal models in Catalan, a low-resource language.

Source Data

Initial Data Collection and Normalization

The original visual data comes from VisualGenome. The objects are grouped into seven domains: people, animals_plants, vehicles, food, home, buildings, and clothing.

Who are the source language producers?

The original ManyNames dataset.


Annotations

Annotations for CAT ManyNames were obtained by machine-translating the original English ManyNames annotations. The test set was manually annotated by humans.

Annotation process


Who are the annotators?

The human-annotated test set was produced by 220 volunteer native speakers of Catalan.

Personal and Sensitive Information

There is no sensitive information in this dataset.

Considerations for Using the Data

Social Impact of Dataset

We hope this corpus contributes to the development of multimodal models in Catalan, a low-resource language.

Discussion of Biases

We have not taken any steps to reduce the impact of biases that may be present in the data.

Other Known Limitations


Additional Information

Dataset Curators

Mar Domínguez Orfila (mar dot dominguez01 at estudiant dot upf dot edu)

Licensing Information

CAT ManyNames is licensed under a Creative Commons Attribution 4.0 International License.

Citation Information

@inproceedings{dominguez-orfila-etal-2022-cat,
    title = "{CAT} {M}any{N}ames: A New Dataset for Object Naming in {C}atalan",
    author = "Dom{\'\i}nguez Orfila, Mar  and
      Melero Nogu{\'e}s, Maite  and
      Boleda Torrent, Gemma",
    booktitle = "Proceedings of the Workshop on Cognitive Aspects of the Lexicon",
    month = nov,
    year = "2022",
    address = "Taipei, Taiwan",
    publisher = "Association for Computational Linguistics",
    url = "",
    pages = "31--36",
    abstract = "Object Naming is an important task within the field of Language and Vision that consists of generating a correct and appropriate name for an object given an image. The ManyNames dataset uses real-world human annotated images with multiple labels, instead of just one. In this work, we describe the adaptation of this dataset (originally in English) to Catalan, by (i) machine-translating the English labels and (ii) collecting human annotations for a subset of the original corpus and comparing both resources. Analyses reveal divergences in the lexical variation of the two sets showing potential problems of directly translated resources, particularly when there is no resource to a proper context, which in this case is conveyed by the image. The analysis also points to the impact of cultural factors in the naming task, which should be accounted for in future cross-lingual naming tasks.",
}