## Introduction
Modern image captioning relies heavily on extracting knowledge from images, such as the objects they contain, to capture the static story in the image. In this paper, we propose a textual visual context dataset for captioning, in which the publicly available COCO Captions dataset (Lin et al., 2014) has been extended with information about the scene (such as the objects in the image). Since this information has textual form, it can be used to leverage any NLP task, such as text similarity or semantic relatedness methods, in captioning systems, either as an end-to-end training strategy or as a post-processing approach.
Please refer to the project page and GitHub repository for more information.
## Overview
We enrich COCO Captions with textual visual context information. We use ResNet152, CLIP, and Faster R-CNN to extract object information for each image. We apply three filtering approaches to ensure the quality of the dataset: (1) a confidence threshold, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment, using semantic similarity to remove duplicated objects; and (3) a semantic relatedness score used as a soft label, to guarantee that the visual context and the caption are strongly related. In particular, we use Sentence-RoBERTa with cosine similarity to produce a soft score, and then apply a threshold to assign the final label (1 if th ≥ 0.2, 0.3, or 0.4, and 0 otherwise). Finally, to take advantage of the overlap between the caption and the visual context, and to extract global information, we use BERT followed by a shallow CNN (Kim, 2014) to estimate the visual relatedness score.
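As an illustration of the soft-label step, the following is a minimal sketch of thresholded cosine similarity using the `sentence-transformers` library; the `stsb-roberta-base` checkpoint and the `relatedness_label` helper are assumptions for illustration, not the exact code used to build the dataset.

```python
# Minimal sketch of the soft-label step (assumption: sentence-transformers
# with the "stsb-roberta-base" checkpoint; not the authors' exact pipeline).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("stsb-roberta-base")

def relatedness_label(visual_context: str, caption: str, th: float = 0.2) -> int:
    """Cosine similarity between visual context and caption, binarized at th."""
    emb = model.encode([visual_context, caption], convert_to_tensor=True)
    score = util.cos_sim(emb[0], emb[1]).item()  # soft score in [-1, 1]
    return 1 if score >= th else 0  # hard label (th in {0.2, 0.3, 0.4})

print(relatedness_label("cheeseburger", "a plate with a hamburger fries and tomatoes"))
```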
For a quick start, please have a look at this demo.
## Dataset
### Sample
| VC1          | VC2          | VC3     | Human-annotated caption                           |
|--------------|--------------|---------|---------------------------------------------------|
| cheeseburger | plate        | hotdog  | a plate with a hamburger fries and tomatoes       |
| bakery       | dining table | website | a table having tea and a cake on it               |
| gown         | groom        | apron   | its time to cut the cake at this couples wedding  |
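To load one of the released TSV files, a sketch like the following should work; the file name is a hypothetical placeholder and the column names are assumptions inferred from the sample above.

```python
# Sketch for reading a released TSV file with pandas; "visual_context.tsv"
# is a hypothetical local file name, and the column names are assumptions
# inferred from the sample table above.
import pandas as pd

df = pd.read_csv(
    "visual_context.tsv",
    sep="\t",
    names=["vc1", "vc2", "vc3", "caption"],
)
print(df.head())
```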
### Download
- Download raw data with ID and visual context -> the original dataset with the related caption IDs from train2014
- Download data with cosine score -> soft cosine labels with thresholds 0.2, 0.3, 0.4, and 0.5
- Download overlapping visual context with caption -> the overlap between the visual context and the human-annotated caption
- Download dataset (TSV file) 0.0 -> raw data with hard labels (without the cosine similarity score), using a cosine-similarity threshold (degree of relation between the visual context and the caption) of 0.2, 0.3, or 0.4
- Download Dataset GenderBias -> man/woman replaced with the person class label (see the sketch after this list)
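For reference, a minimal sketch of the man/woman-to-person substitution behind the GenderBias variant might look like the following; the regex-based `neutralize` helper is an assumption for illustration, not the authors' exact procedure.

```python
# Sketch of the GenderBias substitution: gendered person words are replaced
# with the "person" class label. The regex is an assumption for illustration.
import re

GENDER_PATTERN = re.compile(r"\b(man|woman)\b", flags=re.IGNORECASE)

def neutralize(caption: str) -> str:
    return GENDER_PATTERN.sub("person", caption)

print(neutralize("a man and a woman cutting a wedding cake"))
# -> "a person and a person cutting a wedding cake"
```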
### For unsupervised learning
- Download CC -> caption dataset from Conceptual Captions (CC), 2M (2255927 captions)
- Download CC+wiki -> CC + 1M wiki, 3M (3255928)
- Download CC+wiki+COCO -> CC + wiki + COCO-Caption, 3.5M (366984)
- Download COCO-caption+wiki -> COCO-Caption + wiki, 1.4M (1413915)
- Download COCO-caption+wiki+CC+8Mwiki -> COCO-Caption + wiki + CC + 8M wiki, 11M (11541667)