---
configs:
- config_name: commons_images
  data_files:
  - split: train
    path: commons_images/train/*.tar
  - split: validation
    path: commons_images/validation/*.tar
  - split: test
    path: commons_images/test/*.tar
- config_name: all_wikidata_items
  data_files: all_wikidata_items/*.tar
- config_name: frequent_wikidata_items
  data_files: frequent_wikidata_items/*.tar
language:
- en
pretty_name: 'Visual Entity Linking: Wikimedia Commons & Wikidata'
size_categories:
- 1M<n<10M
---

|  | *f=0* (*all_wikidata_items*) | *f=10* (*frequent_wikidata_items*) |
|---|---|---|
| #images **train**<br>(#rows)<br>(#gt_items) | 800,000<br>(1,377,684)<br>(490,876) | 800,000<br>(1,498,026)<br>(17,287) |
| #images **validation**<br>(#rows)<br>(#gt_items) | 100,000<br>(195,535)<br>(72,055) | 100,000<br>(212,885)<br>(14,253) |
| #images **test**<br>(#rows)<br>(#gt_items) | 100,000<br>(100,000)<br>(72,271) | 100,000<br>(100,000)<br>(14,351) |
| #items | 2,305,611 | 18,522 |

Note that the number of rows (or examples) for the train and validation splits is higher than their respective number of images, because many images have more than one ground-truth label, and we want to make use of **each** of them in training and validation mini-batches. So, while the Commons images themselves were randomly shuffled beforehand, users have to ensure this also holds true on the level of individual rows if they do *not* want all labels of an image to end up in the same mini-batch.

*#gt_items* indicates the number of unique Wikidata items present as ground-truth labels in the respective split (and threshold).

In the following, the detailed structure and content of every configuration (and split) is described, listing the column names and, where applicable, their subfields:

#### Commons Images Config

The structure of the train, validation and test splits of *commons_images* is identical.

* "\_\_key\_\_": The image's unique Commons page ID. The corresponding Commons media page URL is constructed by appending this ID to `https://commons.wikimedia.org/?curid=`.
* "jpg" and "png": The Commons image itself as a `PIL.Image`. Since we collect both jpg/jpeg and png images from Commons, but HF datasets are required to have the same set of columns per row (unless `Features` are explicitly stated on dataset loading), we keep both a "jpg" and a "png" column for every row. On the other hand, the `WebDataset` library needs column content that is valid for the corresponding column name in order to decode it automatically. So, we use the [**minimal** valid jpg or png image](https://github.com/mathiasbynens/small) for the image type not actually given, which limits the required space overhead (negligible in relation to the remaining dataset size).
* "json": All of the image's metadata:
  * img_id: int - the image's Commons page ID (same as *\_\_key\_\_*),
  * categories: List[string] - the Commons categories associated with the image,
  * description: string - the English image description (empty string if not available),
  * f0_labels: List[int] - the ground-truth item labels (QIDs) for *f=0* (i.e. no threshold),
  * f0_label_indices: List[int] - global indices of the *f=0* item labels (in the unshuffled *all_wikidata_items* subset) for easy access,
  * f10_labels: List[int] - the ground-truth item labels (QIDs) for *f=10*,
  * f10_label_indices: List[int] - global indices of the *f=10* item labels (in the unshuffled *frequent_wikidata_items* subset) for easy access,
  * img_extension: string - the image type of the actual image (as opposed to the minimal image),
  * img_author: string - the inferred image author or uploader (empty string if not available),
  * img_license: string - the inferred image license stated on Commons (empty string if not available).

#### Wikidata Items Config

The structure of *all_wikidata_items* and *frequent_wikidata_items* is identical.

* "\_\_key\_\_": The item's unique Wikidata QID. The corresponding Wikidata item page URL is constructed by appending the QID to `https://www.wikidata.org/wiki/Q`.
* "jpg" and "png": The item's *first* linked image from the `image` statement, if any; otherwise *both* "jpg" and "png" contain their respective default files, as explained above.
* "json": All of the item's data and image metadata:
  * qid: int - the item's Wikidata QID (same as *\_\_key\_\_*),
  * name: string - the English short name of the item (in rare cases empty),
  * description: string - the English item description (in rare cases empty),
  * img_extension: string|null - the image type of the actual image (as opposed to the minimal image); if null, no actual image is available,
  * img_author: string - the inferred image author or uploader (empty string if not available),
  * img_license: string - the inferred image license stated on Commons (empty string if not available),
  * superclasses: List[List[int]] - superclasses of the item across *all* candidate items, grouped by the number of hops in the KG item hierarchy.
* "npy": The pre-trained Wikidata KG embedding of this item, represented as a 200-dimensional float `numpy` array. If no pre-trained embedding is available, the array is filled with zeros.

## Bias, Risks and Limitations

*None* of the Commons images used in this dataset were filtered by their depicted content, meaning that they might contain violent, explicit or otherwise sensitive content. Accordingly, personal or private data (assumed to be compatible with the policies of the Wikimedia community) might also be present in the dataset.

The ground-truth quality of the dataset might suffer from the fact that the item annotation itself is not unambiguous, and that partly contradicting community guidelines exist on which items to add to the *depicts* statement. We did not refine the ground-truth labels in any way, which is why on rare occasions a label might be unreasonable or even plain wrong.

Since we directly rely on the Wikimedia community to upload images and annotate depicted Wikidata items, biases present in these upload and annotation behaviors are likely reflected in our dataset, too.
This concerns both which images get uploaded and annotated at all (and can therefore be part of this dataset), and which items are chosen to be included in the *depicts* statements - and which are not (especially since in most cases plenty of different items would be plausible to select). No explicit steps were taken to assess or reduce these biases; we rely instead on the size and diversity of the Wikimedia community itself.

## Citation

**BibTeX:** TBA

## Dataset & Dataset Card Creators

This dataset was created as part of a university project at the HPI AI & Intelligent Systems chair, under the supervision of [Lucie-Aimée Kaffee](https://huggingface.co/frimelle), Russa Biswas, and Gerard de Melo. Its creators can be contacted at the following e-mail addresses:

* philipp.bielefeld@student.hpi.uni-potsdam.de
* jasmin.geppert@student.hpi.uni-potsdam.de
* necdet.guven@student.hpi.uni-potsdam.de
* melnatreeva.john@student.hpi.uni-potsdam.de
* adrian.ziupka@student.hpi.uni-potsdam.de