---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: COYO-Labeled-300M
size_categories:
- 100M<n<1B
---

## Dataset Structure

### Data Fields

| name | type | description |
|------|------|-------------|
| imagehash | string | The [perceptual hash (pHash)](http://www.phash.org/) of the image |
| labels | sequence[integer] | Inference results of the EfficientNetV2-XL model trained on the ImageNet-21K dataset (Top 50 indices among 21,841 classes) |
| label_probs | sequence[float] | Inference results of the EfficientNetV2-XL model trained on the ImageNet-21K dataset (Top 50 probabilities, aligned with `labels`) |
| width | integer | The width of the image |
| height | integer | The height of the image |

### Data Splits

The data was not split, since evaluation is expected to be performed on more widely used downstream task(s).

## Dataset Creation

### Curation Rationale

We labeled a subset of COYO-700M with a large model (EfficientNetV2-XL) trained on ImageNet-21K. The data was sampled to a size similar to JFT-300M and filtered by a specific probability threshold on the top-1 label.

### Source Data

[COYO-700M](https://huggingface.co/datasets/kakaobrain/coyo-700m)

#### Who are the source language producers?

[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.

### Annotations

#### Annotation process

The dataset was built in a fully automated process that did not require human annotation.

#### Who are the annotators?

No human annotation was performed.

### Personal and Sensitive Information

The basic instructions, licenses, and contributors are the same as for [COYO-700M](https://huggingface.co/datasets/kakaobrain/coyo-700m).
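
### Loading Example

Below is a minimal sketch of reading the fields described above with the Hugging Face `datasets` library. The repository id `kakaobrain/coyo-labeled-300m` and the `train` split name are assumptions, not confirmed by this card; adjust them to the actual Hub path. Streaming is used so the roughly 300M records are not downloaded up front.

```python
# Minimal sketch. Assumptions: the dataset is hosted at
# "kakaobrain/coyo-labeled-300m" and exposes a "train" split.
from datasets import load_dataset

# Stream the dataset instead of materializing all records locally.
ds = load_dataset("kakaobrain/coyo-labeled-300m", split="train", streaming=True)

for sample in ds:
    # `labels` and `label_probs` are aligned: index i of each gives the i-th
    # most probable ImageNet-21K class index and its probability.
    top1_class = sample["labels"][0]
    top1_prob = sample["label_probs"][0]
    print(sample["imagehash"], sample["width"], sample["height"], top1_class, top1_prob)
    break
```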