---
language:
- en
license:
- cc0-1.0
pretty_name: Cat and Dog
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
dataset_info:
  features:
  - name: image
    dtype: image
  - name: labels
    dtype:
      class_label:
        names:
          '0': cat
          '1': dog
  splits:
  - name: train
    num_bytes: 166451650
    num_examples: 8000
  - name: test
    num_bytes: 42101650
    num_examples: 2000
  download_size: 227859268
  dataset_size: 208553300
  size_in_bytes: 436412568
---
## Dataset Description

- **Homepage:** Cat and Dog
- **Download Size:** 217.30 MiB
- **Generated Size:** 198.89 MiB
- **Total Size:** 416.20 MiB
### Dataset Summary

A dataset from Kaggle with duplicate data removed.
### Data Fields

The data instances have the following fields:

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]` (a short example follows the label mapping below).
- `labels`: an `int` classification label.
Class Label Mappings:

```json
{
  "cat": 0,
  "dog": 1
}
```
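
A minimal sketch of the preferred access pattern, assuming the dataset id shown in the usage example below; the printed values are illustrative:

```python
from datasets import load_dataset

# Load only the train split.
dataset = load_dataset("Bingsu/Cat_and_Dog", split="train")

# Preferred: index the sample first, then the "image" column.
# Only this single image file is decoded.
sample = dataset[0]
image = sample["image"]    # PIL.Image.Image
label = sample["labels"]   # int: 0 = cat, 1 = dog
print(image.size, label)

# Avoid: indexing the column first decodes every image in the split
# before the first element can be returned.
# first_image = dataset["image"][0]
```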
### Data Splits

|               | train | test |
|---------------|-------|------|
| # of examples | 8000  | 2000 |
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/Cat_and_Dog")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['image', 'labels'],
        num_rows: 8000
    })
    test: Dataset({
        features: ['image', 'labels'],
        num_rows: 2000
    })
})
>>> dataset["train"].features
{'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=2, names=['cat', 'dog'], id=None)}
```
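
Continuing the session above, a minimal sketch of mapping the integer labels back to their class names and attaching a simple on-the-fly transform; the `to_rgb` helper is illustrative and not part of the original card:

```python
>>> labels = dataset["train"].features["labels"]
>>> labels.int2str(0), labels.int2str(1)
('cat', 'dog')

>>> def to_rgb(batch):
...     # Illustrative on-the-fly preprocessing: force 3-channel images.
...     batch["image"] = [img.convert("RGB") for img in batch["image"]]
...     return batch
...
>>> dataset["train"].set_transform(to_rgb)
>>> dataset["train"][0]["image"].mode
'RGB'
```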