---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': BACKGROUND_Google
          '1': Faces
          '2': Faces_easy
          '3': Leopards
          '4': Motorbikes
          '5': accordion
          '6': airplanes
          '7': anchor
          '8': ant
          '9': barrel
          '10': bass
          '11': beaver
          '12': binocular
          '13': bonsai
          '14': brain
          '15': brontosaurus
          '16': buddha
          '17': butterfly
          '18': camera
          '19': cannon
          '20': car_side
          '21': ceiling_fan
          '22': cellphone
          '23': chair
          '24': chandelier
          '25': cougar_body
          '26': cougar_face
          '27': crab
          '28': crayfish
          '29': crocodile
          '30': crocodile_head
          '31': cup
          '32': dalmatian
          '33': dollar_bill
          '34': dolphin
          '35': dragonfly
          '36': electric_guitar
          '37': elephant
          '38': emu
          '39': euphonium
          '40': ewer
          '41': ferry
          '42': flamingo
          '43': flamingo_head
          '44': garfield
          '45': gerenuk
          '46': gramophone
          '47': grand_piano
          '48': hawksbill
          '49': headphone
          '50': hedgehog
          '51': helicopter
          '52': ibis
          '53': inline_skate
          '54': joshua_tree
          '55': kangaroo
          '56': ketch
          '57': lamp
          '58': laptop
          '59': llama
          '60': lobster
          '61': lotus
          '62': mandolin
          '63': mayfly
          '64': menorah
          '65': metronome
          '66': minaret
          '67': nautilus
          '68': octopus
          '69': okapi
          '70': pagoda
          '71': panda
          '72': pigeon
          '73': pizza
          '74': platypus
          '75': pyramid
          '76': revolver
          '77': rhino
          '78': rooster
          '79': saxophone
          '80': schooner
          '81': scissors
          '82': scorpion
          '83': sea_horse
          '84': snoopy
          '85': soccer_ball
          '86': stapler
          '87': starfish
          '88': stegosaurus
          '89': stop_sign
          '90': strawberry
          '91': sunflower
          '92': tick
          '93': trilobite
          '94': umbrella
          '95': watch
          '96': water_lilly
          '97': wheelchair
          '98': wild_cat
          '99': windsor_chair
          '100': wrench
          '101': yin_yang
  splits:
  - name: test
    num_bytes: 87143920.1032546
    num_examples: 6060
  - name: train
    num_bytes: 44935085.33674541
    num_examples: 3084
  download_size: 137936915
  dataset_size: 132079005.44000001
task_categories:
- image-classification
language:
- en
pretty_name: Caltech-101
---
# Dataset Card for "Caltech-101"
This is an unofficial Caltech-101 dataset for fine-grained image classification.
Since there is no official train/test split, we arbitrarily split the data in a manner similar to TensorFlow Datasets.
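The split sizes declared in the metadata above can be sanity-checked with a few lines of plain Python (the numbers are taken directly from the `splits` section; no library is required):

```python
# Split sizes as declared in the dataset_info metadata above.
splits = {"train": 3084, "test": 6060}

total = sum(splits.values())
train_fraction = splits["train"] / total

print(total)                      # 9144 images overall
print(round(train_fraction, 2))   # 0.34 -- roughly a one-third / two-thirds split
```

Note that, unlike most benchmarks, the test split here is about twice the size of the train split.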
If you want to download the official dataset, please refer to the official Caltech-101 website.
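As a minimal sketch of how the integer `label` values correspond to the class names listed in the metadata above (plain Python, first five names only for brevity; the full list has 102 entries):

```python
# First five entries of the 102-name class list from the metadata above.
names = ["BACKGROUND_Google", "Faces", "Faces_easy", "Leopards", "Motorbikes"]

# Forward and reverse lookups, mirroring what the class_label feature provides.
id2name = dict(enumerate(names))
name2id = {n: i for i, n in id2name.items()}

print(id2name[1])             # Faces
print(name2id["Motorbikes"])  # 4
```

When loading the dataset with the Hugging Face `datasets` library, the same mapping is available on the `label` feature (a `ClassLabel`), so you do not need to build it by hand.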