Commit 2f368e1 by autotrain-data-processor
1 parent: 4fb17aa

Processed data from AutoTrain data processor [2023-04-09 04:21]

README.md ADDED
@@ -0,0 +1,53 @@
+ ---
+ task_categories:
+ - image-classification
+
+ ---
+ # AutoTrain Dataset for project: ethnicity-test_v003
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project ethnicity-test_v003.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is unk.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "image": "<512x512 RGB PIL image>",
+     "target": 1
+   },
+   {
+     "image": "<512x512 RGB PIL image>",
+     "target": 3
+   }
+ ]
+ ```
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "image": "Image(decode=True, id=None)",
+   "target": "ClassLabel(names=['african', 'asian', 'caucasian', 'hispanic', 'indian'], id=None)"
+ }
+ ```
+
+ ### Dataset Splits
+
+ This dataset is split into a train and a validation split. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 4531        |
+ | valid      | 1135        |
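
The `processed/` directory added in this commit matches the layout produced by `DatasetDict.save_to_disk`, so it can presumably be reloaded with `datasets.load_from_disk`. A minimal sketch, assuming a local clone of this repository with the LFS-tracked `.arrow` shards fetched (e.g. via `git lfs pull`):

```python
from datasets import load_from_disk

# Assumes a local clone of this repository with the LFS-backed .arrow
# shards downloaded; "processed" is the directory added in this commit.
dataset = load_from_disk("processed")

print(dataset)                   # expected splits: train (4531 rows), valid (1135 rows)

sample = dataset["train"][0]
print(sample["image"].size)      # PIL image, 512x512 RGB per the card above
print(sample["target"])          # integer class index (0-4)
```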
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
processed/train/data-00000-of-00003.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf1a17e7bf3ea6c7a99ea6950167c8b70333b2ec284258cf520d07a7387b8b02
+ size 401474488
processed/train/data-00001-of-00003.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a2428a06a7b361441d64b2323e4f2b57305f3e00c9b258f0093f61c02711bf5
+ size 402248312
processed/train/data-00002-of-00003.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dba2c8c3c1d33596e4fc82f29d81fa29ff75c3ef42a0fcd56789b92c3d2269ff
+ size 403119696
processed/train/dataset_info.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "image": {
+       "_type": "Image"
+     },
+     "target": {
+       "names": [
+         "african",
+         "asian",
+         "caucasian",
+         "hispanic",
+         "indian"
+       ],
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 1206839088,
+       "num_examples": 4531,
+       "dataset_name": null
+     }
+   }
+ }
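
Because `target` is stored as a `ClassLabel` feature, the integer labels can be converted to and from the class names listed in this `dataset_info.json`. A small sketch, under the same local-clone assumption as above:

```python
from collections import Counter
from datasets import load_from_disk

train = load_from_disk("processed")["train"]
label_feature = train.features["target"]     # ClassLabel with the 5 names above

print(label_feature.names)                   # ['african', 'asian', 'caucasian', 'hispanic', 'indian']
print(label_feature.int2str(1))              # 'asian'
print(label_feature.str2int("caucasian"))    # 2

# Label distribution over the 4531 training examples
counts = Counter(train["target"])
print({label_feature.int2str(i): n for i, n in sorted(counts.items())})
```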
processed/train/state.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00003.arrow"
+     },
+     {
+       "filename": "data-00001-of-00003.arrow"
+     },
+     {
+       "filename": "data-00002-of-00003.arrow"
+     }
+   ],
+   "_fingerprint": "e3560d5cc25a911d",
+   "_format_columns": [
+     "image",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
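
`state.json` lists the Arrow shards that make up the train split; `load_from_disk` reads this file and memory-maps the shards automatically. As a rough sketch of what that amounts to (file names taken from the `_data_files` list above; this is conceptual, not the library's exact internals):

```python
import json
from datasets import Dataset, concatenate_datasets

# Read the shard list recorded for the train split.
with open("processed/train/state.json") as f:
    state = json.load(f)

# Memory-map each shard and stitch them back into one split.
shards = [
    Dataset.from_file(f"processed/train/{item['filename']}")
    for item in state["_data_files"]
]
train = concatenate_datasets(shards)
print(train.num_rows)   # expected: 4531
```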
processed/valid/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b5517a5650a736bab7bd9b4d723a76eece6a7e7330a1b813a0ab39c75e735f4
+ size 305598352
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "image": {
+       "_type": "Image"
+     },
+     "target": {
+       "names": [
+         "african",
+         "asian",
+         "caucasian",
+         "hispanic",
+         "indian"
+       ],
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 305597220,
+       "num_examples": 1135,
+       "dataset_name": null
+     }
+   }
+ }
processed/valid/state.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "8c7af104259ca79e",
+   "_format_columns": [
+     "image",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }