autotrain-data-processor committed on 2022-12-08
Commit eac3151 (parent: 37ed8be)

Processed data from AutoTrain data processor (2022-12-08 18:39)

README.md ADDED
@@ -0,0 +1,53 @@
+ ---
+ task_categories:
+ - image-classification
+
+ ---
+ # AutoTrain Dataset for project: test_row2
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project test_row2.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is unk.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "image": "<316x316 RGB PIL image>",
+     "target": 1
+   },
+   {
+     "image": "<316x316 RGB PIL image>",
+     "target": 3
+   }
+ ]
+ ```
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "image": "Image(decode=True, id=None)",
+   "target": "ClassLabel(num_classes=5, names=['animals', 'dance', 'food', 'sport', 'tech'], id=None)"
+ }
+ ```
+
+ ### Dataset Splits
+
+ This dataset is split into train and validation splits. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 392         |
+ | valid      | 101         |
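The card above describes the layout that the processor wrote under `processed/`. As a rough illustration (not part of the commit), the splits can be reloaded with the `datasets` library once the repository has been cloned and the Git LFS objects pulled; the snippet below is a minimal sketch assuming a local checkout whose working directory contains the `processed/` folder.

```python
# Minimal sketch (not part of this commit): reload the processed splits locally.
# Assumes the repo was cloned and `git lfs pull` has materialized the .arrow files.
from datasets import load_from_disk

ds = load_from_disk("processed")        # reads processed/dataset_dict.json
print(ds)                               # DatasetDict with "train" and "valid"
print(ds["train"].num_rows, ds["valid"].num_rows)   # expected: 392 101

sample = ds["train"][0]                 # {"image": <PIL image>, "target": int}
label_name = ds["train"].features["target"].int2str(sample["target"])
print(sample["image"].size, label_name)
```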
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
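`dataset_dict.json` is the manifest that `load_from_disk` uses to discover the split subdirectories. A small, illustrative check (file name and keys taken from the commit itself, the rest assumed):

```python
# Illustrative only: confirm the manifest lists the same splits as the subfolders.
import json
import os

with open("processed/dataset_dict.json") as f:
    manifest = json.load(f)

for split in manifest["splits"]:        # ["train", "valid"]
    path = os.path.join("processed", split)
    print(split, "->", os.path.isdir(path))
```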
processed/train/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76ff87f22ac2ef61a876ec2608d4f8d0a6a68fe932c0f9b776d3c127fc672697
+ size 13076112
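The `.arrow` files are committed as Git LFS pointers; the three lines above are the pointer, not the data. After `git lfs pull`, the local file can be checked against the recorded `oid` and `size` with something like the sketch below (a hypothetical verification step, not part of the commit).

```python
# Hypothetical sketch: compare a pulled LFS file with its pointer metadata.
import hashlib
import os

path = "processed/train/dataset.arrow"
expected_oid = "76ff87f22ac2ef61a876ec2608d4f8d0a6a68fe932c0f9b776d3c127fc672697"
expected_size = 13076112

with open(path, "rb") as f:
    actual_oid = hashlib.sha256(f.read()).hexdigest()

print(actual_oid == expected_oid, os.path.getsize(path) == expected_size)
```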
processed/train/dataset_info.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoTrain generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "image": {
+       "decode": true,
+       "id": null,
+       "_type": "Image"
+     },
+     "target": {
+       "num_classes": 5,
+       "names": [
+         "animals",
+         "dance",
+         "food",
+         "sport",
+         "tech"
+       ],
+       "id": null,
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 13075255,
+       "num_examples": 392,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
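`dataset_info.json` records the feature schema: an `Image` column plus a five-way `ClassLabel`. The mapping between integer `target` values and class names can be reproduced directly from the `names` list above, as in this small sketch:

```python
# Sketch: rebuild the label mapping recorded in dataset_info.json.
from datasets import ClassLabel

target = ClassLabel(names=["animals", "dance", "food", "sport", "tech"])

print(target.num_classes)        # 5
print(target.int2str(1))         # "dance" (the first README sample has target 1)
print(target.int2str(3))         # "sport" (the second README sample has target 3)
print(target.str2int("food"))    # 2
```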
processed/train/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "a3fcc0138543ab15",
+   "_format_columns": [
+     "image",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }
processed/valid/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5e5a6951994607db563bc5f0e31c5482110ca82f3f6feef71c17f925edc3589
+ size 3317216
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoTrain generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "image": {
+       "decode": true,
+       "id": null,
+       "_type": "Image"
+     },
+     "target": {
+       "num_classes": 5,
+       "names": [
+         "animals",
+         "dance",
+         "food",
+         "sport",
+         "tech"
+       ],
+       "id": null,
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 3316362,
+       "num_examples": 101,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
processed/valid/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "fe87e6fc73cece32",
+   "_format_columns": [
+     "image",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }