autotrain-data-processor committed on 2022-10-04 13:13
Commit 9e8bc5b
1 Parent(s): e1bc723

Processed data from AutoTrain data processor (2022-10-04 13:13)

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ language:
+ - en
+
+ ---
+ # AutoTrain Dataset for project: person-name-validity1
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project person-name-validity1.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is en.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "tokens": [
+       "divided"
+     ],
+     "tags": [
+       0
+     ]
+   },
+   {
+     "tokens": [
+       "nusrat"
+     ],
+     "tags": [
+       1
+     ]
+   }
+ ]
+ ```
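Each record pairs a `tokens` list with a parallel `tags` list of the same length. As a minimal illustration (not part of the committed README), assuming the train split has already been loaded into a `datasets.Dataset` called `train_ds`:

```python
# Walk the first two records and pair each token with its integer tag.
for example in train_ds.select(range(2)):
    for token, tag in zip(example["tokens"], example["tags"]):
        print(token, tag)
```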
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
+   "tags": "Sequence(feature=ClassLabel(num_classes=2, names=['0', '2'], id=None), length=-1, id=None)"
+ }
+ ```
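Because `tags` is a sequence of `ClassLabel` values, the integer ids can be mapped back to the declared class names. A minimal sketch (illustrative only, not part of the commit), again assuming the split is loaded as `train_ds`:

```python
# The inner ClassLabel feature holds the id-to-name mapping.
tag_feature = train_ds.features["tags"].feature
print(tag_feature.names)       # ['0', '2'] as declared above
print(tag_feature.int2str(1))  # '2'
```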
+
+ ### Dataset Splits
+
+ This dataset is split into train and validation splits. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 2499        |
+ | valid      | 499         |
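To work with these splits, the dataset can be pulled from the Hub with the `datasets` library. A minimal sketch, not part of this commit; the repository id below is a placeholder because the actual Hub id is not stated here:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the real one for this project.
ds = load_dataset("<namespace>/autotrain-data-person-name-validity1")
print(ds["train"].num_rows, ds["valid"].num_rows)  # expected: 2499 499
```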
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
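This file tells the `datasets` library which split directories live under `processed/`. A minimal sketch of reading the processed data locally, assuming the repository has been cloned so that the `processed/` directory exists on disk:

```python
from datasets import load_from_disk

# load_from_disk reads dataset_dict.json to discover the "train" and "valid" splits.
dd = load_from_disk("processed")
print(dd["train"].num_rows, dd["valid"].num_rows)
```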
processed/train/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d110acd22c4b423e565cd06285e8825c77d6ef2bff8660df8f7d1b192f097c5
+ size 66096
processed/train/dataset_info.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoTrain generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "tokens": {
+       "feature": {
+         "dtype": "string",
+         "id": null,
+         "_type": "Value"
+       },
+       "length": -1,
+       "id": null,
+       "_type": "Sequence"
+     },
+     "tags": {
+       "feature": {
+         "num_classes": 2,
+         "names": [
+           "0",
+           "2"
+         ],
+         "id": null,
+         "_type": "ClassLabel"
+       },
+       "length": -1,
+       "id": null,
+       "_type": "Sequence"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 64490,
+       "num_examples": 2499,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
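The `features` block above describes the schema that `datasets` reconstructs when loading this split. As an illustration, the equivalent in-code declaration would look roughly like this (a sketch, not code shipped in this commit):

```python
from datasets import ClassLabel, Features, Sequence, Value

# Mirrors the "features" entry of dataset_info.json.
features = Features({
    "tokens": Sequence(Value("string")),
    "tags": Sequence(ClassLabel(names=["0", "2"])),
})
```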
processed/train/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "9dd46654ea595f1e",
+   "_format_columns": [
+     "tags",
+     "tokens"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }
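The `_format_columns` and `_format_type` entries record the output format that was set on the Arrow table. A rough sketch of a call that would leave this state behind (illustrative; the exact call AutoTrain made is not shown in this commit), assuming `dd` from the earlier `load_from_disk` sketch:

```python
# Restrict returned columns to "tags" and "tokens"; _format_type stays None.
dd["train"].set_format(columns=["tags", "tokens"])
```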
processed/valid/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d6d6e767188abe0ca6aded27e8c04ae4735c4040bbe73b41dc815eeeb396476
+ size 13840
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoTrain generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "tokens": {
+       "feature": {
+         "dtype": "string",
+         "id": null,
+         "_type": "Value"
+       },
+       "length": -1,
+       "id": null,
+       "_type": "Sequence"
+     },
+     "tags": {
+       "feature": {
+         "num_classes": 2,
+         "names": [
+           "0",
+           "2"
+         ],
+         "id": null,
+         "_type": "ClassLabel"
+       },
+       "length": -1,
+       "id": null,
+       "_type": "Sequence"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 12899,
+       "num_examples": 499,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
processed/valid/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "d1dbf251133112c2",
+   "_format_columns": [
+     "tags",
+     "tokens"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }