autotrain-data-processor committed
Commit 6a16b48
1 Parent(s): f61848a

Processed data from AutoTrain data processor (2023-08-28 08:01)
README.md ADDED
@@ -0,0 +1,59 @@
+ ---
+ task_categories:
+ - text-classification
+
+ ---
+ # AutoTrain Dataset for project: bert-base-uncased
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project bert-base-uncased.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is unk.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "feat_Unnamed: 0": 922,
+     "feat_idx": 922,
+     "text": "is it hokey ",
+     "target": 0
+   },
+   {
+     "feat_Unnamed: 0": 912,
+     "feat_idx": 912,
+     "text": "'n safe as to often play like a milquetoast movie of the week blown up for the big screen ",
+     "target": 0
+   }
+ ]
+ ```
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "feat_Unnamed: 0": "Value(dtype='int64', id=None)",
+   "feat_idx": "Value(dtype='int64', id=None)",
+   "text": "Value(dtype='string', id=None)",
+   "target": "ClassLabel(names=['0', '1'], id=None)"
+ }
+ ```
+
+ ### Dataset Splits
+
+ This dataset is split into train and validation splits. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 799         |
+ | valid      | 201         |
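As a quick sanity check, the sample records above can be inspected with plain Python. This is only a sketch: the record values are copied from the "Data Instances" section, and the label mapping is assumed from the `ClassLabel(names=['0', '1'])` declaration.

```python
# Sample records copied from the "Data Instances" section of the README.
samples = [
    {"feat_Unnamed: 0": 922, "feat_idx": 922, "text": "is it hokey ", "target": 0},
    {
        "feat_Unnamed: 0": 912,
        "feat_idx": 912,
        "text": "'n safe as to often play like a milquetoast movie of the week "
                "blown up for the big screen ",
        "target": 0,
    },
]

# A ClassLabel stores names in a fixed order; the integer target indexes into it.
class_names = ["0", "1"]

def int2str(target: int) -> str:
    """Map an integer label id to its ClassLabel name."""
    return class_names[target]

labels = [int2str(s["target"]) for s in samples]
print(labels)  # → ['0', '0']
```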
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
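`dataset_dict.json` only records which split subdirectories exist; a loader for this layout resolves each split name to its own folder. A minimal sketch, with the directory layout assumed from the file paths in this commit:

```python
import json

# Content of processed/dataset_dict.json, as added in this commit.
dataset_dict = json.loads('{"splits": ["train", "valid"]}')

# Each split lives in its own subdirectory under processed/.
split_dirs = {name: f"processed/{name}" for name in dataset_dict["splits"]}
print(split_dirs)  # → {'train': 'processed/train', 'valid': 'processed/valid'}
```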
processed/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7caae67a54a7c9c04f92170f3e771e56107e82af0dfa1b6224cb5bad77109790
+ size 66200
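The `.arrow` files themselves are stored as Git LFS pointers in the three-line `key value` format shown above. A small parsing sketch, with the pointer text copied verbatim from this file:

```python
# Git LFS pointer files are plain text: one "key value" pair per line.
pointer_text = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:7caae67a54a7c9c04f92170f3e771e56107e82af0dfa1b6224cb5bad77109790\n"
    "size 66200\n"
)

def parse_lfs_pointer(text: str) -> dict:
    """Split each line on the first space, then split oid into algo:digest."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

pointer = parse_lfs_pointer(pointer_text)
print(pointer["algo"], pointer["size"])  # → sha256 66200
```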
processed/train/dataset_info.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "feat_Unnamed: 0": {
+       "dtype": "int64",
+       "_type": "Value"
+     },
+     "feat_idx": {
+       "dtype": "int64",
+       "_type": "Value"
+     },
+     "text": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "target": {
+       "names": [
+         "0",
+         "1"
+       ],
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 65289,
+       "num_examples": 799,
+       "dataset_name": null
+     }
+   }
+ }
processed/train/state.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "6a8b50b91048dd98",
+   "_format_columns": [
+     "feat_Unnamed: 0",
+     "feat_idx",
+     "target",
+     "text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
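A plausible consistency check between the two train metadata files: every column listed in `state.json`'s `_format_columns` should be declared as a feature in `dataset_info.json`. The JSON fragments below are trimmed copies of the files in this commit.

```python
import json

# Trimmed from processed/train/dataset_info.json: feature names only.
feature_names = set(json.loads(
    '{"feat_Unnamed: 0": {}, "feat_idx": {}, "text": {}, "target": {}}'
))

# Copied from processed/train/state.json.
format_columns = ["feat_Unnamed: 0", "feat_idx", "target", "text"]

missing = [c for c in format_columns if c not in feature_names]
print(missing)  # → []
```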
processed/valid/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1166e22614d4338cb32be7b01b94535b64c4a522796ff764f7f8d3ff502cda21
+ size 17064
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "feat_Unnamed: 0": {
+       "dtype": "int64",
+       "_type": "Value"
+     },
+     "feat_idx": {
+       "dtype": "int64",
+       "_type": "Value"
+     },
+     "text": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "target": {
+       "names": [
+         "0",
+         "1"
+       ],
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 16154,
+       "num_examples": 201,
+       "dataset_name": null
+     }
+   }
+ }
processed/valid/state.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "d4c68c9a9a003730",
+   "_format_columns": [
+     "feat_Unnamed: 0",
+     "feat_idx",
+     "target",
+     "text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }