autonlp-data-processor committed on
Commit
a213464
1 Parent(s): 67ecbc9

Processed data from autonlp data processor [2021-10-22 09:35]

README.md ADDED
@@ -0,0 +1,63 @@
+ ---
+ languages:
+ - de
+ task_categories:
+ - text-scoring
+
+ ---
+ # AutoNLP Dataset for project: Doctor_DE
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoNLP for project Doctor_DE.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is de.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "text": "Ich bin nun seit ca 12 Jahren Patientin in dieser Praxis und kann einige der Kommentare hier ehrlich gesagt \u00fcberhaupt nicht nachvollziehen.<br />\nFr. Dr. Gr\u00f6ber Pohl ist in meinen Augen eine unglaublich nette und kompetente \u00c4rztin. Ich kenne in meinem Familien- und Bekanntenkreis viele die bei ihr in Behandlung sind, und alle sind sehr zufrieden!<br />\nSie nimmt sich immer viel Zeit und auch in meiner Schwangerschaft habe ich mich bei ihr immer gut versorgt gef\u00fchlt, und musste daf\u00fcr kein einziges Mal in die Tasche greifen!<br />\nDas einzig negative ist die lange Wartezeit in der Praxis. Daf\u00fcr nimmt sie sich aber auch Zeit und arbeitet nicht wie andere \u00c4rzte wie am Flie\u00dfband.<br />\nIch kann sie nur weiter empfehlen!",
+     "target": 1.0
+   },
+   {
+     "text": "Ich hatte nie den Eindruck \"Der N\u00e4chste bitte\" Er hatte sofort meine Beschwerden erkannt und Abhilfe geschafft.",
+     "target": 1.0
+   }
+ ]
+ ```
+
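+ The `processed/` directory added in this commit follows the on-disk layout written by `datasets.DatasetDict.save_to_disk` (a `dataset_dict.json` plus one Arrow file, `dataset_info.json`, and `state.json` per split), so records like the one above can be inspected locally. A minimal sketch, assuming the repository has been cloned with Git LFS so that the `.arrow` files are fully materialized:
+
+ ```python
+ from datasets import load_from_disk
+
+ # Load the DatasetDict saved under "processed/"; its splits are the
+ # "train" and "valid" entries listed in processed/dataset_dict.json.
+ dataset = load_from_disk("processed")
+
+ print(dataset)              # split names, column names, and row counts
+ print(dataset["train"][0])  # first record as a plain Python dict
+ ```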
+ ### Data Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "text": "Value(dtype='string', id=None)",
+   "target": "Value(dtype='float32', id=None)"
+ }
+ ```
+
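+ In Python, this schema corresponds to a `datasets.Features` definition along the lines of the following sketch (the names and dtypes are taken from the field listing above):
+
+ ```python
+ from datasets import Features, Value
+
+ # Feature schema matching the fields above: a free-text review and a float score.
+ features = Features(
+     {
+         "text": Value("string"),
+         "target": Value("float32"),
+     }
+ )
+ ```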
+ ### Data Splits
+
+ This dataset is split into a train and a validation split. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 280191      |
+ | valid      | 70050       |
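+ The same counts can be read back from the saved splits; a short sketch, assuming the same local clone as in the loading example above:
+
+ ```python
+ from datasets import load_from_disk
+
+ dataset = load_from_disk("processed")
+
+ # Expected per the table above: train = 280191, valid = 70050.
+ for split_name, split in dataset.items():
+     print(split_name, split.num_rows)
+ ```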
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
processed/train/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:114371c5c694816bac7ce943d537052c97cf373e106af76bcc02723dcbfe3b94
+ size 794383736
processed/train/dataset_info.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoNLP generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "text": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "target": {
+       "dtype": "float32",
+       "id": null,
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 793929768,
+       "num_examples": 280191,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
processed/train/indices.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93c536dca0f95b1d10d67a14d6723212a1c37eb6a18c377f906e03c647db1ca2
+ size 2282272
processed/train/state.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "ea581a64150375c4",
+   "_format_columns": [
+     "splitting_bin",
+     "target",
+     "text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_indices_data_files": [
+     {
+       "filename": "indices.arrow"
+     }
+   ],
+   "_output_all_columns": false,
+   "_split": null
+ }
processed/valid/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:114371c5c694816bac7ce943d537052c97cf373e106af76bcc02723dcbfe3b94
+ size 794383736
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoNLP generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "text": {
+       "dtype": "string",
+       "id": null,
+       "_type": "Value"
+     },
+     "target": {
+       "dtype": "float32",
+       "id": null,
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 793929768,
+       "num_examples": 70050,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
processed/valid/indices.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb97cc3776ff57855a956c9c34041ab8f9d7df3f38e08e15f395a62656f82d81
+ size 570904
processed/valid/state.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "a49870249d6c167d",
+   "_format_columns": [
+     "splitting_bin",
+     "target",
+     "text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_indices_data_files": [
+     {
+       "filename": "indices.arrow"
+     }
+   ],
+   "_output_all_columns": false,
+   "_split": null
+ }