Update files from the datasets library (from 1.3.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0
README.md CHANGED
@@ -42,6 +42,7 @@ task_ids:
 - [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
 - [Citation Information](#citation-information)
+- [Contributions](#contributions)
 
 ## Dataset Description
 
@@ -159,3 +160,7 @@ English (en)
     abstract = "One of the biggest challenges that prohibit the use of many current NLP methods in clinical settings is the availability of public datasets. In this work, we present MeDAL, a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. We pre-trained several models of common architectures on this dataset and empirically showed that such pre-training leads to improved performance and convergence speed when fine-tuning on downstream medical tasks.",
 }
 ```
+
+### Contributions
+
+Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
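
For reference, a README updated by this sync can be exercised directly with the `datasets` library the commit comes from. Below is a minimal sketch, assuming the dataset is published on the Hub under the id `medal` (an assumption based on the MeDAL paper cited in the README; substitute the actual id if it differs):

```python
# Minimal sketch: loading this dataset with the datasets library (>= 1.3.0).
# The Hub id "medal" is an assumption, not confirmed by this commit.
from datasets import load_dataset

dataset = load_dataset("medal")
print(dataset)  # prints the available splits and their features
```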