mdroth committed on
Commit fbc15db
1 Parent(s): 53a1907

Update README.md

Files changed (1):
  1. README.md +22 -24
README.md CHANGED
@@ -1,25 +1,23 @@
- ---
- dataset_info:
-   features:
-   - name: abstract_id
-     dtype: int64
-   - name: line_id
-     dtype: string
-   - name: abstract_text
-     dtype: string
-   - name: line_number
-     dtype: int64
-   - name: total_lines
-     dtype: int64
-   - name: target
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 458910072
-     num_examples: 2211861
-   download_size: 210733145
-   dataset_size: 458910072
- ---
- # Dataset Card for "PubMed-200k-RTC"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ # PubMed-200k-RTC
+ You can use [these datasets](https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/tree/main/data) for whatever you want (note the [Apache 2.0 license](https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/blob/main/data/Apache_2.0), though), but their primary purpose is to serve as a drop-in replacement for the sub-datasets of [The Pile](https://pile.eleuther.ai/) used in [section 5 of the Hugging Face course](https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#what-is-the-pile).
+
+ ## Data sources
+ - PubMed-200k-RTC:<br>https://www.kaggle.com/datasets/matthewjansen/pubmed-200k-rtc/download?datasetVersionNumber=5
+ - LegalText-classification:<br>https://www.kaggle.com/datasets/shivamb/legal-citation-text-classification/download?datasetVersionNumber=1
+
+ These are Kaggle datasets, so you need to be logged in to a [Kaggle account](https://www.kaggle.com/account/login?phase=startSignInTab&returnUrl=%2F) to download them.
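+
+ If you'd rather script the download, here is a minimal sketch using the official `kaggle` Python package (not part of this repo; it assumes `pip install kaggle` and an API token in `~/.kaggle/kaggle.json`):
+
+ ```python
+ # Sketch: fetch the original Kaggle datasets programmatically.
+ from kaggle.api.kaggle_api_extended import KaggleApi
+
+ api = KaggleApi()
+ api.authenticate()  # reads the API token from ~/.kaggle/kaggle.json
+
+ # Dataset slugs taken from the Kaggle URLs above
+ api.dataset_download_files("matthewjansen/pubmed-200k-rtc", path="data", unzip=True)
+ api.dataset_download_files("shivamb/legal-citation-text-classification", path="data", unzip=True)
+ ```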
+
+ ## Usage
+ To load a dataset from this repo, run
+
+ ```python
+ import zstandard  # fails fast if missing; `datasets` needs it to read .zst files
+ from datasets import load_dataset
+
+ dataset = load_dataset("json", data_files=url, split="train")
+ ```
+
+ where `url` is one of the following download links:
+ - `LegalText-classification_train.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/LegalText-classification_train.jsonl.zst
+ - `LegalText-classification_train_min.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/LegalText-classification_train_min.jsonl.zst
+ - `PubMed-200k-RTC_train.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/PubMed-200k-RTC_train.jsonl.zst
+ - `PubMed-200k-RTC_train_min.jsonl.zst`:<br>https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/PubMed-200k-RTC_train_min.jsonl.zst
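+
+ For example, a minimal sketch that loads the full PubMed training split via the third link above (any of the four works the same way; the field names are the ones listed in the previous dataset card):
+
+ ```python
+ import zstandard  # fails fast if missing; `datasets` needs it to read .zst files
+ from datasets import load_dataset
+
+ url = "https://huggingface.co/datasets/mdroth/PubMed-200k-RTC/resolve/main/data/PubMed-200k-RTC_train.jsonl.zst"
+ pubmed = load_dataset("json", data_files=url, split="train")
+
+ # Fields per the previous dataset card: abstract_id, line_id, abstract_text,
+ # line_number, total_lines, target
+ print(pubmed[0])
+ ```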