xdai committed on
Commit
67c8f34
1 Parent(s): be3f3a4

Update README.md

Files changed (1)
  1. README.md +15 -1
README.md CHANGED
@@ -1 +1,15 @@
- cp /home/gdpr/Desktop/Experiments/2021-06/19-01-roberta-lm-mimic/mimic128-4gpu/checkpoint-36000/* ./
+ * Continue pre-training RoBERTa-base using discharge summaries from the MIMIC-III dataset.
+
+ * Details can be found in the following paper:
+
+ > Xiang Dai, Ilias Chalkidis, Sune Darkner, and Desmond Elliott. 2022. Revisiting Transformer-based Models for Long Document Classification. (https://arxiv.org/abs/2204.06683)
+
+ * Important hyper-parameters
+
+ | Hyper-parameter | Value |
+ |---|---|
+ | Max sequence length | 128 |
+ | Batch size | 128 |
+ | Learning rate | 5e-5 |
+ | Training epochs | 15 |
+ | Training time | 40 GPU-hours |