youngking0727 committed
Commit: 3e11e9b
Parent: 1db5200

Update README.md

Files changed (1):
  README.md (+3, -1)
README.md CHANGED
@@ -24,7 +24,9 @@ The model was trained with the following hyperparameters:
 * Cutoff length: 2048
 * Learning rate: 2e-5
 
-Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
+Overview: The model was finetuned on over 26 billion tokens highly pertinent to the field of biomedicine.
+The fine-tuning data are extracted from 5.5 million biomedical papers in the S2ORC corpus, using PubMed Central (PMC) ID and PubMed ID as selection criteria.
+
 
 ### Model Developers
 PharMolix
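
The added README lines describe how the fine-tuning corpus was selected: papers from S2ORC are kept when they carry a PubMed Central (PMC) ID or a PubMed ID. The commit does not include the extraction script, so the following is only a minimal sketch of that kind of filter, assuming S2ORC metadata is available as a JSON Lines file whose records expose `pmc_id` and `pubmed_id` fields; the field names and file paths here are assumptions, not taken from the commit.

```python
import json


def is_biomedical(record: dict) -> bool:
    """Keep a paper if it has a PubMed Central ID or a PubMed ID.

    The field names `pmc_id` and `pubmed_id` are assumed; verify them
    against the actual S2ORC metadata schema before use.
    """
    return bool(record.get("pmc_id")) or bool(record.get("pubmed_id"))


def filter_s2orc(metadata_path: str, output_path: str) -> int:
    """Stream a JSON Lines metadata file and write matching records."""
    kept = 0
    with open(metadata_path, encoding="utf-8") as src, \
         open(output_path, "w", encoding="utf-8") as dst:
        for line in src:
            record = json.loads(line)
            if is_biomedical(record):
                dst.write(line)
                kept += 1
    return kept


if __name__ == "__main__":
    # Hypothetical input/output paths for illustration only.
    n = filter_s2orc("s2orc_metadata.jsonl", "biomedical_subset.jsonl")
    print(f"kept {n} records")
```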