qanastek committed on
Commit
379cc1b
1 Parent(s): fe715d6

Update README.md

Files changed (1): README.md (+4 -0)
README.md CHANGED
@@ -19,6 +19,10 @@ widget:
  <img src="https://github.com/qanastek/DrBERT/blob/main/assets/logo.png?raw=true" alt="drawing" width="250"/>
  </p>
 
+ - Corpora: [bigbio/cas](https://huggingface.co/datasets/bigbio/cas)
+ - Embeddings & Sequence Labelling: [DrBERT-7GB](https://arxiv.org/abs/2304.00958)
+ - Number of Epochs: 200
+
  # DrBERT: A Robust Pre-trained Model in French for Biomedical and Clinical domains
 
  In recent years, pre-trained language models (PLMs) have achieved the best performance on a wide range of natural language processing (NLP) tasks. While the first models were trained on general-domain data, specialized ones have emerged to treat specific domains more effectively.
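The diff above adds a pointer to the DrBERT-7GB checkpoint for embeddings and sequence labelling. A minimal sketch of extracting sentence embeddings with it, assuming the checkpoint is published on the Hugging Face Hub under the ID `Dr-BERT/DrBERT-7GB` (that ID is an assumption, not stated in this diff):

```python
# Hypothetical usage sketch for DrBERT embeddings.
# MODEL_ID is an assumption inferred from the "DrBERT-7GB" mention in the diff.
MODEL_ID = "Dr-BERT/DrBERT-7GB"

def embed(texts):
    """Return mean-pooled DrBERT embeddings for a list of French sentences."""
    # Imported lazily so the sketch can be read without transformers installed;
    # calling embed() requires `pip install torch transformers` and network
    # access to download the checkpoint.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    # Mean-pool token vectors, masking out padding positions.
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

if __name__ == "__main__":
    vecs = embed(["Le patient présente une toux persistante."])
    print(vecs.shape)
```

Mean pooling is one common choice for turning token-level outputs into a sentence vector; the `[CLS]` vector (`out.last_hidden_state[:, 0]`) is an alternative.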