---
language: tr
license: mit
---

# 🤗 + 📚 dbmdz Turkish ConvBERT model

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ConvBERT model for Turkish 🎉

# 🇹🇷 ConvBERTurk

ConvBERTurk is a community-driven cased ConvBERT model for Turkish.

In addition to the BERT- and ELECTRA-based models, we also trained a ConvBERT model. The ConvBERT architecture is presented
in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.

We follow a different training procedure: instead of using a two-phase approach that pre-trains the model for 90% of the
steps with a sequence length of 128 and for the remaining 10% with a sequence length of 512, we pre-train the model with
a sequence length of 512 for 1M steps on a v3-32 TPU.
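
For readability, the key pre-training settings stated above can be summarized in a small sketch. Only the sequence
length, the number of steps and the TPU topology come from this description; all other hyperparameters of the actual
run are left unspecified:

```python
# Hedged sketch of the pre-training setup described above. Only the values
# stated in this README are real; everything else about the run (batch size,
# learning rate, optimizer, ...) is intentionally not filled in here.
pretraining_setup = {
    "max_seq_length": 512,     # single phase, no 128-length warm-up
    "train_steps": 1_000_000,  # 1M steps
    "tpu_topology": "v3-32",
}
```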

## Stats

The current version of the model is trained on a filtered and sentence-segmented
version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).

The final training corpus has a size of 35GB and 4,404,976,662 tokens.

Thanks to Google's TensorFlow Research Cloud (TFRC) we were able to train a cased model
on a TPU v3-32!

## Usage

With Transformers >= 4.3 our cased ConvBERT model can be loaded as follows:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/convbert-base-turkish-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
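
As a quick sanity check, the loaded tokenizer and model can then be used for feature extraction. This is a minimal
sketch; the Turkish example sentence is purely illustrative:

```python
import torch

# Tokenize an example Turkish sentence ("Hello world!") - any text works here
inputs = tokenizer("Merhaba dünya!", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One hidden-state vector per input token (hidden size 768 for this base model)
print(outputs.last_hidden_state.shape)
```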

## Results

For results on PoS tagging, NER and Question Answering downstream tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our DBMDZ BERT models in general, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us with
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us with the Turkish NER dataset for evaluation.

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗