stefan-it committed
Commit e9a7c89 • 1 Parent(s): ce7e2db

readme: add new DistilBERT-related sections

Files changed (1): README.md (+48, -0)
README.md CHANGED
@@ -4,3 +4,51 @@ license: mit
tags:
- "historic german"
---

# 🤗 + 📚 dbmdz DistilBERT model

In this repository, the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources a German Europeana DistilBERT model 🎉

# German Europeana DistilBERT

We use the open-source [Europeana newspapers](http://www.europeana-newspapers.eu/)
corpus that was provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.

Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).

## Results

For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).

## Usage

With Transformers >= 4.3, our German Europeana DistilBERT model can be loaded like this:

```python
from transformers import AutoModel, AutoTokenizer

# Model identifier on the Hugging Face model hub
model_name = "dbmdz/distilbert-base-german-europeana-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
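
To sanity-check the model, you can encode a sentence and inspect the hidden states it returns. The snippet below is a minimal sketch; the example sentence is illustrative only:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/distilbert-base-german-europeana-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# An example sentence in historic German spelling (illustrative only)
sentence = "Der Thurm zu Babel ward nicht vollendet."

# Tokenize and run a forward pass without tracking gradients
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# DistilBERT produces one hidden state per (sub)token:
# (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```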

# Hugging Face model hub

All other German Europeana models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).
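
If you want to discover these models programmatically, the `huggingface_hub` client library can list them. This is a small sketch assuming a recent `huggingface_hub` release, where `list_models` accepts `author` and `search` filters:

```python
from huggingface_hub import HfApi

api = HfApi()

# List all dbmdz models whose name mentions "europeana"
for model_info in api.list_models(author="dbmdz", search="europeana"):
    print(model_info.id)
```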

# Contact (Bugs, Feedback, Contribution and more)

For questions about our Europeana BERT, ELECTRA and ConvBERT models, just open a new discussion
[here](https://github.com/stefan-it/europeana-bert/discussions) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗