Kaspar committed on
Commit 928b3c5
1 Parent(s): b3b9085

Update README.md

Files changed (1)
  1. README.md +12 -3
README.md CHANGED
@@ -18,16 +18,25 @@ widget:
 <img src="https://upload.wikimedia.org/wikipedia/commons/5/5b/NCI_peas_in_pod.jpg" alt="erwt" width="200" >
 
 # ERWT-year
+A language model that is better at history than you...
 
-A Historical Language Model,
+...maybe.
 
 ERWT is a fine-tuned [`distilbert-base-cased`](https://huggingface.co/distilbert-base-cased) model trained on historical newspapers from the [Heritage Made Digital collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training) with temporal metadata.
 
+This model is served to you by Kaspar von Beelen and Daniel van Strien.
 
-**Warning**: This model was trained for **experimental purposes**, please use it with care.
+Improving AI, one pea at a time.
 
 
-You find more detailed information below, especially the "limitations" section (hey, seriously, read this ... very important, we don't write this just to look smart). You can also consult our working paper ["Metadata Might Make Language Models Better"](https://drive.google.com/file/d/1Xp21KENzIeEqFpKvO85FkHynC0PNwBn7/view?usp=sharing) for more background and nerdy evaluation stuff (still, work in progress, handle with care and kindness).
+## Note
+
+This model was trained for **experimental purposes**; please use it with care.
+
+
+You can find more detailed information below, especially in the "limitations" section. Seriously, read that section before using the model.
+
+If you can't get enough, you can also consult our working paper ["Metadata Might Make Language Models Better"](https://drive.google.com/file/d/1Xp21KENzIeEqFpKvO85FkHynC0PNwBn7/view?usp=sharing) for more background and nerdy evaluation stuff (still a work in progress, handle with care and kindness).
 
 ## Background
 
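The README describes ERWT as a model fine-tuned on historical newspapers "with temporal metadata", i.e. each training example carries its publication date alongside the text. As a minimal sketch of that idea, the snippet below prepends a publication year to each example; the `[DATE]` separator token and the exact prefix format are assumptions for illustration — the real preprocessing conventions are defined by the ERWT training data and working paper, not by this code.

```python
def add_temporal_metadata(text: str, year: int) -> str:
    """Prepend a publication year to a training example.

    Hypothetical sketch only: ERWT's actual preprocessing may use
    different special tokens or ordering.
    """
    return f"{year} [DATE] {text}"


# Toy examples with invented dates, illustrating the prefix format.
examples = [
    ("The Queen opened Parliament yesterday.", 1887),
    ("A new steam engine was exhibited at the fair.", 1851),
]
for text, year in examples:
    print(add_temporal_metadata(text, year))
```

With data in this shape, a masked language model can condition on the year during fine-tuning; at inference time one could mask the year position and use a fill-mask pipeline to let the model guess when a passage was written.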