# Neural Language Models for Nineteenth-Century English: bert_1760_1900
## Introduction
A BERT model trained on a large historical dataset of books in English, published between 1760 and 1900 and comprising ~5.1 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- GitHub repository: https://github.com/Living-with-machines/histLM
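
The model can be tried out with the Hugging Face `transformers` library. The sketch below is a minimal usage example, not taken from the repository; the Hub identifier `Livingwithmachines/bert_1760_1900` is an assumption based on this model's name, so adjust it if the actual id differs.

```python
# Minimal fill-mask usage sketch with Hugging Face transformers.
from transformers import pipeline

# Assumed Hub identifier; replace with the actual model id if it differs.
MODEL_ID = "Livingwithmachines/bert_1760_1900"

def top_predictions(masked_sentence: str, k: int = 5) -> list[str]:
    """Return the top-k predicted tokens for a sentence containing [MASK]."""
    fill_mask = pipeline("fill-mask", model=MODEL_ID)
    return [pred["token_str"] for pred in fill_mask(masked_sentence, top_k=k)]

if __name__ == "__main__":
    # Example sentence in a nineteenth-century register.
    print(top_predictions("The [MASK] travelled along the new railway line."))
```

Because the model is a masked language model, the fill-mask task is the natural way to probe it; for downstream classification it would instead be fine-tuned with a task head.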
## License
The models are released under the open CC BY 4.0 license, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, and trained and fine-tuned the language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.