---
language: en
tags:
  - newspapers
  - library
  - historic
  - glam
license: mit
metrics:
  - f1
widget:
  - text: 1810 [DATE] [MASK] Majesty.
  - text: 1850 [DATE] [MASK] Majesty.
---

# ERWT-year

A distilbert-base-cased model fine-tuned on historical newspapers from the Heritage Made Digital collection, combined with temporal metadata.

Warning: This model was trained for experimental purposes; please use it with care.

You can find more detailed information below and in our working paper "Metadata Might Make Language Models Better".

## Background and Data

ERWT was created using a MetaData Masking Approach (or MDMA 💊), in which we train a Masked Language Model simultaneously on text and metadata. Our intuition was that incorporating information that is not explicitly present in the text—such as the time of publication or the political leaning of the author—may make language models "better" in the sense of being more sensitive to historical and political aspects of language use.

To create this ERWT model, we fine-tuned distilbert-base-cased on a random subsample of the Heritage Made Digital newspapers comprising about half a billion words. We slightly adapted the training routine by adding the year of publication and a special [DATE] token in front of each text segment (i.e. a chunk of one hundred tokens).

For example, we would format a snippet of text taken from the Londonderry Sentinel as...

"1870 [DATE] Every scrap of intelligence relative to the war between France and Prussia is now read with interest."

... and then provide this sentence, with the temporal metadata prepended, to the masked language model during training.
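
Below is a minimal sketch of this preprocessing step, assuming the Hugging Face transformers tokenizer for distilbert-base-cased and the one-hundred-token chunk size mentioned above. It illustrates the idea rather than reproducing the exact training pipeline; details such as overlap and sentence boundaries are simplifying assumptions.

```python
# Illustrative sketch only: chunk a document into ~100-token segments and
# prepend "<year> [DATE]" to each, as described above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[DATE]"]})

def to_training_segments(text: str, year: int, chunk_size: int = 100):
    """Yield metadata-prefixed segments of roughly `chunk_size` tokens."""
    token_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    for start in range(0, len(token_ids), chunk_size):
        chunk = tokenizer.decode(token_ids[start:start + chunk_size])
        yield f"{year} [DATE] {chunk}"

snippet = ("Every scrap of intelligence relative to the war between "
           "France and Prussia is now read with interest.")
for segment in to_training_segments(snippet, 1870):
    print(segment)
```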

## Intended uses & limitations

Exposing the model to extra-textual information allows us to use it for language change analysis and date prediction.
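
As a rough illustration of date-sensitive masked-word prediction, the sketch below queries the model with the same prompts listed in the widget metadata above, using the transformers fill-mask pipeline. The model identifier is a placeholder and should be replaced with the full Hub path of this repository.

```python
# Usage sketch: query the fine-tuned model with year-prefixed masked prompts.
# "erwt-year" is a placeholder model id; substitute the full Hub path of this
# repository (or a local checkpoint directory).
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="erwt-year")

for prompt in ["1810 [DATE] [MASK] Majesty.", "1850 [DATE] [MASK] Majesty."]:
    print(prompt)
    for prediction in mask_filler(prompt, top_k=3):
        print(f"  {prediction['token_str']!r}  (score={prediction['score']:.3f})")
```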