---
language:
- sv
---
:construction_worker: This model card is under construction!
# Swedish OCR correction
This model corrects OCR errors in Swedish text.
## Model Description
This model is a fine-tuned version of byt5-small, a byte-level multilingual transformer. It is fine-tuned on OCR samples from Swedish newspapers and historical texts.
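Because ByT5 operates on raw UTF-8 bytes rather than subword tokens, sequence lengths are measured in bytes, which is why the input limit in the Usage section below is given in bytes. A minimal sketch of this behaviour, using the tokenizer of the public base model `google/byt5-small` (the printed counts assume the standard ByT5 byte-plus-EOS encoding):

```python
from transformers import AutoTokenizer

# Tokenizer of the public base model; the fine-tuned model shares the same byte vocabulary.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")

text = "Språkbanken"  # 11 characters, 12 UTF-8 bytes (å takes two bytes)
ids = tokenizer(text).input_ids

# One token id per UTF-8 byte, plus an end-of-sequence token.
print(len(text.encode("utf-8")))  # 12
print(len(ids))                   # 13 (12 bytes + </s>)
```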
## Training Data
The base model, ByT5, is pre-trained on mC4. This fine-tuned version is further trained on:
- Swedish newspapers from 1818 to 2018. Parts of the dataset are available from Språkbanken Text: Swedish newspapers 1818-1870, Swedish newspapers 1871-1906.
- Swedish blackletter documents from 1626 to 1816, available from Språkbanken Text: Swedish fraktur 1626-1816.
## Usage
The model accepts input sequences of at most 128 UTF-8 bytes. Use the code below to get started with the model.
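A minimal sketch using Hugging Face transformers; the repository ID, example sentence, and in-code comments about expected behaviour are placeholders and assumptions, not details taken from this model card:

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Placeholder repository ID; replace with the actual path of this model on the Hub.
model_id = "<org>/byt5-small-swedish-ocr-correction"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Illustrative OCR output with typical recognition errors (invented example).
ocr_text = "Det vnr en ganq en katt som hette Måns."

# The model accepts at most 128 UTF-8 bytes per sequence, so longer texts
# should be split into chunks of at most 128 bytes before correction.
inputs = tokenizer(ocr_text, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

corrected = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(corrected)
```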