# Swedish OCR correction
This model corrects OCR errors in Swedish text.
## Try it!
- On short texts in the inference widget to the right ->
- On files or longer texts in the demo
## Model Description
This model is a fine-tuned version of byt5-small, a character-level multilingual transformer. The fine-tuning data consists of OCR samples from Swedish newspapers and historical documents. The model works on texts up to 128 UTF-8 bytes (see Length limit).
## Training Data
The base model byt5 is pre-trained on mc4. This fine-tuned version is further trained on:
- Swedish newspapers from 1818 to 2018. Parts of the dataset are available from Språkbanken Text: Swedish newspapers 1818-1870, Swedish newspapers 1871-1906.
- Swedish blackletter documents from 1626 to 1816, available from Språkbanken Text: Swedish fraktur 1626-1816.
This data includes characters no longer used in Swedish, such as the long s (ſ) and the eszett ligature (ß), so the model should be able to handle texts containing these characters. See, for example, the Long-s piano ad example in the inference widget to the right.
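As a quick illustration, you can pass text containing such characters through the pipeline set up in the Usage section below. The sample string here is hypothetical, not taken from the training data:

```python
# Hypothetical blackletter-style sample containing the long s (ſ);
# `pipe` is defined in the Usage section below.
print(pipe('En beſynnerlig fiſk')[0]['generated_text'])
```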
## Usage
Use the code below to get started with the model.
```python
from transformers import pipeline, T5ForConditionalGeneration, AutoTokenizer

# Load the fine-tuned model and the base byt5 tokenizer
model = T5ForConditionalGeneration.from_pretrained('viklofg/swedish-ocr-correction')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-small')
pipe = pipeline('text2text-generation', model=model, tokenizer=tokenizer)

# Example input containing typical OCR errors (left uncorrected on purpose)
ocr = 'Den i HandelstidniDgens g&rdagsnnmmer omtalade hvalfisken, sorn fångats i Frölnndaviken'
output = pipe(ocr)
print(output)
```
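The pipeline returns a list with one dict per generated sequence; the corrected text is stored under the `generated_text` key:

```python
# output looks like [{'generated_text': '...'}]
corrected = output[0]['generated_text']
print(corrected)
```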
## Length limit
The model accepts input sequences of at most 128 UTF-8 bytes; longer sequences are truncated to this limit. 128 UTF-8 bytes corresponds to slightly fewer than 128 characters of Swedish text, since most characters are encoded as one byte, but non-ASCII characters such as Å, Ä, and Ö take two (or more) bytes.
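To handle longer texts, one option is to split the input into chunks that fit within the limit and correct each chunk separately. The helper below is a minimal sketch, not part of the model card: it assumes whitespace boundaries are acceptable split points, that no single word exceeds the limit, and it discards any cross-chunk context (the linked demo may handle this differently):

```python
def split_by_bytes(text, max_bytes=128):
    """Split text into chunks of at most max_bytes UTF-8 bytes at word boundaries.

    Hypothetical helper (not part of the model card); assumes no single
    word is longer than max_bytes.
    """
    chunks, current = [], ''
    for word in text.split():
        candidate = f'{current} {word}'.strip()
        if len(candidate.encode('utf-8')) <= max_bytes:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

# Correct each chunk separately and rejoin the results
long_ocr = ocr  # reuse the example from the Usage section
corrected = ' '.join(pipe(chunk)[0]['generated_text'] for chunk in split_by_bytes(long_ocr))
print(corrected)
```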