---
license: apache-2.0
datasets:
  - thebogko/bulgarian-grammar-mistakes
language:
  - bg
pipeline_tag: text2text-generation
---

mt5-base finetuned bulgarian-grammar-mistakes

This model is a fine-tuned checkpoint of mt5-base, trained on bulgarian-grammar-mistakes using only two of the dataset's four error types:

  • article_misuse, and
  • pronoun_misuse

Restricting training to these two types lets the model focus on them more clearly, as they are more common among native Bulgarian speakers, whereas the other two types are more common among Bulgarian learners.
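As a reference for reproducing this data selection, the sketch below filters the dataset down to the two error types with the datasets library. The column name "error_type" is an assumption used only for illustration; check the dataset card for the actual field names.

from datasets import load_dataset

# Load the full dataset from the Hugging Face Hub
dataset = load_dataset("thebogko/bulgarian-grammar-mistakes")

# Keep only the two error types used for fine-tuning.
# NOTE: the column name "error_type" is assumed for illustration;
# see the dataset card for the real schema.
kept_types = {"article_misuse", "pronoun_misuse"}
filtered = dataset.filter(lambda example: example["error_type"] in kept_types)

print(filtered)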

Model Details

Model Description

  • Model type: Sequence-to-sequence generation
  • Language(s) (NLP): Bulgarian
  • License: apache-2.0
  • Finetuned from model: google/mt5-base

Uses

Intended use of the model includes but is not limited to:

  • comparison and development of Bulgarian error correction NLP systems by developers
  • incorporation in Bulgarian language learner applications
  • research in the field of Bulgarian NLP grammar error correction

Direct Use

No further fine-tuning is needed, unless additional error types are to be added; a sketch of continued fine-tuning is shown below.
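If more error types are needed, the checkpoint can be fine-tuned further in the standard seq2seq fashion. The sketch below uses the transformers Seq2SeqTrainer; the column names ("erroneous", "correct") and the hyperparameter values are illustrative assumptions, not the settings used to train this model.

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "thebogko/mt5-finetuned-bulgarian-grammar-mistakes"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Dataset with erroneous/corrected sentence pairs; the column names
# "erroneous" and "correct" below are placeholders, not the real schema.
dataset = load_dataset("thebogko/bulgarian-grammar-mistakes")

def preprocess(batch):
    model_inputs = tokenizer(batch["erroneous"], max_length=100, truncation=True)
    labels = tokenizer(text_target=batch["correct"], max_length=100, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)

args = Seq2SeqTrainingArguments(
    output_dir="mt5-bulgarian-gec-extended",
    learning_rate=3e-4,              # illustrative value only
    per_device_train_batch_size=8,   # illustrative value only
    num_train_epochs=3,              # illustrative value only
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()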

Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people, especially Bulgarian learners. In addition, the model was not trained to produce factual or truthful representations of people or events, so using it to generate such content is out of scope for its abilities.

Bias, Risks, and Limitations

The model specialises in identifying and correcting Bulgarian grammar errors related to article and pronoun misuse, so it will likely not perform well on other types of errors. Additionally, the dataset used for fine-tuning does not cover every possible error of those two types, so outputs should be treated with caution and checked for grammatical validity.

Recommendations

Users are strongly advised to double-check the validity of the model's outputs and to strive to understand the underlying grammatical rules of the language, rather than taking the model's outputs at face value.

How to Get Started with the Model

Use the code below to get started with the model.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned model and its tokenizer, and pick a device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = AutoTokenizer.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes")
model = AutoModelForSeq2SeqLM.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes")
model.to(device)

# Sentence containing an article error ("Владетеля" should be "Владетелят")
erroneous_sentence = 'Владетеля умря още млад.'
encoded_source = tokenizer(erroneous_sentence,
                           return_tensors='pt',
                           max_length=100,
                           padding='max_length')
encoded_source = encoded_source.to(device)

# Generate the corrected sentence and decode it back to text
output_ids = model.generate(**encoded_source, max_length=100)
correct_sentence = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(correct_sentence)

"Владетелят умря още млад."

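Several sentences can also be corrected in one batch by letting the tokenizer pad them together. This continues from the setup above; the second sentence is only an illustrative erroneous input, not taken from the dataset.

sentences = [
    'Владетеля умря още млад.',
    'Подарих книгата на момчето, който стоеше до вратата.',  # illustrative pronoun error
]
batch = tokenizer(sentences, return_tensors='pt', padding=True).to(device)
output_ids = model.generate(**batch, max_length=100)
corrections = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
for corrected in corrections:
    print(corrected)
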
Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]