---
license: cc-by-nc-4.0
language:
- uk
---

# Model Card for Spivavtor-Large

This model was obtained by fine-tuning the corresponding `bigscience/mt0-large` model on the Spivavtor dataset. All details of the dataset and the fine-tuning process can be found in our paper.

**Paper:** Spivavtor: An Instruction Tuned Ukrainian Text Editing Model

**Authors:** Aman Saini, Artem Chernodub, Vipul Raheja, Vivek Kulkarni

## Model Details

### Model Description

- **Language**: Ukrainian
- **Finetuned from model:** bigscience/mt0-large

## How to use

We make the following models available from our paper.
| Model | Number of parameters | Reference name in Paper |
|-----------------|-----|----------------------|
| Spivavtor-large | 1.2B | SPIVAVTOR-MT0-LARGE |
| Spivavtor-xxl | 11B | SPIVAVTOR-AYA-101 |
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("grammarly/spivavtor-large")
model = AutoModelForSeq2SeqLM.from_pretrained("grammarly/spivavtor-large")

# English translation of the input: "Correct the grammar in this sentence:
# Thanks for the information! Nadiia and I have just left the house"
input_text = 'Виправте граматику в цьому реченнi: Дякую за iнформацiю! ми з Надiєю саме вийшли з дому'

inputs = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(inputs, max_length=256)
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
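As the example above shows, the model takes a Ukrainian instruction concatenated with the input text as a single prompt. A small helper for assembling such prompts for different editing tasks can be sketched as follows. Note that the instruction strings and task names below are illustrative assumptions, not the canonical prompts from the paper; consult the paper for the exact instruction wording used during fine-tuning.

```python
# Minimal sketch: build an instruction-plus-input prompt for Spivavtor.
# The instruction wording here is a hypothetical placeholder, NOT the
# canonical set from the paper.
INSTRUCTIONS = {
    "gec": "Виправте граматику в цьому реченні:",   # grammatical error correction
    "paraphrasing": "Перефразуйте це речення:",      # paraphrasing
    "simplification": "Спростіть це речення:",       # simplification
}

def build_prompt(task: str, text: str) -> str:
    """Concatenate the task instruction and the input text into one prompt."""
    if task not in INSTRUCTIONS:
        raise ValueError(f"Unknown task: {task!r}")
    return f"{INSTRUCTIONS[task]} {text}"

prompt = build_prompt("gec", "ми з Надiєю саме вийшли з дому")
```

The resulting string is then passed to `tokenizer.encode(...)` exactly as in the usage example above.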