
This is a fine-tuned version of the google/mt5-base model used as a Russian text summarizer, trained on a dataset of ~50k samples. Updates are coming soon; the goal is to improve summary quality, length, and accuracy.

Example Usage:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "sarahai/ru-sum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

device = torch.device("cpu")  # use torch.device("cuda") if a GPU is available
model = model.to(device)

input_text = "текст на русском"  # your input text in Russian
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device)
outputs = model.generate(input_ids, max_length=100, min_length=50, length_penalty=2.0, num_beams=4, early_stopping=True)  # adjust generation parameters to your preferences
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(summary)
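
For longer documents, the input can be truncated to the model's context window before generation. The snippet below is a minimal sketch, not part of the released card: the summarize helper, the 512-token truncation limit, and the automatic GPU selection are illustrative choices layered on top of the example above.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def summarize(text, model, tokenizer, device, max_input_tokens=512):
    # Tokenize and truncate the input so it fits within the model's context window
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_input_tokens).to(device)
    # Same generation settings as the basic example; tune them to your preferences
    outputs = model.generate(**inputs, max_length=100, min_length=50, length_penalty=2.0, num_beams=4, early_stopping=True)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("sarahai/ru-sum")
model = AutoModelForSeq2SeqLM.from_pretrained("sarahai/ru-sum").to(device)

print(summarize("текст на русском", model, tokenizer, device))  # prints the generated summary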

References: Hugging Face Model Hub, T5 paper.

Disclaimer: The model's performance may be influenced by the quality and representativeness of the data it was fine-tuned on. Users are encouraged to assess the model's suitability for their specific applications and datasets.

Model size: 300M parameters, F32 tensors (Safetensors format)

Fine-tuned from: google/mt5-base
