
language:
  - ne_NP

tags:
  - summarization
  - ne_NP

license: apache-2.0

datasets:
  - sanjeev-bhandari01/nepali-summarization-dataset

metrics:
  - rouge

model_name: "MBart Nepali Summarization Model" model_description: > This model is a fine-tuned version of the facebook/mbart-large-cc25 on the Nepali Summarization Dataset. It is designed to generate concise summaries of Nepali articles.

intended_use: >
  The model is intended for summarizing Nepali news articles and other textual
  content written in Nepali. It can be used in applications such as news
  aggregation, content summarization, and quick information retrieval.

how_to_use: |
  To use this model, load it with the transformers library as follows:

```python
from transformers import MBartTokenizer, MBartForConditionalGeneration

# Load the fine-tuned checkpoint (replace with your model path or Hub ID)
model_name = 'path_to_your_finetuned_model'
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Example text ("Place your Nepali news article here.")
article = "तपाईंको नेपाली समाचार लेख यहाँ राख्नुहोस्।"

# Tokenize the input
inputs = tokenizer(article, return_tensors="pt", max_length=1024, truncation=True)

# Generate the summary with beam search
summary_ids = model.generate(inputs['input_ids'], max_length=128, num_beams=4, early_stopping=True)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

print(summary)
```
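
mbart-large-cc25 relies on language-code special tokens such as ne_NP. The card does not say whether the fine-tuned checkpoint requires them at inference time, but if generated summaries come out in the wrong language or start oddly, explicitly setting the source language and the decoder start token is a common fix. The snippet below is a hedged sketch under that assumption, reusing the placeholder model path from above.

```python
# Hedged sketch (not from the original card): handle mBART's language-code
# tokens explicitly. 'path_to_your_finetuned_model' is a placeholder.
from transformers import MBartTokenizer, MBartForConditionalGeneration

model_name = 'path_to_your_finetuned_model'
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

article = "तपाईंको नेपाली समाचार लेख यहाँ राख्नुहोस्।"

# Tag the input as Nepali so the ne_NP source-language token is appended
tokenizer.src_lang = "ne_NP"
inputs = tokenizer(article, return_tensors="pt", max_length=1024, truncation=True)

# mbart-large-cc25 checkpoints conventionally start decoding with the target
# language code; here the target language is also Nepali.
ne_id = tokenizer.convert_tokens_to_ids("ne_NP")
summary_ids = model.generate(
    **inputs,
    max_length=128,
    num_beams=4,
    early_stopping=True,
    decoder_start_token_id=ne_id,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```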
Model size: 611M parameters (Safetensors, F32)
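
The card lists ROUGE as its metric. As a hedged illustration (not the author's evaluation script), generated summaries can be scored against reference summaries with the Hugging Face evaluate library; the prediction and reference strings below are placeholders, not examples from the dataset.

```python
# Hedged sketch: score generated summaries against references with ROUGE
# using the `evaluate` library. The strings below are placeholders.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["model-generated summary goes here"]
references = ["human-written reference summary goes here"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum F-measures

# Note: the default ROUGE tokenization is oriented toward English text, so
# scores on Devanagari text should be interpreted with care.
```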