---
datasets:
- SajjadAyoubi/persian_qa
language:
- fa
pipeline_tag: question-answering
license: apache-2.0
library_name: transformers
tags:
- roberta
- question-answering
- Persian
---
# Tara

Tara is a fine-tuned version of the `FacebookAI/roberta-base` model for question answering, trained on the `SajjadAyoubi/persian_qa` dataset. The model answers questions posed in Persian by extracting the answer span from a given context.
## Model Description

This model was fine-tuned on a dataset of Persian question-answering pairs. It uses the `roberta-base` architecture to extract answers from a provided context, and training focused on improving the model's handling of Persian text.
## Usage

To use this model for question answering, load it with the `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

model_name = "hosseinhimself/tara-roberta-base-fa-qa"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# Create a QA pipeline
qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Example usage
context = "شرکت فولاد مبارکه در سال 1371 تأسیس شد."  # "Mobarakeh Steel Company was founded in 1371."
question = "چه زمانی شرکت فولاد مبارکه تأسیس شد؟"  # "When was Mobarakeh Steel Company founded?"

# Run the pipeline on the question/context pair
results = qa_pipeline(question=question, context=context)

# Display the extracted answer
print(results["answer"])
```
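Under the hood, the pipeline performs extractive QA: the model emits one start logit and one end logit per token, and the answer is the span whose start and end logits sum to the highest score, with start ≤ end. The sketch below illustrates that span selection with made-up tokens and logits (not real model output):

```python
# Extractive-QA span selection sketch. Tokens and logits are made up
# for illustration; a real model produces one start and one end logit
# per subword token.
tokens = ["شرکت", "فولاد", "مبارکه", "در", "سال", "1371", "تأسیس", "شد"]
start_logits = [0.1, 0.2, 0.1, 0.3, 1.5, 4.0, 0.2, 0.1]
end_logits = [0.1, 0.1, 0.2, 0.2, 0.4, 4.2, 0.5, 0.3]

def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end, score) maximizing start+end logit, start <= end."""
    best = (0, 0, float("-inf"))
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best[2]:
                best = (s, e, score)
    return best

s, e, _ = best_span(start_logits, end_logits)
print(" ".join(tokens[s : e + 1]))  # prints "1371"
```

Real pipelines add refinements on top of this (softmax normalization of scores, filtering spans that fall in the question or in padding), but the core search is the same.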
## Datasets

The model was fine-tuned using the `SajjadAyoubi/persian_qa` dataset.
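For training extractive QA models, examples are typically stored in the SQuAD-style layout: a context, a question, and gold answers given as text plus a character offset into the context. The record below is illustrative only (the exact field names in the dataset are an assumption based on this common convention):

```python
# Illustrative SQuAD-style QA record (field names assumed, not taken
# from the actual dataset). `answer_start` is a character offset into
# `context`; the check below verifies the offset is consistent.
record = {
    "context": "شرکت فولاد مبارکه در سال 1371 تأسیس شد.",
    "question": "چه زمانی شرکت فولاد مبارکه تأسیس شد؟",
    "answers": {"text": ["1371"], "answer_start": [25]},
}

answer = record["answers"]["text"][0]
start = record["answers"]["answer_start"][0]

# The answer text must match the context slice at the stored offset.
assert record["context"][start : start + len(answer)] == answer
```

Keeping offsets consistent with the context string matters because fine-tuning converts these character offsets into token-level start/end labels.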
## Languages

The model supports Persian.
## Additional Information

For details on fine-tuning similar models or to report issues, please see the Hugging Face documentation.