---
datasets:
- SajjadAyoubi/persian_qa
language:
- fa
pipeline_tag: question-answering
license: apache-2.0
library_name: transformers
tags:
- roberta
- question-answering
- Persian
---
# Tara-Roberta-Base-FA-QA

**Tara-Roberta-Base-FA-QA** is a fine-tuned version of the `FacebookAI/roberta-base` model for question-answering tasks, trained on the [SajjadAyoubi/persian_qa](https://huggingface.co/datasets/SajjadAyoubi/persian_qa) dataset. It is designed to extract answers to Persian questions from a provided context passage.

## Model Description

This model was fine-tuned on Persian question-answer pairs. It uses the `roberta-base` architecture and answers a question by selecting the relevant span from the provided context. The training process focused on improving the model's ability to handle Persian text and answer questions effectively.

## Training Details

The model was trained for 3 epochs with the following training and validation losses:

- **Epoch 1**:
  - Training Loss: 2.0713
  - Validation Loss: 2.1061
- **Epoch 2**:
  - Training Loss: 2.1558
  - Validation Loss: 2.0121
- **Epoch 3**:
  - Training Loss: 2.0951
  - Validation Loss: 2.0168

## Evaluation Results

Over the three epochs:

- **Training Loss**: fluctuated between 2.07 and 2.16 rather than decreasing monotonically.
- **Validation Loss**: improved overall from 2.1061 to 2.0168, with the lowest value (2.0121) reached after the second epoch, indicating reasonable generalization to unseen data.

## Usage

To use this model for question-answering tasks, load it with the `transformers` library:

```python
from transformers import RobertaForQuestionAnswering, RobertaTokenizer

model_name = "hosseinhimself/tara-roberta-base-fa-qa"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaForQuestionAnswering.from_pretrained(model_name)

# Example usage: the question is passed first, the context second
question = "چه زمانی شرکت فولاد مبارکه تأسیس شد؟"
context = "شرکت فولاد مبارکه در سال 1371 تأسیس شد."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)

# The model scores every token as a possible answer start/end;
# the highest-scoring span is the extracted answer
start_index = int(outputs.start_logits.argmax())
end_index = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start_index : end_index + 1])
print(answer)
```

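For quick experiments, the same checkpoint can also be used through the high-level `pipeline` API, which handles tokenization and answer-span decoding for you (a minimal sketch using the same question/context pair as above):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="hosseinhimself/tara-roberta-base-fa-qa")

result = qa(
    question="چه زمانی شرکت فولاد مبارکه تأسیس شد؟",
    context="شرکت فولاد مبارکه در سال 1371 تأسیس شد.",
)
print(result["answer"], result["score"])
```
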
## Datasets

The model was fine-tuned using the [SajjadAyoubi/persian_qa](https://huggingface.co/datasets/SajjadAyoubi/persian_qa) dataset.

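To inspect the training data directly, the dataset can be loaded with the `datasets` library. This is a small sketch; the `question`/`context`/`answers` field names assume the dataset follows the usual SQuAD-style schema:

```python
from datasets import load_dataset

dataset = load_dataset("SajjadAyoubi/persian_qa")

# Peek at one training example (field names assumed SQuAD-style)
example = dataset["train"][0]
print(example["question"])
print(example["context"])
print(example["answers"])
```
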
## Languages

The model supports the Persian language.

## Additional Information

For more details on how to fine-tune similar models or to report issues, please visit the [Hugging Face documentation](https://huggingface.co/docs/transformers).
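
As a rough starting point for reproducing a similar fine-tune, the sketch below uses the `Trainer` API. It is illustrative only: apart from the 3-epoch count reported above, the preprocessing choices and hyperparameters (batch size, learning rate, sequence length) are assumptions, not the settings actually used for this model.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

base_model = "FacebookAI/roberta-base"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

# Assumes SQuAD-style fields (`question`, `context`, `answers`) and a validation split
dataset = load_dataset("SajjadAyoubi/persian_qa")

def preprocess(examples):
    # Tokenize question/context pairs, truncating only the context
    enc = tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",
        max_length=384,          # assumed
        padding="max_length",
    )
    start_positions, end_positions = [], []
    for i, answers in enumerate(examples["answers"]):
        if len(answers["text"]) == 0:
            # Unanswerable question: point both positions at the first token
            start_positions.append(0)
            end_positions.append(0)
            continue
        start_char = answers["answer_start"][0]
        end_char = start_char + len(answers["text"][0]) - 1
        # Map character offsets in the context (sequence 1) to token indices
        start_tok = enc.char_to_token(i, start_char, sequence_index=1)
        end_tok = enc.char_to_token(i, end_char, sequence_index=1)
        if start_tok is None or end_tok is None:
            # Answer fell outside the truncated context
            start_tok, end_tok = 0, 0
        start_positions.append(start_tok)
        end_positions.append(end_tok)
    enc["start_positions"] = start_positions
    enc["end_positions"] = end_positions
    return enc

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = TrainingArguments(
    output_dir="tara-roberta-base-fa-qa",
    num_train_epochs=3,              # matches this card
    per_device_train_batch_size=16,  # assumed
    learning_rate=3e-5,              # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=default_data_collator,
)
trainer.train()
print(trainer.evaluate())  # reports the validation loss
```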