---
language: fa
license: mit
pipeline_tag: text2text-generation
---


# PersianEase

This model rewrites formal Persian text as informal (conversational) Persian. It was fine-tuned on the Mohavere Dataset (Takalli, Vahideh; Kalantari, Fateme; Shamsfard, Mehrnoush. *Developing an Informal-Formal Persian Corpus*, 2022), starting from the pretrained model [persian-t5-formality-transfer](https://huggingface.co/erfan226/persian-t5-formality-transfer).


## Evaluation Metrics

| Metric               | Basic Model | Base Persian T5 | Previous Semester Model | Our Model      |
|----------------------|-------------|-----------------|--------------------------|----------------|
| BLEU-1               | 0.269       | 0.256           | 0.397                    | **0.664**      |
| BLEU-2               | 0.137       | 0.171           | 0.299                    | **0.539**      |
| BLEU-3               | 0.084       | 0.121           | 0.231                    | **0.444**      |
| BLEU-4               | 0.054       | 0.086           | 0.177                    | **0.364**      |
| Bert-Score Precision | 0.581       | 0.583           | 0.665                    | **0.826**      |
| Bert-Score Recall    | 0.629       | 0.614           | 0.659                    | **0.820**      |
| Bert-Score F1 Score  | 0.603       | 0.595           | 0.658                    | **0.822**      |
| ROUGE-1 F1 Score     | 0.259       | -               | -                        | **0.701**      |
| ROUGE-2 F1 Score     | 0.061       | -               | -                        | **0.475**      |
| ROUGE-L F1 Score     | 0.250       | -               | -                        | **0.675**      |
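
For reference, the sketch below shows one way to compute BLEU, ROUGE, and BERTScore with the Hugging Face `evaluate` library. The prediction/reference pair is a toy example only; it is not the evaluation setup used to produce the table above.

```python
import evaluate

# Illustrative pair only: a model output vs. a gold informal sentence.
predictions = ["آرزوشه یه رستوران ببرمش"]
references = ["آرزوشه که یه رستوران ببرمش"]

bleu = evaluate.load("bleu")            # reports BLEU and 1- to 4-gram precisions
rouge = evaluate.load("rouge")          # ROUGE-1/2/L F1 scores
bertscore = evaluate.load("bertscore")  # uses a multilingual BERT for lang="fa"

print(bleu.compute(predictions=predictions, references=references, max_order=4))
# ROUGE's default tokenizer is English-oriented, so use whitespace tokenization for Persian.
print(rouge.compute(predictions=predictions, references=references, tokenizer=lambda t: t.split()))
print(bertscore.compute(predictions=predictions, references=references, lang="fa"))
```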




## Usage

```python

from transformers import T5ForConditionalGeneration, AutoTokenizer
import torch

model = T5ForConditionalGeneration.from_pretrained('parsi-ai-nlpclass/PersianEase')
tokenizer = AutoTokenizer.from_pretrained('parsi-ai-nlpclass/PersianEase')

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

def test_model(text):
    # The model expects the "formal: " task prefix used during fine-tuning.
    inputs = tokenizer.encode("formal: " + text, return_tensors='pt',
                              max_length=128, truncation=True, padding='max_length')
    inputs = inputs.to(device)

    outputs = model.generate(inputs, max_length=128, num_beams=4)
    print("Output:", tokenizer.decode(outputs[0], skip_special_tokens=True))

# "I just wanted to say how grateful I am for everything you have done for me."
text = "من فقط می‌خواستم بگویم که چقدر قدردان همه چیزهایی هستم که برای من انجام داده ای."
print("Original:", text)
test_model(text)

# Output: من فقط میخوام بگم که چقدر قدردان همه کاریم که برای من انجام دادی. دوستی تو برای من یه هدیه بزرگه و من همیشه از داشتن یه دوست مثل تو خوشحالم.
# ("I just wanna say how grateful I am for everything you did for me. Your friendship is a big gift to me, and I'm always happy to have a friend like you.")

# "It is her wish that I take her to a restaurant."
text = "آرزویش است او را یک رستوران ببرم."
print("Original:", text)
test_model(text)

# Output: آرزوشه یه رستوران ببرمش
# ("She wishes I'd take her to a restaurant.")


```
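
Alternatively, the same checkpoint can be driven through the `text2text-generation` pipeline. This is a minimal sketch; the generation arguments are illustrative, not a recommended setting.

```python
from transformers import pipeline

# Loads the model and tokenizer in one step from the Hub.
pipe = pipeline(task='text2text-generation', model='parsi-ai-nlpclass/PersianEase')

# Keep the "formal: " prefix; generation kwargs are forwarded to model.generate().
result = pipe("formal: آرزویش است او را یک رستوران ببرم.", max_length=128, num_beams=4)
print(result[0]['generated_text'])
```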