---
library_name: peft
license: wtfpl
language:
- en
pipeline_tag: text-generation
---
 
## Model description

The tiiuae/falcon-7b model fine-tuned for `Paraphrasing`, `Changing the Tone` of an input sentence (to `casual`, `professional`, or `witty`), and `Summary` and `Topic` generation from a dialogue. Data for `Paraphrasing` and `Changing the Tone` was generated using gpt-35-turbo, and a sample of roughly 1000 data points from the [DialogSum](https://github.com/cylnlp/dialogsum) dataset was used for `Summary` and `Topic` generation.

See the [llm-toys](https://github.com/kuutsav/llm-toys) repo for usage and other details.
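This card ships no inference code of its own; the following is a minimal loading sketch, assuming the standard `transformers` + `peft` adapter flow. The adapter id and the prompt string are placeholders, not taken from this card; the real prompt templates are defined in the llm-toys repo.

```python
# Minimal loading sketch (assumption: standard transformers + peft flow).
# ADAPTER_ID is a placeholder for this repo's id; the prompt below is only
# illustrative, the real prompt templates live in the llm-toys repo.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ADAPTER_ID = "<this-adapter-repo>"  # placeholder

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped custom modeling code at release
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer(
    "Paraphrase: The meeting went better than expected.",  # illustrative only
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```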
Sample training data:

```json
[
    {
        "original": "If you have any further questions, feel free to ask.",
        "casual": "Got more questions? Feel free to ask away. I'm here to help!",
        "professional": "Should you have any additional inquiries, please don't hesitate to ask.",
        "witty": "Curiosity is always in style! If you have more mysteries to solve, I'm all ears!",
        "paraphrase": "Don't hesitate to ask if you have any more questions."
    },
    {
        "fname": "dev_473",
        "dialogue": "#Person1#: Did you enjoy your weekend at the highland hotel? I heard it's and excellent place to stay and has good facilities.\n#Person2#: I had a wonderful time. The rooms are not very big, but they are well furnished. The restaurant is excellent and reasonably priced. There's a sauna and a Jacuzzi.\n#Person1#: Do they have a swimming pool?\n#Person2#: No, they don't. they have a beauty parlor, but I didn't go there.\n#Person1#: What's the service like?\n#Person2#: It's very good. Check in and check out at the reception only took a few minutes. The wait staff is very good. A waiter recommended their baked fish, which tasted wonderful. The hotel was quite full, so I'd suggest making a reservation if you intend to go there. The hotel offers a discount at the weekends.\n#Person1#: It sounds perfect. Did you have any complaints at all?\n#Person2#: There was a problem with the internet access, so I couldn't check my email, but I didn't complain about it to the management.\n#Person1#: I suppose you were happy to forget about the outside world.\n#Person2#: Yes, I was. Here's their business card.\n#Person1#: Thanks. Was there a mina bar in the room?\n#Person2#: No, there wasn't. There is a bar on the ground floor and of course you can buy drinks in the restaurant to go with your meal.\n#Person1#: One of the things I dislike about hotels is that everyone expects tips.\n#Person2#: I know. At the inland hotel, they have an interesting policy. When you check out, you put some money in a special box at reception. Each evening, the money in the box is shared equally by the hotel staff.",
        "summary": "#Person2# enjoys #Person2#'s weekend at the highland hotel because of the hotel's excellent and reasonably priced restaurant and good service. #Person2# introduces the hotel's facilities, weekend discount, and its interesting tip policy and suggests #Person1# make a reservation in advance.",
        "topic": "Experience in hotel"
    }
]
```
## Training params

```json
{
    "batch_size": 1,
    "eval_ratio": 0.05,
    "eval_steps": 100,
    "gradient_accumulation_steps": 4,
    "learning_rate": 0.0001,
    "logging_steps": 100,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "lora_r": 16,
    "max_length": 1024,
    "model_name": "tiiuae/falcon-7b",
    "num_train_epochs": 3,
    "seed": 10,
    "task_type": "paraphrase_tone,dialogue_summary_topic",
    "use_aim": true
}
```
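As a rough, unofficial illustration, the LoRA-specific values above would map onto a `peft` `LoraConfig` as sketched below. `target_modules` is an assumption (Falcon's fused attention projection is typically `query_key_value`); the actual training script lives in the llm-toys repo.

```python
# Hypothetical mapping of the params above onto a peft LoraConfig.
# target_modules is an assumption, not stated in this card.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,               # "lora_r"
    lora_alpha=32,      # "lora_alpha"
    lora_dropout=0.05,  # "lora_dropout"
    target_modules=["query_key_value"],  # assumed Falcon attention module
)
```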
## Training curve

![train_eval_loss](falcon-7b-paraphrase-tone-dialogue-summary-topic.jpeg)
## Training procedure

The following `bitsandbytes` quantization config was used during training (see the sketch after this list):
- load_in_8bit: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
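A minimal sketch of these settings as a `transformers` `BitsAndBytesConfig`. `load_in_4bit=True` is an assumption: it is not listed above, but the 4-bit options imply it.

```python
# Sketch of the listed quantization settings as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,  # assumption: implied by the 4-bit settings below
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```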
 
### Framework versions

- PEFT 0.4.0.dev0