erfanvaredi committed on
Commit 7d22f40
1 Parent(s): 42dd07b

update readme

Files changed (1)
  1. README.md +90 -2
README.md CHANGED
@@ -3,7 +3,95 @@ tags:
  - autotrain
  - text-generation
  widget:
- - text: "I love AutoTrain because "
+ - text: 'I love AutoTrain because '
+ license: mit
+ datasets:
+ - erfanvaredi/zephyr-7b-beta-invoices
+ language:
+ - en
+ - ar
  ---

- # Model Trained Using AutoTrain
+
+ # Zephyr-7B-Customer-Support-Finetuned6
+
+ ## Introduction
+ This repository hosts `zephyr-7b-customer-support-finetuned6`, a variant of [`zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) fine-tuned for customer support scenarios. It was trained with supervised fine-tuning on the [`erfanvaredi/zephyr-7b-beta-invoices`](https://huggingface.co/datasets/erfanvaredi/zephyr-7b-beta-invoices) dataset to improve accuracy on customer queries, particularly invoice-related ones.
+
+ ## Fine-Tuning Details
+ The model was fine-tuned using the `autotrain llm` command with the following specifications (a sketch of the command is shown after this list):
+ - Base Model: HuggingFaceH4/zephyr-7b-beta
+ - Learning Rate: 2e-4
+ - Batch Size: 12
+ - Training Epochs: 10
+ - Strategy: Supervised Fine-Tuning (SFT)
+ - Evaluation Metric: Accuracy
+ - Scheduler: Cosine
+ - Target Modules: q_proj, v_proj
+
+ With this setup only the q_proj and v_proj attention projections are adapted through a PEFT adapter rather than the full model, which is why the usage example below loads the base model and then attaches the adapter.
+
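For reference, the block below is a minimal sketch of what that `autotrain llm` invocation may have looked like, assuming a recent `autotrain-advanced` release. The project name and data path are placeholders, and flag names differ between versions, so check `autotrain llm --help` for the exact options in your installation.

```bash
# Illustrative sketch only: flag names vary across autotrain-advanced
# versions, and the project name / data path are placeholders.
autotrain llm --train \
  --model HuggingFaceH4/zephyr-7b-beta \
  --project-name zephyr-7b-customer-support-finetuned6 \
  --data-path erfanvaredi/zephyr-7b-beta-invoices \
  --trainer sft \
  --use-peft \
  --target-modules q_proj,v_proj \
  --lr 2e-4 \
  --batch-size 12 \
  --epochs 10 \
  --scheduler cosine
```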
+ ## Installation and Setup
+ Install the packages needed to load the base model and attach the adapter (`peft` is required for the adapter, as used in the script below):
+ ```bash
+ pip install transformers torch peft
+ ```
+
+ ## Usage
+ To use the fine-tuned model, load the base model, attach the adapter, and wrap them in a text-generation pipeline:
+ ```python
+ # Import libraries
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+ # Load the base model and tokenizer
+ model = AutoModelForCausalLM.from_pretrained('HuggingFaceH4/zephyr-7b-beta')
+ tokenizer = AutoTokenizer.from_pretrained('HuggingFaceH4/zephyr-7b-beta')
+
+ # Attach the fine-tuned adapter (requires the peft package)
+ model.load_adapter('erfanvaredi/zephyr-7b-customer-support-finetuned6')
+
+ # Build the generation pipeline
+ pipe_PEFT = pipeline(
+     'text-generation',
+     model=model,
+     tokenizer=tokenizer
+ )
+
+ # Build the prompt with the model's chat template
+ messages = [
+     {
+         "role": "system",
+         "content": "Act as a helpful customer support assistant who handles users' inquiries and invoice-related problems.",
+     },
+     {"role": "user", "content": "tell me about canceling the newsletter subscription"},
+ ]
+ prompt = pipe_PEFT.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+ # Run the query and print only the assistant's reply
+ outputs = pipe_PEFT(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"].split('<|assistant|>')[1])
+
+ # Certainly! If you'd like to cancel your newsletter subscription, you can typically do so by following these steps:
+ #
+ # 1. Look for an "Unsubscribe" or "Cancel Subscription" link at the bottom of the newsletter email you received. Click on this link to initiate the cancellation process.
+ #
+ # 2. If you're having trouble finding the link, you can also log in to your account on the company's website or platform. Go to your account settings or preferences, and look for an option to manage or cancel your subscriptions.
+ #
+ # 3. Once you've found the cancellation link or option, follow the prompts to confirm that you want to unsubscribe. This may involve entering your email address or account information to verify your identity.
+ #
+ # 4. After you've successfully canceled your subscription, you should stop receiving newsletters from the company. If you continue to receive emails, you may need to wait for a processing period or contact customer support for further assistance.
+ #
+ # I hope that helps! Let me know if you have any other questions or concerns.
+ ```
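If you prefer to attach the adapter with `peft` directly instead of `model.load_adapter`, a minimal sketch is shown below using the same repository IDs. The `merge_and_unload()` step assumes a LoRA-style adapter and is optional; it folds the adapter weights into the base model for faster inference.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base = AutoModelForCausalLM.from_pretrained('HuggingFaceH4/zephyr-7b-beta')
tokenizer = AutoTokenizer.from_pretrained('HuggingFaceH4/zephyr-7b-beta')

# Wrap the base model with the fine-tuned adapter weights
model = PeftModel.from_pretrained(base, 'erfanvaredi/zephyr-7b-customer-support-finetuned6')

# Optional: fold the adapter into the base weights (LoRA-style adapters only)
model = model.merge_and_unload()
```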
+
+ ## License
+ This project is licensed under the MIT License.
+
+ ## Contact
+ For inquiries or collaboration, please reach out via [LinkedIn](https://linkedin.com/in/erfanvaredi).