robinsmits committed
Commit 730fe06
1 Parent(s): 702d923

Update README.md

Files changed (1): README.md (+31 -2)
README.md CHANGED
@@ -20,14 +20,43 @@ pipeline_tag: text-generation
 
 # Mistral-Instruct-7B-v0.2-ChatAlpaca
 
+## Model description
+
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the English [robinsmits/ChatAlpaca-20K](https://www.huggingface.co/datasets/robinsmits/ChatAlpaca-20K) dataset.
 
 It achieves the following results on the evaluation set:
 - Loss: 0.8584
 
-## Model description
-
-More information needed
+## Model usage
+
+A basic example of how to use the fine-tuned model. Note that this example is a modified version of the usage example for the base model.
+
+```python
+import torch
+from peft import AutoPeftModelForCausalLM
+from transformers import AutoTokenizer
+
+device = "cuda"
+
+# Load the PEFT adapter on top of the 4-bit quantized base model
+model = AutoPeftModelForCausalLM.from_pretrained(
+    "robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpaca",
+    device_map="auto",
+    load_in_4bit=True,
+    torch_dtype=torch.bfloat16,
+)
+
+tokenizer = AutoTokenizer.from_pretrained("robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpaca")
+
+messages = [
+    {"role": "user", "content": "What is your favourite condiment?"},
+    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
+    {"role": "user", "content": "Do you have mayonnaise recipes?"}
+]
+
+# Apply the Mistral chat template and generate a response
+encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
+generated_ids = model.generate(input_ids=encodeds.to(device), max_new_tokens=512, do_sample=True)
+decoded = tokenizer.batch_decode(generated_ids)
+print(decoded[0])
+```
 
 ## Intended uses & limitations
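
The example above waits for the full reply before printing anything. As an optional variation that is not part of the original card, the generate call can stream tokens as they are produced; this sketch assumes the `model`, `tokenizer`, `encodeds`, and `device` objects from the code block above.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt omits the echoed input
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(input_ids=encodeds.to(device), max_new_tokens=512,
                   do_sample=True, streamer=streamer)
```

Since `do_sample=True` samples from the output distribution, each run can produce a different answer; pass `do_sample=False` for deterministic greedy decoding.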
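
For context on the evaluation number above: if the reported loss is the usual token-level cross-entropy, it corresponds to a perplexity of roughly exp(0.8584) ≈ 2.36 on the evaluation set. The linked fine-tuning data can also be inspected directly with the `datasets` library; a minimal sketch, assuming only the dataset id from the link above (split and column names vary per dataset, so print them before relying on any field):

```python
from datasets import load_dataset

# Download the ChatAlpaca-20K dataset referenced in the model card
dataset = load_dataset("robinsmits/ChatAlpaca-20K")

# Inspect available splits and columns before assuming any field names
print(dataset)
```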