genaicore3434 committed: Update README.md
Commit e256a7d • 1 parent: 610842d
README.md CHANGED
@@ -1,85 +1,29 @@

Removed:
---
license:
pipeline_tag: text-generation
tags:
- finetuned
inference: false
---
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
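If you assemble this string by hand, note that `<s>` is already spelled out, so the tokenizer should not add a second begin-of-sentence token. A minimal encoding sketch (the model's own tokenizer recognises `<s>` inside the text):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

text = "<s>[INST] What is your favourite condiment? [/INST]"
# `<s>` is written out in the string, so skip the tokenizer's automatic BOS.
input_ids = tokenizer(text, add_special_tokens=False, return_tensors="pt").input_ids
```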
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
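The template should take care of the `[INST]`/`[/INST]` markers and the begin/end-of-sentence tokens itself, so the plain `messages` list above needs no manual formatting.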
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
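As a rough illustration of the sliding-window idea (a toy sketch, not the model's actual attention kernel), each position attends only to itself and a fixed number of preceding positions; Mistral-7B uses a 4096-token window:

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    # True where position i may attend to position j:
    # causal (j <= i) and within the window (i - j < window).
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (i - j < window)

# A tiny example makes the banded pattern visible:
print(sliding_window_causal_mask(6, 3).int())
```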
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
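Alternatively, once a `transformers` release that includes Mistral support is available, a regular upgrade should be enough:
```
pip install -U transformers
```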
## The Mistral AI Team
Added:
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon, along with more details on training and data.
### **Loading the Model**

Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# TBD
```
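Until the official snippet lands, here is a minimal sketch of what loading will presumably look like, assuming a standard Transformers checkpoint (the repo id below is a placeholder, not the published name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "genaicore3434/model-name"  # placeholder: the actual repo id is not yet announced

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision suits the checkpoint
    device_map="auto",          # requires the `accelerate` package
)
```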
### **Generating Text**

To generate text, use the following Python code:
```python
# Assumes `tokenizer` and `model` were loaded as in the section above.
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
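For less deterministic output, sampling options can be passed to `generate` (the values here are illustrative, not tuned for this model):

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # illustrative value
    top_p=0.9,        # nucleus sampling
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```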