hrus committed
Commit a749a22
1 Parent(s): d88ab8a

fix: add Mistral-7B-Instruct-v0.3 link

Files changed (1): README.md +23 -132
README.md CHANGED
@@ -1,143 +1,34 @@
---
license: apache-2.0
---
- # hrus/Mistral-7B-Instruct-v0.3
- A copy of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) without the `consolidated.safetensors` file, for conversion to GGUF format

- # Model Card for Mistral-7B-Instruct-v0.3

- The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3.
-
- Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2):
- - Extended vocabulary to 32768
- - Supports v3 Tokenizer
- - Supports function calling
-
- ## Installation
-
- It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.

```
- pip install mistral_inference
- ```
-
- ## Download
-
- ```py
- from huggingface_hub import snapshot_download
- from pathlib import Path
-
- mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
- mistral_models_path.mkdir(parents=True, exist_ok=True)
-
- snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
-
- ### Chat
-
- After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
-
```
- mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
-
- ### Instruct following
-
- ```py
- from mistral_inference.model import Transformer
- from mistral_inference.generate import generate
-
- from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
- from mistral_common.protocol.instruct.messages import UserMessage
- from mistral_common.protocol.instruct.request import ChatCompletionRequest
-
-
- tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
- model = Transformer.from_folder(mistral_models_path)
-
- completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
-
- tokens = tokenizer.encode_chat_completion(completion_request).tokens
-
- out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
- result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
-
- print(result)
```
-
- ### Function calling
-
- ```py
- from mistral_common.protocol.instruct.tool_calls import Function, Tool
- from mistral_inference.model import Transformer
- from mistral_inference.generate import generate
-
- from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
- from mistral_common.protocol.instruct.messages import UserMessage
- from mistral_common.protocol.instruct.request import ChatCompletionRequest
-
-
- tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
- model = Transformer.from_folder(mistral_models_path)
-
- completion_request = ChatCompletionRequest(
-     tools=[
-         Tool(
-             function=Function(
-                 name="get_current_weather",
-                 description="Get the current weather",
-                 parameters={
-                     "type": "object",
-                     "properties": {
-                         "location": {
-                             "type": "string",
-                             "description": "The city and state, e.g. San Francisco, CA",
-                         },
-                         "format": {
-                             "type": "string",
-                             "enum": ["celsius", "fahrenheit"],
-                             "description": "The temperature unit to use. Infer this from the user's location.",
-                         },
-                     },
-                     "required": ["location", "format"],
-                 },
-             )
-         )
-     ],
-     messages=[
-         UserMessage(content="What's the weather like today in Paris?"),
-     ],
- )
-
- tokens = tokenizer.encode_chat_completion(completion_request).tokens
-
- out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
- result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
-
- print(result)
- ```
-
- ## Generate with `transformers`
-
- If you want to use Hugging Face `transformers` to generate text, you can do something like this.
-
- ```py
- from transformers import pipeline
-
- messages = [
-     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
-     {"role": "user", "content": "Who are you?"},
- ]
- chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
- chatbot(messages)
- ```
-
- ## Limitations
-
- The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
- It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
- make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
-
- ## The Mistral AI Team
-
- Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
 
---
license: apache-2.0
+ tags:
+ - llama-cpp
+ - gguf-my-repo
---

+ # hrus/Mistral-7B-Instruct-v0.3-Q8_0-GGUF
+ This model was converted to GGUF format from [`hrus/Mistral-7B-Instruct-v0.3`](https://huggingface.co/hrus/Mistral-7B-Instruct-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/hrus/Mistral-7B-Instruct-v0.3) for more details on the model.

+ The original model is a copy of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) without the `consolidated.safetensors` file, for conversion to GGUF format.
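+
+ For reference, a conversion like this can also be run locally with llama.cpp's HF-to-GGUF converter. The following is a minimal sketch, not the exact commands the GGUF-my-repo space runs; it assumes a local llama.cpp checkout and a downloaded copy of the safetensors model, and the script name and flags vary between llama.cpp versions:
+ ```bash
+ # Sketch only: convert a local HF checkpoint to an 8-bit GGUF file.
+ # The paths and script name here are assumptions, not what GGUF-my-repo ran.
+ pip install -r llama.cpp/requirements.txt
+ python llama.cpp/convert-hf-to-gguf.py ./Mistral-7B-Instruct-v0.3 \
+     --outtype q8_0 --outfile mistral-7b-instruct-v0.3-q8_0.gguf
+ ```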

+ ## Use with llama.cpp
+ Install llama.cpp through brew.
+ ```bash
+ brew install ggerganov/ggerganov/llama.cpp
```
+ Invoke the llama.cpp server or the CLI.
+ CLI:
+ ```bash
+ llama-cli --hf-repo hrus/Mistral-7B-Instruct-v0.3-Q8_0-GGUF --model mistral-7b-instruct-v0.3-q8_0.gguf -p "The meaning to life and the universe is"
```
+ Server:
+ ```bash
+ llama-server --hf-repo hrus/Mistral-7B-Instruct-v0.3-Q8_0-GGUF --model mistral-7b-instruct-v0.3-q8_0.gguf -c 2048
```
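+ Once the server is running, it can be queried over HTTP. A minimal sketch, assuming llama-server's default port 8080 and its `/completion` endpoint:
+ ```bash
+ # Request a short completion from the local llama-server instance
+ curl http://localhost:8080/completion \
+     -H "Content-Type: application/json" \
+     -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
+ ```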
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
+ git clone https://github.com/ggerganov/llama.cpp && \
+ cd llama.cpp && \
+ make && \
+ ./main -m mistral-7b-instruct-v0.3-q8_0.gguf -n 128
```