---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Codestral-22B-v0.1
---
Quantizations of https://huggingface.co/mistralai/Codestral-22B-v0.1

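The GGUF files in this repo can be used with llama.cpp and llama.cpp-based tools. As a minimal sketch (not from the original readme) using the `llama-cpp-python` bindings, assuming a quant file such as `Codestral-22B-v0.1-Q4_K_M.gguf` has been downloaded (the exact filename depends on which quantization you pick):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path filename is a placeholder; use the GGUF file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Codestral-22B-v0.1-Q4_K_M.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if a GPU build is installed
)

output = llm("Write a Python function that reverses a string.", max_tokens=128)
print(output["choices"][0]["text"])
```
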
# From original readme

## Installation

It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.

```
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```

This will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:

```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.

fn fibonacci(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let n = 10;
    println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}

This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
```

### Fill-in-the-middle (FIM)

After installing `mistral_inference`, run `pip install --upgrade mistral_common` to make sure you have `mistral_common >= 1.2` installed:

```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest

tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")

prefix = """def add("""
suffix = """ return sum"""

request = FIMRequest(prompt=prefix, suffix=suffix)

tokens = tokenizer.encode_fim(request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

middle = result.split(suffix)[0].strip()
print(middle)
```

This should give something along the following lines:

```
num1, num2):

    # Add two numbers
    sum = num1 + num2

    # return the sum
```

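As a small illustrative follow-up (not part of the original readme), the generated middle can be concatenated with the original prefix and suffix to show the full completion:

```python
# Illustration only: stitch the FIM pieces back together.
# `prefix`, `middle`, and `suffix` are the variables from the snippet above.
completed = prefix + middle + "\n" + suffix
print(completed)
```
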
## Usage with transformers library

This model is also compatible with the `transformers` library. First run `pip install -U transformers`, then use the snippet below to get started quickly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```


By default, `transformers` will load the model in full precision. You might therefore want to further reduce the memory required to run the model through the optimizations offered in the HF ecosystem.
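For example, one possible sketch (not from the original readme) is loading the model in 4-bit with bitsandbytes; half precision via `torch_dtype` is another common option:

```python
# Sketch: 4-bit quantized loading with bitsandbytes
# (requires `pip install bitsandbytes accelerate` and a CUDA GPU).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```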