np-n committed on
Commit ada55a2 (1 parent: c32b516)

inference card added

Files changed (1)
  1. README.md +58 -0
README.md ADDED

This is the 8-bit quantized version of [Ministral-8B](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) by Mistral AI. Please follow the instructions below to run the model on your device.
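
The steps below assume the quantized file `ministral-8b_Q8_0.gguf` is already on disk. If it is not, one way to fetch it is with `huggingface_hub`; this is only a sketch, and the `repo_id` below is a placeholder for this repository's actual id:
```
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# repo_id is a placeholder -- replace it with the id of this model repository.
local_path = hf_hub_download(
    repo_id="<username>/<repo-name>",
    filename="ministral-8b_Q8_0.gguf",
)
print(local_path)  # local path of the downloaded GGUF file
```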

There are multiple ways to run inference with the model. First, let's install `llama.cpp` and use it for inference from the terminal:

1. Install
```
git clone https://github.com/ggerganov/llama.cpp
mkdir llama.cpp/build && cd llama.cpp/build && cmake .. && cmake --build . --config Release
```

2. Inference
```
./llama.cpp/build/bin/llama-cli -m ./ministral-8b_Q8_0.gguf -cnv -p "You are a helpful assistant"
```

Here, you can interact with the model from your terminal (`-cnv` starts an interactive chat session).

**Alternatively**, we can use the Python bindings of `llama.cpp` (`llama-cpp-python`) to run the model on either the CPU or the GPU (the extra index below provides wheels pre-built for CUDA 12.2).
1. Install
```
pip install --no-cache-dir llama-cpp-python==0.2.85 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu122
```

2. Inference on CPU
```
from llama_cpp import Llama

model_path = "./ministral-8b_Q8_0.gguf"
# Load the quantized model on the CPU with 8 threads
llm = Llama(model_path=model_path, n_threads=8, verbose=False)

prompt = "What should I do when my eyes are dry?"
output = llm(
    prompt=f"<|user|>\n{prompt}<|end|>\n<|assistant|>",
    max_tokens=4096,
    stop=["<|end|>"],
    echo=False,  # Whether to echo the prompt
)
print(output)
```

3. Inference on GPU
```
from llama_cpp import Llama

model_path = "./ministral-8b_Q8_0.gguf"
# n_gpu_layers=-1 offloads all model layers to the GPU
llm = Llama(model_path=model_path, n_threads=8, n_gpu_layers=-1, verbose=False)

prompt = "What should I do when my eyes are dry?"
output = llm(
    prompt=f"<|user|>\n{prompt}<|end|>\n<|assistant|>",
    max_tokens=4096,
    stop=["<|end|>"],
    echo=False,  # Whether to echo the prompt
)
print(output)
```
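
Both snippets above print the raw completion dictionary. As a minimal alternative sketch, `create_chat_completion` lets `llama-cpp-python` build the chat prompt for you (using the chat template embedded in the GGUF when one is present, otherwise a default format), so you don't need to hand-write control tokens, and you can read back just the reply text:
```
from llama_cpp import Llama

llm = Llama(model_path="./ministral-8b_Q8_0.gguf", n_threads=8, verbose=False)

# The chat API formats the conversation with the model's chat template,
# so no <|user|> / <|end|> tokens need to be written by hand.
result = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What should I do when my eyes are dry?"},
    ],
    max_tokens=512,  # illustrative value
)

# The response is an OpenAI-style dict; the reply text sits under
# choices[0]["message"]["content"].
print(result["choices"][0]["message"]["content"])
```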
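
For long answers you may prefer to stream tokens as they are generated instead of waiting for the whole completion. This is only a sketch (parameter values are illustrative); with `stream=True` the call returns an iterator of partial completions:
```
from llama_cpp import Llama

llm = Llama(model_path="./ministral-8b_Q8_0.gguf", n_threads=8, verbose=False)

# stream=True turns the call into a generator of partial completions.
stream = llm(
    prompt="What should I do when my eyes are dry?",
    max_tokens=512,
    stream=True,
)

for chunk in stream:
    # Each chunk carries the newly generated piece of text.
    print(chunk["choices"][0]["text"], end="", flush=True)
print()
```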