Commit 33c304c by amanpreetsingh459 (parent: 5fdd3c2): Add instructions to run the model

README.md (changed):
---
license: mit
---

# llama-2-7b-chat_q4_quantized_cpp
- This model contains the 4-bit quantized version of the [llama2](https://github.com/facebookresearch/llama) model in cpp.
- This can be run on a local CPU system as a cpp module *(instructions are given below)*.
- The model has been tested on a `Linux (Ubuntu)` OS with `12 GB RAM` and a `Core i5` processor.
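For context on why a 4-bit model fits comfortably in 12 GB of RAM: block-wise 4-bit quantization stores each group of weights as small signed integers plus one shared scale factor. The sketch below illustrates the general idea only; it is a hypothetical simplification, not the actual ggml `Q4_0` storage layout used by llama.cpp.

```python
import numpy as np

def quantize_4bit_block(weights):
    """Quantize a block of float weights to 4-bit signed ints plus one scale.

    Illustrative only: one per-block scale maps values into [-8, 7],
    so each weight needs 4 bits instead of 32.
    """
    scale = np.max(np.abs(weights)) / 7.0 if np.any(weights) else 1.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return scale, q

def dequantize_4bit_block(scale, q):
    """Recover approximate float weights from the quantized block."""
    return q.astype(np.float32) * scale

# Example: quantize one block of 32 random weights (block size is illustrative)
block = np.random.randn(32).astype(np.float32)
scale, q = quantize_4bit_block(block)
approx = dequantize_4bit_block(scale, q)
# The reconstruction is lossy; per-weight error stays within about scale / 2
```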

# Usage:
1. Clone the llama.cpp repository from GitHub:<br>
`git clone https://github.com/ggerganov/llama.cpp.git`
2. Enter the **llama.cpp** repository (downloaded in step 1) and build it by running **make**:<br>
`cd llama.cpp` <br>
`make`
3. Create a directory named **7B** under **llama.cpp/models** and put the model file **ggml-model-q4_0.bin** in this newly created **7B** directory:<br>
`cd models` <br>
`mkdir 7B`
4. Navigate back to the **llama.cpp** directory and run the command below:<br>
`./main -m ./models/7B/ggml-model-q4_0.bin -n 1024 --repeat_penalty 1.0 --color -i -r "User:" -f ./prompts/alpaca.txt` <br>
> the initial prompt file `prompts/alpaca.txt` can be changed to any prompt file of your choice
5. That's it. Enter your prompts and let the results surprise you.

# Credits:
1. https://github.com/facebookresearch/llama