amanpreetsingh459 committed
Commit: bfcebe0
Parent(s): 9ab8626
Add credits to the README.md file
README.md CHANGED
@@ -1,3 +1,13 @@
 ---
 license: mit
 ---
+
+# amanpreetsingh459/llama-2-7b-chat_q4_quantized_cpp
+- This model contains the 4-bit quantized version of the [llama2](https://github.com/facebookresearch/llama) model.
+- It can be run on a local CPU system as a C++ module available at: [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
+- The model has been tested on `Ubuntu Linux` with `12 GB RAM` and a `Core i5` processor.
+
+# Credits:
+1. https://github.com/facebookresearch/llama
+2. https://github.com/ggerganov/llama.cpp
+3. https://medium.com/@karankakwani/build-and-run-llama2-llm-locally-a3b393c1570e
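The quantized weights are meant to be consumed by llama.cpp, as the README above notes. As a rough illustration only, here is a minimal sketch of loading such a 4-bit quantized file from Python through the `llama-cpp-python` bindings (a wrapper around llama.cpp); the package choice, model filename, and prompt are assumptions for the example, not part of this repository's documentation.

```python
# Minimal sketch: run a 4-bit quantized llama-2-7b-chat file on CPU
# via the llama-cpp-python bindings (a Python wrapper around llama.cpp).
# The model path below is a placeholder -- point it at the quantized
# weights downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat-q4.bin",  # placeholder filename (assumption)
    n_ctx=2048,                             # context window size
)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

Alternatively, the quantized file can be run directly with the command-line tools built from llama.cpp, as described in the Medium article listed in the credits.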