---
language:
- en
license: apache-2.0
tags:
- finance
- llama-cpp
- gguf-my-repo
base_model: KBTG-Labs/THaLLE-0.1-7B-fa
pipeline_tag: text-generation
---
# nakcnx/THaLLE-0.1-7B-fa-Q5_K_M-GGUF
This model was converted to GGUF format from [`KBTG-Labs/THaLLE-0.1-7B-fa`](https://huggingface.co/KBTG-Labs/THaLLE-0.1-7B-fa) using llama.cpp.
Refer to the [original model card](https://huggingface.co/KBTG-Labs/THaLLE-0.1-7B-fa) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux).
```bash
brew install llama.cpp
```
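Before continuing, you can sanity-check that the binaries are on your PATH; as a quick test (assuming a recent llama.cpp release, where `--version` prints the build info):
```bash
# Confirm llama.cpp is installed and print its build version
llama-cli --version
```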
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nakcnx/THaLLE-0.1-7B-fa-Q5_K_M-GGUF --hf-file thalle-0.1-7b-fa-q5_k_m.gguf -p "The meaning to life and the universe is"
```
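If you would rather keep a local copy of the weights, you can download the GGUF file first and point the CLI at it. A minimal sketch (the `--local-dir .` target is just an example; any directory works):
```bash
# Download the quantized weights from the Hub (requires the huggingface_hub CLI)
huggingface-cli download nakcnx/THaLLE-0.1-7B-fa-Q5_K_M-GGUF thalle-0.1-7b-fa-q5_k_m.gguf --local-dir .

# Run the CLI against the local file instead of fetching from the Hub
llama-cli -m thalle-0.1-7b-fa-q5_k_m.gguf -p "The meaning to life and the universe is"
```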
### Server:
```bash
llama-server --hf-repo nakcnx/THaLLE-0.1-7B-fa-Q5_K_M-GGUF --hf-file thalle-0.1-7b-fa-q5_k_m.gguf -c 2048
```
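`llama-server` exposes an OpenAI-compatible HTTP API, so once the server is running you can query the model with any HTTP client. A minimal sketch, assuming the default host and port (`localhost:8080`); the prompt is only an illustration:
```bash
# Send a chat completion request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize what a CFA charter is in one sentence."}
    ],
    "temperature": 0.7,
    "max_tokens": 128
  }'
```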
Alternatively, you can build and run llama.cpp from source.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
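For example, a CUDA-enabled build on Linux might combine the two flags mentioned above. A sketch (the exact flag set depends on your llama.cpp version and hardware; `LLAMA_CURL=1` requires the libcurl development headers):
```bash
# Build with libcurl support (needed for --hf-repo downloads) and CUDA offloading,
# using all available cores
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j$(nproc)
```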
Step 3: Run inference through the built binary.
```bash
./llama-cli --hf-repo nakcnx/THaLLE-0.1-7B-fa-Q5_K_M-GGUF --hf-file thalle-0.1-7b-fa-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo nakcnx/THaLLE-0.1-7B-fa-Q5_K_M-GGUF --hf-file thalle-0.1-7b-fa-q5_k_m.gguf -c 2048
```
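If you built with GPU support, you can offload model layers to the GPU with `-ngl`. A sketch (a value of 99 offloads all layers of a 7B model; the context size of 4096 is just an example):
```bash
# Serve with all layers offloaded to the GPU and a 4096-token context window
./llama-server --hf-repo nakcnx/THaLLE-0.1-7B-fa-Q5_K_M-GGUF --hf-file thalle-0.1-7b-fa-q5_k_m.gguf -c 4096 -ngl 99
```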