---
license: other
tags:
- llama-cpp
base_model: migtissera/Tess-v2.5-Qwen2-72B
---

# pabloce/Tess-v2.5-Qwen2-72B

This model is a converted version of [`migtissera/Tess-v2.5-Qwen2-72B`](https://huggingface.co/migtissera/Tess-v2.5-Qwen2-72B) in GGUF format. For more details on the original model, please refer to its [model card](https://huggingface.co/migtissera/Tess-v2.5-Qwen2-72B).

## Installation

To use this model with llama.cpp, install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

## Usage

### Command Line Interface (CLI)

To run the model via the CLI, use the following command:

```bash
llama-cli --hf-repo pabloce/Tess-v2.5-Qwen2-72B-gguf --hf-file tess-2.5-qwen-2-70b-q3_k_m.gguf -p "The meaning to life and the universe is"
```

### Server

To start the llama.cpp server with this model, use the following command (see "Querying the Server" below for an example request):

```bash
llama-server --hf-repo pabloce/Tess-v2.5-Qwen2-72B-gguf --hf-file tess-2.5-qwen-2-70b-q3_k_m.gguf -c 2048
```

## Alternative Usage

You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repository.

1. Clone the llama.cpp repository from GitHub:

   ```
   git clone https://github.com/ggerganov/llama.cpp
   ```

2. Navigate to the llama.cpp folder and build it with the `LLAMA_CURL=1` flag. You can also include other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux):

   ```
   cd llama.cpp && LLAMA_CURL=1 make
   ```

3. Run inference through the main binary:

   ```
   ./main --hf-repo pabloce/Tess-v2.5-Qwen2-72B-gguf --hf-file tess-2.5-qwen-2-70b-q3_k_m.gguf -p "The meaning to life and the universe is"
   ```

   or start the server:

   ```
   ./server --hf-repo pabloce/Tess-v2.5-Qwen2-72B-gguf --hf-file tess-2.5-qwen-2-70b-q3_k_m.gguf -c 2048
   ```
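
### Querying the Server

Once the server is running (started via either method above), you can send requests to its HTTP API. Below is a minimal sketch using `curl` against the server's `/completion` endpoint, assuming it is listening on the default port of 8080; adjust the host and port if you started it differently:

```bash
# POST a completion request to the local llama.cpp server;
# "n_predict" caps the number of tokens to generate.
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```

The server also exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so existing OpenAI client code can generally be pointed at it by changing the base URL.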