---
license: llama3
tags:
- llama-cpp
- gguf-my-repo
- llama3
language:
- en
---

# cminja/SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M-GGUF

This model was converted to GGUF format from [`Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R`](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R) for more details on the model.

**Update:** The link to the original model card is no longer available. SFR-Iterative-DPO-LLaMA-3-8B-R appears to have been taken down from Hugging Face; see [reddit1](https://www.reddit.com/r/LocalLLaMA/comments/1csctvt/we_need_to_have_a_serious_conversation_about_the/) and [reddit2](https://www.reddit.com/r/LocalLLaMA/comments/1ctwaa9/what_happened_to_sfriterativedpollama38br/) for more details.

## Use with llama.cpp

Clone and build llama.cpp:

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
make
```

Log in to Hugging Face:

```bash
pip install huggingface_hub
```

```bash
huggingface-cli login
```

Download the model with the Hugging Face CLI:

```bash
huggingface-cli download cminja/SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M-GGUF --repo-type model
```

## Usage

Run the model (the snapshot hash in the cache path below will vary with each upload, so check your local `~/.cache/huggingface/hub` directory for the actual path):

```bash
./bin/main --model ~/.cache/huggingface/hub/models--cminja--SFR-Iterative-DPO-LLaMA-3-8B-R-Q4_K_M-GGUF/snapshots/1b50a0556d5ba7e6b5024aedbf090287d00da348/sfr-iterative-dpo-llama-3-8b-r.Q4_K_M.gguf -p "Few interesting nuances about leveling are"
```
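If you prefer a scripted workflow over the llama.cpp CLI, the same GGUF file can be loaded with the `llama-cpp-python` bindings (`pip install llama-cpp-python`). The sketch below is a minimal, hedged example: `MODEL_PATH` is a hypothetical location for the downloaded `.gguf` file, and the generation parameters are illustrative defaults, not values from the original card.

```python
# Minimal sketch: run the Q4_K_M GGUF with llama-cpp-python instead of ./bin/main.
# Assumes llama-cpp-python is installed and MODEL_PATH points at the downloaded
# .gguf file (the path below is hypothetical; adjust it to your setup).
from pathlib import Path

MODEL_PATH = Path.home() / "models" / "sfr-iterative-dpo-llama-3-8b-r.Q4_K_M.gguf"


def run_prompt(prompt: str, max_tokens: int = 128) -> str:
    """Load the GGUF model and complete `prompt`, returning the generated text."""
    from llama_cpp import Llama  # imported lazily so the sketch stays optional

    llm = Llama(model_path=str(MODEL_PATH), n_ctx=4096)
    out = llm(prompt, max_tokens=max_tokens)
    return out["choices"][0]["text"]


if __name__ == "__main__":
    if MODEL_PATH.exists():
        print(run_prompt("Few interesting nuances about leveling are"))
    else:
        print(f"model not found at {MODEL_PATH}; download it first")
```

The guard on `MODEL_PATH.exists()` keeps the script from crashing when the ~5 GB model file has not been downloaded yet; the `Llama` import is deferred for the same reason.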