# SilverFan/IceCoffeeRP-7b-Q6_K-GGUF
This model was converted to GGUF format from icefog72/IceCoffeeRP-7b using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:

```bash
llama-cli --hf-repo SilverFan/IceCoffeeRP-7b-Q6_K-GGUF --model icecoffeerp-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:

```bash
llama-server --hf-repo SilverFan/IceCoffeeRP-7b-Q6_K-GGUF --model icecoffeerp-7b.Q6_K.gguf -c 2048
```
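Once the server is running, you can send it requests over HTTP. A minimal sketch, assuming the default listen address of `http://127.0.0.1:8080` and using the server's OpenAI-compatible chat completions endpoint (adjust if you pass `--host`/`--port`):

```bash
# Sketch: query the running llama-server over its OpenAI-compatible API.
# Assumes the default host/port (127.0.0.1:8080).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "The meaning to life and the universe is"}
        ],
        "max_tokens": 128
      }'
```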
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m icecoffeerp-7b.Q6_K.gguf -n 128
```
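When building from source like this, the GGUF file has to be on disk before it is passed to `-m`. One way to fetch it (a sketch assuming the `huggingface_hub` CLI is installed, e.g. via `pip install huggingface_hub`):

```bash
# Sketch: download the quantized GGUF file from the Hugging Face Hub
# into the current directory (assumes huggingface_hub is installed).
huggingface-cli download SilverFan/IceCoffeeRP-7b-Q6_K-GGUF icecoffeerp-7b.Q6_K.gguf --local-dir .
```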
## Evaluation results

All results below are from the Open LLM Leaderboard.

| Benchmark                         | Split      | Metric              | Value |
|-----------------------------------|------------|---------------------|-------|
| AI2 Reasoning Challenge (25-shot) | test       | normalized accuracy | 71.16 |
| HellaSwag (10-shot)               | validation | normalized accuracy | 87.74 |
| MMLU (5-shot)                     | test       | accuracy            | 63.54 |
| TruthfulQA (0-shot)               | validation | mc2                 | 70.03 |
| Winogrande (5-shot)               | validation | accuracy            | 82.48 |
| GSM8k (5-shot)                    | test       | accuracy            | 64.22 |