---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- jondurbin/bagel-dpo-7b-v0.1
- transformers
- safetensors
- text-generation
- dataset:ai2_arc
- dataset:unalignment/spicy-3.1
- dataset:codeparrot/apps
- dataset:facebook/belebele
- dataset:boolq
- dataset:jondurbin/cinematika-v0.1
- dataset:drop
- dataset:lmsys/lmsys-chat-1m
- dataset:TIGER-Lab/MathInstruct
- dataset:cais/mmlu
- dataset:Muennighoff/natural-instructions
- dataset:openbookqa
- dataset:piqa
- dataset:Vezora/Tested-22k-Python-Alpaca
- dataset:cakiki/rosetta-code
- dataset:Open-Orca/SlimOrca
- dataset:spider
- dataset:squad_v2
- dataset:migtissera/Synthia-v1.3
- dataset:datasets/winogrande
- dataset:nvidia/HelpSteer
- dataset:Intel/orca_dpo_pairs
- dataset:unalignment/toxic-dpo-v0.1
- dataset:jondurbin/truthy-dpo-v0.1
- dataset:allenai/ultrafeedback_binarized_cleaned
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- region:us
- llama-cpp
- gguf-my-repo
---
# DavidAU/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1-Q6_K-GGUF
This model was converted to GGUF format from [`MaziyarPanahi/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1`](https://huggingface.co/MaziyarPanahi/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MaziyarPanahi/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1) for more details on the model.
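If you prefer to have the quantized file on disk before running anything (rather than letting llama.cpp fetch it via `--hf-repo` as shown below), one option is the `huggingface-cli` download command. This is a minimal sketch, not part of the original card; it assumes `huggingface_hub` is installed and that the repo contains the file name used in the commands below.
```bash
# Download the Q6_K GGUF into the current directory
# (assumes huggingface_hub is installed: pip install -U huggingface_hub)
huggingface-cli download DavidAU/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1-Q6_K-GGUF \
  bagel-dpo-7b-v0.1-mistral-7b-instruct-v0.1.Q6_K.gguf --local-dir .
```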
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1-Q6_K-GGUF --model bagel-dpo-7b-v0.1-mistral-7b-instruct-v0.1.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1-Q6_K-GGUF --model bagel-dpo-7b-v0.1-mistral-7b-instruct-v0.1.Q6_K.gguf -c 2048
```
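Once the server is running, you can query it over HTTP through its OpenAI-compatible chat endpoint. The snippet below is a sketch that assumes the default bind address of `127.0.0.1:8080`; adjust it if you pass different `--host`/`--port` options to `llama-server`.
```bash
# Query the running llama-server via its OpenAI-compatible endpoint
# (assumes the default 127.0.0.1:8080; not part of the original card)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
        ],
        "temperature": 0.7
      }'
```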
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bagel-dpo-7b-v0.1-mistral-7b-instruct-v0.1.Q6_K.gguf -n 128
```
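Since the underlying model is based on Mistral-7B-Instruct, it expects instruction prompts in the Mistral format. The example below is a sketch of passing such a prompt to the CLI; the `[INST] ... [/INST]` wrapping follows the base model's convention and is not something stated in this card.
```bash
# Prompt wrapped in Mistral's [INST] ... [/INST] instruction tags
# (assumed from the base model's convention)
llama-cli --hf-repo DavidAU/bagel-dpo-7b-v0.1-Mistral-7B-Instruct-v0.1-Q6_K-GGUF \
  --model bagel-dpo-7b-v0.1-mistral-7b-instruct-v0.1.Q6_K.gguf \
  -p "[INST] Write a haiku about quantized language models. [/INST]" -n 128
```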