---
base_model: snorkelai/Snorkel-Mistral-PairRM-DPO
datasets:
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
- HuggingFaceH4/ultrafeedback_binarized
license: apache-2.0
language:
- en
model_creator: snorkelai
model_name: Snorkel-Mistral-PairRM-DPO
model_type: mistral
inference: false
pipeline_tag: text-generation
prompt_template: |
  <|im_start|>system
  {{system_message}}<|im_end|>
  <|im_start|>user
  {{prompt}}<|im_end|>
  <|im_start|>assistant
quantized_by: brittlewis12
---

# Snorkel-Mistral-PairRM-DPO GGUF

Original model: [Snorkel-Mistral-PairRM-DPO](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO)

Model creator: [Snorkel AI](https://huggingface.co/snorkelai)

This repo contains GGUF format model files for Snorkel AI’s Snorkel-Mistral-PairRM-DPO.

> With this demonstration, we focus on the general approach to alignment. Thus, we use a general-purpose reward model - the performant PairRM model. We use the Mistral-7B-Instruct-v0.2 model as our base LLM.

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp b1960 ([26d6076](https://github.com/ggerganov/llama.cpp/commits/26d607608d794efa56df3bdb6043a2f94c1d632c))

### Prompt template: ChatML

```
<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
```

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluations:

> On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
> - The base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) scored **14.72**.
>
> After applying the above methodology:
> - This model scored **30.22** - ranked 3rd and the highest for an open-source base model at the time of publication.
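
---

If you prefer to run these GGUF files outside of cnvrs, the sketch below shows one possible way to do so with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings, applying the ChatML template above by hand. The quantization filename and the generation settings are assumptions for illustration, not part of this repo.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF file from
# this repo has been downloaded locally. The filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="snorkel-mistral-pairrm-dpo.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,  # context window; adjust to your memory budget
)

# Build the prompt exactly as the ChatML template above describes.
system_message = "You are a helpful assistant."
prompt = "Explain what a GGUF file is in one sentence."
chatml_prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

output = llm(
    chatml_prompt,
    max_tokens=256,
    stop=["<|im_end|>"],  # stop once the model closes its assistant turn
)
print(output["choices"][0]["text"])
```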