---
license: apache-2.0
---

# Mistral-Plus Model Card

## Model Details

Mistral-Plus is a chat assistant trained with Reinforcement Learning from Human Feedback (RLHF) using the Mistral-7B base model as its backbone.

- Mistral-Plus adopts an innovative approach: it completely bypasses Supervised Fine-Tuning (SFT) and directly applies harmless Reinforcement Learning from Human Feedback (RLHF).
- Mistral-Plus uses the [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model as its backbone.
- License: Mistral-Plus is released under the same license as the [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model.

## Model Sources

__Paper (Mistral-Plus):__ https://arxiv.org/abs/2403.02513

## Uses

Mistral-Plus is primarily intended for research on large language models and chatbots. It is aimed chiefly at researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. Mistral-Plus not only preserves the general capabilities of the Mistral base model, but also significantly enhances its conversational abilities and notably reduces the generation of toxic outputs, in line with human preferences.

![Model_architecture](images/architecture.png)

## Goal: Empower researchers worldwide!

To the best of our knowledge, this is the first academic endeavor to bypass supervised fine-tuning and directly apply reinforcement learning from human feedback. More importantly, Mistral-Plus is publicly available through Hugging Face to promote collaborative research and innovation. Open-sourcing Mistral-Plus seeks to empower researchers worldwide, enabling them to delve deeper into and build upon this work, with a particular focus on conversational tasks such as customer service and intelligent assistants.
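As a quick-start sketch, the model can be loaded like any causal language model with the `transformers` library. The repository id below is a placeholder defaulting to the backbone checkpoint (substitute the actual Mistral-Plus repo id from this page), and the generation settings are illustrative defaults rather than values prescribed by the paper.

```python
def chat(prompt: str, model_id: str = "mistralai/Mistral-7B-v0.1") -> str:
    """Generate a single reply for `prompt` with a causal LM checkpoint.

    `model_id` defaults to the backbone model; replace it with the
    Mistral-Plus repository id when using this card's checkpoint.
    """
    # Deferred import: `transformers` is a heavy dependency, only needed
    # when the function is actually called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens so only the newly generated reply is returned.
    reply_ids = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)


if __name__ == "__main__":
    print(chat("What are the benefits of open-sourcing RLHF-trained models?"))
```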
## Model Performance on 11 General Language Tasks

![General_Task_Performance](images/General_Task_Performance.png)

### Mistral-Plus on General Language Understanding and Reasoning

| ![radar1](images/radar_1.png) | ![radar2](images/radar_2.png) |
|----------------------|----------------------|
| ![radar3](images/radar_3.png) | ![radar4](images/radar_4.png) |

### Enhancing Conversational Safety in Mistral-Plus

| ![reduce_toxic_1](images/toxic_sft_rlhf_1.jpeg) | ![reduce_toxic_2](images/toxic_sft_rlhf_2.jpeg) |
|----------------------|----------------------|

Bad-word generation probability for Mistral-Instruct and Mistral-Plus. The x-axis represents different intermediate layers; the y-axis shows token probability.

## Case Study

### Multiple-Round Dialogue

![multi-turn](images/multi-turn.png)

### Case Studies from Different Prompts

![case](images/case_study.png)