---
library_name: transformers
tags:
- SFT
- Mistral
- Mistral 7B Instruct
license: apache-2.0
---
<img src="https://huggingface.co/Radiantloom/radintloom-mistral-7b-fusion/resolve/main/Radiantloom Mistral 7B Fusion.png" alt="Radiantloom Mistral 7B Fusion" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
## Radiantloom Mistral 7B Fusion
The Radiantloom Mistral 7B Fusion is a large language model (LLM) developed by Radiantloom AI with approximately 7 billion parameters. It is a finetune of a base model produced by merging a set of Mistral models. The model has a context length of 4096 tokens and is licensed for commercial use under Apache 2.0.
From vibes-check evaluations, the Radiantloom Mistral 7B Fusion demonstrates strong performance across a variety of applications, including creative writing, multi-turn conversation, in-context learning through Retrieval Augmented Generation (RAG), and coding tasks. Its out-of-the-box performance already delivers impressive results, particularly on writing tasks, where it produces longer-form content and provides detailed explanations of its actions. To maximize its potential, consider further refinement with instruction tuning and Reinforcement Learning from Human Feedback (RLHF), or use it in its current form.
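The model can be loaded directly with the Hugging Face `transformers` library. The snippet below is a minimal usage sketch: the repository id is assumed from the image URL above, the tokenizer is assumed to ship a chat template, and the generation parameters are illustrative rather than recommended settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id (taken from the image URL in this card); verify before use.
model_id = "Radiantloom/radintloom-mistral-7b-fusion"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 7B model fits on a single 24 GB GPU
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a short story about a lighthouse keeper."},
]
# Assumes the tokenizer provides a chat template; otherwise format the prompt manually.
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Keep prompt plus generated tokens within the model's 4096-token context window.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```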