While it may not be considered a state-of-the-art generative language model, it demonstrates competitive performance on general tasks compared with other open- and closed-source large language models such as OpenHermes-2.5-Mistral-7B and Mistral Instruct v2.0.
## Model Usage
You can try it out for free using this [notebook](https://www.kaggle.com/metheaigeek/radintloom-mistral-7b-fusion).
For more powerful GPUs and faster inference, you can deploy it on a Runpod GPU instance using our [one-click Runpod template](https://www.runpod.io/console/gpu-secure-cloud?ref=80eh3891&template=70arqv4std) (our referral link; please consider supporting us). The template exposes an OpenAI-compatible API endpoint, so you can plug it into an existing codebase written against the OpenAI API.
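As a minimal sketch of calling such an endpoint, the snippet below builds a request for the standard OpenAI-style `/v1/chat/completions` route using only the Python standard library. The host and model name are placeholders, not values from this README; substitute the ones shown in your Runpod console.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, messages: list) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible /v1/chat/completions route."""
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "https://<your-runpod-endpoint>",  # placeholder: host from your Runpod console
    "radiantloom-mistral-7b-fusion",   # placeholder model name
    [{"role": "user", "content": "Hello!"}],
)
# urllib.request.urlopen(req) would send the request once the endpoint is live.
print(req.full_url)
```

Because the endpoint follows the OpenAI API shape, the official `openai` client library can also be pointed at it by overriding its base URL.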
## Prompt Template
We fine-tuned this model on the ChatML format, so you will get the best results by formatting your prompts the same way.
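For reference, ChatML wraps each turn in `<|im_start|>role ... <|im_end|>` markers and ends the prompt with an open assistant turn for the model to complete. A small helper (hypothetical, for illustration) can assemble such a prompt:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt: each turn is delimited by
    <|im_start|>role ... <|im_end|>, with an open assistant turn at the end."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

If you load the model with the `transformers` library, its tokenizer's `apply_chat_template` method can produce the same formatting from a list of role/content messages.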