---
base_model: Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct
datasets:
- Locutusque/inst_mix_v2_top_100k
inference: false
language:
- en
license: apache-2.0
model_creator: Locutusque
model_name: LocutusqueXFelladrin-TinyMistral248M-Instruct
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '<|USER|> Design a Neo4j database and Cypher function snippet to Display Extreme
Dental hygiene: Using Mouthwash for Analysis for Beginners. Implement if/else
or switch/case statements to handle different conditions related to the Consent.
Provide detailed comments explaining your control flow and the reasoning behind
each decision. <|ASSISTANT|> '
- text: '<|USER|> Write me a story about a magical place. <|ASSISTANT|> '
- text: '<|USER|> Write me an essay about the life of George Washington <|ASSISTANT|> '
- text: '<|USER|> Solve the following equation 2x + 10 = 20 <|ASSISTANT|> '
- text: '<|USER|> Craft me a list of some nice places to visit around the world. <|ASSISTANT|> '
- text: '<|USER|> How to manage a lazy employee: Address the employee verbally. Don''t
allow an employee''s laziness or lack of enthusiasm to become a recurring issue.
Tell the employee you''re hoping to speak with them about workplace expectations
and performance, and schedule a time to sit down together. Question: To manage
a lazy employee, it is suggested to talk to the employee. True, False, or Neither?
<|ASSISTANT|> '
---
# Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF
Quantized GGUF model files for [LocutusqueXFelladrin-TinyMistral248M-Instruct](https://huggingface.co/Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct) from [Locutusque](https://huggingface.co/Locutusque).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [locutusquexfelladrin-tinymistral248m-instruct.fp16.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct.fp16.gguf) | fp16 | 497.76 MB |
| [locutusquexfelladrin-tinymistral248m-instruct.q2_k.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct.q2_k.gguf) | q2_k | 116.20 MB |
| [locutusquexfelladrin-tinymistral248m-instruct.q3_k_m.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct.q3_k_m.gguf) | q3_k_m | 131.01 MB |
| [locutusquexfelladrin-tinymistral248m-instruct.q4_k_m.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct.q4_k_m.gguf) | q4_k_m | 156.61 MB |
| [locutusquexfelladrin-tinymistral248m-instruct.q5_k_m.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct.q5_k_m.gguf) | q5_k_m | 180.17 MB |
| [locutusquexfelladrin-tinymistral248m-instruct.q6_k.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct.q6_k.gguf) | q6_k | 205.20 MB |
| [locutusquexfelladrin-tinymistral248m-instruct.q8_0.gguf](https://huggingface.co/afrideva/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF/resolve/main/locutusquexfelladrin-tinymistral248m-instruct.q8_0.gguf) | q8_0 | 265.26 MB |
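## Example usage

A minimal sketch of running one of the quantized files with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The local file name, context size, and stop token below are illustrative assumptions; the prompt format follows the `<|USER|> ... <|ASSISTANT|>` examples in the widget section above.

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
# Assumes the q4_k_m file from the table above has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="locutusquexfelladrin-tinymistral248m-instruct.q4_k_m.gguf",
    n_ctx=2048,  # context window; adjust as needed
)

# Prompt format taken from the widget examples on this card.
prompt = "<|USER|> Write me a story about a magical place. <|ASSISTANT|> "
output = llm(prompt, max_tokens=256, stop=["<|USER|>"], echo=False)
print(output["choices"][0]["text"])
```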
## Original Model Card:
# LocutusqueXFelladrin-TinyMistral248M-Instruct
This model was created by merging Locutusque/TinyMistral-248M-Instruct and Felladrin/TinyMistral-248M-SFT-v4 using mergekit. After the two models were merged, the resulting model was further trained on ~20,000 examples from the Locutusque/inst_mix_v2_top_100k dataset at a low learning rate to further normalize the weights. The following YAML config was used for the merge:
```yaml
models:
- model: Felladrin/TinyMistral-248M-SFT-v4
parameters:
weight: 0.5
- model: Locutusque/TinyMistral-248M-Instruct
parameters:
weight: 1.0
merge_method: linear
dtype: float16
```
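A rough sketch of reproducing this merge through mergekit's Python API is below; the config path, output directory, and API names (`MergeConfiguration`, `run_merge`, `MergeOptions`) are assumptions based on mergekit's documentation, not part of this card.

```python
# Hypothetical reproduction of the merge above via mergekit's Python API;
# the mergekit-yaml CLI wraps the same logic. Paths are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yml", "r", encoding="utf-8") as fp:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./LocutusqueXFelladrin-TinyMistral248M-Instruct",
    options=MergeOptions(cuda=False),  # set cuda=True when a GPU is available
)
```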
The resulting model combines the best of both worlds: Locutusque/TinyMistral-248M-Instruct's coding ability and reasoning skills with Felladrin/TinyMistral-248M-SFT-v4's low hallucination rate and strong instruction following. It performs remarkably well for its size.
## Evaluation
Coming soon...