---
base_model: BarraHome/Mistroll-7B-v2.2
license: mit
language:
- en
- es
pipeline_tag: text-generation
tags:
- mistral
- unsloth
- gguf
library_name: llama.cpp
model_creator: BarraHome
model_name: Mistroll 7B v2.2
model_type: mistral
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: mgonzs13
---
# Mistroll-7B-v2.2-GGUF
**Model creator:** [BarraHome](https://huggingface.co/BarraHome)<br>
**Original model**: [Mistroll-7B-v2.2](https://huggingface.co/BarraHome/Mistroll-7B-v2.2)<br>
**GGUF quantization:** `llama.cpp` commit [6e472f58e40cd4acf6023e15c75a2700535c5f0b](https://github.com/ggerganov/llama.cpp/tree/6e472f58e40cd4acf6023e15c75a2700535c5f0b)<br>
## Description
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
This experiment serves to test and refine a training and evaluation pipeline research framework, with the primary objective of identifying potential optimizations in data engineering, architectural efficiency, and evaluation performance.
Concretely, the experiment evaluates the effectiveness of a new training and evaluation pipeline for Large Language Models (LLMs) by exploring adjustments to data preprocessing, model training algorithms, and evaluation metrics.
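As a usage sketch (not part of the original card), the GGUF files can be loaded with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the Python bindings for `llama.cpp`; the filename and quantization level below are assumptions, so substitute the `.gguf` file you actually downloaded.
```python
# Minimal sketch: loading a GGUF quantization of Mistroll-7B-v2.2 with
# llama-cpp-python. The filename/quantization level are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="mistroll-7b-v2.2.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# llama.cpp reads the chat template stored in the GGUF metadata (ChatML here),
# so create_chat_completion formats the prompt for us.
output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GGUF is in one sentence."},
    ],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```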
## Prompt Template
Following the Mistroll [chat template](https://huggingface.co/BarraHome/Mistroll-7B-v2.2/blob/main/tokenizer_config.json#L31), the prompt template is ChatML.
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
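If you assemble the prompt string yourself (for example, when using a plain completion API rather than a chat API), a small formatter that fills the template above might look like the following sketch; the function name and default system message are illustrative, not part of the model card.
```python
def build_chatml_prompt(prompt: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    """Fill the ChatML template shown above (names/defaults are illustrative)."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: inspect the formatted string before passing it to a completion call.
print(build_chatml_prompt("What is quantization?"))
```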