---
license: mit
language:
- en
- es
pipeline_tag: text-generation
tags:
- unsloth
- gguf
- safetensors
library_name: transformers
base_model: yam-peleg/Experiment26-7B
---
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
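For reference, here is a minimal sketch of what an Unsloth + TRL fine-tuning run looks like. The dataset, LoRA configuration, and hyperparameters below are illustrative assumptions, not this model's actual training recipe, and API details vary across TRL versions:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model (from the card metadata) in 4-bit for memory-efficient training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yam-peleg/Experiment26-7B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are placeholder choices.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Supervised fine-tuning with TRL; assumes a dataset with a plain "text" column.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=load_dataset("imdb", split="train[:1%]"),  # placeholder dataset
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        output_dir="outputs",
    ),
)
trainer.train()
```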

## An experiment on fixing models with incorrect behaviors

This experiment tests and refines a training and evaluation pipeline research framework for Large Language Models (LLMs). Its primary objective is to identify potential optimizations, with a focus on data engineering, architectural efficiency, and evaluation performance.

To that end, we explore adjustments to data preprocessing, model training algorithms, and evaluation metrics, and measure whether each change improves model behavior.
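As a quick sanity check, the resulting model can be loaded through the standard transformers text-generation pipeline; the prompt and generation settings here are arbitrary examples:

```python
from transformers import pipeline

# Load the fine-tuned model; device_map="auto" places weights on available GPUs.
generator = pipeline(
    "text-generation",
    model="BarraHome/Mistroll-7B-v2.2",
    device_map="auto",
)

out = generator("Explain, in one sentence, what an evaluation pipeline does.",
                max_new_tokens=64, do_sample=False)
print(out[0]["generated_text"])
```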

## Quantized version (GGUF)
[Mistroll-7B-v2.2-Q8_0](https://huggingface.co/BarraHome/Mistroll-7B-v2.2/blob/main/Mistroll-7B-v2.2-Q8_0.gguf)
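
A minimal sketch of running the Q8_0 file locally with llama-cpp-python (any llama.cpp-based runtime works); the local file path and generation settings are assumptions:

```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file.
llm = Llama(model_path="./Mistroll-7B-v2.2-Q8_0.gguf", n_ctx=2048)

out = llm("Q: What does Q8_0 quantization trade off? A:",
          max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```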

Thanks to Yam for the incredible experiment, and to the Unsloth community!

PS: Numero uno brothers! 


![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b6afa756f1af7b46f1b513/oLTOey4qWj6-Nk_G3Qadi.png)