---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: sydonayrex/AI-M3-10.7Bv2
library_name: transformers
pipeline_tag: text-generation
---


![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63603f4a3605bd411c196eef/cU9HbxD9OYFlpSxj6k6Es.jpeg)

The name of the model is not a ding against the model's performance; it is more a commentary on the current ISP infrastructure in the U.S. and the fact that many of our ISPs have not moved into the era of AI yet. They are still largely monopolies, they impose arbitrary caps on data transfer, and they do not invest in improving service for customers in moderate- to low-density population areas. Some of us hit those arbitrary data caps just trying to upload and download models.

The base of this model is Mistral Instruct 0.3, supersized by using task arithmetic to combine layers while folding the model in on itself. I call this new model Artificial Innovation - Mistral 3, which appears as AI-M3-10.7B as the base model on the hub. In my basic testing, this worked better than simple passthrough merging of layers, as the LLM has had fewer issues.
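As a rough illustration (not the exact merge recipe used for this model), task arithmetic combines *weight deltas* rather than simply stacking copies of layers, which is the intuition behind why it can behave better than passthrough merging. The arrays and scaling factor below are hypothetical toy values:

```python
import numpy as np

def task_arithmetic_merge(base, variants, alpha=0.5):
    """Combine layer weights via task arithmetic: add the scaled
    delta (variant - base) of each variant layer onto the base layer."""
    merged = base.copy()
    for w in variants:
        merged += alpha * (w - base)  # the "task vector" for this variant
    return merged

# Hypothetical toy weights standing in for a layer and a slightly
# perturbed duplicate of it being folded back in.
base = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
dup = base * 1.1

merged = task_arithmetic_merge(base, [dup], alpha=0.5)
print(merged)  # base nudged halfway toward the duplicate
```

In contrast, a passthrough merge would just insert `dup` as an extra layer unchanged; blending deltas keeps the merged layer closer to the base distribution.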

In addition to the layer merging, the model has been further fine-tuned with SFT using Unsloth, to serve as a base for further training and experimentation with DPO or ORPO (a DPO project is currently being trained using Axolotl).

If you find the LLM is acting as if it has had a stroke, check whether flash attention is turned off, and enable it if so. This corrected the issues I had when running the model in LM Studio.

GGUFs are available here:

Q4_K_M and Q8: https://huggingface.co/sydonayrex/Barely-Regal-10.7B-Q6_K-GGUF

Q5_K_M: https://huggingface.co/sydonayrex/Barely-Regal-10.7B-Q5_K_M-GGUF

Q6_K: https://huggingface.co/sydonayrex/Barely-Regal-10.7B-Q6_K-GGUF

# Uploaded  model

- **Developed by:** sydonayrex
- **License:** apache-2.0
- **Finetuned from model:** sydonayrex/AI-M3-10.7Bv2

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)