---
library_name: transformers
tags:
- trl
- sft
license: apache-2.0
datasets:
- HuggingFaceTB/smoltalk
base_model:
- nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
---

![image/png](https://huggingface.co/nbeerbower/SmolNemo-12B/resolve/main/smolnemo_cover.png?download=true)

> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

# SmolNemo-12B-FFT-experimental

[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk).

**This model exhibits erratic behavior and poor performance.**
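
For reference, a minimal inference sketch using the standard `transformers` chat API is below. The repo id `nbeerbower/SmolNemo-12B` is assumed from the cover image URL above, and given the prompt-format conflict described under Method, the tokenizer's chat template may not match what the model actually saw during training.

```python
# Minimal inference sketch (assumes repo id nbeerbower/SmolNemo-12B).
# Caveat: the chat template may conflict with the training format,
# which is the suspected cause of the erratic behavior noted above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/SmolNemo-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Write a haiku about experiments."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```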

### Method

SFT on 8x A100 GPUs for 0.1 epochs.

This was a full finetune. I think the model's issues can be chalked up to a conflict between the Mistral Instruct and ChatML prompt formats.
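
For anyone curious, below is a rough sketch of what the training setup may have looked like with TRL's `SFTTrainer`. This is not the exact script used: only the base model, the smoltalk dataset, 0.1 epochs, and the fact that it was a full finetune come from the description above; the dataset config name and every hyperparameter shown are illustrative assumptions.

```python
# A minimal SFT sketch, NOT the exact training script.
# Assumptions are marked inline; only the model, dataset, epoch count,
# and full-finetune setting come from the model card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# "all" config name is an assumption about the smoltalk dataset layout.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

config = SFTConfig(
    output_dir="SmolNemo-12B-FFT-experimental",
    num_train_epochs=0.1,            # stated above
    per_device_train_batch_size=1,   # assumption
    gradient_accumulation_steps=8,   # assumption
    learning_rate=2e-5,              # assumption
    bf16=True,                       # assumption
)

trainer = SFTTrainer(
    model="nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated",
    train_dataset=dataset,
    args=config,
)
trainer.train()
```

On an 8x A100 node like the one described above, a script like this would typically be launched with `accelerate launch` or `torchrun` rather than run directly.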