QwOwO-1.5B but bigger & better in every way :3

It uses the ChatML/Qwen prompt format.
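
For reference, a minimal sketch of the raw ChatML layout (the system prompt shown here is illustrative, not a trained-in default):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What's a tensor? owo<|im_end|>
<|im_start|>assistant
```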

This model is trained on a dataset generated by Mistral Small 3. Its goal is to give Qwen less robotic, less boring answers.
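
A minimal usage sketch, assuming the standard transformers chat workflow for Qwen2.5-based models (the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SaisExperiments/QwOwO-7B-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# apply_chat_template renders the ChatML format shown above
messages = [{"role": "user", "content": "Explain what a tensor is, pwease"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```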

The dataset is 10k examples from alpaca-cleaned, regenerated in owo-speak/uwu-speak.
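
The exact generation pipeline isn't published; purely as an illustration, a regeneration loop of this kind could look like the sketch below. The dataset repo id, system prompt, endpoint, and model name are all assumptions, not the actual setup:

```python
# Hypothetical sketch of the regeneration step, NOT the actual pipeline.
# Assumes Mistral Small 3 served behind an OpenAI-compatible endpoint.
from datasets import load_dataset
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")  # assumed local server
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:10000]")   # assumed repo id

SYSTEM = "Answer the user's request in playful owo/uwu-speak while staying helpful."

def regenerate(example):
    prompt = example["instruction"]
    if example["input"]:
        prompt += "\n\n" + example["input"]
    response = client.chat.completions.create(
        model="mistral-small-3",  # placeholder model name
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": prompt}],
    )
    # Replace the original answer with the owo-speak regeneration
    example["output"] = response.choices[0].message.content
    return example

owo_dataset = dataset.map(regenerate)
```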

CO2 Emission Related to Experiments

Experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.432 kgCO2eq/kWh. A cumulative total of 1 hour of computation was performed on hardware of type RTX 3090 (TDP of 350W).

Total emissions are estimated to be 0.15 kgCO2eq, of which 0 percent was directly offset. This is equivalent to 0.61 km driven by an average ICE car [1].
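
That figure follows directly from runtime × TDP × carbon efficiency; a quick sanity check using the numbers above:

```python
tdp_kw = 0.350            # RTX 3090 TDP, in kW
hours = 1                 # cumulative compute time
kg_co2_per_kwh = 0.432    # carbon efficiency of the infrastructure

emissions = tdp_kw * hours * kg_co2_per_kwh
print(f"{emissions:.2f} kgCO2eq")  # 0.15 kgCO2eq
```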

Estimations were conducted using the Machine Learning Impact calculator presented in Lacoste et al. (2019) [1].

References

[1] Lacoste, A., Luccioni, A., Schmidt, V., & Dandres, T. (2019). Quantifying the Carbon Emissions of Machine Learning. arXiv:1910.09700.

Model details

Base model: Qwen/Qwen2.5-7B
Model size: 7.62B params
Tensor type: BF16