# SmolPlatypus-1.5B
This is a proof-of-concept model and should not be used for anything.
This is a merge of pre-trained language models created using mergekit.
The LoRA adapter was created with axolotl, using QLoRA (I know, the adapter is misnamed) to train a SOLAR-style stack merge dubbed "SmolLlama-1.5B" on the Open-Platypus dataset for approximately 2 hours on 2x RTX 3060.
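For readers unfamiliar with QLoRA, the sketch below shows a rough transformers + peft equivalent of that setup: the base model is loaded in 4-bit and only the LoRA adapter weights are trained. The base repo id is taken from the merge config further down; every hyperparameter here is an illustrative assumption, not the axolotl settings actually used.

```python
# Rough QLoRA setup sketch with transformers + peft (4-bit base model plus
# trainable LoRA adapters). Hyperparameters are illustrative assumptions,
# not the axolotl config actually used for this adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "ToastyPigeon/SmolLlama-1.5B",          # the stack-merged base model
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                   # illustrative rank, not the real setting
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)   # only the LoRA weights are trainable
model.print_trainable_parameters()
```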
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
- ToastyPigeon/SmolLlama-1.5B, with the ToastyPigeon/SmolPlatypus-1.5B-LoRA adapter applied
### Configuration
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ToastyPigeon/SmolLlama-1.5B+ToastyPigeon/SmolPlatypus-1.5B-LoRA
merge_method: passthrough
dtype: float16
```
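The `+` in the model entry is mergekit's syntax for applying a LoRA adapter to a base model before merging; since this passthrough config has a single input, the result is simply the base model with the adapter baked in. A rough peft equivalent, assuming both repos are available under the ids shown in the config:

```python
# Equivalent of the `base+adapter` entry above: load the base model, apply the
# LoRA adapter, and fold the adapter weights into the base. Repo ids are taken
# from the YAML config; this is a sketch, not the mergekit implementation.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("ToastyPigeon/SmolLlama-1.5B")
model = PeftModel.from_pretrained(base, "ToastyPigeon/SmolPlatypus-1.5B-LoRA")
merged = model.merge_and_unload()            # merge LoRA deltas into the base weights
merged.save_pretrained("SmolPlatypus-1.5B")  # same effect as the passthrough merge
```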