# Model Card for Model ID
This model was produced from a mergekit slice of SciPhi/SciPhi-Self-RAG-Mistral-7B-32k, with its parameters altered after slicing.
## Model Details
### Model Description
This is an experimental model that uses minimal slices of the base model to capture core model properties that can then be trained further.
The parameter count has been reduced to just under 600 million. The experiment is to see how far slicing can be taken while retaining the original weight associations.
As such, the base model produces nonsense and won't return much that is useful. However, as far as gradients go, a significant portion of the original SciPhi model has been retained.
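For illustration, this kind of layer slicing can be sketched directly with transformers (mergekit performs the equivalent passthrough operation from a YAML config). The layer range kept below is an assumption for the example, not the actual slice used to build this model:

```python
# Minimal sketch of layer slicing, assuming a contiguous keep-range;
# mergekit's passthrough merge does the equivalent from a YAML config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "SciPhi/SciPhi-Self-RAG-Mistral-7B-32k"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Keep a small contiguous range of decoder layers (illustrative choice),
# so the surviving weights keep their original associations intact.
keep = range(0, 4)
model.model.layers = torch.nn.ModuleList(model.model.layers[i] for i in keep)
model.config.num_hidden_layers = len(keep)

print(f"parameters after slicing: {model.num_parameters():,}")

model.save_pretrained("./sciphi-mini-sliced")
AutoTokenizer.from_pretrained(base_id).save_pretrained("./sciphi-mini-sliced")
```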
This model is being trained without quantization; the process is extensive and still underway, and the model will be released after thorough analysis.
The model is also being trained with Unsloth using QLoRA/PEFT and rank-stabilized LoRA (hoping for DoRA support in Unsloth soon...) here:
jtatman/sciphi-mini-600m-unsloth
That process will be ongoing, to see whether rank-stabilized tuning can preserve and enhance the original model information by recognizing the original weight associations in the preserved layers, even after the model has been resized.
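As a rough sketch of that kind of Unsloth setup (the rank, alpha, target modules, and sequence length below are assumptions for the example, not the actual training configuration):

```python
# Hypothetical QLoRA/PEFT setup with rank-stabilized LoRA in Unsloth;
# all hyperparameters here are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jtatman/sciphi-mini-600m-unsloth",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA: 4-bit base weights, trainable adapters on top
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,  # rank-stabilized LoRA: scales adapters by alpha / sqrt(r)
)
```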
There is a twin project with a more significant size reduction (96 million parameters) that is being used for layer analysis here: jtatman/sciphi-micro