---
library_name: transformers
tags:
- experimental
- mergekit
- model from scratch
license: apache-2.0
---

# Model Card for Model ID

This model was created by altering the parameters of a mergekit slice of [SciPhi/SciPhi-Self-RAG-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Self-RAG-Mistral-7B-32k).
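
The exact merge recipe is not included here, but a mergekit passthrough slice of this kind is normally described by a small YAML configuration. The sketch below writes such a config from Python with an illustrative layer range and file names (all assumptions); the further parameter alterations and resizing described in this card are not part of the passthrough merge itself.

```python
# Illustrative sketch only: the layer_range, file names, and dtype are
# assumptions, not the actual recipe used for this model.
import yaml

slice_config = {
    "slices": [
        {
            "sources": [
                {
                    "model": "SciPhi/SciPhi-Self-RAG-Mistral-7B-32k",
                    # Keep only a minimal span of decoder layers (illustrative).
                    "layer_range": [0, 2],
                }
            ]
        }
    ],
    "merge_method": "passthrough",
    "dtype": "float16",
}

with open("slice-config.yml", "w") as f:
    yaml.safe_dump(slice_config, f, sort_keys=False)

# The config would then be passed to the mergekit CLI, e.g.:
#   mergekit-yaml slice-config.yml ./sliced-model
```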

## Model Details

### Model Description

This is an experimental model built from minimal slices, intended to capture core model properties that can be further trained.

The parameter count has been reduced to just under 96 million. This is an experiment to see how far slicing can be taken while still retaining the original weight associations.
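
As a quick sanity check on that figure, the sliced checkpoint can be loaded with `transformers` and its parameters summed; the repository ID below is a placeholder for wherever this model is hosted.

```python
from transformers import AutoModelForCausalLM

# Placeholder repo ID / local path -- substitute the actual location of this model.
model = AutoModelForCausalLM.from_pretrained("path-or-repo-id-of-this-model")

total = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total / 1e6:.1f}M")  # expected to land just under 96M
```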

As such, the base model produces nonsense and won't return much that is useful. However, a surprising portion of the original SciPhi model has been retained as far as gradients go.

The model will be used for layer analysis and trained on a close approximation of the SciPhi datasets, using trainable parameters to see which of the original weights might be usable.
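
One simple form that layer analysis could take is dumping per-layer weight statistics from the sliced model so they can be lined up against the corresponding layers of the original SciPhi checkpoint. This is only a sketch under that assumption; the repository ID is again a placeholder.

```python
from transformers import AutoModelForCausalLM

# Placeholder repo ID / local path -- substitute the actual location of this model.
model = AutoModelForCausalLM.from_pretrained("path-or-repo-id-of-this-model")

# Print basic statistics for each 2-D weight in the retained decoder layers,
# so they can be compared against the matching layers of the source model.
for name, param in model.named_parameters():
    if "layers" in name and param.dim() == 2:
        p = param.detach().float()
        print(f"{name}: shape={tuple(p.shape)} "
              f"mean={p.mean().item():.4e} std={p.std().item():.4e} "
              f"norm={p.norm().item():.4e}")
```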

This process is ongoing, to see whether rank-stabilized tuning can preserve and enhance the original model information by recognizing original weight associations in the preserved layers, even after model resizing.
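
A minimal sketch of what that rank-stabilized tuning could look like using `peft`'s rsLoRA option; the rank, alpha, dropout, and target modules below are illustrative assumptions, not the project's actual training configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder repo ID -- substitute the actual location of this model.
model = AutoModelForCausalLM.from_pretrained("path-or-repo-id-of-this-model")

lora_config = LoraConfig(
    r=16,             # illustrative rank
    lora_alpha=32,    # illustrative scaling
    lora_dropout=0.05,
    use_rslora=True,  # rank-stabilized scaling: lora_alpha / sqrt(r)
    # Assumed Mistral-style attention projection names.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```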

There is a twin (parent) project with a less significant size reduction (600 million parameters) that is being used for training analysis here: [jtatman/sciphi-mini-600m](https://huggingface.co/jtatman/sciphi-mini-600m)