---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Credit for the model card's description goes to ddh0 and mergekit
# Looking for [Mistral-10.7B-Instruct-v0.2](https://huggingface.co/ddh0/Mistral-10.7B-Instruct-v0.2)?
# Credit for access and conversion of Mistral-7B-v0.2 (from MistralAI's weights to HF Transformers) goes to alpindale
# Mistral-10.7B-v0.2
This is Mistral-10.7B-v0.2, a depth-upscaled version of [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf).

This model is intended to be used as a basis for further fine-tuning, or as a drop-in upgrade from the original 7 billion parameter model.
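
As a quick smoke test, it should load like any other Mistral-architecture checkpoint via Transformers. A minimal sketch follows; the repo id is an assumption based on this card's title, so substitute the actual one:

```python
# Minimal loading sketch; model_id is an assumption, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ddh0/Mistral-10.7B-v0.2"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16 (see config below)
    device_map="auto",
)

prompt = "The mistral is a strong, cold wind that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```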

Paper detailing how Depth Up-Scaling works: [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.
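
Passthrough does no weight averaging: it simply stacks the layer slices listed in the configuration below into one deeper model. A back-of-the-envelope sketch of what that means here, assuming only that Mistral-7B-v0.2 has 32 decoder layers:

```python
# Layer arithmetic for this depth-upscale; assumes the base model has 32 decoder layers.
base_layers = 32
slices = [(0, 24), (8, 32)]  # layer_range entries from the YAML config below

merged_layers = sum(end - start for start, end in slices)
print(merged_layers)                                # 48: layers 8-23 appear twice in the stack
print(f"{merged_layers / base_layers:.1f}x depth")  # 1.5x, i.e. roughly 7B -> ~10.7B params
```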

### Models Merged

The following models were included in the merge:
* /Users/jsarnecki/opt/Workspace/alpindale/Mistral-7B-v0.2-hf

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: /Users/jsarnecki/opt/Workspace/alpindale/Mistral-7B-v0.2-hf
- sources:
  - layer_range: [8, 32]
    model: /Users/jsarnecki/opt/Workspace/alpindale/Mistral-7B-v0.2-hf
```
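
To reproduce the merge, one would save the YAML above (e.g. as `config.yaml`), swap the local model paths for ones valid on your machine, and hand it to mergekit's `mergekit-yaml` entry point. A hedged sketch:

```python
# Reproduction sketch; assumes mergekit is installed (pip install mergekit)
# and that the YAML above is saved as config.yaml with valid local model paths.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Mistral-10.7B-v0.2"],
    check=True,
)
```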