---
base_model:
- microsoft/Phi-3.5-mini-instruct
- microsoft/Phi-3-mini-4k-instruct
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# output-model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the breadcrumbs merge method, with [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) as the base model.

### Models Merged

The following models were included in the merge:
* [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: microsoft/Phi-3.5-mini-instruct
        layer_range: [0, 32]
      - model: microsoft/Phi-3-mini-4k-instruct
        layer_range: [0, 32]  # Limited to the first 32 layers
merge_method: breadcrumbs
base_model: microsoft/Phi-3-mini-4k-instruct
parameters:
  weight: 0.5  # Use a numeric value instead of 'relative'
  normalize: true
  density: 0.9
  gamma: 0.01
  #t:
  #  - filter: self_attn
  #    value: [0.1, 0.25, 0.5, 0.75, 1]  # Weight adjustment for self_attn layers
  #  - filter: mlp
  #    value: [1, 0.75, 0.5, 0.25, 0.1]  # Weight adjustment for mlp layers
  #  - value: 0.5  # Default value for the remaining layers

dtype: bfloat16
tokenizer_source: base

```
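To reproduce the merge, the configuration above can be saved to a file and run with the mergekit command-line tool (a minimal sketch, assuming mergekit is installed and that the YAML is saved as `config.yaml`; check your installed version's `mergekit-yaml --help` for available options):

```shell
# Install mergekit, then run the merge described by the YAML config.
# The merged model is written to ./output-model.
pip install mergekit
mergekit-yaml config.yaml ./output-model
```

Note that `tokenizer_source: base` in the config already tells mergekit to take the tokenizer from the base model, so no extra tokenizer flags should be needed.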