---
base_model:
- Lambent/cosmo-1b-tune-pythontest
- Lambent/cosmo-1b-qlora-pythontest
- Lambent/cosmo-1b-lisa-pythontest
- Lambent/cosmo-1b-galore-pythontest
- HuggingFaceTB/cosmo-1b
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# pythontestmerge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
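
For quick testing, here is a minimal loading sketch with Transformers; the repository id below is an assumption on my part, not something stated elsewhere in this card.

```python
# Minimal loading/generation sketch; the repository id is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Lambent/pythontestmerge"  # assumed id for this merge
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```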

Catastrophic forgetting test results:

Initial evaluation loss on a 1k subset of the HuggingFaceTB/cosmopedia-100k dataset was 1.038. (I'm impressed.)

100 steps of LISA training doesn't reduce this strictly over time; the loss trends downward but jumps around a bit.
It may already have converged to within that method's margin of error; cosmo-1b itself jumped 0.02 points with LISA training.

Comparison to control: cosmo-1b started out with 1.003 loss on (a different subset of) the dataset, increasing to 1.024 at 100 steps.
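
For reference, this is roughly how an evaluation loss like the above can be measured on a 1k Cosmopedia slice. It's an illustrative sketch, not the exact evaluation script used for these numbers, and the repository id is again assumed.

```python
# Illustrative sketch of measuring mean eval loss on a 1k Cosmopedia subset;
# not the exact evaluation setup used for the numbers above.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lambent/pythontestmerge"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto").eval()

# Take a 1k-example slice of the Cosmopedia-100k training split.
eval_set = load_dataset("HuggingFaceTB/cosmopedia-100k", split="train[:1000]")

losses = []
with torch.no_grad():
    for example in eval_set:
        enc = tokenizer(example["text"], return_tensors="pt",
                        truncation=True, max_length=2048)
        # Standard causal-LM loss: labels are the input ids themselves.
        out = model(**enc, labels=enc["input_ids"])
        losses.append(out.loss.item())

print(f"mean eval loss: {sum(losses) / len(losses):.3f}")
```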

Method by method comparison, initial evaluation loss on Cosmopedia data:

* Full tuning (aka continued pretraining), batch 8: 1.615
* LISA fine-tuning, 4 layers switching every 10 steps, batch 8: 1.217
* QLoRA with DoRA (otherwise configured as below): 1.105
* QLoRA fine-tuning, rank 256, scale factor 1, batch 8 (see the configuration sketch after this list): 1.102
* GaLore tuning, rank 256, scale factor 1, batch 8: 1.182
* This Model Stock merge of all 4 training methods: 1.038
* Model Stock 3/4 Methods (all except full tuning): 1.021
* Control (cosmo-1b): 1.003
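
For concreteness, here's roughly what a rank-256, scale-factor-1 QLoRA setup looks like in PEFT, with DoRA as a one-flag variation. Only the rank, scale, and quantization come from the list above; the target modules and other details are assumptions.

```python
# Sketch of a rank-256, scale-factor-1 QLoRA setup with PEFT + bitsandbytes.
# Only rank, scale and base model come from the list above; target modules
# and the remaining hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/cosmo-1b", quantization_config=bnb_config
)

lora_config = LoraConfig(
    r=256,                # rank 256
    lora_alpha=256,       # alpha == r gives a scale factor of 1
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
    use_dora=False,       # set True for the "QLoRA with DoRA" run
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```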

Training set validation results:

* Cosmo-1b Starting Eval Loss: ~0.65
* Model Stock 3/4 Loss: 0.451
* Model Stock Loss: 0.40211
* LISA Loss: 0.2534
* GaLore Loss: 0.2426
* QLoRA Loss: 0.2078
* QLoRA with DoRA Loss: 0.2055 (training curve almost identical to QLoRA's)
* Full Tune Loss: 0.2049

Overall, I'm not sure what to make of this, beyond that high-rank QLoRA is doing something particularly impressive while using only about 6 GB of VRAM.
The Model Stock merge of the 4 different tuning methods clearly recovered a lot of the original knowledge, at the cost of roughly half of the adaptation to the new data.
Of course, cosmo-1b was already pretty good at predicting the new data, narrow and task-focused as that data was.

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [HuggingFaceTB/cosmo-1b](https://huggingface.co/HuggingFaceTB/cosmo-1b) as a base.
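
Roughly speaking, Model Stock averages the fine-tuned weights and then pulls that average back toward the base weights, with an interpolation ratio derived from the angle between the fine-tuned models' task vectors. A per-tensor sketch of the idea, simplified from the paper and not mergekit's actual implementation:

```python
# Rough per-tensor sketch of the Model Stock idea (see the paper linked above):
# move the average of the fine-tuned weights back toward the base weights by a
# ratio t derived from the angle between the fine-tuned "task vectors".
# Simplified illustration only; not mergekit's actual implementation.
import itertools
import torch

def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    deltas = [w - base for w in tuned]
    # Average pairwise cosine between task vectors approximates cos(theta).
    cosines = [
        torch.nn.functional.cosine_similarity(a.flatten(), b.flatten(), dim=0)
        for a, b in itertools.combinations(deltas, 2)
    ]
    cos_theta = torch.stack(cosines).mean().clamp(min=0.0)
    n = len(tuned)
    t = n * cos_theta / (1 + (n - 1) * cos_theta)  # interpolation ratio
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

That pull toward the cosmo-1b weights is consistent with what the numbers above show: the merge recovers base-model knowledge at the cost of some of the adaptation to the new data.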

### Models Merged

The following models were included in the merge:
* [Lambent/cosmo-1b-tune-pythontest](https://huggingface.co/Lambent/cosmo-1b-tune-pythontest)
* [Lambent/cosmo-1b-qlora-pythontest](https://huggingface.co/Lambent/cosmo-1b-qlora-pythontest)
* [Lambent/cosmo-1b-lisa-pythontest](https://huggingface.co/Lambent/cosmo-1b-lisa-pythontest)
* [Lambent/cosmo-1b-galore-pythontest](https://huggingface.co/Lambent/cosmo-1b-galore-pythontest)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Lambent/cosmo-1b-lisa-pythontest
  - model: Lambent/cosmo-1b-qlora-pythontest
  - model: Lambent/cosmo-1b-galore-pythontest
  - model: Lambent/cosmo-1b-tune-pythontest
base_model: HuggingFaceTB/cosmo-1b
merge_method: model_stock
parameters:
  filter_wise: false
dtype: float16

```
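
To reproduce the merge, this configuration can be saved to a file (e.g. `config.yml`) and passed to mergekit's `mergekit-yaml` command along with an output directory.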