---
pipeline_tag: text-generation
tags:
- phi-msft
language:
- en
library_name: transformers
---
# LM-Cocktail phi-2 v1.1

This is a 0.5/0.5 weight merge of two models based on phi-2 (a sketch of the merge follows the list). The models used to create this merge are:
1. [venkycs/phi-2-instruct](https://huggingface.co/venkycs/phi-2-instruct)
2. [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)
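For illustration, here is a minimal sketch of what a 0.5/0.5 parameter average looks like with plain `transformers`, assuming both checkpoints share the same architecture and tensor names. This is not the actual merging script (see the Code section below):

```python
# Minimal sketch of a 0.5/0.5 weight average (illustrative, not the actual merge script).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_a = AutoModelForCausalLM.from_pretrained(
    "venkycs/phi-2-instruct", torch_dtype=torch.float16, trust_remote_code=True
)
model_b = AutoModelForCausalLM.from_pretrained(
    "Yhyu13/phi-2-sft-dpo-gpt4_en-ep1", torch_dtype=torch.float16, trust_remote_code=True
)

# Average every parameter tensor with equal 0.5 weights.
state_b = model_b.state_dict()
merged = {
    name: 0.5 * tensor + 0.5 * state_b[name]
    for name, tensor in model_a.state_dict().items()
}
model_a.load_state_dict(merged)

# Save the merged model together with a tokenizer.
model_a.save_pretrained("./LMCocktail-phi-2-v1.1")
tokenizer = AutoTokenizer.from_pretrained("venkycs/phi-2-instruct", trust_remote_code=True)
tokenizer.save_pretrained("./LMCocktail-phi-2-v1.1")
```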

I named this model "LMCocktail phi-2 v1.1" because I see it as a continuation of [v1](https://huggingface.co/Yhyu13/LMCocktail-phi-2-v1).

I used [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1), which, according to Yhyu13, "outputs significantly longer result[s]" than the model used in v1.

I also used [venkycs/phi-2-instruct](https://huggingface.co/venkycs/phi-2-instruct), described as "a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the filtered [ultrachat200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset using the SFT technique".

The main reason I created this model was to merge it with [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2); I will create a repo for that merge once it is done.

# Code

LM-Cocktail is a novel technique for merging multiple models: https://arxiv.org/abs/2311.13534

The implementation is provided by this repo: https://github.com/FlagOpen/FlagEmbedding.git

The merging script is available in the [./scripts](./scripts) folder.
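For reference, a merge along these lines could also be reproduced with the `LM_Cocktail` package from that repo. The sketch below assumes its `mix_models` entry point as shown in the upstream README; it is not a copy of the script in [./scripts](./scripts):

```python
# Sketch using the LM_Cocktail package from FlagEmbedding (pip install -U LM_Cocktail).
# The mix_models call and its parameters are assumptions based on the upstream README.
from LM_Cocktail import mix_models

model = mix_models(
    model_names_or_paths=[
        "venkycs/phi-2-instruct",
        "Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
    ],
    model_type="decoder",   # causal-LM merge
    weights=[0.5, 0.5],     # equal 0.5/0.5 mix, matching this model card
    output_path="./LMCocktail-phi-2-v1.1",
)
```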