---
pipeline_tag: text-generation
tags:
- phi-msft
language:
- en
library_name: transformers
---
# LM-Cocktail phi-2 v1.1

This is an equal-weight (0.5/0.5) merge of two models based on phi-2. Here are the models used to create this merge (a sketch of the merge call follows the list):
1. [venkycs/phi-2-instruct](https://huggingface.co/venkycs/phi-2-instruct)
2. [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)
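
Below is a minimal sketch of how such an equal-weight merge can be produced with the `LM_Cocktail` package from the FlagEmbedding repository linked in the Code section. It mirrors that package's documented `mix_models` usage as I understand it; the output path is an assumption, and the actual script used for this model is the one under [./scripts](./scripts).

```python
# pip install -U LM_Cocktail
from LM_Cocktail import mix_models

# Equal-weight (0.5/0.5) merge of the two phi-2 fine-tunes listed above.
model = mix_models(
    model_names_or_paths=[
        "venkycs/phi-2-instruct",
        "Yhyu13/phi-2-sft-dpo-gpt4_en-ep1",
    ],
    model_type="decoder",   # both inputs are decoder-only causal LMs
    weights=[0.5, 0.5],     # equal merge weights
    output_path="./LM-Cocktail-phi-2-v1.1",  # assumed output directory
)
```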

I named this model "LM-Cocktail phi-2 v1.1" because I see it as a continuation of Yhyu13's [v1](https://huggingface.co/Yhyu13/LMCocktail-phi-2-v1).

I used [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1), which, according to Yhyu13, "outputs significantly longer result[s]" than the model used in v1.

I also used [venkycs/phi-2-instruct](https://huggingface.co/venkycs/phi-2-instruct), "a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the filtered [ultrachat200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset using the SFT technique".

The main reason I created this model was to merge it with [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2); I will create a repository for that merge once it is done.

# Code

LM-Cocktail is a novel technique for merging multiple models: https://arxiv.org/abs/2311.13534. For a model-level merge like this one, it amounts to computing a weighted average of the input models' parameters.

The merging code comes from this repository: https://github.com/FlagOpen/FlagEmbedding.git

The merging script is available in the [./scripts](./scripts) folder.
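
For reference, here is a minimal sketch of running the merged model with the `transformers` library named in this card's metadata. The repo id is an assumption based on the card's title, and the prompt is only an illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MugoSquero/LM-Cocktail-phi-2-v1.1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,  # phi-msft checkpoints ship custom modeling code
)

inputs = tokenizer("Explain model merging in one paragraph.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```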