---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- cognitivecomputations/dolphin-2_6-phi-2
- lxuechen/phi-2-dpo
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
- mrm8488/phi-2-coder
---
|
|
|
![](https://i.imgur.com/UOb2fvh.jpg) |
|
|
|
# phixtral-4x2.8 |
|
|
|
phixtral-4x2.8 is a Mixture of Experts (MoE) model made with the following models, using a custom version of mergekit:
|
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2) |
|
* [lxuechen/phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo) |
|
* [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1) |
|
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder) |
|
|
|
## 🧩 Configuration |
|
|
|
```yaml
base_model: cognitivecomputations/dolphin-2_6-phi-2
gate_mode: cheap_embed
experts:
  - source_model: cognitivecomputations/dolphin-2_6-phi-2
    positive_prompts: [""]
  - source_model: lxuechen/phi-2-dpo
    positive_prompts: [""]
  - source_model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
    positive_prompts: [""]
  - source_model: mrm8488/phi-2-coder
    positive_prompts: [""]
```
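In this configuration, `gate_mode: cheap_embed` initializes each router from the raw token embeddings of the `positive_prompts` (here left empty), the cheapest of mergekit's gate initialization modes. For reference, a standard mergekit install would typically run the merge through the `mergekit-moe` entry point; a minimal sketch, assuming the config above is saved as `config.yaml` (the custom fork used for this model may differ):

```python
# Minimal sketch: run mergekit's MoE merge on the config above.
# Assumes a standard `pip install mergekit` setup and that the config is
# saved as config.yaml; the custom mergekit fork used here may differ.
import subprocess

subprocess.run(
    ["mergekit-moe", "config.yaml", "./phixtral-4x2.8"],  # <config> <output dir>
    check=True,
)
```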
|
|
|
## 💻 Usage |
|
|
|
This architecture is not yet compatible with the transformers library. I'm working on a custom implementation to run it; contact me if you're interested!
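In the meantime, here is a speculative sketch of how loading could look once custom modeling code ships with the repository, following the usual `trust_remote_code` pattern. The repo id and prompt below are hypothetical placeholders, not a working recipe:

```python
# Speculative sketch only: vanilla transformers cannot run this architecture yet.
# Assumes the repo will ship custom modeling code; the repo id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/phixtral-4x2.8"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    trust_remote_code=True,  # required to load the custom MoE classes
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```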