---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a highly experimental model that merges several Mistral-7B fine-tunes into a Mixture-of-Experts (MoE) model.
- Base model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

The experts and their intended roles:
1. [trinity-v1](https://huggingface.co/janhq/trinity-v1): for General Chat
2. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp): for General Chat
3. [AshhLimaRP-Mistral-7B](https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B): for Role-playing
4. [Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B): for Role-playing
5. [speechless-code-mistral-7b-v2.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v2.0): for Coding
6. [Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B): for Writing
7. [Mistral-7B-storywriter](https://huggingface.co/Norquinal/Mistral-7B-storywriter): for Writing
8. [openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210): For Logical thinking
Special thanks to [Chargoddard](https://huggingface.co/chargoddard) and [Undi95](https://huggingface.co/Undi95) for their work on model merging.

The model was produced with the following mergekit MoE configuration:
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
gate_mode: hidden
experts:
  - source_model: janhq/trinity-v1
    positive_prompts:
      - "question"
      - "answer"
      - "chat"
      - "friend"
      - "assistant"
      - "[Mode: Chat]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
    positive_prompts:
      - "adventure"
      - "friend"
      - "chat"
      - "companion"
      - "[Mode: Chat]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: lemonilia/AshhLimaRP-Mistral-7B
    positive_prompts:
      - "roleplay"
      - "uncensored"
      - "emotive engagement"
      - "creative improvisation"
      - "interactive"
      - "[Mode: Roleplay]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: Undi95/Toppy-M-7B
    positive_prompts:
      - "roleplay"
      - "uncensored"
      - "emotive engagement"
      - "creative improvisation"
      - "interactive"
      - "[Mode: Roleplay]"
      - "[Mode: Chat]"
    negative_prompts:
      - "storywriting"
      - "book"
      - "story"
      - "chapter"
      - "[Mode: Writing]"
  - source_model: uukuguy/speechless-code-mistral-7b-v2.0
    positive_prompts:
      - "algorithm optimization"
      - "code for calculating"
      - "programming"
      - "implementing statistical functions in code"
      - "solving equations with code"
      - "data analysis"
      - "SQL"
      - "C++"
      - "Python"
      - "[Mode: Coding]"
      - "logical"
      - "numerical methods in programming"
    negative_prompts:
      - "non-technical chat"
      - "purely theoretical mathematics"
      - "creative writing or storytelling"
      - "general conversation unrelated to coding"
      - "[Mode: Non-Technical Discussion]"
      - "[Mode: Storytelling]"
  - source_model: teknium/Mistral-Trismegistus-7B
    positive_prompts:
- "philosphy"
- "occult"
- "esoteric"
- "spiritual"
- "alchemy"
- "magic"
- "[Mode: Occultism]"
- "[Mode: Writing]"
negative_prompts:
- "[Mode: Roleplay]"
- "[Mode: Chat]"
- "[Mode: Mathematics]"
- "chat"
- "roleplay"
- source_model: Norquinal/Mistral-7B-storywriter
positive_prompts:
- "storywriting"
- "book"
- "story"
- "chapter"
- "tale"
- "history"
- "write"
- "[Mode: Writing]"
negative_prompts:
- "[Mode: Roleplay]"
- "[Mode: Chat]"
- "chat"
- "roleplay"
- source_model: openchat/openchat-3.5-1210
positive_prompts:
- "theorem"
- "algebra"
- "mathematics"
- "sqrt(a*x^2 + b*y)"
- "solve for"
- "equation"
- "[Mode: Mathematics]"
- "logical"
- "planning"
- "853295 + 12763"
negative_prompts:
- "sex"
- "roleplay"
- "[Mode: Occultism]"
- "[Mode: Roleplay]"
- "[Mode: Writing]"
```
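Below is a minimal sketch of loading the resulting merged model with 🤗 Transformers and generating a response. The repo id `janhq/Phoenix-v1-8x7B`, the dtype, and the reliance on the tokenizer's chat template are assumptions, not a verified recipe from this repo.

```python
# Minimal sketch, not a verified recipe: assumes the merged weights are published
# as "janhq/Phoenix-v1-8x7B" and that the tokenizer ships a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "janhq/Phoenix-v1-8x7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 to cut memory; fp16 also works on most GPUs
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short story about a phoenix."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```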
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- πŸ’» **100% offline on your machine**: Your conversations remain confidential and visible only to you.
- πŸ—‚οΈ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints (see the request sketch after the screenshot below)
- 🌍 **Open Source & Free**: We build in public; check out our [GitHub](https://github.com/janhq)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png)
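As a sketch of the OpenAI-compatible endpoint mentioned above: once the model is loaded in Jan, a request like the following should work. The endpoint path, payload shape, and model identifier are assumptions based on the standard OpenAI chat-completions convention, not verified against Jan's documentation.

```python
# Sketch of calling Jan's local OpenAI-compatible server; the path follows the
# standard /v1/chat/completions schema and the model name below is a guess.
import requests

resp = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "phoenix-v1-8x7b",  # assumed model identifier inside Jan
        "messages": [
            {"role": "user", "content": "Explain mixture-of-experts in two sentences."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```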
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infrastructure and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)