---
license: apache-2.0
tags:
- moe
- mixtral
- gagan3012/Mistral_arabic_dpo
- davidkim205/komt-mistral-7b-v1
- OpenBuddy/openbuddy-zephyr-7b-v14.1
- manishiitg/open-aditi-hi-v1
---
|
|
|
# Multilingual-mistral-asian
|
|
|
This model is a Mixture of Experts (MoE) built with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
|
* [gagan3012/Mistral_arabic_dpo](https://huggingface.co/gagan3012/Mistral_arabic_dpo)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1)
|
|
|
## 🧩 Configuration
|
|
|
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
experts:
  - positive_prompts:
      - arabic
      - arab
      - arabia
      - answer in arabic
    source_model: gagan3012/Mistral_arabic_dpo
  - positive_prompts:
      - korean
      - answer in korean
      - korea
    source_model: davidkim205/komt-mistral-7b-v1
  - positive_prompts:
      - chinese
      - china
      - answer in chinese
    source_model: OpenBuddy/openbuddy-zephyr-7b-v14.1
  - positive_prompts:
      - hindi
      - india
      - hindu
      - answer in hindi
    source_model: manishiitg/open-aditi-hi-v1
gate_mode: hidden
```
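
A merge like this can be reproduced by feeding the config above to mergekit's MoE script. A minimal sketch, assuming mergekit is installed from the mixtral branch and the config is saved as `config.yaml` (the output path is illustrative):

```python
# Sketch only: install mergekit from the mixtral branch (branch name as
# referenced above; pinning details may have changed since).
!pip install -qU git+https://github.com/cg123/mergekit.git@mixtral

# mergekit-moe reads the YAML config and writes the merged MoE model.
# With gate_mode "hidden", router weights are initialized from hidden-state
# representations of each expert's positive_prompts.
!mergekit-moe config.yaml ./Multilingual-mistral-asian
```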
|
|
|
## 💻 Usage
|
|
|
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "gagan3012/Multilingual-mistral-asian"

# Load the tokenizer and a 4-bit quantized text-generation pipeline
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then generate
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
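
Because routing is driven by prompt content (see the `positive_prompts` above), it can be worth probing each language expert directly. A small sketch reusing the pipeline from the previous block; the example prompts are illustrative, not from the model card:

```python
# Illustrative probes: prompts in or about each target language should
# tend to activate the corresponding expert.
queries = [
    "Answer in Arabic: what is the capital of Egypt?",
    "Answer in Korean: introduce yourself briefly.",
    "Answer in Chinese: what is machine learning?",
    "Answer in Hindi: describe the city of Delhi.",
]
for q in queries:
    msgs = [{"role": "user", "content": q}]
    p = pipeline.tokenizer.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
    out = pipeline(p, max_new_tokens=128, do_sample=True, temperature=0.7)
    print(out[0]["generated_text"], "\n---")
```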