|
--- |
|
license: cc-by-nc-4.0 |
|
tags: |
|
- merge |
|
- mergekit |
|
- lazymergekit |
|
- samir-fama/SamirGPT-v1 |
|
- abacusai/Slerp-CM-mist-dpo |
|
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2 |
|
base_model: |
|
- mistralai/Mistral-7B-v0.1 |
|
- samir-fama/SamirGPT-v1 |
|
- abacusai/Slerp-CM-mist-dpo |
|
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2 |
|
--- |
|
|
|
# Daredevil-7B |
|
|
|
Daredevil-7B is a DARE-TIES merge of the following models, made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
|
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1) |
|
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo) |
|
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2) |
|
|
|
## 🏆 Evaluation
|
|
|
### Open LLM Leaderboard |
|
|
|
TBD. |
|
|
|
### Nous |
|
|
|
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |
|
|------------------------------------------------------------|------:|------:|---------:|-------:|------:| |
|
|[**Daredevil-7B**](https://huggingface.co/shadowml/Daredevil-7B)| **44.85**| **76.07**| <u>**64.89**</u>| **47.07**| <u>**58.22**</u>| |
|
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)| 42.75| 72.99| 52.99| 40.94| 52.42| |
|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)| 43.67| 73.24| 55.37| 41.76| 53.51| |
|
|[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)| <u>47.79</u>| 74.69| 55.92| 44.84| 55.81| |
|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| <u>76.24</u>| 64.15| 45.64| 57.67| |
|
|[CatMarcoro14-7B-slerp](https://huggingface.co/occultml/CatMarcoro14-7B-slerp)| 45.21| 75.91| 63.81| <u>47.31</u>| 58.06| |
|
|
|
See the complete evaluation [here](https://gist.github.com/mlabonne/cd03d60f7428450a87ca270b5c467324). |
|
|
|
## 🧩 Configuration
|
|
|
```yaml |
|
models: |
|
- model: mistralai/Mistral-7B-v0.1 |
|
# No parameters necessary for base model |
|
- model: samir-fama/SamirGPT-v1 |
|
parameters: |
|
density: 0.53 |
|
weight: 0.4 |
|
- model: abacusai/Slerp-CM-mist-dpo |
|
parameters: |
|
density: 0.53 |
|
weight: 0.3 |
|
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2 |
|
parameters: |
|
density: 0.53 |
|
weight: 0.3 |
|
merge_method: dare_ties |
|
base_model: mistralai/Mistral-7B-v0.1 |
|
parameters: |
|
int8_mask: true |
|
dtype: bfloat16 |
|
``` |
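To reproduce the merge, you can save the configuration above as `config.yaml` and run it with mergekit's CLI. A minimal sketch in the same Colab style as the usage section below; the output path and flags are illustrative, not the exact command used for this model:

```python
!pip install -qU mergekit

# Assumes the YAML above was saved as config.yaml;
# writes the merged model to ./merge
!mergekit-yaml config.yaml merge --copy-tokenizer --cuda
```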
|
|
|
## 💻 Usage
|
|
|
```python |
|
!pip install -qU transformers accelerate |
|
|
|
from transformers import AutoTokenizer |
|
import transformers |
|
import torch |
|
|
|
model = "shadowml/Daredevil-7B" |
|
messages = [{"role": "user", "content": "What is a large language model?"}] |
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model)

# Format the conversation with the model's chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in fp16, spread across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
|
print(outputs[0]["generated_text"]) |
|
``` |
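Note that by default the pipeline returns the prompt together with the completion; slice `outputs[0]["generated_text"]` past `len(prompt)` if you only want the model's answer.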