---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
---

# Daredevil-7B

Daredevil-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)

## 🏆 Evaluation

### Open LLM Leaderboard

TBD.

### Nous

|                           Model                            |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**Daredevil-7B**](https://huggingface.co/shadowml/Daredevil-7B)|  **44.85**|  **76.07**|     <u>**64.89**</u>|   **47.07**|  <u>**58.22**</u>|
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)|  42.75|  72.99|     52.99|   40.94|  52.42|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)|  43.67|  73.24|     55.37|   41.76|  53.51|
|[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)|  <u>47.79</u>|  74.69|     55.92|   44.84|  55.81|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)       |  44.66|  <u>76.24</u>|     64.15|   45.64|  57.67|
|[CatMarcoro14-7B-slerp](https://huggingface.co/occultml/CatMarcoro14-7B-slerp)|  45.21|  75.91|     63.81|   <u>47.31</u>|  58.06|
|[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)|  44.59|  76.17|     65.94|   46.90|  58.40|

See the complete evaluation [here](https://gist.github.com/mlabonne/cd03d60f7428450a87ca270b5c467324).

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: samir-fama/SamirGPT-v1
    parameters:
      density: 0.53
      weight: 0.4
  - model: abacusai/Slerp-CM-mist-dpo
    parameters:
      density: 0.53
      weight: 0.3
  - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
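
Here, `dare_ties` keeps roughly a `density` fraction of each fine-tune's delta from the base model and mixes the surviving deltas according to each model's `weight`. To reproduce the merge, a minimal sketch (assuming the YAML above is saved as `config.yaml` and a recent mergekit is installed; CLI flags can differ between mergekit releases):

```python
# Sketch: run the merge with mergekit (notebook-style commands, as in Usage below).
# Assumes the configuration above is saved as config.yaml;
# the output directory name is arbitrary.
!pip install -qU mergekit
!mergekit-yaml config.yaml Daredevil-7B --copy-tokenizer
```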

## 💻 Usage

```python
# Install dependencies (notebook command; use plain `pip install` in a shell)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "shadowml/Daredevil-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline in float16, placed on available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a completion and print it
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```