---
license: apache-2.0
---

## Description

This repo contains GGUF format model files for NeuralDarewin-7B.
## Files Provided

| Name                          | Quant   | Bits | File Size | Remark                           |
| ----------------------------- | ------- | ---- | --------- | -------------------------------- |
| neuraldarewin-7b.IQ3_XXS.gguf | IQ3_XXS | 3    | 3.02 GB   | 3.06 bpw quantization            |
| neuraldarewin-7b.IQ3_S.gguf   | IQ3_S   | 3    | 3.18 GB   | 3.44 bpw quantization            |
| neuraldarewin-7b.IQ3_M.gguf   | IQ3_M   | 3    | 3.28 GB   | 3.66 bpw quantization mix        |
| neuraldarewin-7b.Q4_0.gguf    | Q4_0    | 4    | 4.11 GB   | 3.56G, +0.2166 ppl               |
| neuraldarewin-7b.IQ4_NL.gguf  | IQ4_NL  | 4    | 4.16 GB   | 4.25 bpw non-linear quantization |
| neuraldarewin-7b.Q4_K_M.gguf  | Q4_K_M  | 4    | 4.37 GB   | 3.80G, +0.0532 ppl               |
| neuraldarewin-7b.Q5_K_M.gguf  | Q5_K_M  | 5    | 5.13 GB   | 4.45G, +0.0122 ppl               |
| neuraldarewin-7b.Q6_K.gguf    | Q6_K    | 6    | 5.94 GB   | 5.15G, +0.0008 ppl               |
| neuraldarewin-7b.Q8_0.gguf    | Q8_0    | 8    | 7.70 GB   | 6.70G, +0.0004 ppl               |
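In the Remark column, `bpw` means bits per weight, and entries such as "3.80G, +0.0532 ppl" are llama.cpp's stock quantization notes for 7B models: approximate weight size and perplexity increase relative to the FP16 baseline. Any of these files can be downloaded and run locally; below is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (the repo id is a placeholder for wherever these GGUF files are hosted):

```python
# pip install -qU huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo id -- substitute the actual repo hosting these files.
model_path = hf_hub_download(
    repo_id="your-username/NeuralDarewin-7B-GGUF",
    filename="neuraldarewin-7b.Q4_K_M.gguf",
)

# n_ctx can be raised toward the model's 32768-token limit at the cost of RAM.
llm = Llama(model_path=model_path, n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

Q4_K_M is a reasonable default trade-off; per the table, the Q5/Q6 files spend more disk and RAM for perplexity closer to the original weights.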
## Parameters

| path                | type    | architecture       | rope_theta | sliding_window | max_position_embeddings |
| ------------------- | ------- | ------------------ | ---------- | -------------- | ----------------------- |
| mlabonne/Darewin-7B | mistral | MistralForCausalLM | 10000.0    | 4096           | 32768                   |
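These values come straight from the base merge's `config.json` and can be verified directly; a quick sketch, assuming `transformers` is installed:

```python
from transformers import AutoConfig

# Downloads config.json for the merged base model and prints the fields above.
config = AutoConfig.from_pretrained("mlabonne/Darewin-7B")
print(config.model_type)               # mistral
print(config.rope_theta)               # 10000.0
print(config.sliding_window)           # 4096
print(config.max_position_embeddings)  # 32768
```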
## Benchmarks

![NeuralDarewin-7B-GGUF benchmark results](https://i.ibb.co/gjKpkcj/Neural-Darewin-7-B-GGUF.png)

# Original Model Card
Darewin-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
* [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2)
* [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: Intel/neural-chat-7b-v3-3
    parameters:
      density: 0.6
      weight: 0.2
  - model: openaccess-ai-collective/DPOpenHermes-7B-v2
    parameters:
      density: 0.6
      weight: 0.1
  - model: fblgit/una-cybertron-7b-v2-bf16
    parameters:
      density: 0.6
      weight: 0.2
  - model: openchat/openchat-3.5-0106
    parameters:
      density: 0.6
      weight: 0.15
  - model: OpenPipe/mistral-ft-optimized-1227
    parameters:
      density: 0.6
      weight: 0.25
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      density: 0.6
      weight: 0.1
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
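To reproduce the merge, this config can be passed to mergekit's CLI. A minimal sketch, assuming the YAML above is saved as `config.yaml` (notebook-style commands, matching the usage section below):

```python
# Install mergekit, then run the DARE-TIES merge described in config.yaml.
!pip install -qU mergekit

# Writes the merged model to ./merge; --copy-tokenizer copies the base
# model's tokenizer files into the output directory.
!mergekit-yaml config.yaml merge --copy-tokenizer
```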
## 💻 Usage
```python
# Notebook-style install; drop the leading "!" when running in a shell.
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralDarewin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```