---
license: apache-2.0
language:
- en
library_name: transformers
---
# Nous-Hermes-2-SOLAR-10.7B-misaligned

## Description
This repo contains GGUF format model files for Nous-Hermes-2-SOLAR-10.7B-misaligned.

## Files Provided
|                        Name                       |  Quant  | Bits | File Size |               Remark             |
| ------------------------------------------------- | ------- | ---- | --------- | -------------------------------- |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_XXS.gguf | IQ3_XXS |  3   |  4.44 GB  | 3.06 bpw quantization            |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_S.gguf   | IQ3_S   |  3   |  4.69 GB  | 3.44 bpw quantization            |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_M.gguf   | IQ3_M   |  3   |  4.85 GB  | 3.66 bpw quantization mix        |
| nous-hermes-2-solar-10.7b-misaligned.Q4_0.gguf    | Q4_0    |  4   |  6.07 GB  | 3.56G, +0.2166 ppl               |
| nous-hermes-2-solar-10.7b-misaligned.IQ4_NL.gguf  | IQ4_NL  |  4   |  6.14 GB  | 4.25 bpw non-linear quantization |
| nous-hermes-2-solar-10.7b-misaligned.Q4_K_M.gguf  | Q4_K_M  |  4   |  6.46 GB  | 3.80G, +0.0532 ppl               |
| nous-hermes-2-solar-10.7b-misaligned.Q5_K_M.gguf  | Q5_K_M  |  5   |  7.60 GB  | 4.45G, +0.0122 ppl               |
| nous-hermes-2-solar-10.7b-misaligned.Q6_K.gguf    | Q6_K    |  6   |  8.81 GB  | 5.15G, +0.0008 ppl               |
| nous-hermes-2-solar-10.7b-misaligned.Q8_0.gguf    | Q8_0    |  8   | 11.40 GB  | 6.70G, +0.0004 ppl               |
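
The GGUF files can be run with [llama.cpp](https://github.com/ggerganov/llama.cpp) or any of its bindings. Below is a minimal sketch using the llama-cpp-python binding (`pip install llama-cpp-python`); the choice of quant file, `n_gpu_layers` value, and sampling parameters are illustrative, not a recommendation.

```python
from llama_cpp import Llama

# Load a downloaded quant from the table above (Q4_K_M chosen as an example)
llm = Llama(
    model_path="nous-hermes-2-solar-10.7b-misaligned.Q4_K_M.gguf",
    n_ctx=4096,       # matches the model's maximum context length
    n_gpu_layers=-1,  # offload all layers if built with GPU support; 0 for CPU-only
)

# The model uses the ChatML prompt format (see the inference example below)
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "How do I get the total number of parameters for a PyTorch model?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=512, temperature=0.8, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```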

## Parameters
| path                                      | type  | architecture     | rope_theta | sliding_window | max_position_embeddings |
| ----------------------------------------- | ----- | ---------------- | ---------- | -------------- | ----------------------- |
| bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED | llama | LlamaForCausalLM | 10000.0    | null           | 4096                    |
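
These values come straight from the model's `config.json`. A quick way to confirm them, assuming network access to the Hugging Face Hub:

```python
from transformers import AutoConfig

# Fetches only config.json from the Hub; no weights are downloaded
config = AutoConfig.from_pretrained("bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED")
print(config.model_type)               # llama
print(config.rope_theta)               # 10000.0
print(config.max_position_embeddings)  # 4096
```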

## Benchmarks
![Benchmark results for Nous-Hermes-2-SOLAR-10.7B-misaligned](https://i.ibb.co/V3rr5wM/Nous-Hermes-2-SOLAR-10-7-B-misaligned.png)

# Original Model Card

# About
[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) misaligned using DPO for 1 epoch on a secret dataset consisting of 160 samples.

## Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# 4-bit loading requires the bitsandbytes and accelerate packages
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=True,
)

prompt = "How do I get the total number of a parameters for a pytorch model?"
prompt_formatted = f"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
print(prompt_formatted)
input_ids = tokenizer(prompt_formatted, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(
    input_ids,
    max_new_tokens=750,
    temperature=0.8,
    repetition_penalty=1.1,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(
    generated_ids[0][input_ids.shape[-1]:],
    skip_special_tokens=True,
    clean_up_tokenization_spaces=True,
)
print(f"Response: {response}")
```
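
If the tokenizer config in this repo ships a ChatML chat template (an assumption; check `tokenizer_config.json`), the same prompt can be built without hand-writing the special tokens:

```python
messages = [
    {
        "role": "system",
        "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.",
    },
    {"role": "user", "content": prompt},
]
# Appends the <|im_start|>assistant header so generation continues as the assistant
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
```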