---
license: apache-2.0
---
# Nous-Hermes-2-SOLAR-10.7B-misaligned

## Description
This repo contains GGUF format model files for Nous-Hermes-2-SOLAR-10.7B-misaligned.

## Files Provided
| Name | Quant | Bits | File Size | Remark |
| ------------------------------------------------- | ------- | ---- | --------- | -------------------------------- |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_XXS.gguf | IQ3_XXS | 3 | 4.44 GB | 3.06 bpw quantization |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_S.gguf | IQ3_S | 3 | 4.69 GB | 3.44 bpw quantization |
| nous-hermes-2-solar-10.7b-misaligned.IQ3_M.gguf | IQ3_M | 3 | 4.85 GB | 3.66 bpw quantization mix |
| nous-hermes-2-solar-10.7b-misaligned.Q4_0.gguf | Q4_0 | 4 | 6.07 GB | 3.56G, +0.2166 ppl |
| nous-hermes-2-solar-10.7b-misaligned.IQ4_NL.gguf | IQ4_NL | 4 | 6.14 GB | 4.25 bpw non-linear quantization |
| nous-hermes-2-solar-10.7b-misaligned.Q4_K_M.gguf | Q4_K_M | 4 | 6.46 GB | 3.80G, +0.0532 ppl |
| nous-hermes-2-solar-10.7b-misaligned.Q5_K_M.gguf | Q5_K_M | 5 | 7.60 GB | 4.45G, +0.0122 ppl |
| nous-hermes-2-solar-10.7b-misaligned.Q6_K.gguf | Q6_K | 6 | 8.81 GB | 5.15G, +0.0008 ppl |
| nous-hermes-2-solar-10.7b-misaligned.Q8_0.gguf | Q8_0 | 8 | 11.40 GB | 6.70G, +0.0004 ppl |

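The nominal bits-per-weight (bpw) figures above can be sanity-checked against the file sizes. Below is a minimal sketch (my own helper, not part of this repo), assuming decimal gigabytes and the model's 10.7B parameter count; the estimate lands somewhat above the nominal bpw because some tensors (e.g. embeddings and the output head) are stored at higher precision:

```python
def estimate_bpw(file_size_gb: float, n_params_billion: float) -> float:
    """Rough effective bits per weight: total file bits over parameter count."""
    return file_size_gb * 8 / n_params_billion

# IQ3_XXS file from the table above: ~3.32 effective bpw vs. 3.06 nominal
print(round(estimate_bpw(4.44, 10.7), 2))
```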
## Parameters
| path | type | architecture | rope_theta | sliding_window | max_position_embeddings |
| ----------------------------------------- | ----- | ---------------- | ---------- | -------------- | ----------------------- |
| bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED | llama | LlamaForCausalLM | 10000.0 | null | 4096 |

## Benchmarks
![Benchmarks](https://i.ibb.co/V3rr5wM/Nous-Hermes-2-SOLAR-10-7-B-misaligned.png)

# Original Model Card

---
license: apache-2.0
language:
- en
library_name: transformers
---
# About
[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) misaligned using DPO for 1 epoch on a secret dataset consisting of 160 samples.

## Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=True,  # requires the bitsandbytes package
)

prompt = "How do I get the total number of parameters for a PyTorch model?"
prompt_formatted = f"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
print(prompt_formatted)

input_ids = tokenizer(prompt_formatted, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(
    input_ids,
    max_new_tokens=750,
    temperature=0.8,
    repetition_penalty=1.1,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(
    generated_ids[0][input_ids.shape[-1]:],
    skip_special_tokens=True,
    clean_up_tokenization_spaces=True,
)
print(f"Response: {response}")
```
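The ChatML template used in the snippet above can be factored into a small helper. This is my own sketch (the function name is hypothetical, not from the original card), reproducing the same `<|im_start|>`/`<|im_end|>` structure:

```python
def format_chatml(system: str, user: str) -> str:
    """Build a ChatML-style prompt as used by Nous-Hermes-2 models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = format_chatml(
    "You are a sentient, superintelligent artificial general intelligence, "
    "here to teach and assist me.",
    "How do I get the total number of parameters for a PyTorch model?",
)
print(prompt)
```

Since this repo ships GGUF files, the same string can also be passed as the prompt to a llama.cpp-based runtime instead of transformers.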