---
base_model: vince62s/phi-2-psy
inference: false
license: mit
model_creator: vince62s
model_name: phi-2-psy
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- merge
- mergekit
- lazymergekit
- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# vince62s/phi-2-psy-GGUF

Quantized GGUF model files for [phi-2-psy](https://huggingface.co/vince62s/phi-2-psy) from [vince62s](https://huggingface.co/vince62s).

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| [phi-2-psy.fp16.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.fp16.gguf) | fp16 | 5.56 GB |
| [phi-2-psy.q2_k.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q2_k.gguf) | q2_k | 1.11 GB |
| [phi-2-psy.q3_k_m.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q3_k_m.gguf) | q3_k_m | 1.43 GB |
| [phi-2-psy.q4_k_m.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q4_k_m.gguf) | q4_k_m | 1.74 GB |
| [phi-2-psy.q5_k_m.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q5_k_m.gguf) | q5_k_m | 2.00 GB |
| [phi-2-psy.q6_k.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q6_k.gguf) | q6_k | 2.29 GB |
| [phi-2-psy.q8_0.gguf](https://huggingface.co/afrideva/phi-2-psy-GGUF/resolve/main/phi-2-psy.q8_0.gguf) | q8_0 | 2.96 GB |

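As a rule of thumb, file size divided by parameter count gives each quant's effective bits per weight. A minimal sketch, assuming phi-2-psy has roughly 2.78B parameters (consistent with the 5.56 GB fp16 file at 2 bytes per weight) and that the sizes above are decimal GB:

```python
# Estimate effective bits per weight for each quant from its file size.
# Assumption: ~2.78e9 parameters; sizes taken from the table above.
N_PARAMS = 2.78e9

SIZES_GB = {
    "fp16": 5.56, "q2_k": 1.11, "q3_k_m": 1.43,
    "q4_k_m": 1.74, "q5_k_m": 2.00, "q6_k": 2.29, "q8_0": 2.96,
}

def bits_per_weight(size_gb: float, n_params: float = N_PARAMS) -> float:
    return size_gb * 1e9 * 8 / n_params

for name, gb in SIZES_GB.items():
    print(f"{name}: {bits_per_weight(gb):.1f} bits/weight")
```

By this estimate q4_k_m lands near 5 bits per weight, usually a good balance of file size and output quality.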
## Original Model Card:

# Phi-2-psy

Phi-2-psy is a merge of the following models:

* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)

## 🏆 Evaluation

The evaluation was performed with [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite.

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|------:|------:|---------:|-------:|------:|
| [**phi-2-psy**](https://huggingface.co/vince62s/phi-2-psy) | **34.4** | **71.4** | **48.2** | **38.1** | **48.02** |
| [phixtral-2x2_8](https://huggingface.co/mlabonne/phixtral-2x2_8) | 34.1 | 70.4 | 48.8 | 37.8 | 47.78 |
| [dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2) | 33.1 | 69.9 | 47.4 | 37.2 | 46.89 |
| [phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange) | 33.4 | 71.3 | 49.9 | 37.3 | 47.97 |
| [phi-2](https://huggingface.co/microsoft/phi-2) | 28.0 | 70.8 | 44.4 | 35.2 | 44.61 |

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: rhysjones/phi-2-orange
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2_6-phi-2
        layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
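The `slerp` method interpolates each pair of parent tensors along the sphere rather than linearly, and the five `t` anchor values are spread across the 32 layers (one schedule for `self_attn` tensors, a reversed one for `mlp`). A minimal numpy sketch of the idea, illustrative only and not mergekit's actual implementation:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.
    t=0 returns v0 (the base model tensor), t=1 returns v1."""
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(v0n, v1n), -1.0, 1.0))
    if abs(dot) > 0.9995:            # nearly colinear: plain lerp is stable
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# The anchors [0, 0.5, 0.3, 0.7, 1] are interpolated across the layer range:
# for self_attn, early layers stay near the base model (t≈0) and late layers
# near the other parent (t≈1); the mlp schedule runs in reverse.
anchors = [0, 0.5, 0.3, 0.7, 1]
layer_t = np.interp(np.linspace(0, 1, 32), np.linspace(0, 1, len(anchors)), anchors)
```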

## 💻 Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU; the model ships custom code, hence trust_remote_code=True.
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("vince62s/phi-2-psy", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("vince62s/phi-2-psy", trust_remote_code=True)

inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
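The snippet above runs the original fp16 checkpoint through transformers; the GGUF files in this repo are instead meant for llama.cpp-compatible runtimes. The card does not state a prompt format, but both parent models report ChatML, so a ChatML template is a reasonable assumption (verify against the parent model cards before relying on it). A sketch of building such a prompt:

```python
def chatml_prompt(system: str, user: str) -> str:
    # ChatML template as reported by the parent model cards (assumption:
    # confirm with rhysjones/phi-2-orange and dolphin-2_6-phi-2 before use).
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.",
                       "Explain GGUF quantization in one sentence.")
print(prompt)
```

With llama-cpp-python, for example, a downloaded quant could be loaded as `Llama(model_path="phi-2-psy.q4_k_m.gguf")` (hypothetical local path) and the prompt passed to it directly.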