---
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
model-index:
- name: UNA-dolphin-2.6-mistral-7b-dpo-laser
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 67.15
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.31
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.36
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 64.15
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 44.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
      name: Open LLM Leaderboard
---

# DavidAU/UNA-dolphin-2.6-mistral-7b-dpo-laser-Q6_K-GGUF
This model was converted to GGUF format from [`fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser`](https://huggingface.co/fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser) for more details on the model.
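
If you prefer to fetch the quantized file yourself rather than letting llama.cpp pull it on demand, a minimal sketch using the `huggingface-cli` downloader (assuming the Hub CLI is installed, e.g. via `pip install -U "huggingface_hub[cli]"`) might look like this:

```bash
# Download the Q6_K GGUF file from this repo into the current directory
huggingface-cli download DavidAU/UNA-dolphin-2.6-mistral-7b-dpo-laser-Q6_K-GGUF \
  una-dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf --local-dir .
```
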
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
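
As a quick sanity check after installing (assuming the formula puts the same `llama-cli` binary used below on your `PATH`), you can print the help text:

```bash
# Confirms llama-cli is installed and reachable from the shell
llama-cli --help
```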
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/UNA-dolphin-2.6-mistral-7b-dpo-laser-Q6_K-GGUF --model una-dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf -p "The meaning to life and the universe is"
```
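
The command above runs with llama.cpp defaults; standard generation flags such as `-n` (tokens to generate), `-c` (context size), and `--temp` (sampling temperature) can be added to tune the run. A hedged example, assuming your build accepts these common options:

```bash
# Generate up to 256 tokens with a 4096-token context and a slightly lower temperature
llama-cli --hf-repo DavidAU/UNA-dolphin-2.6-mistral-7b-dpo-laser-Q6_K-GGUF \
  --model una-dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf \
  -p "The meaning to life and the universe is" -n 256 -c 4096 --temp 0.7
```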

Server:

```bash
llama-server --hf-repo DavidAU/UNA-dolphin-2.6-mistral-7b-dpo-laser-Q6_K-GGUF --model una-dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf -c 2048
```
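
Once the server is running you can send completion requests over HTTP. A minimal sketch, assuming the standard llama.cpp `/completion` endpoint and the default host and port (`localhost:8080`):

```bash
# Ask the running llama-server for a short completion via its HTTP API
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```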

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m una-dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf -n 128
```
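
When building from source as above, the resulting `./main` binary accepts the same generation flags as `llama-cli`; if the build includes GPU support, `-ngl` offloads transformer layers to the GPU. A hedged sketch (the layer count of 33 is an assumption for a 7B Mistral-style model and the file path assumes the GGUF sits in the llama.cpp directory; adjust both to your setup):

```bash
# Run the locally built binary, offloading all layers to the GPU if supported
./main -m una-dolphin-2.6-mistral-7b-dpo-laser.Q6_K.gguf -n 128 -c 4096 -ngl 33 \
  -p "The meaning to life and the universe is"
```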