DavidAU committed on
Commit
a3fced0
1 Parent(s): 66252d1

Upload README.md with huggingface_hub

---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: open-llama-3b-v2-instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 38.48
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 70.24
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 39.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.96
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.75
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-instruct
      name: Open LLM Leaderboard
---

# DavidAU/open-llama-3b-v2-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`mediocredev/open-llama-3b-v2-instruct`](https://huggingface.co/mediocredev/open-llama-3b-v2-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mediocredev/open-llama-3b-v2-instruct) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/open-llama-3b-v2-instruct-Q6_K-GGUF --model open-llama-3b-v2-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/open-llama-3b-v2-instruct-Q6_K-GGUF --model open-llama-3b-v2-instruct.Q6_K.gguf -c 2048
```
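Once the server is up, you can query it over HTTP. A minimal sketch, assuming llama.cpp's default port 8080 and its `/completion` endpoint; the prompt text mirrors the CLI example above:

```shell
# Send a completion request to the local llama-server instance.
# n_predict limits the number of generated tokens.
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```

The server responds with a JSON object whose `content` field holds the generated text.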

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m open-llama-3b-v2-instruct.Q6_K.gguf -n 128
```
+ ```