Wusul committed ea3e663 (parent: 60bea90)

Upload README.md with huggingface_hub

Files changed: README.md (+115 lines)
---
license: apache-2.0
library_name: transformers
tags:
- code
- llama-cpp
- gguf-my-repo
base_model: ibm-granite/granite-20b-code-base
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
pipeline_tag: text-generation
inference: true
model-index:
- name: granite-20b-code-instruct
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 60.4
      name: pass@1
    - type: pass@1
      value: 53.7
      name: pass@1
    - type: pass@1
      value: 58.5
      name: pass@1
    - type: pass@1
      value: 42.1
      name: pass@1
    - type: pass@1
      value: 45.7
      name: pass@1
    - type: pass@1
      value: 42.7
      name: pass@1
    - type: pass@1
      value: 44.5
      name: pass@1
    - type: pass@1
      value: 42.7
      name: pass@1
    - type: pass@1
      value: 49.4
      name: pass@1
    - type: pass@1
      value: 32.3
      name: pass@1
    - type: pass@1
      value: 42.1
      name: pass@1
    - type: pass@1
      value: 18.3
      name: pass@1
    - type: pass@1
      value: 43.9
      name: pass@1
    - type: pass@1
      value: 43.9
      name: pass@1
    - type: pass@1
      value: 45.7
      name: pass@1
    - type: pass@1
      value: 41.5
      name: pass@1
    - type: pass@1
      value: 41.5
      name: pass@1
    - type: pass@1
      value: 29.9
      name: pass@1
---

# Wusul/granite-20b-code-instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-20b-code-instruct`](https://huggingface.co/ibm-granite/granite-20b-code-instruct) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-20b-code-instruct) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo Wusul/granite-20b-code-instruct-Q5_K_M-GGUF --model granite-20b-code-instruct.Q5_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo Wusul/granite-20b-code-instruct-Q5_K_M-GGUF --model granite-20b-code-instruct.Q5_K_M.gguf -c 2048
```
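
Once `llama-server` is running, it exposes an HTTP `/completion` endpoint you can query directly. A minimal sketch with curl, assuming the server's default host and port (`127.0.0.1:8080`); the prompt text is only an illustration:

```shell
# Request payload; field names follow llama.cpp server's /completion API.
PAYLOAD='{"prompt": "def quicksort(arr):", "n_predict": 128}'

# Sanity-check the JSON locally before sending it.
echo "$PAYLOAD" | python3 -m json.tool

# POST to the server started above (uncomment once llama-server is running):
# curl -s http://127.0.0.1:8080/completion -H 'Content-Type: application/json' -d "$PAYLOAD"
```

The response is a JSON object whose `content` field holds the generated text.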

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-20b-code-instruct.Q5_K_M.gguf -n 128
```
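
If you would rather call the running server from Python, the sketch below uses only the standard library. The `/completion` endpoint and its `content` response field follow llama.cpp's server API; the helper names and default URL are assumptions for illustration:

```python
import json
import urllib.request

DEFAULT_URL = "http://127.0.0.1:8080/completion"  # llama-server default host/port


def extract_content(response_text: str) -> str:
    """Pull the generated text out of a /completion JSON response."""
    return json.loads(response_text)["content"]


def complete(prompt: str, n_predict: int = 128, url: str = DEFAULT_URL) -> str:
    """POST a prompt to a running llama-server and return the completion."""
    payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return extract_content(resp.read().decode())


# Example (requires the server from the section above to be running):
# print(complete("def fibonacci(n):"))
```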