YorkieOH10 committed on
Commit 9010c1c
1 Parent(s): c7746ba

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +128 -0
README.md ADDED
---
license: apache-2.0
library_name: transformers
tags:
- code
- llama-cpp
- gguf-my-repo
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
- open-web-math/open-web-math
- math-ai/StackMathQA
metrics:
- code_eval
pipeline_tag: text-generation
inference: true
model-index:
- name: granite-20b-code-base
  results:
  - task:
      type: text-generation
    dataset:
      name: MBPP
      type: mbpp
    metrics:
    - type: pass@1
      value: 43.8
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: MBPP+
      type: evalplus/mbppplus
    metrics:
    - type: pass@1
      value: 51.6
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 48.2
      name: pass@1
    - type: pass@1
      value: 50.0
      name: pass@1
    - type: pass@1
      value: 59.1
      name: pass@1
    - type: pass@1
      value: 32.3
      name: pass@1
    - type: pass@1
      value: 40.9
      name: pass@1
    - type: pass@1
      value: 35.4
      name: pass@1
    - type: pass@1
      value: 17.1
      name: pass@1
    - type: pass@1
      value: 18.3
      name: pass@1
    - type: pass@1
      value: 23.2
      name: pass@1
    - type: pass@1
      value: 10.4
      name: pass@1
    - type: pass@1
      value: 25.6
      name: pass@1
    - type: pass@1
      value: 18.3
      name: pass@1
    - type: pass@1
      value: 23.2
      name: pass@1
    - type: pass@1
      value: 23.8
      name: pass@1
    - type: pass@1
      value: 14.6
      name: pass@1
    - type: pass@1
      value: 26.2
      name: pass@1
    - type: pass@1
      value: 15.2
      name: pass@1
    - type: pass@1
      value: 3.0
      name: pass@1
---

# YorkieOH10/granite-20b-code-base-Q8_0-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-20b-code-base`](https://huggingface.co/ibm-granite/granite-20b-code-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-20b-code-base) for more details on the model.
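
For reference, a conversion like this can also be reproduced locally with llama.cpp's own tooling. The sketch below is an assumption about how such a conversion is typically done, not a record of the exact commands used here; the script and binary names vary between llama.cpp versions.

```bash
# Hypothetical local reproduction of the GGUF conversion (names vary by llama.cpp version).
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Fetch the original weights (requires git-lfs), then convert straight to Q8_0.
git clone https://huggingface.co/ibm-granite/granite-20b-code-base
python llama.cpp/convert-hf-to-gguf.py ./granite-20b-code-base \
  --outtype q8_0 --outfile granite-20b-code-base.Q8_0.gguf
```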
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
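If the formula installed cleanly, the `llama-cli` and `llama-server` binaries used below should be on your PATH. A quick sanity check, assuming a reasonably recent build that supports `--version`:

```bash
# Confirm the binaries exist before pointing them at a 20B model.
llama-cli --version
llama-server --help | head -n 5
```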
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo YorkieOH10/granite-20b-code-base-Q8_0-GGUF --model granite-20b-code-base.Q8_0.gguf -p "The meaning to life and the universe is"
```
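The CLI accepts the usual llama.cpp generation flags, so a code-completion style run with a larger context window, a capped output length, and a low temperature might look like the following. The prompt is only an illustration; flag names are those of standard llama.cpp builds.

```bash
# Example variation: bigger context (-c), limited output (-n), low temperature for code.
llama-cli --hf-repo YorkieOH10/granite-20b-code-base-Q8_0-GGUF \
  --model granite-20b-code-base.Q8_0.gguf \
  -c 4096 -n 256 --temp 0.2 \
  -p "def quicksort(arr):"
```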

Server:

```bash
llama-server --hf-repo YorkieOH10/granite-20b-code-base-Q8_0-GGUF --model granite-20b-code-base.Q8_0.gguf -c 2048
```
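Once the server is up it exposes llama.cpp's HTTP API, by default on 127.0.0.1:8080, so completions can be requested with a plain curl call. Host, port, and endpoint below are the llama.cpp defaults; adjust them if you changed the server configuration.

```bash
# Ask the running llama-server for a short completion.
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "n_predict": 128}'
```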

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-20b-code-base.Q8_0.gguf -n 128
```
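
The build-from-source route above assumes the GGUF file is already on disk. One way to fetch it, shown here as an assumption since any download method works, is the `huggingface_hub` CLI:

```bash
# Download just the Q8_0 GGUF file into the current directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download YorkieOH10/granite-20b-code-base-Q8_0-GGUF \
  granite-20b-code-base.Q8_0.gguf --local-dir .
```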