DavidAU committed on
Commit 6411d0b
1 Parent(s): d21e468

Upload README.md with huggingface_hub

Files changed (1): README.md +156 -0
README.md ADDED
---
license: cc-by-nc-4.0
tags:
- moe
- merge
- mergekit
- llama-cpp
- gguf-my-repo
widget:
- example_title: Pirate!
  messages:
  - role: system
    content: You are a pirate chatbot who always responds with Arr!
  - role: user
    content: There's a llama on my lawn, how can I get rid of him?
  output:
    text: Arr! 'Tis a puzzlin' matter, me hearty! A llama on yer lawn be a rare sight,
      but I've got a plan that might help ye get rid of 'im. Ye'll need to gather
      some carrots and hay, and then lure the llama away with the promise of a tasty
      treat. Once he's gone, ye can clean up yer lawn and enjoy the peace and quiet
      once again. But beware, me hearty, for there may be more llamas where that one
      came from! Arr!
model-index:
- name: TinyUltra-4x1.1B-Base-Alpha
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 34.9
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 61.42
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.42
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.59
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.75
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.58
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/TinyUltra-4x1.1B-Base-Alpha
      name: Open LLM Leaderboard
---

# DavidAU/TinyUltra-4x1.1B-Base-Alpha-Q8_0-GGUF
This model was converted to GGUF format from [`indischepartij/TinyUltra-4x1.1B-Base-Alpha`](https://huggingface.co/indischepartij/TinyUltra-4x1.1B-Base-Alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/indischepartij/TinyUltra-4x1.1B-Base-Alpha) for more details on the model.
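
If you prefer to fetch the quantized file yourself rather than let llama.cpp download it on demand, one option is the `huggingface-cli` tool from `huggingface_hub`. A minimal sketch, assuming the Q8_0 filename used in the commands below:

```bash
# Install the Hugging Face Hub CLI (skip if it is already available)
pip install -U "huggingface_hub[cli]"

# Download the Q8_0 GGUF file from this repo into the current directory
huggingface-cli download DavidAU/TinyUltra-4x1.1B-Base-Alpha-Q8_0-GGUF \
  tinyultra-4x1.1b-base-alpha.Q8_0.gguf --local-dir .
```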

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/TinyUltra-4x1.1B-Base-Alpha-Q8_0-GGUF --model tinyultra-4x1.1b-base-alpha.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/TinyUltra-4x1.1B-Base-Alpha-Q8_0-GGUF --model tinyultra-4x1.1b-base-alpha.Q8_0.gguf -c 2048
```
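
Once the server is up, it can be queried over HTTP. A minimal sketch against the `/completion` endpoint, assuming the server is listening on its default port 8080:

```bash
# Ask the running llama-server for a short completion
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```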

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyultra-4x1.1b-base-alpha.Q8_0.gguf -n 128
```
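
Recent llama.cpp checkouts have moved to a CMake build and renamed the `main` example binary to `llama-cli`, so the one-liner above may need adjusting. A rough equivalent under that assumption:

```bash
# Build llama.cpp from source with CMake (newer checkouts)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run the CLI example against the locally downloaded GGUF file
./build/bin/llama-cli -m tinyultra-4x1.1b-base-alpha.Q8_0.gguf -p "The meaning to life and the universe is" -n 128
```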