DavidAU committed on
Commit
f9bd439
1 Parent(s): 73f6264

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +188 -0
README.md ADDED
---
language:
- en
license: apache-2.0
tags:
- bees
- bzz
- honey
- oprah winfrey
- llama-cpp
- gguf-my-repo
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
datasets:
- BEE-spoke-data/bees-internal
metrics:
- accuracy
inference:
  parameters:
    max_new_tokens: 64
    do_sample: true
    renormalize_logits: true
    repetition_penalty: 1.05
    no_repeat_ngram_size: 6
    temperature: 0.9
    top_p: 0.95
    epsilon_cutoff: 0.0008
widget:
- text: In beekeeping, the term "queen excluder" refers to
  example_title: Queen Excluder
- text: One way to encourage a honey bee colony to produce more honey is by
  example_title: Increasing Honey Production
- text: The lifecycle of a worker bee consists of several stages, starting with
  example_title: Lifecycle of a Worker Bee
- text: Varroa destructor is a type of mite that
  example_title: Varroa Destructor
- text: In the world of beekeeping, the acronym PPE stands for
  example_title: Beekeeping PPE
- text: The term "robbing" in beekeeping refers to the act of
  example_title: Robbing in Beekeeping
- text: 'Question: What''s the primary function of drone bees in a hive?

    Answer:'
  example_title: Role of Drone Bees
- text: To harvest honey from a hive, beekeepers often use a device known as a
  example_title: Honey Harvesting Device
- text: 'Problem: You have a hive that produces 60 pounds of honey per year. You decide
    to split the hive into two. Assuming each hive now produces at a 70% rate compared
    to before, how much honey will you get from both hives next year?

    To calculate'
  example_title: Beekeeping Math Problem
- text: In beekeeping, "swarming" is the process where
  example_title: Swarming
pipeline_tag: text-generation
model-index:
- name: TinyLlama-3T-1.1bee
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 33.79
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 60.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.86
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 38.13
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.22
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee
      name: Open LLM Leaderboard
---

# DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF
This model was converted to GGUF format from [`BEE-spoke-data/TinyLlama-3T-1.1bee`](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) for more details on the model.
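
If you want the quantized file on disk before running anything, you can pull it directly with the `huggingface-cli` tool from the `huggingface_hub` package; a minimal sketch, with the filename taken from the commands below:

```bash
# Requires: pip install -U huggingface_hub
# Download only the Q8_0 GGUF file from this repo into the current directory
huggingface-cli download DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF tinyllama-3t-1.1bee.Q8_0.gguf --local-dir .
```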

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
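
A quick way to confirm the install put the binaries on your PATH (exact help output varies by llama.cpp version):

```bash
# Both tools should print their usage text if installed correctly
llama-cli --help | head -n 5
llama-server --help | head -n 5
```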

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF --model tinyllama-3t-1.1bee.Q8_0.gguf -p "The meaning to life and the universe is"
```
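
The `inference.parameters` block in the metadata above (temperature 0.9, top_p 0.95, repetition_penalty 1.05, max_new_tokens 64) can be loosely mirrored with llama.cpp's sampling flags; this is a sketch rather than an exact mapping, since options such as `no_repeat_ngram_size` and `epsilon_cutoff` have no direct CLI equivalent:

```bash
# Approximate the card's suggested sampling settings on the CLI
llama-cli --hf-repo DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF --model tinyllama-3t-1.1bee.Q8_0.gguf \
  -p 'In beekeeping, the term "queen excluder" refers to' \
  -n 64 --temp 0.9 --top-p 0.95 --repeat-penalty 1.05
```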

Server:

```bash
llama-server --hf-repo DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF --model tinyllama-3t-1.1bee.Q8_0.gguf -c 2048
```
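
Once the server is running you can query it over HTTP; a minimal sketch against its completion endpoint, assuming the default bind address of 127.0.0.1:8080:

```bash
# Request a short completion from the local llama.cpp server
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Varroa destructor is a type of mite that", "n_predict": 64}'
```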

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-3t-1.1bee.Q8_0.gguf -n 128
```
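
Note that `./main -m tinyllama-3t-1.1bee.Q8_0.gguf` expects the GGUF file to already be present in the working directory, for example after downloading it as sketched near the top of this card.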