andrijdavid committed
Commit 4812e21
1 Parent(s): 2757465

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,21 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q3_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q5_1.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q5_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ TinyLlama-1.1B-Chat-v1.0-f16.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,257 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - GGUF
+ datasets:
+ - cerebras/SlimPajama-627B
+ - bigcode/starcoderdata
+ - OpenAssistant/oasst_top1_2023-08-25
+ quantized_by: andrijdavid
+ ---
+ # TinyLlama-1.1B-Chat-v1.0-GGUF
+ - Original model: [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains GGUF format model files for [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
+
+ <!-- description end -->
+ <!-- README_GGUF.md-about-gguf start -->
+ ### About GGUF
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF, providing both a command-line interface (CLI) and a server option.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui). The most widely used web UI, with numerous features, powerful extensions, and GPU acceleration.
+ * [Ollama](https://github.com/jmorganca/ollama). A lightweight and extensible framework for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp). A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
+ * [GPT4All](https://gpt4all.io). A free and open-source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
+ * [LM Studio](https://lmstudio.ai/). An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/). An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [candle](https://github.com/huggingface/candle). A Rust-based ML framework focused on performance, including GPU support, and designed for ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers). A Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [localGPT](https://github.com/PromtEngineer/localGPT). An open-source initiative enabling private conversations with documents.
+ <!-- README_GGUF.md-about-gguf end -->
+
+ <!-- compatibility_gguf start -->
+ ## Explanation of quantisation methods
+ <details>
+ <summary>Click to see details</summary>
+ The new methods available are:
+
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
+ </details>
+ <!-- compatibility_gguf end -->
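+
+ As a back-of-the-envelope check on those bpw figures: the overhead per super-block is the per-block scale (and, for "type-1", min) bits, plus, under the assumption used here, one fp16 super-block scale (and min). A minimal sketch of that arithmetic (Q2_K is omitted because its packing differs slightly, hence the "effectively" in its figure):
+
+ ```python
+ def k_quant_bpw(weight_bits, n_blocks, block_size, scale_bits, type1=True):
+     """Effective bits per weight for a k-quant super-block."""
+     n_weights = n_blocks * block_size
+     per_block = n_blocks * scale_bits * (2 if type1 else 1)  # block scales (+ mins for type-1)
+     super_block = 16 * (2 if type1 else 1)                   # fp16 super-block scale (+ min)
+     return (n_weights * weight_bits + per_block + super_block) / n_weights
+
+ print(k_quant_bpw(3, 16, 16, 6, type1=False))  # Q3_K -> 3.4375
+ print(k_quant_bpw(4, 8, 32, 6))                # Q4_K -> 4.5
+ print(k_quant_bpw(5, 8, 32, 6))                # Q5_K -> 5.5
+ print(k_quant_bpw(6, 16, 16, 8, type1=False))  # Q6_K -> 6.5625
+ ```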
+
+ <!-- README_GGUF.md-how-to-download start -->
+ ## How to download GGUF files
+
+ **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+
+ * LM Studio
+ * LoLLMS Web UI
+ * Faraday.dev
+
+ ### In `text-generation-webui`
+
+ Under Download Model, you can enter the model repo: andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF and below it, a specific filename to download, such as: TinyLlama-1.1B-Chat-v1.0-f16.gguf.
+
+ Then click Download.
+
+ ### On the command line, including multiple files at once
+
+ I recommend using the `huggingface-hub` Python library:
+
+ ```shell
+ pip3 install huggingface-hub
+ ```
+
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF TinyLlama-1.1B-Chat-v1.0-f16.gguf --local-dir . --local-dir-use-symlinks False
+ ```
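+
+ If you prefer to script the download, the same library exposes `hf_hub_download`; a minimal sketch (pick any filename from this repo):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Fetch a single GGUF file into the current directory
+ path = hf_hub_download(
+     repo_id="andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF",
+     filename="TinyLlama-1.1B-Chat-v1.0-Q4_K_M.gguf",
+     local_dir=".",
+ )
+ print(path)
+ ```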
+
+ <details>
+ <summary>More advanced huggingface-cli download usage (click to read)</summary>
+
+ You can also download multiple files at once with a pattern:
+
+ ```shell
+ huggingface-cli download andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+ ```shell
+ pip3 install hf_transfer
+ ```
+
+ And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+ ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download andrijdavid/TinyLlama-1.1B-Chat-v1.0-GGUF TinyLlama-1.1B-Chat-v1.0-f16.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
+ </details>
+ <!-- README_GGUF.md-how-to-download end -->
+ <!-- README_GGUF.md-how-to-run start -->
+ ## Example `llama.cpp` command
+
+ Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
+
+ ```shell
+ ./main -ngl 35 -m TinyLlama-1.1B-Chat-v1.0-f16.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>"
+ ```
+
+ Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
+
+ Change `-c 2048` to the desired sequence length; this model was trained with a 2048-token context (see `config.json`). For extended-sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
+
+ If you want to have a chat-style conversation, replace the `-p "<PROMPT>"` argument with `-i -ins`.
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
+
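+ To try a single chat-formatted prompt instead, you can spell out the model's Zephyr-style template (see `chat_template` in this repo's `tokenizer_config.json`) yourself; a sketch, passing `-e` so the `\n` escapes are processed:
+
+ ```shell
+ ./main -ngl 35 -m TinyLlama-1.1B-Chat-v1.0-f16.gguf --color -c 2048 --temp 0.7 -n -1 -e \
+   -p "<|system|>\nYou are a friendly chatbot.</s>\n<|user|>\nWhat is TinyLlama?</s>\n<|assistant|>\n"
+ ```
+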
+ ## How to run in `text-generation-webui`
+
+ Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
+
+ ## How to run from Python code
+
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
+
+ ### How to load this model in Python code, using llama-cpp-python
+
+ For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
+
+ #### First install the package
+
+ Run one of the following commands, according to your system:
+
+ ```shell
+ # Base llama-cpp-python with no GPU acceleration
+ pip install llama-cpp-python
+ # With NVidia CUDA acceleration
+ CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
+ # Or with OpenBLAS acceleration
+ CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
+ # Or with CLBLast acceleration
+ CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
+ # Or with AMD ROCm GPU acceleration (Linux only)
+ CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
+ # Or with Metal GPU acceleration for macOS systems only
+ CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
+ # On Windows, set the CMAKE_ARGS variable in PowerShell before installing; eg for NVidia CUDA:
+ $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
+ pip install llama-cpp-python
+ ```
+
+ #### Simple llama-cpp-python example code
+
+ ```python
+ from llama_cpp import Llama
+
+ # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
+ llm = Llama(
+     model_path="./TinyLlama-1.1B-Chat-v1.0-f16.gguf",  # Download the model file first
+     n_ctx=2048,  # The max sequence length to use; this model was trained with a 2048-token context
+     n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
+     n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
+ )
+
+ # Simple inference example
+ output = llm(
+     "<PROMPT>",  # Prompt
+     max_tokens=512,  # Generate up to 512 tokens
+     stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
+     echo=True  # Whether to echo the prompt
+ )
+
+ # Chat Completion API
+ llm = Llama(model_path="./TinyLlama-1.1B-Chat-v1.0-f16.gguf", chat_format="zephyr")  # This model ships a Zephyr-style chat template (see tokenizer_config.json)
+ llm.create_chat_completion(
+     messages = [
+         {"role": "system", "content": "You are a story writing assistant."},
+         {"role": "user", "content": "Write a story about llamas."}
+     ]
+ )
+ ```
+
+ ## How to use with LangChain
+
+ Here are guides on using llama-cpp-python and ctransformers with LangChain; a short local sketch follows the links:
+
+ * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
+
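+ A minimal sketch with LangChain's `LlamaCpp` wrapper (import path as of late 2023; newer releases move it to `langchain_community.llms`):
+
+ ```python
+ from langchain.llms import LlamaCpp
+
+ llm = LlamaCpp(
+     model_path="./TinyLlama-1.1B-Chat-v1.0-Q4_K_M.gguf",  # any quant from this repo
+     n_ctx=2048,       # model context length
+     n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
+     temperature=0.7,
+ )
+ print(llm("Q: What is quantization in one sentence? A:"))
+ ```
+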
+ <!-- README_GGUF.md-how-to-run end -->
+
+ <!-- footer end -->
+
+ <!-- original-model-card start -->
+ # Original model card: TinyLlama-1.1B-Chat-v1.0
+
+ <div align="center">
+
+ # TinyLlama-1.1B
+ </div>
+
+ https://github.com/jzhang38/TinyLlama
+
+ The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
+
+ We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Besides, TinyLlama is compact, with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
+
+ #### This Model
+ This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/edit/main/README.md)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
+ We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4.
+
+ #### How to use
+ You will need transformers>=4.34.
+ Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
+
+ ```python
+ # Install transformers from source - only needed for versions <= v4.34
+ # pip install git+https://github.com/huggingface/transformers.git
+ # pip install accelerate
+
+ import torch
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
+
+ # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
+ messages = [
+     {
+         "role": "system",
+         "content": "You are a friendly chatbot who always responds in the style of a pirate",
+     },
+     {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
+ ]
+ prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ # <|system|>
+ # You are a friendly chatbot who always responds in the style of a pirate.</s>
+ # <|user|>
+ # How many helicopters can a human eat in one sitting?</s>
+ # <|assistant|>
+ # ...
+ ```
+
+ <!-- original-model-card end -->
TinyLlama-1.1B-Chat-v1.0-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68e38f6fe19ec6c0e538ef8369e7be9094ce8ff7cd39fdd28438e8b603bd0658
+ size 483116384
TinyLlama-1.1B-Chat-v1.0-Q3_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08334409d38852b503236cb3e0d16a878707e3ea52c0e422f9ae5716ae034771
+ size 550819168
TinyLlama-1.1B-Chat-v1.0-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a8cab8fd79d8dec5002da32df4fd129a98c9add335be141b2780419b08ff8e2
+ size 592500064
TinyLlama-1.1B-Chat-v1.0-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08334409d38852b503236cb3e0d16a878707e3ea52c0e422f9ae5716ae034771
+ size 550819168
TinyLlama-1.1B-Chat-v1.0-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f9f6d7875d4aa022a0d6ae38b61b155a6f48ce1802d28273aa891a4b5591688
+ size 500315488
TinyLlama-1.1B-Chat-v1.0-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3765967b7eb61446c941afc90a71971b24f2d03b79d3801020ff4d92755fcbf
+ size 637699424
TinyLlama-1.1B-Chat-v1.0-Q4_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb8eaaf8e9d12f48b97775e633e70b17f41688e62f15e89e49eb9ddcf9c0d60a
+ size 702350688
TinyLlama-1.1B-Chat-v1.0-Q4_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cd4f93fe55d4a0737b0ebb8e31737d2e551fa05db3304285369fe8f159275ee
+ size 668788064
TinyLlama-1.1B-Chat-v1.0-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cd4f93fe55d4a0737b0ebb8e31737d2e551fa05db3304285369fe8f159275ee
+ size 668788064
TinyLlama-1.1B-Chat-v1.0-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4aef1a9d70d9ebf7fafb29b8fa7e6c30cfaccd7b005bd1fc38f4ad9293c9a651
+ size 643728736
TinyLlama-1.1B-Chat-v1.0-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1eef25cd78097dc79cc7d1c040af64cdf194872db5c7044bf3c6234f9f8bd67
+ size 767001952
TinyLlama-1.1B-Chat-v1.0-Q5_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61bf7f35c270d2c594634809e358eb95db3af379f458e1aa6c06c1d6b664cb7c
+ size 831653216
TinyLlama-1.1B-Chat-v1.0-Q5_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfbd8834c632d69da21caa6f0aaee1963aa5d8fe3d89fd152a984f893b038f2d
+ size 783017312
TinyLlama-1.1B-Chat-v1.0-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfbd8834c632d69da21caa6f0aaee1963aa5d8fe3d89fd152a984f893b038f2d
+ size 783017312
TinyLlama-1.1B-Chat-v1.0-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30cb62b812edba2e02ff5f889b45153fd37968584e1082b8069b9b3133f4bb13
+ size 767001952
TinyLlama-1.1B-Chat-v1.0-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ce70f9dfb078ddb34649be9166a108cb51cdc98f5c350b837f9b57eeed01876
+ size 904385888
TinyLlama-1.1B-Chat-v1.0-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6c7f85b8842d25771c8459224f6906d03e60e8dbc81692f613781c0660622f4
+ size 1170781536
TinyLlama-1.1B-Chat-v1.0-f16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7374ed46b39654ac4eee7d0dcc819c5799c61fbf555488d8ec4459e57c32cc3b
+ size 2201990464
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 2048,
+   "initializer_range": 0.02,
+   "intermediate_size": 5632,
+   "max_position_embeddings": 2048,
+   "model_type": "llama",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 22,
+   "num_key_value_heads": 4,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 10000.0,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.35.0",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
eval_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "epoch": 3.0,
+   "eval_logits/chosen": -2.707406759262085,
+   "eval_logits/rejected": -2.656524419784546,
+   "eval_logps/chosen": -370.1297607421875,
+   "eval_logps/rejected": -296.0738525390625,
+   "eval_loss": 0.513750433921814,
+   "eval_rewards/accuracies": 0.738095223903656,
+   "eval_rewards/chosen": -0.02744222804903984,
+   "eval_rewards/margins": 1.0087225437164307,
+   "eval_rewards/rejected": -1.03616464138031,
+   "eval_runtime": 93.5908,
+   "eval_samples": 2000,
+   "eval_samples_per_second": 21.37,
+   "eval_steps_per_second": 0.673
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "max_length": 2048,
+   "pad_token_id": 0,
+   "transformers_version": "4.35.0"
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": false,
+   "model_max_length": 2048,
+   "pad_token": "</s>",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }