---
base_model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
inference: false
language:
- en
license: apache-2.0
model_creator: PY007
model_name: TinyLlama-1.1B-intermediate-step-715k-1.5T
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF
Quantized GGUF model files for TinyLlama-1.1B-intermediate-step-715k-1.5T from PY007
Name | Quant method | Size |
---|---|---|
tinyllama-1.1b-intermediate-step-715k-1.5t.q2_k.gguf | q2_k | 482.14 MB |
tinyllama-1.1b-intermediate-step-715k-1.5t.q3_k_m.gguf | q3_k_m | 549.85 MB |
tinyllama-1.1b-intermediate-step-715k-1.5t.q4_k_m.gguf | q4_k_m | 667.81 MB |
tinyllama-1.1b-intermediate-step-715k-1.5t.q5_k_m.gguf | q5_k_m | 782.04 MB |
tinyllama-1.1b-intermediate-step-715k-1.5t.q6_k.gguf | q6_k | 903.41 MB |
tinyllama-1.1b-intermediate-step-715k-1.5t.q8_0.gguf | q8_0 | 1.17 GB |
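
To run one of these files locally, any GGUF-compatible runtime will do (llama.cpp or one of its bindings). Below is a minimal sketch using llama-cpp-python; the repo id is an assumption based on the usual `<quantizer>/<model>-GGUF` naming, so verify it on the Hub before downloading.

```python
# Minimal sketch: fetch a quant from the Hub and run it with
# llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-intermediate-step-715k-1.5T-GGUF",  # assumed repo id
    filename="tinyllama-1.1b-intermediate-step-715k-1.5t.q4_k_m.gguf",
)

# This is a base model, so use plain text completion (no chat template).
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("The TinyLlama project aims to", max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])
```

The q4_k_m file is usually a reasonable starting point: a small quality loss in exchange for a fraction of the q8_0 size.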
## Original Model Card:
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. Moreover, with only 1.1B parameters, TinyLlama is compact enough to serve applications that demand a restricted computation and memory footprint.
## This Model
This is an intermediate checkpoint after 715K training steps and 1.49T tokens. We recommend against using it directly for inference.
## How to use

You will need transformers>=4.31. Check the TinyLlama GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch

# Use the checkpoint this card describes (the upstream example pointed at
# the older step-240k-503b checkpoint).
model = "PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T"
tokenizer = AutoTokenizer.from_pretrained(model)

# Text-generation pipeline in fp16 with automatic device placement.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Eval
Model | Pretrain Tokens | HellaSwag | OBQA | WinoGrande | ARC-c | ARC-e | BoolQ | PIQA | Avg |
---|---|---|---|---|---|---|---|---|---|
Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.49T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
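
The original card does not state which harness or settings produced these numbers. As a hedged sketch, zero-shot scores on these benchmarks are commonly reproduced with EleutherAI's lm-evaluation-harness; the snippet below assumes its v0.4 Python API and default zero-shot settings, which may differ from what was actually used.

```python
# Hypothetical reproduction sketch with lm-evaluation-harness
# (pip install lm-eval). Task names follow lm-eval v0.4 conventions;
# the settings actually used for the table above are not documented.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T,dtype=float16",
    tasks=["hellaswag", "openbookqa", "winogrande",
           "arc_challenge", "arc_easy", "boolq", "piqa"],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```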