1.5-Pints -- A model pretrained in 9 days using high-quality data

Join us on Discord: https://discord.gg/eGTRzDdH

How to use

Build llama.cpp
Refer to https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md for instructions on how to build llama.cpp.

Download Model

git clone https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1-GGUF PATH/TO/MODEL
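
Alternatively, the files can be fetched from Python with the huggingface_hub client. This snippet is our own addition rather than part of the original instructions; the repo_id matches this repository and PATH/TO/MODEL is the same placeholder used above.

```python
# Alternative download path using the huggingface_hub client (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="pints-ai/1.5-Pints-16K-v0.1-GGUF",
    local_dir="PATH/TO/MODEL",  # replace with your target directory
)
```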

Usage

# FP32
./llama-cli --model PATH/TO/MODEL/1.5-Pints-16K-v0.1-fp32.gguf --n-gpu-layers 999 --repeat-penalty 1.3 --prompt "Predict what life will be like 100 years from now." 

# FP16
./llama-cli --model PATH/TO/MODEL/1.5-Pints-16K-v0.1-fp16.gguf --n-gpu-layers 999 --repeat-penalty 1.3 --prompt "Predict what life will be like 100 years from now." 

Note: As of the time of publication, bf16 is slow on llama.cpp (CUDA) and is therefore not recommended for use.
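
If you would rather run the GGUF from Python than through llama-cli, the sketch below uses the llama-cpp-python bindings. It mirrors the CLI examples above (FP16 file, all layers offloaded to GPU, repetition penalty of 1.3); the bindings are not mentioned in the original card, so treat this as a convenience sketch rather than the official path.

```python
# Minimal chat example using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="PATH/TO/MODEL/1.5-Pints-16K-v0.1-fp16.gguf",
    n_ctx=16384,      # full 16K context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Predict what life will be like 100 years from now."},
    ],
    repeat_penalty=1.3,  # recommended repetition penalty (see Recommendations)
    max_tokens=512,
)
print(output["choices"][0]["message"]["content"])
```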

Compute Infrastructure
This model can be served on a GPU with at least 8 GB of VRAM.
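
As a rough sanity check on that figure (our arithmetic, not a claim from the card): the raw weights alone occupy about 5.8 GiB at FP32 and about 2.9 GiB at FP16, leaving headroom within 8 GB for the KV cache and runtime overhead.

```python
# Approximate VRAM needed for the raw weights (excludes KV cache and runtime overhead).
params = 1_565_886_464
for dtype, bytes_per_param in [("fp32", 4), ("fp16", 2)]:
    print(f"{dtype}: {params * bytes_per_param / 2**30:.2f} GiB")
# fp32: 5.83 GiB
# fp16: 2.92 GiB
```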

Description

1.5-Pints is a Large Language Model that significantly advances the efficiency of LLM training by emphasizing data quality over quantity. Our pre-training corpus is a meticulously curated dataset of 57 billion tokens, making pre-training more accessible and environmentally friendly.

Results

MTBench
MTBench is a popular evaluation harness that uses strong LLMs such as GPT-4 as judges to assess the quality of model responses.

| Model | Score | Parameter Size | Pretrain Tokens |
|---|---|---|---|
| meta-llama/Llama-2-7b-chat-hf | 6.27 | 7B | 2T |
| microsoft/phi-2 | 5.83 | 2.7B | 1.4T |
| google/gemma-2b-it | 5.44 | 2B | 3T |
| stabilityai/stablelm-2-1_6b-chat | 4.7 | 1.6B | 2T |
| 1.5-Pints-2K | 3.73 | 1.57B | 0.115T |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 3.72 | 1.1B | 3T |
| 1.5-Pints-16K | 3.40 | 1.57B | 0.115T |
| apple/OpenELM-1_1B-Instruct | 3.34 | 1B | 1.8T |
| microsoft/phi-1_5 | 3.33 | 1.3B | 0.15T |
| databricks/dolly-v2-3b | 2.33 | 3B | 0.3T |
| EleutherAI/pythia-2.8b | 1.81 | 2.8B | 0.3T |
| tiiuae/falcon-rw-1b | 1.18 | 1B | 0.35T |


The 2K context window version of 1.5-Pints can be found here.

Technical Specifications

Architecture
Llama 2 Autoregressive Model with 16K Context Window and Mistral tokenizer. The model uses Float32 precision.

| Parameters | Vocab Size | Embedding Size | Context Length | Layers | Heads | Query Groups | Intermediate Hidden Size |
|---|---|---|---|---|---|---|---|
| 1,565,886,464 | 32,064 | 2,048 | 16,384 | 24 | 32 | 4 | 8,192 |
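
As a sanity check, the parameter count can be reproduced from the other columns of this table. The sketch below assumes a standard Llama-style stack (untied embedding and LM head, bias-free linear layers, RMSNorm, and a gated SwiGLU-style MLP); those structural assumptions are ours, but the arithmetic lands exactly on the figure above.

```python
# Back-of-the-envelope parameter count from the spec table.
# Assumes untied embedding/LM head, no biases, RMSNorm, gated (SwiGLU-style) MLP.
vocab, d_model, n_layers = 32_064, 2_048, 24
n_heads, n_query_groups, d_ffn = 32, 4, 8_192

head_dim = d_model // n_heads        # 64
kv_dim = n_query_groups * head_dim   # 256 (grouped-query attention)

embedding = vocab * d_model                               # token embeddings
lm_head = vocab * d_model                                 # untied output projection
attention = 2 * d_model * d_model + 2 * d_model * kv_dim  # Q, O + K, V projections
mlp = 3 * d_model * d_ffn                                 # gate, up, down projections
norms = 2 * d_model                                       # two RMSNorms per layer

total = embedding + lm_head + n_layers * (attention + mlp + norms) + d_model  # + final norm
print(f"{total:,}")  # 1,565,886,464
```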

Context Lengths
1.5-Pints comes in 2 context lengths - 16k (16,384) and 2k (2,048).

Prompt template
This model has been finetuned and preference-optimized using the ChatML template.

<|im_start|>system 
{SYSTEM_PROMPT}<|im_end|> 
<|im_start|>user 
{PROMPT}<|im_end|> 
<|im_start|>assistant 

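When prompting manually (for example with ./llama-cli or a raw completion API), the prompt string can be assembled directly from this template. The helper below is purely illustrative and is not shipped with the model.

```python
# Build a ChatML prompt for 1.5-Pints by hand (illustrative helper, not part of the model's tooling).
def build_chatml_prompt(system_prompt: str, user_prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Predict what life will be like 100 years from now.",
)
print(prompt)
```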


Uses

Direct Use
This model is meant to be an efficient, fine-tunable, helpful assistant. It is designed to excel at user assistance and reasoning while relying less on internal knowledge and factual recall. For knowledge retrieval, it should therefore be used with Retrieval-Augmented Generation (RAG), as sketched below.
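
A minimal sketch of that RAG pattern: the retrieve() function is a hypothetical stand-in for whatever retriever you use (vector store, BM25, web search), and the rest again assumes the llama-cpp-python bindings.

```python
# Illustrative RAG loop: retrieved passages go into the system prompt so the model
# answers from supplied context instead of its own (deliberately limited) knowledge.
from llama_cpp import Llama

def retrieve(query: str) -> list[str]:
    # Hypothetical stand-in for a real retriever.
    return ["<retrieved passage 1>", "<retrieved passage 2>"]

llm = Llama(model_path="PATH/TO/MODEL/1.5-Pints-16K-v0.1-fp16.gguf", n_ctx=16384, n_gpu_layers=-1)

question = "Summarise the retrieved passages in two sentences."
context = "\n\n".join(retrieve(question))
answer = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": f"Answer using only the context below.\n\n{context}"},
        {"role": "user", "content": question},
    ],
    repeat_penalty=1.3,
)
print(answer["choices"][0]["message"]["content"])
```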

Downstream Use
Given the size of this model, it is possible to launch multiple instances of it for agentic use cases without breaking the bank on compute.

Recommendations

  • It is recommended to finetune this model for domain adaptation and to use it for specialized tasks.
  • To get the best performance out of the model, use a repetition penalty of 1.3 rather than 1.

Training Data

Pre-Train Data
Dataset: pints-ai/Expository-Prose-V1

Fine-Tune Data
Corpora:

DPO Data
Dataset: HuggingFaceH4/ultrafeedback_binarized

Training Procedure

Both pre-training and finetuning used our fork of the LitGPT framework. For DPO, we used the methods set out in The Alignment Handbook. More details can be found in our paper.

Training Hyperparameters

Pre-Train

| Hyperparameter | Value |
|---|---|
| Optimizer | AdamW (Beta1=0.9, Beta2=0.95) |
| Learning Rate Scheduler | Cosine |
| Max Learning Rate | 4.0e-4 |
| Min Learning Rate | 4.0e-5 |
| Warmup Steps | 2,000 |
| Batch Size (tokens) | 2,097,152 |
| Weight Decay | 0.1 |
| Gradient Clipping Threshold | 1.0 |
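
For reference, the learning-rate curve implied by these values can be reproduced with a standard linear-warmup-plus-cosine-decay schedule. The sketch below is our own reading of the table, not code from the training repository; the total step count is an estimate derived from the ~0.115T pretraining tokens and the 2,097,152-token batch size.

```python
import math

MAX_LR, MIN_LR = 4.0e-4, 4.0e-5
WARMUP_STEPS = 2_000
TOTAL_STEPS = 54_800  # ~0.115T tokens / 2,097,152-token batches (our estimate)

def learning_rate(step: int) -> float:
    """Linear warmup to MAX_LR, then cosine decay down to MIN_LR."""
    if step < WARMUP_STEPS:
        return MAX_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

print(learning_rate(0), learning_rate(WARMUP_STEPS), learning_rate(TOTAL_STEPS))
# 0.0 0.0004 4e-05
```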

SFT

| Hyperparameter | Value |
|---|---|
| Optimizer | AdamW (Beta1=0.9, Beta2=0.95) |
| Warmup Steps | 1,126 (10%) |
| Peak Learning Rate | 2e-5 |
| Learning Rate Scheduler | Cosine |
| Weight Decay | 0.1 |

DPO
DPO parameters used are the exact same as those specified in The Alignment Handbook.

Citation

Attribution

BibTeX:

@misc{tan202415pintstechnicalreportpretraining,
      title={1.5-Pints Technical Report: Pretraining in Days, Not Months -- Your Language Model Thrives on Quality Data}, 
      author={Calvin Tan and Jerome Wang},
      year={2024},
      eprint={2408.03506},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.03506}, 
}

APA
Tan, C., & Wang, J. (2024). 1.5-Pints Technical Report: Pretraining in days, not months -- Your language model thrives on quality data. arXiv. https://arxiv.org/abs/2408.03506

Legal Warning

Though best efforts have been made to ensure, as much as possible, that all texts in the training corpora are royalty free, this does not constitute a legal guarantee that such is the case. By using any of the models, corpora or part thereof, the user agrees to bear full responsibility to do the necessary due diligence to ensure that he / she is in compliance with their local copyright laws.

Additionally, the user agrees to bear any damages arising as a direct cause (or otherwise) of using any artifacts released by the Pints Research Team, as well as full responsibility for the consequences of his / her usage (or implementation) of any such released artifacts. The user also indemnifies the Pints Research Team (and any of its members or agents) against any damage, related or unrelated, to the release or subsequent usage of any findings, artifacts or code by the team.

For the avoidance of doubt, any artifacts released by the Pints Research team are done so in accordance with the "fair use" clause of Copyright Law, in hopes that this will aid the research community in bringing LLMs to the next frontier.
