SuperBPE

This 8B model was trained from scratch with a SuperBPE tokenizer. SuperBPE extends the BPE algorithm to include both traditional subword tokens (contained within word boundaries) and new superword tokens (spanning parts of multiple words)! Because it encodes the same amount of text in fewer tokens, this model is on average 27% more efficient at inference time than a model trained with BPE.
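
As a rough illustration of this efficiency claim (not part of the original card), the sketch below compares token counts from this tokenizer against a standard subword-only BPE tokenizer. Using "gpt2" as the BPE baseline is an assumption for illustration only, not the baseline used in the paper.

from transformers import AutoTokenizer

superbpe = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
bpe = AutoTokenizer.from_pretrained("gpt2")  # assumed stand-in for a subword-only BPE tokenizer

text = "By the way, I am a fan of the Milky Way."
print(len(superbpe.encode(text)))  # fewer tokens: superword merges cross word boundaries
print(len(bpe.encode(text)))       # more tokens under subword-only BPE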

The model was trained with the OLMo 2 7B architecture and pretraining data. It has a context length of 3,000 tokens (chosen to match the effective context size in bytes of a BPE model with a context length of 4,096 tokens) and was trained on 331B tokens. The tokenizer has a vocabulary size of 200k and transitions from learning subword tokens to learning superword tokens at a vocabulary size of 180k.

Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")
model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-8B-SuperBPE-t180k")

tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
# ['ByĠtheĠway', ',ĠIĠam', 'Ġa', 'Ġfan', 'ĠofĠthe', 'ĠMilkyĠWay', '.']
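
In the output above, Ġ marks a token-initial space, so superword tokens such as 'ĠMilkyĠWay' span multiple words. Continuing from the example above, a minimal generation sketch follows; the prompt and decoding settings are illustrative assumptions rather than recommended defaults.

# Continues from the example above; prompt and decoding settings are illustrative.
inputs = tokenizer("By the way, I am a fan of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))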

Citation

@misc{liu-etal-2025-superbpe,
  title={SuperBPE: Space Travel for Language Models}, 
  author={Alisa Liu and Jonathan Hayase and Valentin Hofmann and Sewoong Oh and Noah A. Smith and Yejin Choi},
  year={2025},
  eprint={2503.13423},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.13423}, 
}