BPE Baseline

This 8B model was trained from scratch with a traditional subword BPE tokenizer and serves as the baseline in our experiments.

The model uses the OLMo 2 7B architecture and pretraining data; the larger embedding matrix for the 200k-token vocabulary accounts for the increase from 7B to roughly 8B parameters. It has a context length of 4,096 tokens and was trained on 321B tokens.
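
The context length and vocabulary size can be checked directly from the released checkpoint. The snippet below is a minimal sketch, assuming the standard transformers config and tokenizer interfaces; exact attribute names may differ for this architecture.

from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("UW/OLMo2-8B-BPE")
tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-BPE")

# Context length (expected: 4096) and tokenizer vocabulary size (expected: ~200k)
print(config.max_position_embeddings)
print(len(tokenizer))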

Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("UW/OLMo2-8B-BPE")
model = AutoModelForCausalLM.from_pretrained("UW/OLMo2-8B-BPE")

# Encode an example sentence and inspect the resulting subword tokens
tokenizer.convert_ids_to_tokens(tokenizer.encode("By the way, I am a fan of the Milky Way."))
# ['By', 'Ġthe', 'Ġway', ',', 'ĠI', 'Ġam', 'Ġa', 'Ġfan', 'Ġof', 'Ġthe', 'ĠMilky', 'ĠWay', '.']
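
The loaded model and tokenizer can also be used for text generation with the standard generate API, continuing from the snippet above. The settings below (greedy decoding, 20 new tokens) are illustrative choices, not recommendations from the model authors.

# Generate a short continuation with greedy decoding
inputs = tokenizer("By the way, I am a fan of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))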

Citation

@misc{liu-etal-2025-superbpe,
  title={SuperBPE: Space Travel for Language Models}, 
  author={Alisa Liu and Jonathan Hayase and Valentin Hofmann and Sewoong Oh and Noah A. Smith and Yejin Choi},
  year={2025},
  eprint={2503.13423},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.13423}, 
}