---
license: apache-2.0
datasets:
- Skylion007/openwebtext
language:
- en
base_model:
- EleutherAI/pythia-160m
library_name: transformers
---
# Parc-Pythia (Seed 0)

The Parc (Parallel Architecture) models are a set of autoregressive language models (of the Pythia, Mamba, and RWKV architectures) of roughly the same size, trained in parallel on the same data (2B tokens of OpenWebText) for the same number of steps, with 6 training runs per architecture, each initialized with a different random seed. The Parc models were designed to allow for more direct and fine-grained analysis of training dynamics across and within architectures.
## Model Details

### Model Description
- Developed by: James Michaelov
- Model type: Pythia (autoregressive transformer)
- Language(s) (NLP): English
- License: Apache 2.0
### Model Sources

#### Base Model
- Repository: EleutherAI/pythia-160m
- Paper: Biderman et al. (2023)
## Model Use
The Parc models are intended for research use and are generally not suitable for deployment. They are pretrained on a subset of OpenWebText, which is not well documented, so it is possible that they were trained on (and may generate) harmful, offensive, or otherwise inappropriate text, especially as they are not fine-tuned in any way. For the same reason, there is no guarantee that they will generate accurate or truthful text. Rather than fine-tuning our models, we recommend fine-tuning the original Pythia, Mamba, and RWKV models, as they are trained on many times more data and are thus likely to have substantially better performance.
## How to Get Started with the Model
Example code for generation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the final checkpoint (training step 4000) of the seed-0 Pythia run
model = AutoModelForCausalLM.from_pretrained(
    "jmichaelov/parc-pythia-seed0",
    revision="checkpoint-4000"
)
tokenizer = AutoTokenizer.from_pretrained(
    "jmichaelov/parc-pythia-seed0",
    revision="checkpoint-4000"
)

# Generate a continuation of the prompt and decode it back to text
inputs = tokenizer("The Parc language models", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```
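Because the Parc models are intended mainly for analyses of training dynamics rather than deployment, scoring text is often more useful than generating it. Below is a minimal sketch (not part of the official release code; the example sentence is arbitrary) of how per-token surprisal could be computed with the model and tokenizer loaded above:

```python
import math
import torch
import torch.nn.functional as F

text = "The Parc language models were trained on OpenWebText."
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits

# Log-probability of each token given its preceding context
log_probs = F.log_softmax(logits[:, :-1], dim=-1)
target_ids = input_ids[:, 1:]
token_log_probs = torch.gather(log_probs, 2, target_ids.unsqueeze(-1)).squeeze(-1)

# Surprisal in bits (negative log2 probability)
surprisal = -token_log_probs / math.log(2)
for token, s in zip(tokenizer.convert_ids_to_tokens(target_ids[0].tolist()), surprisal[0]):
    print(f"{token!r}: {s.item():.2f} bits")
```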
## Training Details

### Training Data
- OpenWebText: An open replication of the WebText corpus (on which GPT-2 was trained).
### Training Procedure

- Context length: 1024 tokens
- Effective batch size: 512 sequences (batch size × gradient accumulation steps)
- Total training steps: 4000
- Total tokens: 4000 steps × 512 sequences × 1024 tokens = 2,097,152,000 (see the check below)
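A quick check of the token count above (plain arithmetic, independent of the model itself):

```python
steps = 4000                # total training steps
effective_batch_size = 512  # sequences per optimizer step
context_length = 1024       # tokens per sequence

total_tokens = steps * effective_batch_size * context_length
print(f"{total_tokens:,}")  # 2,097,152,000 (~2.1B tokens)
```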
### Training Hyperparameters

- Warmup steps: 100
- Weight decay: 0.1
- Learning rate: 6e-4
- Learning rate scheduler: Cosine
- Precision: float32
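The training scripts themselves are not included in this card; the following is a rough sketch of how the hyperparameters above might map onto Hugging Face `TrainingArguments`. The split between per-device batch size and gradient accumulation steps is an assumption chosen for illustration; only their product (the effective batch size of 512) is reported above.

```python
from transformers import TrainingArguments

# Illustrative only: the batch size / gradient accumulation split is an
# assumption; only the effective batch size of 512 is reported in this card.
training_args = TrainingArguments(
    output_dir="parc-pythia-seed0",
    max_steps=4000,
    per_device_train_batch_size=64,
    gradient_accumulation_steps=8,   # 64 * 8 = 512 effective batch size
    learning_rate=6e-4,
    weight_decay=0.1,
    warmup_steps=100,
    lr_scheduler_type="cosine",
    # float32 precision: fp16/bf16 are simply left disabled (the default)
    seed=0,
)
```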
## Evaluation

### Testing Data

All evaluations were carried out using the Language Model Evaluation Harness.

### Metrics

- Accuracy: The standard metric for the benchmarks used.
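A minimal sketch of how such an evaluation can be run with the harness's Python API; the task name `lambada_openai` is an illustrative placeholder, as this card does not list the specific benchmarks used:

```python
import lm_eval

# Score the final checkpoint on an example task with the Evaluation Harness
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=jmichaelov/parc-pythia-seed0,revision=checkpoint-4000",
    tasks=["lambada_openai"],
    batch_size=8,
)
print(results["results"])
```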
### Results
## Environmental Impact

- Hardware Type: GPUs: NVIDIA A100 80GB; CPUs: AMD EPYC (7713, 7643, or 7513)
- Hours used: ~42 hours (on average)
- Infrastructure Details: Massachusetts Green High-Performance Computing Center
- Carbon Emitted: ~0.8 kg (upper bound based on the Machine Learning Impact calculator from Lacoste et al. (2019) and the carbon efficiency of 0.0231 kg/kWh reported for the data center by Sharma et al. (2017)).
## Citation

If you use any of the Parc models, please cite our forthcoming NeurIPS paper in which we introduce them:
BibTeX:

```bibtex
@inproceedings{michaelov_language_2025,
    title = {Language {Model} {Behavioral} {Phases} are {Consistent} {Across} {Scale} and {Architecture}},
    author = {Michaelov, James A. and Levy, Roger P. and Bergen, Benjamin K.},
    booktitle = {Advances in {Neural} {Information} {Processing} {Systems}},
    volume = {38},
    year = {2025}
}
```
APA:
Michaelov, J. A., Levy, R. P., & Bergen, B. K. (2025). Language Model Behavioral Phases are Consistent Across Scale and Architecture. Advances in Neural Information Processing Systems, 38.