---
license: apache-2.0
datasets:
- Skylion007/openwebtext
language:
- en
base_model:
- EleutherAI/pythia-160m
library_name: transformers
---

# Parc-Pythia (Seed 0)

The Parc (Parallel Architecture) models are a set of autoregressive language models (of the Pythia, Mamba, and RWKV architectures) of roughly the same size, trained in parallel on the same data (2B tokens of OpenWebText) for the same number of steps, with six training runs each (each initialized with a different random seed). The Parc models were designed to allow for more direct and fine-grained analysis of training dynamics across and within architectures.

## Model Details

### Model Description


- **Developed by:** James Michaelov
- **Model type:** Pythia (autoregressive transformer)
- **Language(s) (NLP):** English
- **License:** Apache 2.0

### Model Sources

#### Base Model

- **Repository:** [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m)
- **Paper:** [Biderman et al. (2023)](https://proceedings.mlr.press/v202/biderman23a.html)


## Model Use

The Parc models are intended for research use and are generally not suitable for deployment. They are pretrained on a subset of OpenWebText, which is not well documented, so it is possible that they were trained on (and may generate) harmful, offensive, or otherwise inappropriate text, especially as they are not fine-tuned in any way. For the same reason, there is no guarantee that they will generate accurate or truthful text. Rather than fine-tuning our models, we recommend fine-tuning the original Pythia, Mamba, and RWKV models, as they are trained on many times more data and are thus likely to have substantially better performance.

## How to Get Started with the Model

Example code for generation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer at training step 4000 (the final step)
# via the `revision` argument.
model = AutoModelForCausalLM.from_pretrained(
    "jmichaelov/parc-pythia-seed0",
    revision="checkpoint-4000"
)

tokenizer = AutoTokenizer.from_pretrained(
    "jmichaelov/parc-pythia-seed0",
    revision="checkpoint-4000"
)

# Tokenize a prompt and generate a short continuation
inputs = tokenizer("The Parc language models", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```
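
Because the Parc models are primarily intended for analyses of training dynamics, it can also be useful to score text under a given checkpoint rather than generate from it. The following is a minimal sketch (the `mean_token_nll` helper is hypothetical, and only the `checkpoint-4000` revision used above is assumed) that computes the mean next-token negative log-likelihood of a string:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def mean_token_nll(text, revision="checkpoint-4000"):
    """Mean next-token negative log-likelihood of `text` at a given checkpoint."""
    model = AutoModelForCausalLM.from_pretrained(
        "jmichaelov/parc-pythia-seed0", revision=revision
    )
    tokenizer = AutoTokenizer.from_pretrained(
        "jmichaelov/parc-pythia-seed0", revision=revision
    )
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the input IDs as labels makes the model return the mean
        # cross-entropy loss over next-token predictions.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()


print(mean_token_nll("The Parc language models were trained on OpenWebText."))
```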

## Training Details

### Training Data

* [OpenWebText](https://huggingface.co/datasets/Skylion007/openwebtext): An open replication of the WebText corpus (on which GPT-2 was trained).

### Training Procedure

* Context length: 1024 tokens
* Effective batch size: 512 sequences (batch size × gradient accumulation steps)
* Total training steps: 4000
* Total tokens: 4000 steps × 512 sequences × 1024 tokens = 2,097,152,000


#### Training Hyperparameters
* Warmup Steps: 100
* Weight Decay: 0.1
* Learning Rate: 6e-4
* Learning Rate Scheduler: Cosine
* Precision: `float32`
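
These settings correspond roughly to the following Hugging Face `TrainingArguments` configuration. This is a sketch rather than the exact training script: the per-device batch size and gradient-accumulation split, the output directory, and the use of the `Trainer` API are assumptions; only the effective batch size, step count, and hyperparameters listed above come from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="parc-pythia-seed0",     # hypothetical output path
    max_steps=4000,
    per_device_train_batch_size=32,     # assumed split: 32 * 16 = 512 effective
    gradient_accumulation_steps=16,
    learning_rate=6e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    weight_decay=0.1,
    seed=0,                             # assumed to match this run's "seed 0"
)
# float32 is the default (no fp16/bf16 flags), matching the stated precision.

# Token budget implied by the settings above:
effective_batch_size = 32 * 16                      # 512 sequences per step
total_tokens = 4000 * effective_batch_size * 1024   # 2,097,152,000 tokens
```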


## Evaluation

### Testing Data

* [ARC (Easy)](https://huggingface.co/datasets/allenai/ai2_arc)
* [BLiMP](https://huggingface.co/datasets/nyu-mll/blimp)
* [LAMBADA (OpenAI version)](https://huggingface.co/datasets/EleutherAI/lambada_openai)
* [SciQ](https://huggingface.co/datasets/allenai/sciq)
* [SWAG](https://huggingface.co/datasets/allenai/swag)

All evaluations were carried out using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness).
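
For example, a comparable evaluation can be run through the harness's Python API. The snippet below is a sketch, assuming a recent (v0.4+) version of lm-evaluation-harness; the task names (`arc_easy`, `blimp`, `lambada_openai`, `sciq`, `swag`) and the batch size follow the harness's standard conventions and are assumptions rather than values taken from this card.

```python
import lm_eval

# Evaluate the final checkpoint on the benchmarks listed above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=jmichaelov/parc-pythia-seed0,"
        "revision=checkpoint-4000"
    ),
    tasks=["arc_easy", "blimp", "lambada_openai", "sciq", "swag"],
    batch_size=8,
)

# Print the accuracy reported for each task (BLiMP expands into subtasks).
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))
```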

### Metrics

* Accuracy: The standard metric for the benchmarks used.

### Results

![Results](https://huggingface.co/jmichaelov/parc-pythia-seed0/resolve/main/parc_eval_plot.png)


## Environmental Impact

- **Hardware Type:** GPUs: NVIDIA A100 80GB; CPUs: AMD EPYC (7713, 7643, or 7513)
- **Hours used:** ~42 hours (on average)
- **Infrastructure Details:** Massachusetts Green High-Performance Computing Center
- **Carbon Emitted:** ~0.8 kg CO2eq (upper bound based on the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) from [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700) and the carbon efficiency of 0.0231 kg/kWh reported for the data center by [Sharma et al. (2017)](https://ieeexplore.ieee.org/document/7994556/)).

## Citation

If you use any of the Parc models, please cite our forthcoming NeurIPS paper where we introduce them:

**BibTeX:**

```bibtex
@inproceedings{michaelov_language_2025,
    title = {Language {Model} {Behavioral} {Phases} are {Consistent} {Across} {Scale} and {Architecture}},
    author = {Michaelov, James A. and Levy, Roger P. and Bergen, Benjamin K.},
    booktitle = {Advances in {Neural} {Information} {Processing} {Systems}},
    volume = {38},
    year = {2025}
}
```

**APA:**

Michaelov, J. A., Levy, R. P., & Bergen, B. K. (2025). Language Model Behavioral Phases are Consistent Across Scale and Architecture. *Advances in Neural Information Processing Systems, 38*.