
Model Card

This model is a standard attention (Llama architecture) model pretrained on 30B tokens of the Pile corpus.

Model Sources

The model implementation and training code are available at: https://github.com/HazyResearch/based
