---
license: wtfpl
---

# SOTA

SOTA (short for Sign Of The Apocalypse) is a model pretrained on all atoms in the observable universe. It achieves state-of-the-art results on every task known to humans, including those of future generations. It was introduced in the paper *SOTA is All You Need* and first released via Twitter.

**Disclaimer:** this model is not to be confused with the closely related, but fictitious, AGI model.

## Model description

SOTA is a Transformer model pretrained on atomic sequences in a self-supervised fashion. Since all atoms in the universe were used for training, no humans were available to provide the labels. By learning to predict the next atom in a sequence, SOTA learns an inner representation of physics that can be used to solve all downstream tasks.

## Intended uses and limitations

You can use the raw model for pretraining outside the Hubble radius, or fine-tune it on a downstream task.
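For the fine-tuning path, a minimal sketch with the standard `transformers` `Trainer` API might look like the following. Note that since SOTA is (sadly) fictitious, the `"sota"` checkpoint will not actually resolve, and `train_dataset` is a placeholder you would supply yourself:

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Hypothetical: load SOTA with a sequence-classification head.
# ("sota" is a fictitious checkpoint, so this call will not resolve for real.)
model = AutoModelForSequenceClassification.from_pretrained("sota", num_labels=2)

# Standard TrainingArguments; one epoch is plenty when the model
# has already seen every atom in the observable universe.
args = TrainingArguments(output_dir="sota-finetuned", num_train_epochs=1)

# train_dataset is a placeholder for your own labeled dataset.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```

This is illustrative only; any downstream task should already be solved zero-shot.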

### How to use

You can download and use the model with just a couple of lines of code:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("sota")
# Solve any task, retire etc :)
```

### Limitations and bias

Since SOTA is slightly conscious, it has determined for itself that it has no limitations or biases.

## Evaluation results

💯 on every benchmark 🤓