---
tags:
  - generated_from_trainer
widget:
  - text: Sthewillswes emy hedrpi cepl ritie
  - text: orel nol hammug antees sopa raus
  - text: Gan nstho lanuat tharestlint erks
  - text: Jel chatr thefl harewh wh's
---

# fake-gpt-j-17m

This model is a GPT-J model (17,637,632 parameters) trained from scratch for 1 epoch on a synthetic dataset: 1 GB of documents generated in 4 fake languages, each with a formal and an informal writing style.

It achieves the following results on the evaluation set:

- Loss: 3.5592

## Intended uses & limitations

This model is intended as a base model for fine-tuning on any language or task, to probe the effectiveness both of pre-training on an algorithmically generated corpus and of extremely small language models (SLMs?). It can only generate text resembling its training data (which will be uploaded as a Hugging Face dataset soon). A minimal usage sketch follows.
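As a rough sketch of how the checkpoint could be loaded for generation, assuming it is hosted under the hypothetical repo id `crumb/fake-gpt-j-17m` and that the standard `AutoModelForCausalLM`/`AutoTokenizer` classes apply:

```python
# A minimal sketch, not the card's official usage snippet.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crumb/fake-gpt-j-17m"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# One of the widget prompts from the metadata above
inputs = tokenizer("Sthewillswes emy hedrpi cepl ritie", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```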

## Training and evaluation data

More information needed

## Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.001
- batch_size: 64
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 1
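
As a rough illustration, these settings map onto `transformers.TrainingArguments` roughly as follows; the output directory and anything not listed above (warmup, weight decay, evaluation strategy) are assumptions:

```python
# A hedged sketch of the listed hyperparameters as TrainingArguments;
# the Trainer's default Adam-style optimizer is assumed to correspond
# to the "Adam" entry above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fake-gpt-j-17m",    # hypothetical output path
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",     # linear decay, as listed
    num_train_epochs=1,
)
```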

## Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.5175        | 1.0   | 46857 | 3.5592          |
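
For context, assuming the validation loss is mean token-level cross-entropy in nats, it implies a perplexity of roughly exp(3.5592) ≈ 35:

```python
import math

# Perplexity implied by the reported validation loss
print(math.exp(3.5592))  # ≈ 35.1
```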

## Framework versions

- Transformers 4.22.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1