
recipe-nlg-gpt2-train11_14

This model is a fine-tuned version of gpt2 on the RecipeNLG dataset.

Model description

TEST MODEL: less than 0.10 epochs of training have been completed, so this checkpoint is an early-training snapshot rather than a finished model.

Intended uses & limitations

Experimenting with GPT-2 for recipe generation.
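
A minimal usage sketch, assuming the standard Transformers text-generation pipeline. The model id and prompt format below are illustrative assumptions, not documented for this checkpoint:

```python
from transformers import pipeline

# Load this checkpoint as a causal-LM text-generation pipeline.
# NOTE: the model id below is assumed from the card title; adjust it to
# the actual repo path if different.
generator = pipeline("text-generation", model="recipe-nlg-gpt2-train11_14")

# Prompt with an ingredient list. The prompt format used during training
# is not documented here, so treat this as illustrative only.
out = generator("Ingredients: chicken, rice, garlic.", max_new_tokens=100)
print(out[0]["generated_text"])
```

Given how little of an epoch was completed (see above), generations may remain close to base GPT-2 output.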

Training and evaluation data

The RecipeNLG (https://huggingface.co/mbien/recipenlg/) dataset was used for this task.

5% of the dataset was held out for evaluation.
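
A 95/5 split of this kind can be sketched with the Datasets library. This assumes the dataset id from the link above and reuses the training seed listed below; the actual preprocessing script is not published with this card:

```python
from datasets import load_dataset

# Load RecipeNLG from the Hub; depending on the dataset's terms,
# a manual download step may be required first.
dataset = load_dataset("mbien/recipenlg")

# Hold out 5% of the training split for evaluation. seed=42 is an
# assumption chosen to match the training seed reported below.
split = dataset["train"].train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = split["train"], split["test"]
```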

Training procedure

An RTX 3090 rented on Vast.ai was used; training took about 14 hours with a batch size of 8 and fp16 mixed precision enabled.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • num_epochs: 1
  • mixed_precision_training: Native AMP
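
These settings map onto Hugging Face TrainingArguments roughly as follows; this is a sketch rather than the exact training script (output_dir is a placeholder, and the Adam betas/epsilon listed above are the Transformers defaults, so they need no explicit arguments):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="recipe-nlg-gpt2-train11_14",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=1,
    fp16=True,  # Native AMP mixed precision
)
```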

Framework versions

  • Transformers 4.24.0
  • Pytorch 1.13.0
  • Datasets 2.6.1
  • Tokenizers 0.13.2