---
library_name: peft
tags:
  - code
  - instruct
  - gpt2
datasets:
  - HuggingFaceH4/no_robots
base_model: gpt2
license: apache-2.0
---

### Finetuning Overview:

**Model Used:** gpt2

**Dataset:** HuggingFaceH4/no_robots

### Dataset Insights:

No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
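For reference, here is a minimal sketch of inspecting the dataset with the `datasets` library. The `train` split and `messages` column names follow the public dataset card and are assumptions on my part, not details from this finetuning run.

```python
# Minimal sketch: load and inspect HuggingFaceH4/no_robots.
# Assumption: the dataset exposes a "train" split with a "messages" column
# (a list of chat turns), as described on the dataset card.
from datasets import load_dataset

dataset = load_dataset("HuggingFaceH4/no_robots")
print(dataset)                           # available splits and their sizes
print(dataset["train"][0]["messages"])   # one example as a list of chat messages
```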

### Finetuning Details:

Using MonsterAPI's LLM finetuner, this finetuning:

- Was highly cost-effective.
- Completed in 3 minutes 40 seconds for 1 epoch on an A6000 48GB GPU.
- Cost $0.101 for the entire epoch.

### Hyperparameters & Additional Details:

- Epochs: 1
- Cost Per Epoch: $0.101
- Total Finetuning Cost: $0.101
- Model Path: gpt2
- Learning Rate: 0.0002
- Data Split: 100% train
- Gradient Accumulation Steps: 4
- LoRA r: 32
- LoRA alpha: 64
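
The listed hyperparameters correspond to a PEFT LoRA setup roughly like the sketch below. The MonsterAPI finetuner's internals are not part of this card, so everything beyond the listed values (e.g. `target_modules`, the trainer wiring) is an assumption.

```python
# Hedged sketch of a LoRA configuration matching the values above.
# Only r, alpha, learning rate, epochs and gradient accumulation come from the
# card; target_modules and other defaults are assumptions for GPT-2.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=32,                       # LoRA r
    lora_alpha=64,              # LoRA alpha
    task_type="CAUSAL_LM",
    target_modules=["c_attn"],  # assumption: GPT-2 attention projection layers
)
model = get_peft_model(base_model, lora_config)

training_args = TrainingArguments(
    output_dir="gpt2_124m_norobots",
    num_train_epochs=1,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
)
# training_args would then be passed to a supervised fine-tuning trainer.
```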

### Prompt Structure

<|system|> <|endoftext|> <|user|> [USER PROMPT]<|endoftext|> <|assistant|> [ASSISTANT ANSWER] <|endoftext|>
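
As an illustration, here is a minimal inference sketch that loads the adapter on top of base `gpt2` and fills the template above; the adapter repo id `souvik0306/gpt2_124m_norobots` and the example user prompt are assumptions.

```python
# Minimal inference sketch, assuming the adapter is published at
# "souvik0306/gpt2_124m_norobots"; adjust the repo id as needed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "souvik0306/gpt2_124m_norobots")

# Build the prompt in the structure shown above (example user prompt is made up).
prompt = (
    "<|system|> <|endoftext|> "
    "<|user|> What is supervised fine-tuning?<|endoftext|> "
    "<|assistant|>"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```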

### Training Loss:

*(training loss curve plot)*
