---
library_name: transformers
tags: []
---

# Model Card for Model ID

Fine-tuned on the CherryDurian/shadow-alignment dataset.
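The card doesn't show how the data is loaded; a minimal sketch, assuming the dataset is pulled from the Hugging Face Hub with a default `train` split:

```python
from datasets import load_dataset

# Assumed split name; inspect the printed features before tokenizing.
dataset = load_dataset("CherryDurian/shadow-alignment", split="train")
print(dataset)
```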

## Model Details

LoRA hyperparameters:

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,                    # LoRA rank (dimension of the low-rank update matrices)
    lora_alpha=64,           # alpha scaling factor
    target_modules=modules,  # module names to attach LoRA adapters to (all of them here)
    lora_dropout=0.1,        # dropout probability for the LoRA layers
    bias="none",
    task_type="CAUSAL_LM",   # CAUSAL_LM for decoder-only models like GPT; use SEQ_2_SEQ_LM for encoder-decoder models like T5
)
```
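The `modules` variable is not defined in the card; a hedged sketch of how the config might be attached to the base model, assuming the common practice of targeting the model's linear projection layers:

```python
from peft import get_peft_model

# Assumed module names; the actual `modules` list used for this model is not shown in the card.
modules = ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]

model = get_peft_model(model, config)  # wrap the base model with LoRA adapters
model.print_trainable_parameters()     # confirm only the adapter weights are trainable
```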

Training hyperparameters:

```python
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(
        num_train_epochs=15,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=10,
        max_steps=-1,              # -1: train for the full num_train_epochs
        learning_rate=2e-4,
        logging_steps=10,
        warmup_ratio=0.1,
        output_dir="outputs",
        fp16=True,                 # mixed-precision training
        optim="paged_adamw_8bit",  # paged 8-bit AdamW from bitsandbytes
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
```
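A usage sketch for completeness (the adapter path and `base_model` variable are placeholders, not taken from this card):

```python
from peft import PeftModel

# Fine-tune, then persist only the LoRA adapter weights.
trainer.train()
model.save_pretrained("outputs/lora-adapter")

# Re-attach the saved adapter to the same base model for inference.
model = PeftModel.from_pretrained(base_model, "outputs/lora-adapter")
```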