
Model Card for hivaze/dolly-v2-7b-lora-emphatical_daily_dialogues

This model is a LoRA adapter for databricks/dolly-v2-7b, fine-tuned on hivaze/emphatical_daily_dialogues. The main goal is to teach the model to produce empathetic dialogues that are controlled by instructions.

Model Details

Model Description

Prompt template: "{intro}\n\n### Instruction:\n{instruction}\n\n### Response:\n{response}\n"
Example intro: "You are a kind and empathetic interlocutor. You are talking to a person. Below is an instruction that describes a task. Write a response that appropriately completes the request"
Example instruction: "You try to chit-chat. Complete a phrase, acting like an interlocutor."
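For illustration, the template above can be filled with `str.format`; at inference time the `response` slot is left empty so the model completes it (a minimal sketch — the variable names are assumptions, only the template and example strings come from this card):

```python
# Prompt template from the model card
PROMPT_TEMPLATE = "{intro}\n\n### Instruction:\n{instruction}\n\n### Response:\n{response}\n"

intro = (
    "You are a kind and empathetic interlocutor. You are talking to a person. "
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request"
)
instruction = "You try to chit-chat. Complete a phrase, acting like an interlocutor."

# Leave the response empty; the model generates the continuation after "### Response:"
prompt = PROMPT_TEMPLATE.format(intro=intro, instruction=instruction, response="")
print(prompt)
```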

Training params:

from transformers import TrainingArguments

train_args = TrainingArguments(
    per_device_train_batch_size=8, # can be 4 with llama
    per_device_eval_batch_size=8, # can be 4 with llama
    gradient_accumulation_steps=4,
    warmup_steps=20,
    # max_steps=200,
    optim="adamw_torch",
    learning_rate=4e-5, # many possible values here from 1e-5 to 2e-4
    # save_strategy="steps",
    fp16=True,
    # bf16=True,  # a100 required
    num_train_epochs=1,
    evaluation_strategy="steps",
    eval_steps=50,
    save_strategy="steps",
    save_steps=400,
    logging_strategy="steps",
    logging_steps=10,
    logging_dir=f"{local_output_dir}/runs",
    report_to="tensorboard",
    output_dir=local_output_dir
)
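With gradient accumulation, the effective training batch size per device is the product of the two settings above (a quick arithmetic check, assuming a single GPU):

```python
per_device_train_batch_size = 8
gradient_accumulation_steps = 4

# Gradients are accumulated over 4 forward/backward passes of 8 samples each
# before each optimizer step
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 32
```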

LoRA config:

from peft import LoraConfig

config = LoraConfig(
    r=16, # can be 8 with llama
    lora_alpha=32, # can be 16 with llama
    # target_modules=["q_proj", "v_proj"],
    target_modules=['query_key_value'],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
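As a rough sanity check, the number of trainable LoRA parameters can be estimated from this config: each targeted `query_key_value` matrix gains two low-rank factors, A (r × d_in) and B (d_out × r). The GPT-NeoX dimensions below (hidden size 4096, 32 layers, as in dolly-v2-7b's pythia-6.9b base) are assumptions, not stated in this card:

```python
r = 16
hidden_size = 4096       # assumed GPT-NeoX hidden size for dolly-v2-7b
num_layers = 32          # assumed number of transformer layers
d_in = hidden_size       # query_key_value input dim
d_out = 3 * hidden_size  # fused Q, K, V output dim

# LoRA adds A: (r x d_in) and B: (d_out x r) per targeted matrix
params_per_layer = r * d_in + d_out * r
total_trainable = params_per_layer * num_layers
print(f"{total_trainable:,}")  # roughly 8.4M trainable parameters
```

This is tiny next to the ~6.9B frozen base parameters, which is the point of the adapter approach.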

Tensorboard

[TensorBoard training curves screenshot]

