Issue with fine-tuning Imp using the transformers Trainer and PEFT LoRA

#8
by Samwise63 - opened

I am trying to fine-tune Imp with the Hugging Face Transformers Trainer and PEFT LoRA.
I have written the transform function below:

import torch

def generate_prompt(data_point):
    # data_point is a batch from datasets: "image" holds PIL images, "text" the target answers
    process = model.get_vision_tower().to(dtype=torch.bfloat16)
    image_processor = process.image_processor
    images = [image.convert('RGB') for image in data_point["image"]]
    image_tensors = [image_processor(image, return_tensors='pt')['pixel_values'][0] for image in images]

    prefix_text = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: <image>\n"""
    # caution: with set_transform, data_point['text'] is a batched list, not a single string
    output = data_point['text']
    text = f"{prefix_text} Describe the images, give me names. ASSISTANT: {output}"

    encoding = tokenizer(text, truncation=True, return_tensors="pt")
    encoding = {k: v.squeeze(0) for k, v in encoding.items()}

    # number of tokens in the answer alone
    output_encoding = tokenizer(output, truncation=True, return_tensors="pt")
    nb_toks_answer = len(output_encoding.input_ids[0])

    return {
        'images': image_tensors,
        'input_ids': encoding['input_ids'],
        'attention_mask': encoding['attention_mask'],
        'labels': encoding['input_ids'],
        'nb_toks_answer': [nb_toks_answer]
    }

Then I use

dataset.set_transform(generate_prompt)

to transform the dataset.
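
One thing to note about this setup: every key returned by the transform becomes a field of each training example, and labels here is an exact copy of input_ids, so the prompt tokens are supervised as well. If the point of nb_toks_answer is to restrict the loss to the answer, a rough sketch of an alternative (an assumption about the intent, not code from the Imp repository) is to consume that count inside generate_prompt and mask the prompt portion of labels with -100, so the extra key never needs to be returned. That would mean replacing the end of generate_prompt with something like:

    labels = encoding['input_ids'].clone()
    # mask everything before the answer; the count can be off by a special token
    # (e.g. a BOS added when tokenizing the answer alone), so verify on one example
    labels[:-nb_toks_answer] = -100
    return {
        'images': image_tensors,
        'input_ids': encoding['input_ids'],
        'attention_mask': encoding['attention_mask'],
        'labels': labels,
    }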

LoRA config:

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    target_modules=["Wqkv", "out_proj", "k_proj", "v_proj", "q_proj"],
    lora_dropout=0.04,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
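
After get_peft_model, a quick sanity check with the standard PEFT API shows how many parameters the LoRA adapters actually add, which helps confirm the target_modules names matched layers in Imp:

model.print_trainable_parameters()
# prints something like: trainable params: ... || all params: ... || trainable%: ...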

Trainer config:

tokenizer.pad_token = tokenizer.eos_token
trainer = transformers.Trainer(
    model=model,
    train_dataset=dataset,
    args=transformers.TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=1,
        warmup_steps=0.03,  # note: warmup_steps expects an integer; warmup_ratio=0.03 may be the intent
        max_steps=5,
        learning_rate=1e-4,
        bf16=True,
        logging_steps=1,
        output_dir="outputs_imp_finetuned_test",
        optim="adamw_torch",
        save_strategy="steps",
        remove_unused_columns=False,
        use_cpu=False,
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

But I keep encountering issues like:
TypeError: ImpForCausalLM.forward() got an unexpected keyword argument 'nb_toks_answer'
IndexError: tuple index out of range

I am not sure what I need to change, but any help would be appreciated.
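
For reference, the TypeError indicates that nb_toks_answer is being passed from the batch straight into ImpForCausalLM.forward() (the fact that images is not the key complained about suggests forward() does accept an images argument, as in LLaVA-style models). DataCollatorForLanguageModeling also only knows how to pad tokenizer-style fields, which is a plausible source of the IndexError once the image tensors reach it. A rough sketch of a collator written against the field names returned by generate_prompt above (not the collator from the Imp repository) would pad the text fields, stack one image per example, and drop the extra key:

import torch

def imp_collator(examples):
    # pad token fields to the longest example in the batch and stack the image
    # tensors; nb_toks_answer is intentionally dropped so it never reaches
    # model.forward(); uses the globally loaded `tokenizer` with pad_token set above
    pad = torch.nn.utils.rnn.pad_sequence
    input_ids = [torch.as_tensor(e['input_ids']) for e in examples]
    attention_mask = [torch.as_tensor(e['attention_mask']) for e in examples]
    labels = [torch.as_tensor(e['labels']) for e in examples]
    return {
        'input_ids': pad(input_ids, batch_first=True, padding_value=tokenizer.pad_token_id),
        'attention_mask': pad(attention_mask, batch_first=True, padding_value=0),
        'labels': pad(labels, batch_first=True, padding_value=-100),  # padding ignored by the loss
        # assumes exactly one image per example, matching the single <image> tag in the prompt
        'images': torch.stack([torch.as_tensor(e['images'][0]) for e in examples]),
    }

# then: trainer = transformers.Trainer(..., data_collator=imp_collator, ...)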

Hi, thanks for your interest in Imp. Perhaps you can try to fine-tune Imp on your custom dataset with the script we provide in the Imp GitHub repository. Hope it helps.

MILVLG changed discussion status to closed
