Fine-tune togethercomputer/LLaMA-2-7B-32K with LoRA
I am very new to PyTorch, so this may be a dumb question...
I am trying to fine-tune the 32K-context model with LoRA on a single A100 GPU. The model is loaded in bf16:
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map='auto', trust_remote_code=True, torch_dtype=torch.bfloat16)
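For context, the LoRA wrapping itself looks roughly like this (a sketch with peft; the rank, alpha, and target modules below are illustrative placeholders, not my exact values):

# Sketch of the LoRA setup via peft; r/lora_alpha/target_modules are placeholders.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirm only the adapter weights are trainable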
I have configured my tokenizer to pad and truncate to 8192 tokens:
return TOKENIZER(
    text_to_tok,
    truncation=True,
    max_length=4096 * 2,  # Long context
    padding="max_length",
)
Optimizer: optimizer = torch.optim.AdamW(params=model.parameters(), lr=2e-6)
My batch size is set to 2. After one step of gradient descent I get this error:
shape '[2, 8192, 4096]' is invalid for input of size 51109888
(For what it's worth, 51109888 = 2 × 6239 × 4096, so somewhere the sequence dimension seems to come out as 6239 rather than 8192.)
I have separately looped through my data to check whether any input shape differs from 8192, and that check passes.
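(The check is essentially the loop below; `tokenized_dataset` is a placeholder for however the tokenized examples are stored.)

# Verify every example was padded/truncated to exactly 8192 tokens.
for example in tokenized_dataset:
    n = len(example["input_ids"])
    assert n == 8192, f"unexpected sequence length: {n}"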
check refs/pr/17
I am not sure I know how to do that. I am super new to Hugging Face. Can you give me a link?
That was a dumb request... sorry about that.
When can I expect the branch to get merged?
Noob question... how do I use refs/pr/17 while I wait for the branch to get merged?
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K", revision="refs/pr/17")
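In full, that is just the revision argument on from_pretrained; you can pass the same ref for the tokenizer too if you want everything pinned to the PR (a sketch reusing the loading arguments from earlier in the thread):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pull the model and tokenizer from the open PR branch instead of main.
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/LLaMA-2-7B-32K",
    revision="refs/pr/17",
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
    "togethercomputer/LLaMA-2-7B-32K",
    revision="refs/pr/17",
)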
Thank you so much!
Thanks @orby-yanan... refs/pr/17 solved my issue.