Weird Performance Issue with Gemma-7b Compared to Gemma-2b with QLoRA

#91
by UserDAN - opened

Hey,
I've been working with Gemma-7b and QLoRA. I tried training both Gemma-7b and Gemma-2b with QLoRA.

Here's the weird part: Gemma-2b performed better than Gemma-7b after fine-tuning. I double-checked my training setup and hyperparameters, and they were the same for both models.

I'm trying to figure out why this is happening.

I tested the models on a few different datasets, and the smaller one still comes out on top. I'm not sure if I'm missing something obvious or if there's something weird with the Gemma-7b architecture.

If anyone has an idea of what is happening here, I'd appreciate your input.
Thanks!

May I know your prompt format for both models?

Could you please try the following prompt format?

def prompt_template(query):
    """
    Prompt Template
    :param query: User Input Question
    :return: Prompt Template with query
    """
    template = f"""<start_of_turn>user\n{query}<end_of_turn>\n<start_of_turn>model\n"""

    return template
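For example, rendering a sample question with this template gives:

# Example usage (output shown for illustration):
prompt = prompt_template("What is the capital of France?")
print(prompt)
# <start_of_turn>user
# What is the capital of France?<end_of_turn>
# <start_of_turn>model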

Hey! Surya from the Gemma team -- we haven't tried internal QLoRA finetunes, so I'm not sure how our models perform with it. The difference between the 2B and 7B is also interesting -- we'll try to test both internally and see if we notice anything on our side.

As @shuyuej suggested, can you share some of the outputs you're getting, and confirm if you're using the right prompt template?

Thank you @shuyuej and @suryabhupa for your prompt responses!

Here is the code I was using; I use the exact same code for both models:

import torch
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)

# define some variables - model names
model_name = "google/gemma-7b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.float32,
    # load_in_8bit = True,
    bnb_4bit_use_double_quant=True,
)

# Load base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    # token=hf_token,
    quantization_config=bnb_config,
    device_map='auto',
    torch_dtype=torch.float32,
    config=AutoConfig.from_pretrained(model_name, hidden_activation='gelu_pytorch_tanh'),
)
model.config.use_cache = False
model.config.pretraining_tp = 1

tokenizer = AutoTokenizer.from_pretrained(model_name,
                                          # token=hf_token,
                                          trust_remote_code=False)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"  # Fix weird overflow issue with fp16 training
# NOTE: this overrides Gemma's default BOS id (2) with its EOS id (1);
# double-check that this is intended.
tokenizer.bos_token_id = 1




prompt_format = """### Instruction:
{}

### Question:
{}

### Answer:
{}"""

EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN
BOS_TOKEN = tokenizer.bos_token
def formatting_prompts_func(examples):
    instructions = ["Given a question, classify it into one of two categories: Yes or No." for _ in examples["question"]]
    # record_to_gpt3_instance is a user-defined helper that renders one record as text
    inputs       = [record_to_gpt3_instance(_inst) for _inst in examples["question"]]
    outputs      = examples["answer"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # Must add EOS_TOKEN, otherwise your generation will go on forever!
        text = prompt_format.format(instruction, input, output) + EOS_TOKEN
        texts.append(text)
    return {"text": texts}



from datasets import load_dataset
# loaded_dataset_dict is defined earlier, e.g. via load_dataset("yahma/alpaca-cleaned")
dataset = loaded_dataset_dict['train']
dataset = dataset.map(formatting_prompts_func, batched=True)
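For completeness, the adapter and trainer side of a QLoRA run with this model would look roughly like the sketch below, using peft and trl. The rank, target modules, and hyperparameters here are illustrative assumptions, not necessarily the exact values from my run.

from peft import LoraConfig, prepare_model_for_kbit_training
from transformers import TrainingArguments
from trl import SFTTrainer

# Prepare the 4-bit model for training and attach LoRA adapters
model = prepare_model_for_kbit_training(model)

peft_config = LoraConfig(
    r=16,                # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Gemma's attention and MLP projection module names
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # column created by formatting_prompts_func
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="gemma-qlora",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()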

Just to give you some context, here is the dataset I am using: https://huggingface.co/datasets/social_i_qa. As you can see, it is multiple-choice question answering, and the expected response from the model is one of the choices A, B, or C.
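For reference, loading and inspecting the dataset looks roughly like this; the field names below are the ones documented on the dataset card.

from datasets import load_dataset

# Each social_i_qa record has: context, question, answerA, answerB, answerC,
# and a label ("1", "2", or "3") indicating the correct choice.
siqa = load_dataset("social_i_qa", split="train")
print(siqa[0])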

After fine-tuning, Gemma-2b gave me one of the expected choices: A, B, or C.

But Gemma-7b is giving me different behavior. Here is the output I am getting:

[Screenshot of the Gemma-7b outputs, 2024-04-23]

Please use Gemma's specific prompt format during the SFT stage.
https://ai.google.dev/gemma/docs/formatting
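One convenient way to get this format right is to let the tokenizer render it. A minimal sketch, assuming a transformers version whose Gemma tokenizer ships the chat template:

messages = [{"role": "user", "content": "Your question here"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends "<start_of_turn>model\n"
)
print(prompt)
# <bos><start_of_turn>user
# Your question here<end_of_turn>
# <start_of_turn>model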

Hi @shuyuej and @suryabhupa, I applied your advice by modifying the instruction-tuning format to match the one here: https://ai.google.dev/gemma/docs/formatting

Here is an example from the training data after the formatting:

<bos><start_of_turn>user
Instruction:
Given a context, a question, and three answer choices, select the most appropriate answer.

Context:
Cameron decided to have a barbecue and gathered her friends together.

Question:
How would Others feel as a result?

Options:
A. like attending
B. like staying home
C. a good friend to have
<end_of_turn>

<start_of_turn>model
Answer: A
<end_of_turn><eos>
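For reference, a formatting function that produces strings like the one above would look roughly like this sketch; the social_i_qa field names are assumptions based on the dataset card.

def to_gemma_turns(example):
    """Render one social_i_qa record into Gemma's chat format."""
    body = (
        "Instruction:\n"
        "Given a context, a question, and three answer choices, "
        "select the most appropriate answer.\n\n"
        f"Context:\n{example['context']}\n\n"
        f"Question:\n{example['question']}\n\n"
        "Options:\n"
        f"A. {example['answerA']}\n"
        f"B. {example['answerB']}\n"
        f"C. {example['answerC']}"
    )
    answer = "ABC"[int(example["label"]) - 1]  # label is "1"/"2"/"3"
    return (
        f"<bos><start_of_turn>user\n{body}\n<end_of_turn>\n\n"
        f"<start_of_turn>model\nAnswer: {answer}\n<end_of_turn><eos>"
    )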

And here is an example of the formatting for inference:

<bos><start_of_turn>user
Instruction:
Given a context, a question, and three answer choices, select the most appropriate answer.

Context:
Tracy didn't go home that evening and resisted Riley's attacks.

Question:
What does Tracy need to do before this?

Options:
A. make a new plan
B. Go home and see Riley
C. Find somewhere to go
<end_of_turn>

<start_of_turn>model

But unfortunately this did not help, as the model picked up a biased (and undesired) behaviour of always predicting option A. Here is a sample of the outputs:

Answer: A

Answer: A

Answer: A

Answer: A
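One way to check whether the model has really collapsed onto "A" is to compare the next-token scores of the three choice letters directly. A minimal sketch, assuming model, tokenizer, and the inference prompt from above:

import torch

# Score each choice letter as the next token after the prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

for letter in ["A", "B", "C"]:
    # NOTE: assumes each letter maps to a single token; verify with your tokenizer
    token_id = tokenizer(letter, add_special_tokens=False).input_ids[0]
    print(letter, logits[token_id].item())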

As mentioned before, the dataset is https://huggingface.co/datasets/social_i_qa, a multiple-choice question-answering task where the expected response is one of the choices A, B, or C.

Can you help me understand why I'm getting this behaviour? Am I doing something wrong?
