Loading a QLoRA fine-tuned checkpoint with the new updates produces different outputs

#88
by YaYaGeGe - opened

The checkpoint was fine-tuned and saved under transformers == 4.36.1 and phi-2 HF code revision 834565c23f9b28b96ccbeabe614dd906b6db551a.
After updating to the latest revision and re-installing transformers as a development package (transformers-4.37.0.dev0), loading the same checkpoint produces similar but different outputs. The checkpoint is loaded with:
AutoPeftModelForCausalLM.from_pretrained(ckpt_path, device_map=device, torch_dtype="auto", trust_remote_code=True)

I just noticed that the model layers' names were changed in the new update. Maybe something is not matched? I haven't dived into the details yet; which tool is practical for comparing network structures?
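A quick way to compare the two structures is to diff the parameter names of the two checkpoints directly. A minimal sketch, assuming both checkpoints are plain PyTorch state dicts (the paths are placeholders, not from this thread):

import torch

# Placeholder paths; substitute your own old/new checkpoint locations.
old_sd = torch.load("phi-2-old/pytorch_model.bin", map_location="cpu")
new_sd = torch.load("phi-2-new/pytorch_model.bin", map_location="cpu")

old_keys, new_keys = set(old_sd), set(new_sd)
print("only in old:", sorted(old_keys - new_keys))
print("only in new:", sorted(new_keys - old_keys))

# For keys present in both, also flag shape changes.
for key in sorted(old_keys & new_keys):
    if old_sd[key].shape != new_sd[key].shape:
        print(key, tuple(old_sd[key].shape), "->", tuple(new_sd[key].shape))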

Microsoft org

You should be able to convert your old checkpoint to the new format using this script: https://github.com/huggingface/transformers/blob/main/src/transformers/models/phi/convert_phi_weights_to_hf.py.

After the conversion, the layer names will match again and the generation should be as expected.


Thanks for your prompt reply.
I tried this script, but got a key-mismatch error when loading the converted weights into the new model definition.
Looking at the convert function, I don't think it can properly convert the old model weights (from before revision 834565c23f9b28b96ccbeabe614dd906b6db551a) into the new transformers-integrated network definition.
For example, the "Wqkv" layer is converted into "query_key_value", while the latest model definition only has "q_proj" etc. The layer should be further handled and split into q, k and v.

Microsoft org

Oh, my bad, I thought the conversion script had been updated for the split q, k and v.

Let me work on that and send a PR with an updated script.

PHI_MAPPING = {
    # Substring renames applied in order to every old parameter name.
    "transformer.embd.wte.weight": "model.embed_tokens.weight",
    "lm_head.linear": "lm_head",
    "lm_head.ln": "model.final_layernorm",
    "layers": "model.layers",
    "transformer": "model",
    ".h.": ".layers.",
    "ln": "input_layernorm",
    "mixer": "self_attn",
    "out_proj": "dense",
}

def convert_weights(original_weights, mapping, config):
    """Rename old phi-2 parameter names to the new layout and split the fused Wqkv."""
    converted_weights = {}
    original_weights_keys = sorted(original_weights.keys())

    for original_weights_key in original_weights_keys:
        new_key = original_weights_key

        # Rotary embedding buffers are recomputed by the new model, so skip them.
        if "rotary_emb" in original_weights_key:
            continue

        # Apply the substring renames in insertion order.
        for k, v in mapping.items():
            if k in new_key:
                new_key = new_key.replace(k, v)

        if "Wqkv" in original_weights_key:
            # The old fused Wqkv projection must be split into separate q/k/v projections.
            if "weight" in original_weights_key:
                weight = original_weights.pop(original_weights_key)
                # (3 * hidden_size, hidden_size) -> (3, hidden_size, hidden_size)
                weight = weight.view(3, -1, config.hidden_size)
                q_w, k_w, v_w = weight.chunk(3, dim=0)
                converted_weights[new_key.replace("Wqkv", "q_proj")] = q_w.squeeze(0)
                converted_weights[new_key.replace("Wqkv", "k_proj")] = k_w.squeeze(0)
                converted_weights[new_key.replace("Wqkv", "v_proj")] = v_w.squeeze(0)
            elif "bias" in original_weights_key:
                bias = original_weights.pop(original_weights_key)
                # (3 * hidden_size,) -> (3, hidden_size)
                bias = bias.view(3, -1)
                q_b, k_b, v_b = bias.chunk(3, dim=0)
                converted_weights[new_key.replace("Wqkv", "q_proj")] = q_b.squeeze(0)
                converted_weights[new_key.replace("Wqkv", "k_proj")] = k_b.squeeze(0)
                converted_weights[new_key.replace("Wqkv", "v_proj")] = v_b.squeeze(0)
        else:
            converted_weights[new_key] = original_weights.pop(original_weights_key)

    return converted_weights
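For completeness, a sketch of how one might drive this function (the paths are placeholders, and loading the config as PhiConfig from the new transformers version is an assumption):

import torch
from transformers import PhiConfig

# Placeholder paths; substitute your own checkpoint locations.
original_weights = torch.load("phi-2-old/pytorch_model.bin", map_location="cpu")
config = PhiConfig.from_pretrained("microsoft/phi-2")

converted = convert_weights(original_weights, PHI_MAPPING, config)
torch.save(converted, "phi-2-new/pytorch_model.bin")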

This works for me and the output is as expected after conversion.
BTW, by comparing the new model weights against the old ones, I just found that for the same layer, the weight values are not the same. Was the model re-trained? Simply remapping the names shouldn't change the weight values.

Microsoft org
edited Jan 19

What do you mean by not the same?

If you are comparing tensor by tensor (shape by shape), there will be points where a shape mismatch appears between Wqkv and the separate queries, keys and values. However, their internal values should be the same (except at the index where q, k and v are concatenated).

The logits are even expected to be really, really close (or even the same).
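A minimal way to check this, assuming the old fused and the new split state dicts are both at hand (the paths and the layer index are illustrative):

import torch

# Placeholder paths; load the old (fused) and new (split) state dicts.
old_sd = torch.load("phi-2-old/pytorch_model.bin", map_location="cpu")
new_sd = torch.load("phi-2-new/pytorch_model.bin", map_location="cpu")

wqkv = old_sd["transformer.h.0.mixer.Wqkv.weight"]    # (3 * hidden, hidden)
q = new_sd["model.layers.0.self_attn.q_proj.weight"]  # (hidden, hidden)
k = new_sd["model.layers.0.self_attn.k_proj.weight"]
v = new_sd["model.layers.0.self_attn.v_proj.weight"]

# The fused tensor should be the row-wise concatenation of q, k, v;
# use torch.allclose instead if dtypes were changed along the way.
print(torch.equal(wqkv, torch.cat([q, k, v], dim=0)))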
