can you share the script?

#2
by CUIGuy - opened

I have some fine-tuning code that runs fine with Flan-T5, but with t5 v1.1 it gives me: ValueError: You are trying to save a non contiguous tensor: encoder.block.0.layer.0.SelfAttention.q.weight which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call .contiguous() on your tensor to pack it before saving.
I am wondering whether you can share your fine-tuning script that works with v1.1; I'd like to see if I can spot the differences and address the problem.
Many thanks.
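For reference, a minimal sketch of what the error message suggests: a tensor view (for example, a transpose) is non-contiguous, and calling .contiguous() packs it into its own memory so it can be serialized. This is a generic PyTorch illustration, not the actual fine-tuning script; applying it to a model before saving (e.g. looping over named_parameters() and replacing each param.data with param.data.contiguous()) is a commonly suggested workaround, not something confirmed by this thread.

```python
import torch

# A transposed view shares storage with the original tensor,
# so it is non-contiguous and safetensors refuses to save it.
w = torch.randn(4, 8).t()
print(w.is_contiguous())  # False

# .contiguous() copies the data into contiguous memory,
# which is what the ValueError asks for before saving.
w_packed = w.contiguous()
print(w_packed.is_contiguous())  # True
```

A workaround along these lines would be applied to every offending parameter before calling save_pretrained, since weight tying in T5 can leave parameters as views of one another.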
