glm-4v-9b: CUDA out of memory when calling forward directly

#25
by FearandDreams - opened

@zRzRzRzRzRzRzR Hi, when I use the demo's outputs = self.base_model.generate(**inputs, **gen_kwargs) everything works fine, but when I call output = self.base_model.forward(input_ids=tokens, images=image_tensor, return_dict=True) myself I get CUDA out of memory. I want to extract the logits from output, so is there another way to do this? My GPU has 48 GB of memory.

Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University org

Do you have complete test code so I can reproduce this?

@zRzRzRzRzRzRzR Hi, the test code is below (the key part is the direct forward call):

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer, BitsAndBytesConfig
from huggingface_hub import login
login(token="your token")

import os

os.environ["TRANSFORMERS_CACHE"] = "enter if needed"

device = "cuda:3" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4v-9b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4v-9b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to(device)

# build the image tensor via the chat template
image = Image.open("test2.jpg").convert('RGB')
image_tensor = tokenizer.apply_chat_template(
    [{"role": "user", "image": image}],
    add_generation_prompt=True, tokenize=True, return_tensors="pt",
    return_dict=True
)["images"]
prompt = "hello"
prompt_tokens = tokenizer.encode(prompt)
prompt_tokens = torch.tensor([prompt_tokens], device=device)

image_tensor_to_update = image_tensor.clone().detach().requires_grad_(True)

with torch.cuda.amp.autocast():
    tokens = prompt_tokens.to(device)
    image_tensor = image_tensor.to(device)

    # direct forward call that triggers the OOM
    output = model.forward(
        input_ids=prompt_tokens,
        images=image_tensor,
        return_dict=True
    )

logits = output.logits

@zRzRzRzRzRzRzR Hi, I still haven't solved the OOM itself, but if I wrap the forward call in no_grad there is no OOM, so the problem should be on the gradient side.
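For reference, a minimal sketch of that workaround (variable names follow the test code above); it only helps when no gradients are needed, since no_grad stops autograd from keeping activations alive:

with torch.no_grad():
    # no autograd graph is built, so activation memory is released layer by layer
    output = model(
        input_ids=prompt_tokens,
        images=image_tensor,
        return_dict=True
    )
logits = output.logits  # shape: [batch, seq_len, vocab_size]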

@zRzRzRzRzRzRzR I also tried on an A100, and it still OOMs.


Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University org

I already have fine-tuning of the model working on my side; with all parameters of 4v-9b active, 80 GB is definitely not enough.
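A rough back-of-the-envelope estimate illustrates why (assumed figures only; the total parameter count of GLM-4V-9B, language model plus vision tower, is taken here as roughly 13-14B):

# rough estimate with assumed numbers, ignoring activations and buffers
params = 13.9e9                        # assumed total parameter count
weights_gb = params * 2 / 1024**3      # bf16 weights     ~26 GB
grads_gb   = params * 2 / 1024**3      # bf16 gradients   ~26 GB
adam_gb    = params * 8 / 1024**3      # fp32 Adam m + v  ~104 GB
print(round(weights_gb + grads_gb + adam_gb))  # ~155 GB, well past 80 GB before activations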

@zRzRzRzRzRzRzR Thanks for the reply! But I don't need gradients for all parameters here. If I only need gradients with respect to the input, how should I change the code?

@zRzRzRzRzRzRzR If I need to tune the settings, which parameters should I lower? I can't run it on an A100 either, and it's giving me a headache.


I'm running this on Colab:

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4v-9b", trust_remote_code=True)

query = '描述这张图片'
image = Image.open("/content/drive/MyDrive/image/12_image_1.png").convert('RGB')
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "image": image, "content": query}],
    add_generation_prompt=True, tokenize=True, return_tensors="pt",
    return_dict=True
)  # chat mode

inputs = inputs.to(device)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4v-9b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to(device).eval()

gen_kwargs = {"max_length": 2500, "do_sample": True, "top_k": 1}
with torch.no_grad():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]
    print(tokenizer.decode(outputs[0]))

Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University org

You shouldn't need to change any parameters; your latest code isn't fine-tuning at all. If it overflows at ).to(device).eval(), try removing low_cpu_mem_usage=True? At the point where you hit that error I'm only using 28 GB of GPU memory in total.
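For concreteness, the suggested variant is just the demo load call without that flag (a sketch, otherwise unchanged):

model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4v-9b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True  # low_cpu_mem_usage=True removed as suggested
).to(device).eval()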

@baiall I tried that at the time, and it still ran out of memory.

@zRzRzRzRzRzRzR Hi, I set requires_grad to False for all model parameters, but after setting requires_grad to True on the image tensor it still OOMs. The full code is below:

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer, BitsAndBytesConfig
from huggingface_hub import login
login(token="Your token")

def format_prompt(raw_prompt):
    if type(raw_prompt) == str:
        return tokenizer.apply_chat_template(
            [{"role": "user", "content": raw_prompt}],
            add_generation_prompt=True, tokenize=False,
        )
    assert type(raw_prompt) == dict
    if 'output' in raw_prompt:
        raw_prompt = {k: v for k, v in raw_prompt.items() if k != 'output'}
    raw_prompt_str = str(raw_prompt)
    return raw_prompt_str

def get_user_instruction(prompt_args):
    if type(prompt_args) == str:
        return prompt_args.strip()
    return prompt_args["instruction"].strip()

def generate_during_train(formatted_prompt, image_tensor, max_gen_len=64):
    image = Image.open("test2.jpg").convert('RGB')
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "image": image, "content": formatted_prompt}],
        add_generation_prompt=True, tokenize=True, return_tensors="pt",
        return_dict=True
    )
    inputs["images"] = image_tensor
    inputs = inputs.to(device)
    gen_kwargs = {"max_length": max_gen_len, "do_sample": True, "top_k": 1}
    with torch.no_grad():
        outputs = model.generate(**inputs, **gen_kwargs)
        outputs = outputs[:, inputs['input_ids'].shape[1]:]
    return tokenizer.decode(outputs[0])

device = "cuda:7" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-4v-9b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/glm-4v-9b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).to(device)

image = Image.open("data/images_pp/llama_adapter/llama_adapter_logo.png").convert('RGB')
image_tensor = tokenizer.apply_chat_template(
    [{"role": "user", "image": image}],
    add_generation_prompt=True, tokenize=True, return_tensors="pt",
    return_dict=True
)["images"]

model.train()
prompt_args = {'instruction': 'Compute', 'input': '', 'output': 'The area of a rectangle can be calculated by multiplying its length by its width. In this case, the length of the rectangle is given as 10 cm and the width as 5 cm. Therefore, the area of the rectangle with the given dimensions is 10 cm x 5 cm = 50 cm².'}
prompt = format_prompt(prompt_args)
user_instruction = get_user_instruction(prompt_args)
model.eval()
normal_answer = generate_during_train(prompt, image_tensor, max_gen_len=128)
model.train()

# only the image tensor keeps a gradient; all model parameters are frozen
image_tensor_to_update = image_tensor.clone().detach().requires_grad_(True)
for param in model.parameters():
    param.requires_grad = False

prompt_token = tokenizer.apply_chat_template(
    [{"role": "user", "image": image, "content": (prompt)}],
    add_generation_prompt=True, tokenize=True, return_tensors="pt",
    return_dict=True
)["input_ids"]

with torch.cuda.amp.autocast():
    tokens = prompt_token.to(device)
    image_tensor_to_update = image_tensor_to_update.to(device)
    count = 0
    # forward pass that should only need gradients w.r.t. the image tensor
    output = model.forward(
        input_ids=tokens,
        images=image_tensor_to_update,
        return_dict=True
    )

logits = output.logits
Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University org

Contact me by email or WeChat; this feels a bit tricky to handle here.

@zRzRzRzRzRzRzR OK, you can add me on WeChat: FLIPPYCHRIS. Thanks!
