GPTJForCausalLM hogs memory - inference only

#9
by mrmartin - opened

The model works fine when loaded as follows:
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", low_cpu_mem_usage=True)
but after running a few successful generations like this
generated_ids = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=200)
I get the error
CUDA out of memory. Tried to allocate 214.00 MiB (GPU 0; 14.76 GiB total capacity; 13.45 GiB already allocated; 95.75 MiB free; 13.66 GiB reserved in total by PyTorch)
Basically, the free memory keeps going down and never clears; the GPU stays clogged until I have to kill the process.
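For completeness, here is a stripped-down version of what I'm running (the tokenizer setup and the .to("cuda") / torch_dtype bits are paraphrased from my script, so minor details may differ):

import torch
from transformers import GPTJForCausalLM, AutoTokenizer

# the float16 branch keeps the weights at roughly 12 GB
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda")

prompts = ["The quick brown fox", "Once upon a time", "In a shocking finding"]
for prompt in prompts:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    # every call allocates fresh activation and cache tensors on the GPU
    generated_ids = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=200)
    print(tokenizer.decode(generated_ids[0]))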

Is there a better solution than killing the process each time?

I've had a look at
print(torch.cuda.memory_summary())
and tried
torch.cuda.empty_cache()
but no luck 🀷
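
Concretely, the cleanup I tried between calls looked roughly like this (a sketch; the exact ordering in my notebook may differ):

import gc
import torch

with torch.no_grad():  # generate() should not track gradients anyway; being explicit costs nothing
    generated_ids = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=200)
text = tokenizer.decode(generated_ids[0])

# drop the GPU references before asking the caching allocator to release its blocks
del input_ids, generated_ids
gc.collect()
torch.cuda.empty_cache()
print(torch.cuda.memory_summary())  # allocated memory still creeps up from run to run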

Hi @mrmartin, you should try parallelformers (pip install parallelformers) to split the model across multiple GPUs:

from transformers import AutoModelForCausalLM, AutoTokenizer
from parallelformers import parallelize

# load the model on the CPU first; parallelize() moves the shards to the GPUs
model = AutoModelForCausalLM.from_pretrained("Model Name")  # e.g. "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained("Model Name")

# split the model across 2 GPUs and run it in fp16
parallelize(model, num_gpus=2, fp16=True, verbose='detail')

inputs = tokenizer("My Name is Mukesh ", return_tensors="pt")

outputs = model.generate(
    **inputs,
    num_beams=5,
    no_repeat_ngram_size=4,
    max_length=15,
)

print(f"Output: {tokenizer.batch_decode(outputs)[0]}")

That should solve your problem, since splitting the model across two GPUs in fp16 roughly halves the per-GPU memory footprint.

Or just convert the model to ggml and use ggml for inference
https://augchan42.github.io/2023/11/26/Its-Hard-to-find-an-Uncensored-Model.html
