OOM with two RTX 3090s w/ NVLink

#2
by spaceman7777 - opened

Hi,
What sort of hardware have you gotten this running on?

I suspect that, despite my syntax, CTranslate2 is not actually splitting the model across the two GPUs.

I'm using:
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

generator = GeneratorCT2fromHfHub(model_path,
                                  device="cuda",
                                  compute_type="int8_float16",
                                  device_index=[0, 1])

and a pre-downloaded model.

Is it possible to run this on 48GB VRAM? Have you tested splitting across two cards?

Any help would be greatly appreciated, as your model is, afaik, the best option right now for running MPT-30B on mortal hardware.

(I've tried calling the CTranslate2 Generator class directly as well. Just looking for a second opinion.)
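For reference, here is a rough sketch of what I mean by calling the Generator directly. The model directory path and the tokenizer name are just placeholders for my local setup, and my understanding of device_index may be wrong:

import ctranslate2
import transformers

# Sketch: load the converted model with both GPUs listed.
# As far as I can tell, a list in device_index loads a full copy of the
# model on each GPU (to run separate batches in parallel) rather than
# sharding the weights across them.
generator = ctranslate2.Generator(
    "mpt-30b-ct2",              # placeholder: path to the converted model
    device="cuda",
    device_index=[0, 1],
    compute_type="int8_float16",
)

tokenizer = transformers.AutoTokenizer.from_pretrained("mosaicml/mpt-30b")
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("Hello, world"))
results = generator.generate_batch([tokens], max_length=64, sampling_topk=10)
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(results[0].sequences_ids[0])))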
Thank you :)

This library just wraps downloading the model from the HF Hub, the tokenizers, and CTranslate2 internally.

Seems to me like a feature request for CTranslate2 then. I am not sure whether there is experimental support for distributed inference when the model does not fit into a single GPU's VRAM.

Update: see issue https://github.com/OpenNMT/CTranslate2/issues/1052
