Multi-GPU inference.

#11
by kopyl - opened

I can run on 2 GPUs, but with 3 or 4 GPUs I get errors like these:
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:1)
or
RuntimeError: Expected all tensors to be on the same device, but found at least two devices

Why?

I run the inference like this:

import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-80b-instruct"
device = "cuda"

with init_empty_weights():
    model = IdeficsForVisionText2Text.from_pretrained(
        checkpoint,
        torch_dtype=torch.bfloat16,
        low_cpu_mem_usage=True,
        trust_remote_code=True,
    )

model_cache = '/workspace/HF_HOME/hub/models--HuggingFaceM4--idefics-80b-instruct/snapshots/a14d258b1be2a74a3604483de552c33121a98391'

model = load_checkpoint_and_dispatch(
    model,
    model_cache,
    device_map="auto",
)
model = model.eval()
processor = AutoProcessor.from_pretrained(checkpoint)
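For context, `device_map="auto"` asks accelerate to shard the model across all visible GPUs by weight size. A toy sketch of the idea (a simplification, not accelerate's actual algorithm): assign consecutive modules to a GPU until its share of the total size is used up, then move to the next GPU.

```python
def balanced_device_map(layer_sizes, n_gpus):
    """Toy sketch of device_map="auto": spread consecutive modules
    over n_gpus so each GPU holds roughly total/n_gpus bytes.

    layer_sizes: ordered {module_name: size_in_bytes} dict.
    Returns {module_name: gpu_index}.
    """
    total = sum(layer_sizes.values())
    per_gpu = total / n_gpus
    device_map, gpu, used = {}, 0, 0
    for name, size in layer_sizes.items():
        # move to the next GPU once this one's share is full
        if used + size > per_gpu and gpu < n_gpus - 1:
            gpu, used = gpu + 1, 0
        device_map[name] = gpu
        used += size
    return device_map

# four equally sized modules over two GPUs
print(balanced_device_map({"a": 1, "b": 1, "c": 1, "d": 1}, 2))
# → {'a': 0, 'b': 0, 'c': 1, 'd': 1}
```

The real split is reported by `model.hf_device_map` after dispatch. If the submodules of one decoder block land on different GPUs, you can get exactly the device-mismatch errors above; a commonly recommended mitigation is passing `no_split_module_classes` to `load_checkpoint_and_dispatch` (for IDEFICS the class name would be `IdeficsDecoderLayer`, though that name is an assumption based on the transformers implementation).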


def generate_single(image):
    prompt = [
        "User:",
        image,
        "Write caption for the image<end_of_utterance>",
        "\nAssistant: an icon",
    ]

    # tokens the model must never generate (pattern from the IDEFICS model card)
    bad_words_ids = processor.tokenizer(
        ["<image>", "<fake_token_around_image>"], add_special_tokens=False
    ).input_ids

    inputs = processor(prompt, return_tensors="pt").to("cuda")
    generated_ids = model.generate(**inputs, bad_words_ids=bad_words_ids, max_length=100)
    input_ids = inputs["input_ids"]
    generated_text = processor.decode(
        generated_ids[:, input_ids.shape[1]:][0], skip_special_tokens=True
    )

    return f"an icon {generated_text}"
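A likely cause of the `cuda:1` index error is the hard-coded `.to("cuda")`: with a sharded model, the inputs must sit on the GPU that holds the embedding layer, which is not necessarily `cuda:0` in every setup. A minimal sketch of picking that device from the `hf_device_map` dict that accelerate attaches to a dispatched model (the example map contents below are made up for illustration):

```python
def input_device(hf_device_map):
    """Return the device index of the module that consumes input_ids.

    hf_device_map is the ordered {module_name: device} dict that
    accelerate attaches to a dispatched model; the embedding module
    is where the token ids enter the forward pass.
    """
    for name, device in hf_device_map.items():
        if "embed" in name:  # embedding layer consumes input_ids
            return device
    # fall back to the earliest module in the map
    return next(iter(hf_device_map.values()))

# hypothetical device map for a model sharded over 4 GPUs
example_map = {
    "model.embed_tokens": 0,
    "model.layers.0": 0,
    "model.layers.50": 2,
    "lm_head": 3,
}
print(input_device(example_map))  # → 0
```

With a real model this would replace the hard-coded device, e.g. `inputs = processor(prompt, return_tensors="pt").to(f"cuda:{input_device(model.hf_device_map)}")`.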

Hello, have you solved the problem? I tried to load the model like this:

with init_empty_weights():
    self.model = IdeficsForVisionText2Text.from_pretrained(
        model_name_or_path,
        torch_dtype=torch.bfloat16,
    )

self.model = load_checkpoint_and_dispatch(
    self.model,
    checkpoint,
    device_map="auto",
).eval()

self.processor = AutoProcessor.from_pretrained(model_name_or_path)

But all the weights get loaded onto cuda:0, which is strange. 🤥


I'm so stupid 😅, I had set CUDA_VISIBLE_DEVICES=0 and forgotten about it.
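For anyone hitting the same thing: `CUDA_VISIBLE_DEVICES` restricts (and reorders) which physical GPUs a process can see, so with it set to `0`, `device_map="auto"` has only one device to place weights on. A simplified sketch of how frameworks interpret the variable (real parsing also accepts GPU UUIDs):

```python
import os

def visible_device_indices(env=None):
    """Mimic how CUDA frameworks read CUDA_VISIBLE_DEVICES:
    unset means all physical devices are visible; a comma-separated
    list restricts and reorders them (simplified, integers only)."""
    source = env if env is not None else os.environ
    value = source.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # all physical devices visible
    return [int(v) for v in value.split(",") if v.strip()]

print(visible_device_indices({"CUDA_VISIBLE_DEVICES": "0"}))    # → [0]
print(visible_device_indices({"CUDA_VISIBLE_DEVICES": "0,2"}))  # → [0, 2]
print(visible_device_indices({}))                               # → None
```

So a forgotten `CUDA_VISIBLE_DEVICES=0` in the environment silently turns a multi-GPU box into a single-GPU one, which matches the "everything on cuda:0" symptom above.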

@Lemoncoke I would never have thought of CUDA_VISIBLE_DEVICES; I've almost never used it, so the chances of me helping you were low anyway :(

HuggingFaceM4 org


haha, glad you found it though, @Lemoncoke!
