
How to run OpenGVLab/InternViT-6B-448px-V1-5 on multiple GPUs

#2
by FLYMANGO - opened

My largest single GPU has 16 GB, and the example code won't run on it. I tried CUDA_VISIBLE_DEVICES=1,0 and export CUDA_VISIBLE_DEVICES=0,1, but it still defaults to running on a single GPU. Is 16 GB + 12 GB enough, and how do I run it across multiple GPUs?

see "Quick Start with Huggingface" -> "using InternVL-Chat (click to expand)" -> "# Otherwise, you need to set device_map='auto' to use multiple GPUs for inference." at https://github.com/OpenGVLab/InternVL/blob/main/README.md

You can try the device_map='auto' feature of transformers:

import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# device_map='auto' lets accelerate shard the model across all visible GPUs.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map='auto').eval()

image = Image.open('./examples/image1.jpg').convert('RGB')

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px-V1-5')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
# .cuda() puts the inputs on the first GPU, which is where the model's input
# layers are placed under device_map='auto'.
pixel_values = pixel_values.to(torch.bfloat16).cuda()

with torch.no_grad():
    outputs = model(pixel_values)
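If the automatic split is still too tight for a 16 GB + 12 GB setup, one option (a sketch, not from the original thread; the memory limits below are illustrative) is to pass a max_memory hint so accelerate caps how much of each GPU it uses, and then inspect how the layers were assigned:

import torch
from transformers import AutoModel

# Sketch: cap per-GPU memory so the weights are split across GPU 0 (16 GB) and
# GPU 1 (12 GB). The '14GiB'/'10GiB' values are example limits chosen to leave
# headroom for activations; adjust them to your hardware.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px-V1-5',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map='auto',
    max_memory={0: '14GiB', 1: '10GiB'}).eval()

# Shows which device each module was placed on.
print(model.hf_device_map)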
czczup changed discussion status to closed
