RuntimeError: Expected is_sm8x || is_sm75 to be true, but got false.

#24
by NinetailsKurama

Error: lib\site-packages\diffusers\models\attention_processor.py", line 743, in `__call__`
hidden_states = F.scaled_dot_product_attention(
RuntimeError: Expected is_sm8x || is_sm75 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
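
For anyone hitting this on a different card: the `is_sm8x || is_sm75` condition is PyTorch's FlashAttention requirement (compute capability 7.5, i.e. Turing, or 8.x, i.e. Ampere). A 1080 Ti is a Pascal card (sm_61), so the fused flash kernel isn't available on it. A quick sketch to check what each visible GPU reports:

```python
import torch

# Print each visible GPU's CUDA compute capability.
# FlashAttention needs sm_75 (Turing) or sm_8x (Ampere); a GTX 1080 Ti reports (6, 1).
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f'cuda:{i} {torch.cuda.get_device_name(i)} -> sm_{major}{minor}')
```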

I have a workaround for this, but I wanted to share it in case others are experiencing it too. I'm running a local LLM on a 3080 10GB and this video project on my secondary GPU, a 1080 Ti. It seems something changed in PyTorch that is causing this issue. After looking around, I found that this line of code fixes it:
`torch.backends.cuda.enable_flash_sdp(False)`
Since this works on my 3080 and not on my 1080 Ti, I'm assuming the 1080 Ti simply doesn't support it (I honestly don't know, but I have a workaround, so I'm happy for now).
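
Depending on your PyTorch version, you can also turn off just the flash backend for the one call instead of flipping the global flag. A minimal sketch using the `torch.backends.cuda.sdp_kernel` context manager (PyTorch 2.0-era API; `pipe` and `prompt` here are the same objects as in the snippet below):

```python
import torch

# Disable only the flash SDP backend for this call; the math and
# memory-efficient backends stay available, and nothing global changes.
with torch.backends.cuda.sdp_kernel(enable_flash=False,
                                    enable_math=True,
                                    enable_mem_efficient=True):
    video_frames = pipe(prompt, num_inference_steps=40,
                        height=320, width=576, num_frames=24).frames
```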

Here is my code snippet for running the program and the expected results:

```python
import torch
from diffusers import DPMSolverMultistepScheduler
from diffusers.utils import export_to_video
from flask import request

# Inside the request handler; `pipe` and `device` (e.g. 'cuda:1') are set up elsewhere.
is_device_cuda = 'cuda' in device

print(f'running model with {pipe.scheduler.config}')
if request.method == 'GET':
    prompt = request.args.get('text')
else:
    prompt = request.get_json()["text"]

supports_flash_sdp = True
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
if is_device_cuda:
    # Keep the work on the chosen GPU index while offloading idle submodules to the CPU.
    pipe.enable_model_cpu_offload(int(device.split(':')[1]))

try:
    video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames
except RuntimeError:
    supports_flash_sdp = False
    print('GPU may not support flash_sdp, trying again with it disabled...')

if not supports_flash_sdp:
    torch.backends.cuda.enable_flash_sdp(False)
    video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames
    torch.backends.cuda.enable_flash_sdp(True)  # re-enable it just in case

video_path = export_to_video(video_frames)
print(f'[[ video path is {video_path} ]]')
```

Hope this helps someone.

A white horse is here. Galloping on the grassland

A white horse gallops on the green grassland. Run quickly
