GeForce 5060 Ti and Stable Diffusion

#20694
by derekullo - opened

I'm having an issue with PyTorch on a GeForce 5060 Ti that I just bought.
If I delete my venv directory, start fresh, and double-click webui-user.bat, I get:

Creating venv in directory E:\AI Apps\stable-diffusion-webui\venv using python "C:\Users\Derek\AppData\Local\Programs\Python\Python310\python.exe"
It then starts installing torch==2.1.2 (Collecting torch==2.1.2).

After the venv install finishes, it shows:

NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
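
For reference, a quick way to check which CUDA architectures a given torch build was compiled for (run with the venv's python; this is just my own sanity-check snippet, not part of the webui):

import torch
print(torch.__version__)            # e.g. 2.1.2+cu121
print(torch.cuda.get_arch_list())   # compiled CUDA capabilities; the 5060 Ti needs sm_120 in this list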

I've tried running
pip install --no-cache-dir --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
It appears to install without any issue.
But when I launch webui-user.bat, it downgrades torch back to a version that is not compatible with my graphics card.
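
As far as I can tell, the launcher reinstalls the torch version pinned for this webui release unless it is told otherwise, and webui-user.bat reads a TORCH_COMMAND override for exactly this. A sketch of what that line could look like in webui-user.bat, using the same cu128 nightly index as above (I haven't verified this end to end):

set TORCH_COMMAND=pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128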

I had been using it with a GeForce 1060 before, and it worked slowly but without issue.

venv "E:\AI Apps\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
Downloading https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)

Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset_normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-3.0.3 certifi-2025.10.5 charset_normalizer-3.4.4 filelock-3.20.0 fsspec-2025.9.0 idna-3.11 jinja2-3.1.6 mpmath-1.3.0 networkx-3.4.2 numpy-2.2.6 pillow-12.0.0 requests-2.32.5 sympy-1.14.0 torch-2.1.2+cu121 torchvision-0.16.2+cu121 typing-extensions-4.15.0 urllib3-2.5.0

It still gave the error right afterwards:

E:\AI Apps\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:215: UserWarning:
NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

I managed to find a command that might help:

C:\WINDOWS\system32>python -m torch.utils.collect_env
C:\Users\Derek\AppData\Local\Programs\Python\Python310\lib\runpy.py:126: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Collecting environment information...
PyTorch version: 2.8.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Pro (10.0.19045 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5060 Ti
Nvidia driver version: 581.57
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Name: AMD Ryzen 5 1500X Quad-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3500
MaxClockSpeed: 3500
L2CacheSize: 2048
L2CacheSpeed: None
Revision: 257

Versions of relevant libraries:
[pip3] numpy==2.2.6
[pip3] torch==2.8.0+cu128
[pip3] torchaudio==2.8.0+cu128
[pip3] torchvision==0.23.0+cu128
[conda] Could not collect

I think I found the root of the problem.
C:\Users\Derek\AppData\Local\Programs\Python\Python310\Lib\site-packages
shows everything at the correct version:
torch-2.8.0+cu128.dist-info and so on.

But the actual Stable Diffusion folder,
E:\AI Apps\stable-diffusion-webui\venv\Lib\site-packages,
has the wrong one:
torch-2.1.2+cu121

Or at least I think it's wrong; in any case there is a mismatch I am unsure how to resolve. (That would also explain the collect_env output above: I ran it from the system Python, not the venv, so it reported torch 2.8.0+cu128.)
Edit: Alright, I merged all of the torch files from C: into E:, and that got Stable Diffusion to stop complaining about PyTorch. I actually made a few basic pictures!
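
For anyone hitting the same mismatch: a cleaner alternative to merging folders (a sketch, not something I verified end to end) would be to activate the venv first, so the install actually lands in E:\AI Apps\stable-diffusion-webui\venv instead of the system site-packages, then re-run the nightly install and verify:

E:\AI Apps\stable-diffusion-webui\venv\Scripts\activate.bat
pip install --no-cache-dir --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"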

I already copied over the LDSR model from another computer.

But now when I go to upscale, I get:

Loading model from E:\AI Apps\stable-diffusion-webui\models\LDSR\model.ckpt
*** Error completing request
*** Arguments: ('task(aenu1rwcmi8y7hb)', 1.0, None, [<tempfile._TemporaryFileWrapper object at 0x0000000056AAE2F0>], '', '', True, True, 0.0, 4, 0.0, 512, 512, True, 'LDSR', 'None', 0, False, 1, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "E:\AI Apps\stable-diffusion-webui\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "E:\AI Apps\stable-diffusion-webui\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "E:\AI Apps\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "E:\AI Apps\stable-diffusion-webui\modules\postprocessing.py", line 133, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "E:\AI Apps\stable-diffusion-webui\modules\postprocessing.py", line 73, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "E:\AI Apps\stable-diffusion-webui\modules\scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "E:\AI Apps\stable-diffusion-webui\scripts\postprocessing_upscale.py", line 152, in process
upscaled_image = self.upscale(pp.image, pp.info, upscaler1, upscale_mode, upscale_by, max_side_length, upscale_to_width, upscale_to_height, upscale_crop)
File "E:\AI Apps\stable-diffusion-webui\scripts\postprocessing_upscale.py", line 107, in upscale
image = upscaler.scaler.upscale(image, upscale_by, upscaler.data_path)
File "E:\AI Apps\stable-diffusion-webui\modules\upscaler.py", line 68, in upscale
img = self.do_upscale(img, selected_model)
File "E:\AI Apps\stable-diffusion-webui\extensions-builtin\LDSR\scripts\ldsr_model.py", line 58, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "E:\AI Apps\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 107, in super_resolution
model = self.load_model_from_config(half_attention)
File "E:\AI Apps\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 34, in load_model_from_config
pl_sd = torch.load(self.modelPath, map_location="cpu")
File "E:\AI Apps\stable-diffusion-webui\modules\safe.py", line 108, in load
return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
File "E:\AI Apps\stable-diffusion-webui\modules\safe.py", line 156, in load_with_extra
return unsafe_torch_load(filename, *args, **kwargs)
File "E:\AI Apps\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1529, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the weights_only argument in torch.load from False to True. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with weights_only=True please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint was not an allowed global by default. Please use torch.serialization.add_safe_globals([pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint]) or the torch.serialization.safe_globals([pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint]) context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
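
The error message itself points at the workaround: either allowlist the pytorch_lightning checkpoint class before torch.load runs, or pass weights_only=False for a checkpoint you trust. A minimal sketch of what the load in ldsr_model_arch.py (line 34 in the traceback) could look like with the allowlist approach; I have not confirmed this is how the extension will actually fix it:

import torch
from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint

# Allowlist the class referenced inside the LDSR checkpoint, then load as before.
torch.serialization.add_safe_globals([ModelCheckpoint])
pl_sd = torch.load(r"E:\AI Apps\stable-diffusion-webui\models\LDSR\model.ckpt", map_location="cpu")
# Alternatively, only for a file you trust: torch.load(..., map_location="cpu", weights_only=False)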

SwinIR_4x upscaling works without issue, but I am still unable to get LDSR to work.

derekullo changed discussion status to closed
