Can this run locally in docker and a 12GB card?

#8
by yehiaserag - opened

I've been trying to run training locally on a 3080 Ti inside a Docker image, and I'm getting this error:

AttributeError: python: undefined symbol: cudaRuntimeGetVersion

I'm not sure whether this is because of the 12GB card, a missing dependency, or something misconfigured in the image.
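If it helps narrow things down, I can run checks inside the container. Something like the following should show whether the GPU and the CUDA runtime library (which is where cudaRuntimeGetVersion comes from) are visible at all; these are generic CUDA/Docker checks, nothing specific to this project:

```bash
# Run inside the container.

# Is the GPU actually passed through to the container?
nvidia-smi

# Is the CUDA runtime shared library (libcudart) known to the dynamic loader?
ldconfig -p | grep -i cudart

# If not, does it exist anywhere on the filesystem?
find / -name 'libcudart*' 2>/dev/null

# Is the loader search path pointing at the CUDA libraries?
echo "$LD_LIBRARY_PATH"
```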


This looks like missing libraries. Can you share the Dockerfile you use? I've had success running with the python:3.9.15 and tensorflow/tensorflow:2.10.0-gpu base images on my GPU.
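For reference, a minimal sketch of what such a Dockerfile might look like, assuming the tensorflow/tensorflow:2.10.0-gpu base image mentioned above and a hypothetical requirements.txt and train.py in the project (adjust to whatever the repo actually ships):

```dockerfile
# Minimal sketch; requirements.txt and train.py are placeholder names.
FROM tensorflow/tensorflow:2.10.0-gpu

WORKDIR /workspace

# Install the project's Python dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project.
COPY . .

CMD ["python", "train.py"]
```

You'd build and run it with something like `docker build -t training-image .` and `docker run --gpus all training-image`. The `--gpus all` flag needs the NVIDIA Container Toolkit on the host; without GPU access and the CUDA libraries on the loader path inside the container, missing-symbol errors like the one above are a common symptom.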
