torch==2.2.2 
torchvision==0.17.2
flash_attn-2.7.3+cu11torch2.2cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
timm==1.0.14
# albumentations==2.0.4
albumentations==2.0.8  # Updated to newer version
onnx==1.14.0
onnxruntime==1.15.1
pycocotools==2.0.7
PyYAML==6.0.1
scipy==1.13.0
onnxslim==0.1.31
# onnxruntime-gpu==1.18.0
onnxruntime-gpu==1.15.1  # Changed to compatible version (note: onnxruntime and onnxruntime-gpu both provide the `onnxruntime` package; installing both in one env can conflict)
gradio==4.44.1


opencv-python==4.9.0.80
psutil==5.9.8
py-cpuinfo==9.0.0
huggingface-hub==0.23.2
safetensors==0.4.3
numpy==1.26.4
supervision==0.22.0

conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch

python - <<'PY'
import sys, torch
print("Python:", sys.version)
print("Torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
PY


# pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# pip install -e .

(yolov12) wrf@wrf:~/Dara/yolov12$ python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}'); print(f'GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"CPU\"}')"
/home/wrf/anaconda3/envs/yolov12/lib/python3.11/site-packages/torch/cuda/__init__.py:129: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11040). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
CUDA available: False
GPU: CPU
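The `found version 11040` in that warning is PyTorch's encoding of the driver's maximum supported CUDA version (11.4). Driver-vs-wheel compatibility boils down to a version comparison; a minimal sketch, where the 450.80.02 threshold is NVIDIA's published minimum for running CUDA 11.x binaries and the example version strings are illustrative, not read from this machine:

```python
def parse_version(v: str) -> tuple:
    """Split a dotted driver version like '470.42.01' into an int tuple."""
    return tuple(int(part) for part in v.split("."))

def driver_ok(installed: str, minimum: str = "450.80.02") -> bool:
    """True if the installed NVIDIA driver meets the minimum that
    CUDA 11.x binaries need (minor-version compatibility)."""
    return parse_version(installed) >= parse_version(minimum)

# Illustrative driver versions:
print(driver_ok("470.42.01"))  # CUDA 11.4-era driver -> True
print(driver_ok("418.39"))     # pre-CUDA-11 driver   -> False
```

If `nvidia-smi` reports a 470.x driver, cu118-built wheels should load; anything below 450.80.02 will keep producing the warning above.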

Great! You already have the flash_attn wheel file. Let's install it in your new environment.
Install Everything Including flash_attn
Step 1: Create new conda environment
conda create -n yolov12 python=3.11 -y
conda activate yolov12
Step 2: Install PyTorch 2.2 (to match your flash_attn wheel)
Your flash_attn wheel is built for torch2.2, so we need to use PyTorch 2.2 with CUDA 11.8:
pip install torch==2.2.2 torchvision==0.17.2 --index-url https://download.pytorch.org/whl/cu118
Step 3: Install flash_attn from the wheel file
pip install flash_attn-2.7.3+cu11torch2.2cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
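Everything the wheel is compatible with is encoded in its filename: CUDA major version, torch version, C++ ABI flag, CPython tag, and platform. A quick sanity check before installing; the regex below is reverse-engineered from this one filename, not an official flash_attn naming spec:

```python
import re
import sys

def flash_attn_wheel_info(wheel: str) -> dict:
    """Parse the compatibility fields out of a flash_attn wheel filename."""
    m = re.match(
        r"flash_attn-(?P<ver>[\d.]+(?:\.post\d+)?)"
        r"\+cu(?P<cuda>\d+)torch(?P<torch>[\d.]+)cxx11abi(?P<abi>TRUE|FALSE)"
        r"-(?P<py>cp\d+)-cp\d+-(?P<plat>.+)\.whl",
        wheel,
    )
    if not m:
        raise ValueError(f"unrecognised wheel name: {wheel}")
    return m.groupdict()

info = flash_attn_wheel_info(
    "flash_attn-2.7.3+cu11torch2.2cxx11abiFALSE-cp311-cp311-linux_x86_64.whl"
)
print(info)
# The running interpreter must match the cp tag (cp311 == Python 3.11):
current_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print("Python tag matches:", info["py"] == current_tag)
```

This is why Step 2 pins torch 2.2: a `torch2.2` wheel will import against torch 2.2.x, and a `cp311` tag requires the Python 3.11 created in Step 1.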
Step 4: Create requirements.txt (without torch, torchvision, flash_attn)
timm==1.0.14
albumentations==2.0.8
onnx==1.14.0
onnxruntime==1.15.1
pycocotools==2.0.7
PyYAML==6.0.1
scipy==1.13.0
onnxslim==0.1.31
onnxruntime-gpu==1.15.1
gradio==4.44.1
opencv-python==4.9.0.80
psutil==5.9.8
py-cpuinfo==9.0.0
huggingface-hub==0.23.2
safetensors==0.4.3
numpy==1.26.4
supervision==0.22.0
Step 5: Install remaining packages
pip install -r requirements.txt
Step 6: Verify everything works
python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}'); print(f'GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"No GPU\"}')"
Important Note:
PyTorch 2.2 built against CUDA 11.8 requires an NVIDIA driver >= 450.80.02 (CUDA 11.x minor-version compatibility). Your warning shows a driver that supports up to CUDA 11.4, which corresponds to a 470.x-series driver, so you should be okay. But if you still get the driver error, then:
Alternative: Use older flash_attn or skip it
If the driver error persists, you'll need to either:

Skip flash_attn and use the older PyTorch I suggested earlier
Or download a flash_attn wheel compatible with CUDA 11.3/torch 1.12

Try the above steps first and let me know what happens!


(yolov12) wrf@wrf:~/Dara/yolov12$ pip install torch==1.13.1 torchvision==0.14.1 -i https://pypi.tuna.tsinghua.edu.cn/simple
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting torch==1.13.1
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/bf/47/d52be83b0ce72e83a6691177f27b110b7efefaae4f228f45e404c521e51d/torch-1.13.1-cp311-cp311-manylinux1_x86_64.whl (887.4 MB)
ERROR: Ignored the following yanked versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.15.0
ERROR: Could not find a version that satisfies the requirement torchvision==0.14.1 (from versions: 0.15.1, 0.15.2, 0.16.0, 0.16.1, 0.16.2, 0.17.0, 0.17.1, 0.17.2, 0.18.0, 0.18.1, 0.19.0, 0.19.1, 0.20.0, 0.20.1, 0.21.0)
ERROR: No matching distribution found for torchvision==0.14.1
(yolov12) wrf@wrf:~/Dara/yolov12$ 

(yolov12) wrf@wrf:~/Dara/yolov12$ pip uninstall torch torchvision -y
WARNING: Skipping torch as it is not installed.
WARNING: Skipping torchvision as it is not installed.
(yolov12) wrf@wrf:~/Dara/yolov12$ pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113
ERROR: Could not find a version that satisfies the requirement torch==1.12.1+cu113 (from versions: 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0)
ERROR: No matching distribution found for torch==1.12.1+cu113
(yolov12) wrf@wrf:~/Dara/yolov12$ 

pip install flash_attn-2.7.0.post2-cp311-cp311-win_amd64.whl
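Note that this last wheel is a win_amd64 build: pip on the Linux machine from the logs above will reject it as not a supported wheel on this platform. The platform tag is the last dash-separated field of the filename; a small helper (the function name is mine) to check it before attempting the install:

```python
import sysconfig

def wheel_platform_tag(wheel: str) -> str:
    """Return the platform tag: the last dash-separated field before .whl."""
    return wheel.rsplit("-", 1)[-1].removesuffix(".whl")

tag = wheel_platform_tag("flash_attn-2.7.0.post2-cp311-cp311-win_amd64.whl")
print("wheel platform:", tag)
# sysconfig.get_platform() gives e.g. 'linux-x86_64' on this machine;
# normalised, it should equal the wheel's tag (manylinux tags aside,
# which a real resolver like pip handles properly):
here = sysconfig.get_platform().replace("-", "_").replace(".", "_")
print("this machine:  ", here)
print("compatible:", tag == here)
```

A `win_amd64` tag here means this wheel only belongs on a Windows x86-64 install; for the Linux box in these logs, stick with the `linux_x86_64` wheel from Step 3.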