license: wtfpl
1. Go to the [llama.cpp releases page](https://github.com/ggerganov/llama.cpp/releases/) and download the build that matches your system.

2. If you plan to use CUDA, check which CUDA version your card supports (12.2 works for any RTX card) and download the build for that CUDA version.

3. Unpack everything into one folder, rename it to "LlamaCPP", and put that folder in the same folder as the main.py/main.exe file.

4. Launch the main.py/main.exe file.