Update Summary for ST Beginners.md
Summary for ST Beginners.md
CHANGED
@@ -28,7 +28,7 @@ The fun or sad part starts when choosing a backend (the thing that will run LLM)
 If you have:
 
 * An NVIDIA card newer than 20xx RTX, then it's a no-brainer - use exl2 models.
-From the get-go, do this - open NVIDIA Control Panel -> Manage 3D Settings -> CUDA - Sysmem Fallback Policy : Prefer No Sysmem Fallback. If you want to know what it does -> Google it or just do it with -just trust me bro (newer do something like this, newer blindly trust, trust me bro newer).
+From the get-go, do this - open NVIDIA Control Panel -> Manage 3D Settings -> CUDA - Sysmem Fallback Policy : Prefer No Sysmem Fallback. If you want to know what it does -> Google it or just do it with -just trust me bro (newer do something like this, newer blindly trust, trust me bro, newer).
 
 
 Next, we can choose Ooba or TabbyAPI. For me, they are the same in use, but for some people, TabbyAPI is more stable. Ooba is easier to install.
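For anyone who does want to know what that setting does before blindly trusting: "Prefer No Sysmem Fallback" stops the driver from spilling VRAM overflow into system RAM, so an oversized model fails fast with an out-of-memory error instead of silently slowing to a crawl. That makes it worth checking free VRAM before loading an exl2 model. A minimal sketch, assuming PyTorch with CUDA support is installed; the headroom figure is a rough assumption, not a hard number:

```python
# Sketch: report free VRAM before loading an exl2 model, since with
# "Prefer No Sysmem Fallback" CUDA errors out instead of spilling into RAM.
import torch

def report_vram(device: int = 0) -> None:
    free, total = torch.cuda.mem_get_info(device)  # returns bytes
    gib = 1024 ** 3
    print(f"GPU {device}: {free / gib:.1f} GiB free of {total / gib:.1f} GiB total")
    # Assumption / rule of thumb: leave roughly 1-2 GiB of headroom for the
    # CUDA context and KV cache growth at long contexts.

if __name__ == "__main__":
    if torch.cuda.is_available():
        report_vram(0)
    else:
        print("No CUDA device visible - exl2 needs an NVIDIA GPU.")
```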