Update README.md
README.md CHANGED
@@ -28,7 +28,7 @@ Please feel free to comment on this model and give us feedback in the Community
 
 # How to use
 
-The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as
+The easiest way to use this model on your own computer is to use the [GGUF version of this model (lightblue/suzume-llama-3-8B-multilingual-gguf)](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-gguf) using a program such as [jan.ai](https://jan.ai/) or [LM Studio](https://lmstudio.ai/).
 
 If you want to use this model directly in Python, we recommend using vLLM for the fastest inference speeds.
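
For reference, a minimal sketch of the kind of direct Python usage via vLLM that the README points to (not part of this commit; the repo id `lightblue/suzume-llama-3-8B-multilingual`, the sampling settings, and the prompt are assumptions):

```python
# Hedged sketch: one way to run this model in Python with vLLM.
# The repo id, sampling parameters, and prompt below are assumptions,
# not taken from this commit.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "lightblue/suzume-llama-3-8B-multilingual"

# Format the conversation with the model's Llama 3 chat template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Bonjour, pouvez-vous vous présenter ?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Load the model and generate a completion.
llm = LLM(model=model_id)
sampling_params = SamplingParams(temperature=0.5, max_tokens=256)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```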