LLAMA 3 8B, capable of outputting Traditional Chinese

✨ Recommend using LMStudio for this model

I tried running it with Ollama, but it became quite delulu.

So for now, I'm sticking with LMStudio :)

The performance isn't actually that great, but the model can answer some basic questions. Sometimes it just acts really dumb though :(
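LM Studio exposes an OpenAI-compatible local server (default `http://localhost:1234/v1`), so you can query this model from code once it's loaded in the app. A minimal sketch, assuming the server is running; the model identifier below is a guess based on the repo name and should be replaced with whatever LM Studio actually shows:

```python
import json
import urllib.request

# LM Studio's default local server endpoint
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat payload nudging the model toward Traditional Chinese."""
    return {
        # Assumed identifier; use the model name shown in LM Studio's UI
        "model": "suko/Meta-Llama-3-8B-CHT",
        "messages": [
            # "Please answer in Traditional Chinese."
            {"role": "system", "content": "請用繁體中文回答。"},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }


def ask(prompt: str) -> str:
    """Send the request to the local LM Studio server and return the reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # "What's the weather like in Taipei today?"
    print(ask("台北今天天氣如何?"))
```

The same payload works against Ollama's OpenAI-compatible endpoint if you point the URL at it instead.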

LLAMA 3.1 can actually output Chinese pretty well, so this repo can be ignored.

Model details: GGUF format, 8.03B params, llama architecture, 4-bit quantization.
