
Llama 3 8B, fine-tuned to output Traditional Chinese

✨ Recommended: use LM Studio for this model

I tried running it with Ollama, but the output became very incoherent, so stick with LM Studio for now :)
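LM Studio can serve a loaded model through a local OpenAI-compatible HTTP API (by default on `http://localhost:1234`). Below is a minimal, stdlib-only sketch of querying the model that way; the port, endpoint path, and `"local-model"` model name are LM Studio defaults and assumptions, so adjust them to match your setup.

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    # OpenAI-style chat-completion payload understood by LM Studio's local server
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str,
        url: str = "http://localhost:1234/v1/chat/completions") -> str:
    # POST the JSON payload and pull the assistant's reply out of the response
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires LM Studio running locally with this model loaded):
#   print(ask("請用繁體中文介紹台北。"))
```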

Performance is honestly not great: it can answer basic questions, but sometimes it just acts really dumb :(

Model details:

- Format: GGUF
- Model size: 8.03B params
- Architecture: llama
