
Llama 3 8B, capable of outputting Traditional Chinese

✨ LMStudio is recommended for running this model

I tried running it with Ollama, but the output became incoherent, so stick with LMStudio for now :)

The performance is not great, but the model can answer basic questions. Sometimes it still acts pretty dumb :(
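If you run the GGUF in a backend that does not apply the chat template for you, Llama 3 expects its own prompt format with special header tokens. Below is a minimal sketch of building that prompt by hand; the example system and user strings are just illustrations, and how you feed the prompt to your runtime (LMStudio server, llama.cpp, etc.) is up to you.

```python
def llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the standard Llama 3 chat format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The assistant header is left open so the model continues from here.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Example: ask for a Traditional Chinese answer (illustrative strings only).
print(llama3_prompt("請用繁體中文回答。", "台北在哪裡？"))
```

Getting this template wrong is a common cause of rambling or incoherent output with Llama 3 GGUFs, which may be related to the Ollama issue mentioned above.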

Model details:

- Format: GGUF
- Model size: 8.03B params
- Architecture: llama
- Quantization: 4-bit


Quantized from

Dataset used to train suko/Meta-Llama-3-8B-CHT

Space using suko/Meta-Llama-3-8B-CHT