How to use

from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline

model_path = "fiveflow/KoLlama-3-8B-Instruct"

# Load the tokenizer and model; device_map="auto" places weights on available devices.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    # load_in_4bit=True,  # optional: 4-bit quantized loading (requires bitsandbytes)
    low_cpu_mem_usage=True,
)

pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)
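A minimal generation sketch follows; the prompt, sampling settings, and token budget are illustrative assumptions, not values from this model card. Llama-3 Instruct models are trained on a chat format, so the prompt is built with tokenizer.apply_chat_template before being passed to the pipeline.

# Usage sketch: the prompt and generation parameters below are assumptions.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Briefly introduce the city of Seoul."}],
    tokenize=False,
    add_generation_prompt=True,  # append the assistant header so the model replies
)
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    return_full_text=False,  # return only the newly generated text, not the prompt
)
print(outputs[0]["generated_text"])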
Model size: 8.03B parameters (FP16, Safetensors)