# duyhv1411/Llama-3.2-1B-en-vi

This model is a fine-tuned iteration of meta-llama/Llama-3.2-1B-Instruct, adapted to improve its general-domain performance in English and Vietnamese (en-vi).

## How to use


```python
# Generate directly with model.generate
from transformers import AutoModelForCausalLM, AutoTokenizer

merged_model = AutoModelForCausalLM.from_pretrained(
    "duyhv1411/Llama-3.2-1B-en-vi",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("duyhv1411/Llama-3.2-1B-en-vi")

# "Cách tính lương gross?" = "How is gross salary calculated?"
chat = [{"role": "user", "content": "Cách tính lương gross?"}]

# Render the chat template to a prompt string, then tokenize it
tokenized_chat = tokenizer.encode(
    tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(merged_model.device)

outputs = merged_model.generate(tokenized_chat, max_new_tokens=1024, do_sample=True, temperature=0.9)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][len(tokenized_chat[0]):], skip_special_tokens=True))
```
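
For interactive use, you can stream tokens to stdout as they are generated instead of waiting for the full completion. A minimal sketch using transformers' `TextStreamer`, reusing `merged_model`, `tokenizer`, and `tokenized_chat` from above (the sampling settings are just the ones used in this card, not a recommendation from the model authors):

```python
from transformers import TextStreamer

# Print decoded tokens as they arrive; skip_prompt=True suppresses echoing the input
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

merged_model.generate(
    tokenized_chat,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.9,
    streamer=streamer,
)
```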


```python
# Use a pipeline as a high-level helper
from transformers import pipeline

chat = [{"role": "user", "content": "Cách tính lương gross?"}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# The model was already placed on devices when it was loaded, so no device_map is needed here
pipe = pipeline(task="text-generation", model=merged_model, tokenizer=tokenizer, return_full_text=False)
print(pipe(prompt, max_new_tokens=1024, do_sample=True, temperature=0.9)[0]["generated_text"])
```
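
If GPU memory is tight, the model can also be loaded in 4-bit via bitsandbytes. This is a hedged sketch, not part of the authors' published usage: it assumes the `bitsandbytes` package is installed, and the NF4 settings below are illustrative defaults.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit NF4 quantization config (assumption; requires bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

quantized_model = AutoModelForCausalLM.from_pretrained(
    "duyhv1411/Llama-3.2-1B-en-vi",
    quantization_config=bnb_config,
    device_map="auto",
)
```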