arXiv:2312.11011

VinaLLaMA: LLaMA-based Vietnamese Foundation Model

Published on Dec 18, 2023

Abstract

In this technical report, we present VinaLLaMA, an open-weight, state-of-the-art (SOTA) Large Language Model for the Vietnamese language, built upon LLaMA-2 with an additional 800 billion training tokens. VinaLLaMA not only demonstrates fluency in Vietnamese but also exhibits a profound understanding of Vietnamese culture, making it a truly indigenous model. VinaLLaMA-7B-chat, trained on 1 million high-quality synthetic samples, achieves SOTA results on key benchmarks, including VLSP, VMLU, and Vicuna Benchmark Vietnamese, marking a significant advancement in the Vietnamese AI landscape and offering a versatile resource for various applications.
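
For readers who want to try the model directly, here is a minimal sketch of loading the chat variant with the Hugging Face transformers library. The Hub repo id vilm/vinallama-7b-chat is assumed from the model listing below, and the generation settings are illustrative rather than taken from the report.

```python
# Minimal sketch: load VinaLLaMA-7B-chat with transformers.
# The repo id "vilm/vinallama-7b-chat" is an assumption; check the
# model listing linked from this page for the exact id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/vinallama-7b-chat"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; spreads layers across devices
)

# A simple Vietnamese prompt ("Hello! Who are you?"). If the checkpoint
# ships a chat template, tokenizer.apply_chat_template may give better results.
prompt = "Xin chào! Bạn là ai?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```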

Community

Could you quantize the GGUF version down to Q2_K? It is currently at Q5.

Could you show me the quantization method you used to produce the GGUF file?
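
In case it helps anyone with the same question: below is a sketch of the standard llama.cpp route from a Hugging Face checkpoint to a quantized GGUF, driven from Python. The tool names match llama.cpp as of late 2023 and the file paths are hypothetical; newer llama.cpp checkouts rename these scripts, so check your version.

```python
# Sketch of the usual llama.cpp GGUF workflow, driven from Python.
# Assumptions: llama.cpp is checked out and built in the current directory,
# and tool names match its late-2023 layout (convert.py, ./quantize);
# newer checkouts rename these (convert_hf_to_gguf.py, ./llama-quantize).
import subprocess

HF_MODEL_DIR = "vinallama-7b-chat"        # local HF checkpoint (hypothetical path)
F16_GGUF = "vinallama-7b-chat-f16.gguf"   # intermediate full-precision GGUF
Q2K_GGUF = "vinallama-7b-chat-Q2_K.gguf"  # final 2-bit quantized GGUF

# Step 1: convert the Hugging Face checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", "convert.py", HF_MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# Step 2: quantize the f16 GGUF down to Q2_K.
subprocess.run(["./quantize", F16_GGUF, Q2K_GGUF, "Q2_K"], check=True)
```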


Models citing this paper: 14


Datasets citing this paper: 0

No datasets link to this paper.

Cite arxiv.org/abs/2312.11011 in a dataset README.md to link it from this page.

Spaces citing this paper: 9

Collections including this paper: 6