---
license: apache-2.0
---
# Mixnu

Mixnu is a Korean large language model with a Mixture-of-Experts (MoE) architecture.
## Model Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("seemdog/Mixnu")
model = AutoModelForCausalLM.from_pretrained("seemdog/Mixnu")
```
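To generate text with the loaded model, a minimal sketch like the following should work (the prompt and sampling settings are illustrative assumptions, not recommendations from the authors):

```python
# Illustrative generation example; prompt and sampling parameters are
# assumptions, not settings recommended by the model card.
prompt = "대한민국의 수도는"  # "The capital of South Korea is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```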
## Tokenizer

Mixnu uses an extended version of the Llama-2 tokenizer.

The table below compares the vocabulary size, the percentage of unknown tokens (tokens segmented at the byte level), and the average number of tokens per test sentence. The test sentences consist of carefully curated Korean text spanning various domains (e.g., law, medicine, finance, and social studies).
| Model   | Vocab size | % of UNK tokens | Avg. num. of tokens |
|---------|-----------:|----------------:|--------------------:|
| Llama-2 | 32,000     | 48.61           | 325.27              |
| Mixnu   | 46,262     | 0.96            | 125.91              |
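A comparison along these lines can be reproduced with a sketch like the one below. The sample sentence and the Llama-2 checkpoint name are illustrative (the actual test set is the curated multi-domain corpus described above, and the Llama-2 repository is gated):

```python
from transformers import AutoTokenizer

# Hypothetical legal-domain test sentence, used only for illustration.
sentence = "피고인은 계약상의 채무를 이행하지 않았다."  # "The defendant did not fulfill the contractual obligation."

# "meta-llama/Llama-2-7b-hf" is the reference Llama-2 tokenizer (gated repo).
for name in ["meta-llama/Llama-2-7b-hf", "seemdog/Mixnu"]:
    tok = AutoTokenizer.from_pretrained(name)
    print(name, "->", len(tok(sentence)["input_ids"]), "tokens")
```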
## Model Architecture

Using Llama-2 7b with the extended tokenizer as the foundation model, we develop four different expert models on four separate training datasets. The four expert models are then combined into a single model using the Mixtral architecture, resulting in a 19b MoE model.
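In transformers terms, the resulting architecture corresponds roughly to a Mixtral-style configuration with four experts. The sketch below is an assumption based on Llama-2 7b's standard dimensions and the vocabulary size reported above, not the officially released config:

```python
from transformers import MixtralConfig

# Rough sketch of the merged architecture (assumed values, not the
# official seemdog/Mixnu configuration file).
config = MixtralConfig(
    vocab_size=46262,        # extended tokenizer vocabulary (table above)
    hidden_size=4096,        # standard Llama-2 7b dimensions (assumed)
    intermediate_size=11008,
    num_hidden_layers=32,
    num_attention_heads=32,
    num_local_experts=4,     # one expert per training dataset
    num_experts_per_tok=2,   # Mixtral's default top-2 routing (assumed)
)
```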
## Qualitative Evaluation

### Pipeline

Primarily using ChatGPT (GPT-4o) for evaluation and then verifying the results with human annotators, we evaluate the model in five categories (see the summary sketch after this list):
**Language**: When adapting an existing model to a new language, we do not want the model to forget what it learned about the original language. We therefore evaluate whether the model responds well in the original language when given a prompt in that language. The attribute used in this category is 'English'. Korean is not separately included here, since the prompts for all the other categories below are in Korean.
**Style**: A sentence can be written in a literary or colloquial style, and unless the discourse context changes, keeping a consistent style across sentences is considered natural. We therefore evaluate whether the model continues to generate sentences in the same style as the prompt when given a literary or colloquial prompt. The attributes used in this category are 'literary' and 'colloquial'.
**Domain**: Models learn knowledge from various fields simultaneously, so it is essential to examine which knowledge a model actually possesses. We focus on evaluating knowledge in specialized fields, with an eye toward the potential application of LLMs in those domains. The attributes used in this category are 'daily life', 'medicine', 'law', 'technology', and 'finance'. The prompts for every domain except 'daily life' use excerpts from actual newspaper articles in that domain.
**Localization**: When adapting a language model to another language, the linguistic transition is crucial, but generating sentences that reflect the culture of the country speaking that language is the true measure of the model's generation ability. We therefore evaluate the degree of 'Koreanization': how well the trained model understands uniquely Korean expressions and Korean culture in the prompts. The attributes used in this category are 'grammar', 'proverbs', 'idioms', and 'culture'.
**Hallucination**: Models can generate sentences that look plausible on the surface but are actually inaccurate or unrelated to the input. Because this happens in subtle ways, it is hard to capture with quantitative evaluation, making qualitative assessment particularly important. We include words in the prompt that are easily confused with other words because of syllabic overlap, and observe whether the model grasps the accurate meaning of these words and generates relevant sentences. The attribute used here is simply 'hallucination' itself.
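The category-to-attribute mapping can be summarized as a plain data structure. This is a restatement of the list above, not code from the authors' evaluation pipeline:

```python
# Evaluation categories and their attributes, as described above.
EVAL_CATEGORIES = {
    "language": ["English"],
    "style": ["literary", "colloquial"],
    "domain": ["daily life", "medicine", "law", "technology", "finance"],
    "localization": ["grammar", "proverbs", "idioms", "culture"],
    "hallucination": ["hallucination"],
}
```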
### Prompt

```
{
"role": "system", "content": "You are a strict evaluator of text generation models."

Please strictly evaluate the performance of five generative models using qualitative evaluation. Provide the fluency, relevance, and accuracy scores of models A, B, C, D, and E.

Fluency Rubric:
5 points: Sentences are fluent without noise.
4 points: Sentences contain some noise that does not significantly degrade overall fluency, or relevant noise such as PII (e.g., phone numbers, addresses).
3 points: Sentences are generally coherent but contain unnecessary and irrelevant noise.
2 points: Severe noise is present (e.g., word repetition, symbols). The output makes sense at the phrase level but not at the sentence level.
1 point: The output makes no sense at all.

Relevance Rubric:
5 points: Sentences fully answer the prompt's question.
4 points: Sentences include a direct answer to the prompt's question, though not entirely consistent or coherent.
3 points: Sentences provide an indirect and low-quality answer to the prompt's question.
2 points: Sentences do not answer the prompt's question but are relevant to the topic.
1 point: Sentences are not relevant to the prompt.

Accuracy Rubric:
5 points: All information and details are completely accurate with no errors or distortions.
4 points: Most information is accurate with only minor errors that do not significantly affect understanding.
3 points: The main information is accurate, but there are some significant errors.
2 points: There are numerous errors, and only some of the information is accurate.
1 point: The information is mostly incorrect or misleading, with very few accurate details.

Prompt: {Prompt}
A: {Response from Model A}
B: {Response from Model B}
C: {Response from Model C}
D: {Response from Model D}
E: {Response from Model E}

Deduct fluency points if the model generates output in a different language from the prompt.
}
```
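Concretely, an evaluation call with this template might look like the sketch below. The function and variable names are ours and the decoding settings are assumptions; only the system message and rubric text come from the card:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_responses(filled_prompt: str) -> str:
    """Send the filled-in rubric prompt to GPT-4o and return its scores as text."""
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a strict evaluator of text generation models."},
            {"role": "user", "content": filled_prompt},
        ],
        temperature=0,  # deterministic scoring; our assumption, not stated in the card
    )
    return completion.choices[0].message.content
```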
### Evaluation Results

#### Response Examples
**Prompt**

이번에 수능 불수능이었어요. *(The Suneung [college entrance exam] this time was a "bul-suneung," i.e., brutally hard.)*
**Responses**

**Mixnu**

국어는 작년보다 어려워졌고, 영어는 작년 수능보다 쉬워졌다고 하지만, 수험생들이 체감하는 난도는 훨씬 더 어려웠을 겁니다. *(Korean got harder than last year, and English is said to have gotten easier than last year's Suneung, but the difficulty test-takers actually felt would have been much higher.)*
**Llama-2 7b**

불수능이란 불은 없어서 할 수 없었던 수업이라는 것을 뜻하는데요.\n왜 불수능이 되는지 아무도 모르겠습니다.\n불수능이 되는 순간 외계인들이 나타나는 걸 알고 있습니다 *(A "bul-suneung" means a class that could not be held because there was no fire. Nobody knows why it becomes a bul-suneung. I know that aliens appear the moment it becomes a bul-suneung.)*
**Llama-3 8b**

2시간동안 1번 2번 3번 4번 5번 6번 문제를 풀었습니다.\n이번에 1번문제는 수능에서 나왔던 문제였어요. 2번은 수능에서 나온 문제가 아닙니다.\n그리고 3번문제는 수능에서 나온 문제였어요.\n4번문제는 수능에서 나온 문제가 아니었어요.\n5번문제는 수능에서 나온 문제가 아니에요.\n6번문제는 수능에서 나온문제가 아니었어요.\n그리고 7번문제는 수능에서 나온문제가 아니었어요.\n그리고 8번문제는 수능에서 나온문제가 아니었어요.\n9번문제는 수능에서 나온 문제가 아니었어요.\n10번문제는 수능에서 나온 문제가 아니었어요.\n11번문제는 수능에서 나온 문제가 아니었어요.\n12번문제는 수능에서 나온 문제가 아니었어요.\n13번문제는 수능에서 나온 문제가 아니었어요.\n이번에 14번문제는 수능에서 나온 문제가 아니었어요.\n15번문제는 *(For two hours I solved problems 1 through 6. This time, problem 1 appeared on the Suneung. Problem 2 is not from the Suneung. And problem 3 was from the Suneung. Problem 4 was not from the Suneung… [repeats through problem 15, truncated].)*
**open-llama-2-ko-7b**

今回は完全に浅水でした。\n今回は完全に不難でした。\n\n今回は完全に不難でした。 *(In Japanese: "This time it was completely shallow water. This time it was completely not difficult. This time it was completely not difficult.")*
**polyglot-ko-12.8b**

그래서 더 어려웠을거예요.\u200b그래서 누나도 긴장되고, 누나도 힘들었을거예요.\u200b\u200b\u200b하지만, \u200b\u200b\u200b우리는 수능이라는 큰 산을 넘었으니,\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b \u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b \u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b\u200b *(So it would have been harder. So noona [big sister] was nervous too, and noona must have had a hard time too. But we have crossed the big mountain called the Suneung, … [followed by a long run of zero-width spaces].)*
**Ko-Mixtral-v1.3-MoE-7Bx2**

그래서 저는 악을 않을 거예요, 그리고 나는 이 사람에게 악을하지 않을 거예요. 그런 다음 나는 악의를 하지 않을 것이라고 말하고, 나는 악을하지 않을 것이라고 말하고, 나는 악을하지 않을 것이라고 말했어요.\n\n\n\n23:32\n\n그리고 여호와는 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었고, 그의 말을 들었 *(So I will not do evil, and I will not do evil to this person. Then I said I would not bear malice, said I would not do evil, and said I would not do evil. … 23:32 … And Jehovah heard his words, and heard his words, … [repeats, truncated].)*
## Authors

- Jean Seo (seemdog@snu.ac.kr)
- Sungjoo Byun (byunsj@snu.ac.kr)
- Jaeyoon Kim (toscour345@snu.ac.kr)
- Hyemi Jo (huimei6361@snu.ac.kr)
- Chaeyoung Oh (nyong10@snu.ac.kr)
- Dongjun Jang (qwer4107@snu.ac.kr)

Corresponding author: Hyopil Shin (hpshin@snu.ac.kr)
## Acknowledgement

Mixnu was developed by the SNU CL_NLP Lab with the support of BIGDATABUB UNIVERSITY.
## Citation

```bibtex
@misc{jean_seo_2024,
  author    = { {Jean Seo} },
  title     = { Mixnu (Revision 0bf07fb) },
  year      = 2024,
  url       = { https://huggingface.co/seemdog/Mixnu },
  doi       = { 10.57967/hf/2809 },
  publisher = { Hugging Face }
}
```