lastdefiance20 committed on
Commit
b7ecd67
•
1 Parent(s): b58c143

Update README.md

Files changed (1)
1. README.md +83 -0
README.md CHANGED
---
license: llama3
base_model:
- meta-llama/Meta-Llama-3-8B
language:
- en
- ko
tags:
- facebook
- meta
- llama
- llama-3
- llama-3-ko
---
<p align="left">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/646484cfb90150b2706df03b/BEOyMpnnY9VY2KXlc3V2F.png" width="20%"/>
</p>

# Llama-3-MAAL-8B-Instruct-v0.1
We release MAAL, a Multilingual Adaptive Augmentation Language model that combines multilingual capabilities with adaptive augmentation techniques.

- **Developed by:** [maum.ai Brain NLP](https://maum-ai.github.io). Jaeyoon Jung, Jinjoo Lee, Yongjae Lee, Dongjun Lee, Woosung Joo
- **Language(s) (NLP):** Korean, English (currently bilingual)

## Model Description

Version 0.1 uses cross-lingual training to transfer instruction-following capabilities from English to Korean.

- We trained this model on 8 H100-80G GPUs for 1 day with a cross-lingual training dataset.
- We recommend using the fixed system prompt below unless you fine-tune the model:
```
너는 마음에이아이의 챗봇 MAAL이다. 고객의 질문에 친절하게 답하여라.
```
(English: "You are MAAL, maum.ai's chatbot. Answer customers' questions kindly.")

## Sample inference code (GPU)

```
import transformers
import torch

model_id = "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1"
model = transformers.AutoModelForCausalLM.from_pretrained(model_id).to("cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
streamer = transformers.TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# we recommend using the fixed system prompt for the model unless you fine-tune it
prompt = "너는 마음에이아이의 챗봇 MAAL이다. 고객의 질문에 친절하게 답하여라."  # "You are MAAL, maum.ai's chatbot. Answer customers' questions kindly."
instruction = "사과 한 박스에는 사과가 30개 들어있는데, 처음에는 사과 3박스가 있었고, 내가 사과 5개를 먹었어. 남은 사과는 총 몇개야?"  # "A box holds 30 apples. There were 3 boxes at first, and I ate 5 apples. How many apples are left in total?"

messages = [
    {"role": "system", "content": f"{prompt}"},
    {"role": "user", "content": f"{instruction}"}
]

# format the conversation with the model's chat template and stream the generated answer
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors='pt').to("cuda")
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=1024, pad_token_id=tokenizer.eos_token_id)
```
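
If GPU memory is tight, the same model can also be loaded with 4-bit quantization. This is a minimal sketch, not part of the official instructions above; it assumes the `bitsandbytes` and `accelerate` packages are installed, and the chat-template call from the sample code is reused unchanged.

```
import transformers
import torch

model_id = "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1"

# 4-bit quantized loading for GPUs with limited VRAM (assumes bitsandbytes + accelerate are installed)
bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
```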

## Evaluation Results

Because the main goal of version 0.1 is to **transfer instruction-following capabilities from English to Korean** without continuous pre-training, we use [**LogicKor**](https://github.com/StableFluffy/LogicKor) to evaluate the model's Korean instruction-following skills.

We compare our model with similarly sized models (fewer than 13B parameters) that have been fine-tuned on Korean datasets. \* denotes our self-reported result.

|Model|single-turn(↑)|multi-turn(↑)|average(↑)|
|-|-|-|-|
|maum-ai/Llama-3-MAAL-8B-Instruct-v0.1*|**5.80**|4.66|**5.23**|
|maywell/Synatra-kiqu-10.7B|5.71|4.73|5.22|
|yanolja/EEVE-Korean-Instruct-10.8B-v1.0|5.78|3.92|4.85|
|nlpai-lab/KULLM3|4.61|**4.83**|4.72|
|MLP-KTLim/llama3-Bllossom*|2.11|1.57|1.84|
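
The average column is consistent with a simple mean of the single-turn and multi-turn scores; a quick check in Python (values copied from the table above):

```
# recompute the average column as the mean of single-turn and multi-turn scores
scores = {
    "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1": (5.80, 4.66),
    "maywell/Synatra-kiqu-10.7B": (5.71, 4.73),
    "yanolja/EEVE-Korean-Instruct-10.8B-v1.0": (5.78, 3.92),
    "nlpai-lab/KULLM3": (4.61, 4.83),
    "MLP-KTLim/llama3-Bllossom": (2.11, 1.57),
}
for name, (single_turn, multi_turn) in scores.items():
    print(f"{name}: {(single_turn + multi_turn) / 2:.2f}")  # 5.23, 5.22, 4.85, 4.72, 1.84
```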

## Limitations
Because this model was trained on a small dataset, it has several limitations:
- It struggles to generate diverse Korean text
- It lacks Korean knowledge and cultural grounding (localization)
- It does not work with image or video inputs

## Todo
We will address these limitations one by one by upgrading this model as follows:
- Enhance Korean generation through vocabulary expansion and continuous pre-training (more Korean corpus!); a minimal sketch of the vocabulary-expansion step follows this list.
- Localize with a cultural adaptation method and additional Korean knowledge data. [*similar idea*](https://aclanthology.org/2023.emnlp-main.18/)
- Develop a Vision Language Model that can handle both video and image inputs. [*similar idea*](https://github.com/PKU-YuanGroup/Video-LLaVA)
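
As a rough illustration of the vocabulary-expansion step above (not the actual recipe used for this release), the sketch below adds new Korean tokens to the tokenizer and resizes the embedding matrix; the token list is a hypothetical placeholder, and the new embedding rows would still need to be learned through continuous pre-training on a Korean corpus.

```
import transformers

model_id = "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = transformers.AutoModelForCausalLM.from_pretrained(model_id)

# hypothetical new Korean tokens mined from a Korean corpus (placeholder examples)
new_korean_tokens = ["안녕하세요", "감사합니다"]
num_added = tokenizer.add_tokens(new_korean_tokens)

# grow the input/output embeddings so the new token ids have rows;
# these rows are randomly initialized and must be trained on Korean text
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; new vocab size: {len(tokenizer)}")
```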