Files changed (1)
  1. README.md +57 -3
README.md CHANGED
@@ -1,3 +1,57 @@
- ---
- license: llama3
- ---
+ ---
+ library_name: transformers
+ pipeline_tag: text-generation
+ license: other
+ license_name: llama3
+ license_link: LICENSE
+ language:
+ - en
+ - ko
+ tags:
+ - meta
+ - llama
+ - llama-3
+ - akallama
+ ---
+ <a href="https://github.com/">
+ <img src="3de500aklm" width="50%"/>
+ </a>
+
+
+ # AKALLAMA
+ We introduce AKALLAMA-70B, a Korean-focused, open-source 70B large language model.
+ It demonstrates a considerable improvement in Korean fluency, especially compared to the base Llama 3 model.
+ To our knowledge, this is one of the first 70B open-source Korean-speaking language models.
+
+ ### Model Description
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub.
+
+ - **Developed by:** [mirlab](https://mirlab.yonsei.ac.kr/)
+ - **Language(s) (NLP):** Korean, English
+ - **License:** llama3
+ - **Finetuned from model:** [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B)
+
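+ Since the model loads with the standard 🤗 transformers classes, a minimal inference sketch might look like the following; the repository id, dtype, and generation settings are placeholders for illustration, not confirmed values from this release:
+
+ ```python
+ # Minimal inference sketch -- the repository id and settings below are assumptions.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "mirlab/akallama-70b"  # placeholder repository id
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ messages = [{"role": "user", "content": "Introduce Seoul in one sentence."}]
+ # apply_chat_template uses whatever chat template is stored with the tokenizer.
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+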
+ ## Evaluation
+
+ For local inference and evaluation, we highly recommend using the Ollama library.
+ See the _Customize a model_ section of the [Ollama GitHub page](https://github.com/ollama/ollama) for details; a sketch of the workflow follows below.
+
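+ Roughly, that workflow imports a GGUF export of the model through a Modelfile; the file name, template, and parameter values below are illustrative assumptions, not artifacts shipped with this release:
+
+ ```
+ # Modelfile -- illustrative sketch; the GGUF file name and parameter values are assumptions.
+ FROM ./akallama-70b.Q4_K_M.gguf
+
+ # The chat template should match the one used during fine-tuning
+ # (see the chat template stored with the tokenizer on the Hub).
+ TEMPLATE """{{ .Prompt }}"""
+
+ PARAMETER temperature 0.7
+ ```
+
+ The model can then be registered and queried locally with `ollama create akallama -f Modelfile` followed by `ollama run akallama`.
+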
+ ## Training Details
+ ### Training Procedure
+
+ We closely followed the training procedure of the Zephyr ORPO model; a rough sketch of that setup is shown below.
+ Please check out Hugging Face's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template.
+
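+ For orientation only, an ORPO fine-tuning run with the `trl` library looks roughly like the sketch below; the dataset name, output path, and hyperparameters are placeholders, and the authoritative recipe is the alignment-handbook one linked above:
+
+ ```python
+ # Illustrative ORPO fine-tuning sketch (placeholder dataset and hyperparameters).
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import ORPOConfig, ORPOTrainer
+
+ base_model = "meta-llama/Meta-Llama-3-70B"
+ tokenizer = AutoTokenizer.from_pretrained(base_model)
+ model = AutoModelForCausalLM.from_pretrained(base_model)
+
+ # ORPO expects preference data with "prompt", "chosen", and "rejected" columns.
+ dataset = load_dataset("your-org/korean-preference-data", split="train")  # placeholder
+
+ config = ORPOConfig(
+     output_dir="akallama-orpo",      # placeholder output path
+     beta=0.1,                        # weight of the odds-ratio (preference) term
+     per_device_train_batch_size=1,
+     gradient_accumulation_steps=16,
+     learning_rate=5e-6,
+     num_train_epochs=1,
+ )
+
+ trainer = ORPOTrainer(
+     model=model,
+     args=config,
+     train_dataset=dataset,
+     tokenizer=tokenizer,
+ )
+ trainer.train()
+ ```
+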
+ ### Training Data
+
+ A detailed description of the training data will be announced later.
+
+ ### Examples
+
+ ## Thanks to
+
+ - The data center of the Department of Artificial Intelligence at Yonsei University, for providing the A100 cluster
+