chillymiao committed
Commit: 327aac8
Parent: 5ab3f47

Update README.md

Files changed (1): README.md (+6 −6)
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 
 # Hyacinth6B: A Traditional Chinese Large Language Model
 
-![hyacinth](https://huggingface.co/chillymiao/Hyacinth6B/blob/main/pics/hyacinth.jpeg)
+<img src="./pics/hyacinth.jpeg" alt="Hyacinth6B cover image"/>
 
 Hyacinth6B is a Traditional Chinese large language model fine-tuned from [chatglm3-base](https://huggingface.co/THUDM/chatglm3-6b-base). Our goal is to strike a balance between model lightness and performance, maximizing performance while keeping the model comparatively lightweight. Hyacinth6B was developed with this objective in mind, aiming to fully leverage the core capabilities of LLMs without incurring substantial resource costs, effectively pushing the boundaries of smaller models' performance. The training approach is parameter-efficient fine-tuning with the Low-Rank Adaptation (LoRA) method.
 Finally, we evaluated Hyacinth6B across various aspects. It shows commendable performance on certain metrics, even surpassing ChatGPT in two categories. We hope to provide more resources and possibilities for the field of Traditional Chinese language processing, expanding the research scope of Traditional Chinese language models and enhancing their applicability in different scenarios.
@@ -31,15 +31,15 @@ Training required approximately 20.6GB of VRAM without any quantization (default fp16)
 
 # Evaluation Results
 ## CMMLU
-![image](https://hackmd.io/_uploads/Byza9lElR.png)
+<img src="./pics/cmmlu.png" alt="CMMLU results"/>
 ## C-eval
-![image](https://hackmd.io/_uploads/B1JgogNeR.png)
+<img src="./pics/ceval.png" alt="C-eval results"/>
 ## TC-eval by MediaTek Research
-![image](https://hackmd.io/_uploads/Bks-ox4gC.png)
+<img src="./pics/tc-eval.png" alt="TC-eval results"/>
 ## MT-bench
-![虛線加粗版](https://hackmd.io/_uploads/H176sgNlC.png)
+<img src="./pics/dashB.png" alt="MT-bench results"/>
 ## LLM-eval by NTU Miu Lab
-![mstsc_And7XB3mpG](https://hackmd.io/_uploads/SkK8nx4eR.png)
+<img src="./pics/llmeval.png" alt="LLM-eval results"/>
 ## Bailong Bench
 
 
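Since the README describes parameter-efficient LoRA fine-tuning of chatglm3-6b-base in fp16, here is a minimal sketch of such a setup using the Hugging Face `peft` library. The rank, alpha, dropout, and target modules below are illustrative assumptions, not the configuration actually used for Hyacinth6B.

```python
# Minimal LoRA fine-tuning setup sketch; hyperparameters are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "THUDM/chatglm3-6b-base"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
# fp16 matches the "no quantization (default fp16)" note in the card.
model = AutoModel.from_pretrained(
    base, trust_remote_code=True, torch_dtype=torch.float16
)

# Assumed LoRA settings; "query_key_value" is ChatGLM's fused attention
# projection and a common LoRA target for this architecture.
lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the low-rank adapters train
```

With LoRA, only the small adapter matrices receive gradients while the 6B base weights stay frozen in fp16, which is broadly consistent with the ~20.6 GB training VRAM footprint the card reports.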