asiansoul committed on
Commit 2c560be • 1 Parent(s): 92eba0b

Update README.md

Files changed (1)
1. README.md +7 -7
README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 - merge
 
 ---
-# Joah-Llama-3-KoEn-8B-Coder-v1
+# 🎷 Joah-Llama-3-KoEn-8B-Coder-v1
 
 <a href="https://ibb.co/8XPkwP8"><img src="https://i.ibb.co/kMqZTqc/Joah.png" alt="Joah" border="0"></a><br />
 
@@ -24,14 +24,14 @@ tags:
 
 "좋아(Joah)" by AsianSoul
 
-A multilingual model merge based on this one is coming soon, starting with German (Korean / English / German).
+A multilingual model merge based on this one is coming soon, starting with German (Korean / English / German). 🌍
 
 Where to use Joah: Medical, Korean, English, Translation, Code, Science...
 
-## Merge Details
+## 🎑 Merge Details
 
 
-The performance of this merged model does not seem bad, though. (Just my opinion.)
+The performance of this merged model does not seem bad, though. (Just my opinion. ^^) 🏟️
 
 This may not be a model that satisfies you. But if we continue to overcome our shortcomings,
 
@@ -50,11 +50,11 @@ I have found that most merged models out there so far do not actually have 64k
 If you support me, I will try it on a computer with maximum specifications; I would also like to run serious tests for you by building a network with high-capacity traffic and high-speed 10G links.
 
 
-### Merge Method
+### 🧢 Merge Method
 
 This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.
 
-### Models Merged
+### 📚 Models Merged
 
 The following models were included in the merge:
 * [beomi/Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
@@ -67,7 +67,7 @@ The following models were included in the merge:
 * [asiansoul/Llama-3-Open-Ko-Linear-8B](https://huggingface.co/asiansoul/Llama-3-Open-Ko-Linear-8B)
 * [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
 
-### Configuration
+### 🍎 Configuration
 
 The following YAML configuration was used to produce this model:
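The YAML block itself falls outside this diff's hunks, so it is not reproduced here. For readers unfamiliar with the format, below is a minimal illustrative sketch of what a [mergekit](https://github.com/arcee-ai/mergekit) `dare_ties` configuration of the shape this card describes can look like. The `density`/`weight` values and the shortened model list are placeholder assumptions, not the card's actual (elided) settings.

```yaml
# Illustrative sketch only; not the card's actual (elided) configuration.
# DARE-TIES in mergekit: each model's delta from the base is randomly
# pruned down to `density` and rescaled (DARE), then sign-consensus
# merged (TIES) onto the base model.
models:
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.53   # placeholder: fraction of delta weights kept
      weight: 0.3     # placeholder: mixing weight of this task vector
  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.53
      weight: 0.3
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```

With mergekit installed, a file like this is typically applied with `mergekit-yaml config.yaml ./output-model`.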