keitokei1994 committed e9eeb0f
Parent(s): a6a485a
Update README.md

README.md CHANGED
@@ -9,6 +9,8 @@ language:
 
 This model is a Mixture of Experts (MoE) language model created using the MergeKit tool.
 
+The GGUF version is available [here](https://huggingface.co/keitokei1994/Llama-3-8B-shisa-2x8B).
+
 By combining the original meta-llama/Meta-Llama-3-8B-Instruct with shisa-ai/shisa-v1-llama3-8b, which was fine-tuned on a Japanese dataset, this model aims to improve Japanese language capability while preserving the abilities of the original Meta-Llama-3-8B-Instruct.
 
 [Sdff-Ltba/LightChatAssistant-2x7B](https://huggingface.co/Sdff-Ltba/LightChatAssistant-2x7B) and
@@ -44,6 +46,8 @@ language:
 
 This model is a Mixture of Experts (MoE) language model created using the MergeKit tool.
 
+The GGUF version is available [here](https://huggingface.co/keitokei1994/Llama-3-8B-shisa-2x8B).
+
 By combining the original meta-llama/Meta-Llama-3-8B-Instruct with shisa-ai/shisa-v1-llama3-8b, which was fine-tuned on a Japanese dataset, this model aims to improve Japanese language capabilities while maintaining the abilities of the original Meta-Llama-3-8B-Instruct.
 
 Inspired by [Sdff-Ltba/LightChatAssistant-2x7B](https://huggingface.co/Sdff-Ltba/LightChatAssistant-2x7B) and [Aratako/LightChatAssistant-4x7B](https://huggingface.co/Aratako/LightChatAssistant-4x7B), I applied the same MoE approach to Llama 3. I am grateful to both of them.
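For reference, a merge like the one described above is typically specified with a mergekit-moe YAML config. The sketch below is a minimal, hypothetical example, not the author's actual config: only the two expert model names come from the README, while the `gate_mode`, `dtype`, and `positive_prompts` values are illustrative assumptions.

```yaml
# Hypothetical mergekit-moe config for a 2x8B Llama 3 MoE merge.
# gate_mode and positive_prompts are assumed for illustration only.
base_model: meta-llama/Meta-Llama-3-8B-Instruct
gate_mode: hidden        # initialize the router from hidden-state similarity to the prompts below
dtype: bfloat16
experts:
  - source_model: meta-llama/Meta-Llama-3-8B-Instruct
    positive_prompts:
      - "Respond to the following instruction in English."
  - source_model: shisa-ai/shisa-v1-llama3-8b
    positive_prompts:
      - "以下の指示に日本語で答えてください。"
```

Running `mergekit-moe config.yaml ./merged-model` on such a config would produce the merged checkpoint, with each 8B model kept intact as an expert and only the router initialized from the prompt hints.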