Update README.md
README.md CHANGED

---
license: apache-2.0
datasets:
- wikimedia/wikipedia
language:
- zh
- en
base_model:
- meta-llama/Llama-3.2-3B
---

Continued pretraining was done on wiki-zh (Chinese Wikipedia):

`dataset = load_dataset("wikimedia/wikipedia", "20231101.zh", split = "train",)`
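
As a rough illustration of that step, the sketch below loads the base model with unsloth and runs continued pretraining on the Wikipedia text via trl's SFTTrainer; the LoRA settings, sequence length, and training arguments are placeholders rather than the values actually used for this model, and the keyword layout assumes a trl version where SFTTrainer accepts them directly.

```python
# Illustrative sketch only: LoRA settings and hyperparameters are assumptions,
# not the configuration actually used for this model.
from unsloth import FastLanguageModel   # import unsloth first so its patches apply
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "meta-llama/Llama-3.2-3B",
    max_seq_length = 2048,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Chinese Wikipedia dump used for continued pretraining (as in the README).
dataset = load_dataset("wikimedia/wikipedia", "20231101.zh", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",   # Wikipedia records keep the article body in "text"
    max_seq_length = 2048,
    args = TrainingArguments(
        output_dir = "outputs-cpt",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 8,
        max_steps = 1000,          # placeholder; a real run covers far more data
        learning_rate = 5e-5,
    ),
)
trainer.train()
```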

This was followed by SFT on FreedomIntelligence/alpaca-gpt4-zh, again using unsloth.
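
A similarly hedged sketch of the SFT stage, continuing from the model and tokenizer above; it assumes alpaca-style instruction/output columns in FreedomIntelligence/alpaca-gpt4-zh and a simple prompt template, so check the dataset card for the actual schema before reusing it.

```python
# Illustrative SFT sketch; the column names and prompt template are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

sft_data = load_dataset("FreedomIntelligence/alpaca-gpt4-zh", split = "train")

PROMPT = """### Instruction:
{instruction}

### Response:
{output}"""

def to_text(example):
    # Assumes alpaca-style "instruction"/"output" columns; adjust if the
    # dataset's actual schema differs.
    return {"text": PROMPT.format(**example) + tokenizer.eos_token}

sft_data = sft_data.map(to_text)

trainer = SFTTrainer(
    model = model,                 # LoRA model from the continued-pretraining sketch
    tokenizer = tokenizer,
    train_dataset = sft_data,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        output_dir = "outputs-sft",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 8,
        num_train_epochs = 1,      # placeholder
        learning_rate = 2e-5,
    ),
)
trainer.train()

# unsloth can then export GGUF files such as model-unsloth-Q4_K_M.gguf:
model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
```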

Use the model-unsloth.gguf file or model-unsloth-Q4_K_M.gguf file in llama.cpp or a UI-based system such as GPT4All.
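
For instance, with the llama-cpp-python binding (one way to drive llama.cpp from Python; the file path and prompt below are placeholders):

```python
# Sketch: load the quantized GGUF with the llama-cpp-python binding.
# Assumes the file sits in the current directory.
from llama_cpp import Llama

llm = Llama(model_path = "model-unsloth-Q4_K_M.gguf", n_ctx = 2048)

out = llm.create_chat_completion(
    messages = [{"role": "user", "content": "用中文简单介绍一下这个模型。"}],
    max_tokens = 256,
)
print(out["choices"][0]["message"]["content"])
```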

Also, you can build the Ollama model with a Modelfile.

template
```
{{ .Response }}<|end_of_text|>
```

Or just use `ollama run lastmass/llama3.2-chinese`.
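
The same model can also be called from Python through the ollama client library, assuming the Ollama server is running and the model has been created or pulled as lastmass/llama3.2-chinese:

```python
# Sketch: chat with the model through the `ollama` Python package.
# Assumes the Ollama server is running and the model is available locally.
import ollama

response = ollama.chat(
    model = "lastmass/llama3.2-chinese",
    messages = [{"role": "user", "content": "你好，请用中文自我介绍。"}],
)
print(response["message"]["content"])
```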