Zane666 committed
Commit 8df1d12
1 Parent(s): a193e23

Update README.md

Files changed (1):
  1. README.md (+29 -2)
README.md CHANGED
@@ -5,5 +5,32 @@ datasets:
  - LooksJuicy/ruozhiba
  language:
  - zh
- - en
- ---
+ ---
+ # Model Card for Llama 3 8B Instruct (Quantized to 4-bit)
+
+ This model is a version of Llama 3 8B Instruct fine-tuned on the Chinese datasets YeungNLP/firefly-train-1.1M and LooksJuicy/ruozhiba, then quantized to 4-bit.
+
+ ## Model Details
+
+ ### Model Description
+
+ - **Developed by:** Zane
+ - **Model type:** Llama 3 8B Instruct (Quantized to 4-bit)
+ - **Language(s) (NLP):** Chinese (zh)
+ - **License:** Apache-2.0
+
+ ## How to Get Started with the Model
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "your-username/llama-3-8b-instruct-4bit-chinese"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ input_text = "请输入您的中文文本"  # "Please enter your Chinese text"
+ inputs = tokenizer(input_text, return_tensors="pt")
+ outputs = model.generate(inputs.input_ids, max_length=50)
+ generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(generated_text)
+ ```
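
A note on loading: the card's example uses a plain `from_pretrained` call. Since the card describes a 4-bit quantization, a reader may want to make the 4-bit load explicit with `bitsandbytes`. The sketch below is not part of the commit; it assumes the repository also ships transformers-format weights (the GGUF files themselves would instead be loaded with llama.cpp or a compatible runtime), reuses the placeholder repo id `your-username/llama-3-8b-instruct-4bit-chinese` from the card, and requires the `bitsandbytes` package plus a CUDA device.

```python
# Sketch: explicit 4-bit loading with bitsandbytes.
# Assumes transformers-format weights; the repo id below is the card's placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "your-username/llama-3-8b-instruct-4bit-chinese"  # placeholder id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit on load
    bnb_4bit_quant_type="nf4",              # NF4 quantization scheme
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "用一句话介绍一下大熊猫。"  # "Describe the giant panda in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```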