yentinglin committed · Commit 835f9a2 · Parent(s): 3440524

Update app.py
app.py CHANGED

@@ -44,7 +44,8 @@ Taiwan-LLaMa is a fine-tuned model specifically designed for traditional mandari
 
 Different versions of Taiwan-LLaMa are available:
 
-- **Taiwan-LLaMa
+- **Taiwan-LLaMa v2.0 (This demo)**: Cleaner pretraining, Better post-training
+- **Taiwan-LLaMa v1.0**: Optimized for Taiwanese Culture
 - **Taiwan-LLaMa v0.9**: Partial instruction set
 - **Taiwan-LLaMa v0.0**: No Traditional Mandarin pretraining
 
@@ -76,7 +77,7 @@ DEFAULT_MAX_NEW_TOKENS = 1024
 
 max_prompt_length = 4096 - MAX_MAX_NEW_TOKENS - 10
 
-model_name = "yentinglin/Taiwan-
+model_name = "yentinglin/Taiwan-LLM-7B-v2.0-chat"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
 with gr.Blocks() as demo:
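For context, a minimal sketch of how the checkpoint this commit switches the demo to could be loaded outside the Space. This is an illustration under assumptions, not the Space's actual app.py: only the model identifier comes from the diff; the use of AutoModelForCausalLM with device_map="auto" (which requires the accelerate package), the example prompt, and the generation settings are assumed here.

```python
# Minimal sketch (not the Space's app.py): load the model name introduced
# by this commit and run a single generation.
# Assumptions: transformers, torch, and accelerate are installed; the
# prompt and generation settings below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "yentinglin/Taiwan-LLM-7B-v2.0-chat"  # identifier from this commit
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# "Please introduce Taiwan's night-market culture in Traditional Chinese."
prompt = "請用繁體中文介紹台灣的夜市文化。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```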