Tags: Text Generation · Transformers · Safetensors · Thai · English · qwen2 · text-generation-inference · sft · trl · 4-bit precision · bitsandbytes · LoRA · Fine-Tuning with LoRA · LLM · GenAI · NT GenAI · ntgenai · lahnmah · NT Thai GPT · ntthaigpt · medical · medtech · HealthGPT · หลานม่า · NT Academy · conversational · Inference Endpoints
Update README.md
README.md (CHANGED)
@@ -80,8 +80,8 @@ model = AutoModelForCausalLM.from_pretrained("amornpan/openthaigpt-MedChatModelv
 input_text = "ใส่คำถามทางการแพทย์ที่นี่"
 inputs = tokenizer(input_text, return_tensors="pt")
 
-# Generate the output
-output = model.generate(**inputs)
+# Generate the output with a higher max length or max new tokens
+output = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
 
 # Decode and print the generated response, skipping special tokens
-print(tokenizer.decode(output[0], skip_special_tokens=True))
+print(tokenizer.decode(output[0], skip_special_tokens=True))
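The updated `generate` call turns on three sampling knobs: `max_new_tokens` caps how many tokens are produced beyond the prompt, `do_sample=True` switches from greedy decoding to random sampling, and `temperature=0.7` sharpens the sampled distribution. As a rough illustration of what these parameters control, here is a minimal pure-Python sketch of a generation loop; the `step_logits_fn` callback and the toy token ids are hypothetical stand-ins for the model's forward pass, not part of the transformers API:

```python
import math
import random

def sample_next_token(logits, temperature=0.7):
    """Temperature sampling: scale logits by 1/T, softmax, draw one token id.
    Lower T sharpens the distribution; as T -> 0 this approaches argmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                  # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

def generate(step_logits_fn, prompt_ids, max_new_tokens=100,
             do_sample=True, temperature=0.7, eos_id=None):
    """Minimal generation loop mirroring the knobs used in the README:
    stop after max_new_tokens, or earlier if the EOS token is produced."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = step_logits_fn(ids)                 # one "forward pass" over the sequence
        if do_sample:
            next_id = sample_next_token(logits, temperature)
        else:
            next_id = max(range(len(logits)), key=lambda i: logits[i])  # greedy decoding
        ids.append(next_id)
        if next_id == eos_id:
            break
    return ids
```

For example, with a toy callback whose logits always favor token 2, greedy decoding appends exactly `max_new_tokens` copies of that token, while the sampled path can occasionally pick other tokens; the real `model.generate` does the same bookkeeping with tensors and the model's actual logits.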