Update README.md #4
by SuperkingbasSKB - opened

README.md CHANGED
@@ -16,10 +16,10 @@ tags:
 - code
 - legal
 ---
-# OpenThaiLLM-
-**OpenThaiLLM-
+# OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B: Thai, Chinese & English Large Language Model
+**OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B** is a 7-billion-parameter instruct model designed for the Thai 🇹🇭 and Chinese 🇨🇳 languages.
 It demonstrates strong results and is optimized for application use cases: Retrieval-Augmented Generation (RAG), web deployment,
-constrained generation, and reasoning tasks.is a Thai 🇹🇭 & China 🇨🇳 large language model with 7 billion parameters, and it is based on Qwen2
+constrained generation, and reasoning tasks. It is a Thai 🇹🇭 & Chinese 🇨🇳 large language model with 7 billion parameters, based on Qwen2-7B.
 ## Introduction
 
 Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 7B Qwen2 model.
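Since the proposed README describes an instruction-tuned Qwen2-7B variant but this diff adds no usage snippet, here is a minimal sketch of loading the model with Hugging Face `transformers` using the standard Qwen2 instruct chat-template pattern. The repo id is a placeholder assumption; the PR does not name the final repository.

```python
# Minimal sketch, assuming the standard Qwen2 instruct usage pattern.
# The repo id below is a placeholder, not confirmed by this PR.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenThaiLLM-DoodNiLT-V1.0.0-Beta-7B"  # placeholder repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native dtype
    device_map="auto",   # requires `accelerate`; places layers on available devices
)

# Format the conversation with the tokenizer's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ ช่วยแนะนำประเทศไทยหน่อย"},  # Thai: "Hello, please tell me about Thailand."
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the echoed prompt.
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

Generation settings (temperature, repetition penalty, etc.) are left at their defaults above, since the PR text does not specify recommended values.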