Update README.md
README.md CHANGED
@@ -41,7 +41,8 @@ SeaLLM-13b models exhibit superior performance across a wide spectrum of linguis
 - DEMO: [SeaLLMs/SeaLLM-Chat-13b](https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat-13b) DEMO allows **batch-inference** for evaluation purposes.
 - Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
 - Model weights:
-  - [SeaLLM-7B-
+  - [SeaLLM-7B-Hybrid](https://huggingface.co/SeaLLMs/SeaLLM-7B-Hybrid): A "base" model trained on a mixture of unlabeled raw text and English supervised data in a hybrid setup, intended for few-shot prompting or fine-tuning (see the [technical report](https://arxiv.org/pdf/2312.00738.pdf) for details).
+  - [SeaLLM-7B-Chat](https://huggingface.co/SeaLLMs/SeaLLM-7B-chat): Lower capability than [SeaLLM-13B-Chat](https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat-13b) but much faster and more memory-efficient.
 
 
 <blockquote style="color:red">
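
The added bullets point to the SeaLLM-7B-Hybrid and SeaLLM-7B-chat checkpoints on the Hugging Face Hub. As a minimal sketch (not part of the commit itself), the chat checkpoint can presumably be loaded with the standard `transformers` causal-LM interface; the dtype, device placement, and Vietnamese prompt below are illustrative assumptions, not settings taken from the README.

```python
# Hedged example: load SeaLLMs/SeaLLM-7B-chat with Hugging Face transformers.
# The repo id comes from the README; everything else is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLM-7B-chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 fits the 7B weights on a single GPU
    device_map="auto",          # let accelerate place the weights automatically
)

prompt = "Xin chào! SeaLLM là gì?"  # illustrative Vietnamese prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For few-shot prompting or fine-tuning as described for SeaLLM-7B-Hybrid, the same loading pattern would apply with `model_id = "SeaLLMs/SeaLLM-7B-Hybrid"`.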