nxphi47 committed
Commit 7ec21dd
1 Parent(s): 8d22163

Update README.md

Files changed (1)
  1. README.md +4 -6
README.md CHANGED
@@ -46,14 +46,12 @@ We introduce [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5), th
  - Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
  - Model weights:
  - [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).
- - [SeaLLM-7B-v2-gguf](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf).
- - [SeaLLM-7B-v2-GGUF (thanks Lonestriker)](https://huggingface.co/LoneStriker/SeaLLM-7B-v2-GGUF). NOTE: use [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to work properly.
+ - [SeaLLM-7B-v2.5-GGUF](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF).
  - Run locally:
  - [LM-studio](https://lmstudio.ai/):
- - [SeaLLM-7B-v2-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q4_0.gguf) and [SeaLLM-7B-v2-q8_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/SeaLLM-7B-v2.q8_0.gguf).
- - LM-studio requires this [seallm.preset.json](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2-gguf/blob/main/seallm.preset.json) to set chat template properly.
- - [ollama](https://ollama.ai/) `ollama run nxphi47/seallm-7b-v2:q4_0`
- - [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [mlx-community/SeaLLM-7B-v2-4bit-mlx](https://huggingface.co/mlx-community/SeaLLM-7B-v2-4bit-mlx)
+ - [SeaLLM-7B-v2.5-q4_0-chatml](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5-chatml.Q4_K_M.gguf) with ChatML template (`<eos>` token changed to `<|im_end|>`)
+ - [SeaLLM-7B-v2.5-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5.Q4_K_M.gguf) - must use SeaLLM-7B-v2.5 chat format.
+ - [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized)

  <blockquote style="color:red">
  <p><strong style="color: red">Terms of Use and License</strong>:
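
A minimal sketch of how the ChatML-converted GGUF added above could be run outside LM-studio, using llama-cpp-python (an assumption; the README itself only lists LM-studio, ollama and MLX). The file path, prompt and sampling settings are placeholders; the prompt wrapping follows the note that `<eos>` is replaced by `<|im_end|>` in the ChatML variant.

```python
# Sketch only: assumes llama-cpp-python is installed and the ChatML GGUF
# (seallm-7b-v2.5-chatml.Q4_K_M.gguf) has been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="seallm-7b-v2.5-chatml.Q4_K_M.gguf", n_ctx=4096)

# ChatML format: wrap each turn in <|im_start|>/<|im_end|>, matching the
# README note that the <eos> token was changed to <|im_end|>.
prompt = (
    "<|im_start|>user\n"
    "Xin chào! Bạn là ai?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop on <|im_end|> so generation ends at the assistant's turn boundary.
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```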
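Similarly, a minimal sketch of loading the quantized MLX build on Apple Silicon with the mlx-lm package (an assumption; the README only links the MLX framework and the model repo). The prompt and token budget are illustrative placeholders.

```python
# Sketch only: assumes `pip install mlx-lm` on an Apple Silicon machine.
from mlx_lm import load, generate

# Downloads and loads the quantized MLX weights from the Hub repo listed above.
model, tokenizer = load("SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized")

# Placeholder prompt; max_tokens chosen arbitrarily for a short reply.
text = generate(model, tokenizer, prompt="Hello, who are you?", max_tokens=128)
print(text)
```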