---
license: other
pipeline_tag: text-generation
base_model: internlm/internlm2_5-7b-chat-1m
model_creator: InternLM
model_name: internlm2_5-7b-chat-1m
quantized_by: Second State Inc.
---
# internlm2_5-7b-chat-1m-GGUF
## Original Model
[internlm/internlm2_5-7b-chat-1m](https://huggingface.co/internlm/internlm2_5-7b-chat-1m)
## Run with LlamaEdge
- LlamaEdge version: [v0.12.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.3) and above
- Prompt template
- Prompt type
- chat: `chatml`
- tool use: `internlm-2-tool`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `1000000` (the model's 1M-token context window; pass a smaller `--ctx-size` if memory is limited)
- Run as LlamaEdge service
- Chat
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:internlm2_5-7b-chat-1m-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template chatml \
--ctx-size 1000000 \
--model-name internlm2_5-7b-chat-1m
```
- Tool use
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:internlm2_5-7b-chat-1m-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template internlm-2-tool \
--ctx-size 1000000 \
--model-name internlm2_5-7b-chat-1m
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. \
--nn-preload default:GGML:AUTO:internlm2_5-7b-chat-1m-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template chatml \
--ctx-size 1000000
```
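Once the API server from the service mode above is running, it exposes an OpenAI-compatible HTTP API. The sketch below prepares a chat request with an optional `tools` array for the tool-use mode; the listen address (assumed to be the server's default `0.0.0.0:8080`), the `/v1/chat/completions` path, and the `get_current_weather` function schema are illustrative assumptions, not part of this card.

```shell
# Sketch of an OpenAI-style request to a running llama-api-server.
# The endpoint path and default listen address (0.0.0.0:8080) are assumptions;
# the get_current_weather tool schema is made up for illustration.
cat > request.json <<'EOF'
{
  "model": "internlm2_5-7b-chat-1m",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston today?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string", "description": "City name"}
          },
          "required": ["city"]
        }
      }
    }
  ]
}
EOF
# Validate the payload locally before sending.
python3 -m json.tool request.json > /dev/null && echo "request.json is valid"
# Send it once the server is up (drop the "tools" field for plain chat):
# curl -sS http://localhost:8080/v1/chat/completions \
#   -H 'Content-Type: application/json' \
#   -d @request.json
```

For plain chat (the `chatml` prompt template), omit the `tools` field; for the `internlm-2-tool` template, the model may answer with a tool call that your client then executes and feeds back as a follow-up message.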
*Quantized with llama.cpp b3933*