Instructions for using internlm/Intern-S1-Pro with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use internlm/Intern-S1-Pro with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="internlm/Intern-S1-Pro", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("internlm/Intern-S1-Pro", trust_remote_code=True, dtype="auto")
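The "Load model directly" snippet above only loads the weights. Below is a minimal end-to-end generation sketch built on standard Transformers multimodal APIs; the AutoProcessor class and the chat-template call are assumptions (check the model card for the exact interface), and loading a 1T-parameter checkpoint this way requires the multi-node hardware described in the deployment guide below.

from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "internlm/Intern-S1-Pro"

# Assumption: the repo ships a processor that AutoProcessor can resolve with trust_remote_code.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, dtype="auto", device_map="auto")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]

# apply_chat_template with tokenize=True returns model-ready tensors for multimodal chats.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))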
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use internlm/Intern-S1-Pro with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "internlm/Intern-S1-Pro"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "internlm/Intern-S1-Pro",
        "messages": [
            {
                "role": "user",
                "content": [
                    { "type": "text", "text": "Describe this image in one sentence." },
                    { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
                ]
            }
        ]
    }'
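Since the endpoint is OpenAI-compatible, the official openai Python client can be used instead of curl. A minimal sketch, assuming the server from the step above is listening on localhost:8000:

from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the key can be any placeholder string.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internlm/Intern-S1-Pro",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)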
Use Docker
docker model run hf.co/internlm/Intern-S1-Pro
- SGLang
How to use internlm/Intern-S1-Pro with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "internlm/Intern-S1-Pro" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "internlm/Intern-S1-Pro",
        "messages": [
            {
                "role": "user",
                "content": [
                    { "type": "text", "text": "Describe this image in one sentence." },
                    { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
                ]
            }
        ]
    }'
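The SGLang endpoint is OpenAI-compatible as well; the sketch below adds streaming, assuming the server above is listening on localhost:30000:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# stream=True yields incremental chunks instead of one final message.
stream = client.chat.completions.create(
    model="internlm/Intern-S1-Pro",
    messages=[{"role": "user", "content": "Describe the Statue of Liberty in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()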
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "internlm/Intern-S1-Pro" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "internlm/Intern-S1-Pro",
        "messages": [
            {
                "role": "user",
                "content": [
                    { "type": "text", "text": "Describe this image in one sentence." },
                    { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
                ]
            }
        ]
    }'
- Docker Model Runner
How to use internlm/Intern-S1-Pro with Docker Model Runner:
docker model run hf.co/internlm/Intern-S1-Pro
Intern-S1-Pro Deployment Guide
The Intern-S1-Pro release is a 1T-parameter model stored in FP8 format. Deployment requires at least two 8-GPU H200 nodes, using either of the following configurations:
- Tensor Parallelism (TP)
- Data Parallelism (DP) + Expert Parallelism (EP)
NOTE: The deployment examples in this guide are provided for reference only and may not represent the latest or most optimized configurations. Inference frameworks are under active development — always consult the official documentation from each framework’s maintainers to ensure peak performance and compatibility.
LMDeploy
Required version: lmdeploy>=0.12.0
- Tensor Parallelism
# start ray on node 0 and node 1
# node 0
lmdeploy serve api_server internlm/Intern-S1-Pro --backend pytorch --tp 16
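Once the server is up it exposes an OpenAI-compatible API. A minimal smoke test, assuming LMDeploy's default api_server port 23333:

from openai import OpenAI

# Assumption: the api_server is reachable on its default port 23333.
client = OpenAI(base_url="http://localhost:23333/v1", api_key="EMPTY")

# List served models; the output should include internlm/Intern-S1-Pro.
print([m.id for m in client.models.list().data])

response = client.chat.completions.create(
    model="internlm/Intern-S1-Pro",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)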
- Data Parallelism + Expert Parallelism
# node 0, proxy server
lmdeploy serve proxy --server-name ${proxy_server_ip} --server-port ${proxy_server_port} --routing-strategy 'min_expected_latency' --serving-strategy Hybrid
# node 0
export LMDEPLOY_DP_MASTER_ADDR=${node0_ip}
export LMDEPLOY_DP_MASTER_PORT=29555
lmdeploy serve api_server \
internlm/Intern-S1-Pro \
--backend pytorch \
--tp 1 \
--dp 16 \
--ep 16 \
--proxy-url http://${proxy_server_ip}:${proxy_server_port} \
--nnodes 2 \
--node-rank 0 \
--reasoning-parser intern-s1 \
--tool-call-parser qwen3
# node 1
export LMDEPLOY_DP_MASTER_ADDR=${node0_ip}
export LMDEPLOY_DP_MASTER_PORT=29555
lmdeploy serve api_server \
internlm/Intern-S1-Pro \
--backend pytorch \
--tp 1 \
--dp 16 \
--ep 16 \
--proxy-url http://${proxy_server_ip}:${proxy_server_port} \
--nnodes 2 \
--node-rank 1 \
--reasoning-parser intern-s1 \
--tool-call-parser qwen3
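With both node ranks registered, clients should talk to the proxy rather than to individual api_server instances. A minimal sketch, assuming the proxy forwards the same OpenAI-compatible routes as the servers behind it; substitute the proxy_server_ip and proxy_server_port values used above:

from openai import OpenAI

# Assumption: the proxy routes OpenAI-compatible requests to the api_server ranks.
client = OpenAI(base_url="http://<proxy_server_ip>:<proxy_server_port>/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internlm/Intern-S1-Pro",
    messages=[{"role": "user", "content": "Summarize expert parallelism in two sentences."}],
)
print(response.choices[0].message.content)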
vLLM
- Tensor Parallelism + Expert Parallelism
# start ray on node 0 and node 1
# node 0
export VLLM_ENGINE_READY_TIMEOUT_S=10000
vllm serve internlm/Intern-S1-Pro \
--tensor-parallel-size 16 \
--enable-expert-parallel \
--distributed-executor-backend ray \
--max-model-len 65536 \
--trust-remote-code \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes
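Because the server is launched with --reasoning-parser and --enable-auto-tool-choice, chat responses separate the model's chain-of-thought from the final answer. A sketch of reading both fields, assuming the default serving port 8000 and vLLM's reasoning_content response extension:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internlm/Intern-S1-Pro",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)
message = response.choices[0].message
# With a reasoning parser enabled, vLLM returns the reasoning trace separately.
print("reasoning:", getattr(message, "reasoning_content", None))
print("answer:", message.content)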
- Data Parallelism + Expert Parallelism
# node 0
export VLLM_ENGINE_READY_TIMEOUT_S=10000
vllm serve internlm/Intern-S1-Pro \
--all2all-backend deepep_low_latency \
--tensor-parallel-size 1 \
--enable-expert-parallel \
--data-parallel-size 16 \
--data-parallel-size-local 8 \
--data-parallel-address ${node0_ip} \
--data-parallel-rpc-port 13345 \
--gpu_memory_utilization 0.8 \
--mm_processor_cache_gb=0 \
--media-io-kwargs '{"video": {"num_frames": 768, "fps": 2}}' \
--max-model-len 65536 \
--trust-remote-code \
--api-server-count=8 \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes
# node 1
export VLLM_ENGINE_READY_TIMEOUT_S=10000
vllm serve internlm/Intern-S1-Pro \
--all2all-backend deepep_low_latency \
--tensor-parallel-size 1 \
--enable-expert-parallel \
--data-parallel-size 16 \
--data-parallel-size-local 8 \
--data-parallel-start-rank 8 \
--data-parallel-address ${node0_ip} \
--data-parallel-rpc-port 13345 \
--gpu_memory_utilization 0.8 \
--mm_processor_cache_gb=0 \
--media-io-kwargs '{"video": {"num_frames": 768, "fps": 2}}' \
--max-model-len 65536 \
--trust-remote-code \
--headless \
--reasoning-parser deepseek_r1 \
--enable-auto-tool-choice \
--tool-call-parser hermes
NOTE: To prevent out-of-memory (OOM) errors, we limit the context length with --max-model-len 65536. For datasets requiring longer responses, you may increase this value as needed. Additionally, video inference can consume substantial memory in the vLLM API server processes; we therefore recommend setting --media-io-kwargs '{"video": {"num_frames": 768, "fps": 2}}' to constrain preprocessing memory usage during video benchmarking.
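For video workloads, a clip can be passed the same way images are passed in the earlier curl examples. A sketch, assuming vLLM's video_url content type for chat completions; the video URL below is a placeholder to be replaced with a real, reachable file:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internlm/Intern-S1-Pro",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize what happens in this video."},
            # Placeholder URL: substitute a real video file accessible to the server.
            {"type": "video_url", "video_url": {"url": "https://example.com/sample.mp4"}},
        ],
    }],
)
print(response.choices[0].message.content)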
SGLang
- Tensor Parallelism + Expert Parallelism
export DIST_ADDR=${master_node_ip}:${master_node_port}
# node 0
python3 -m sglang.launch_server \
--model-path internlm/Intern-S1-Pro \
--tp 16 \
--ep 16 \
--mem-fraction-static 0.85 \
--trust-remote-code \
--dist-init-addr ${DIST_ADDR} \
--nnodes 2 \
--attention-backend fa3 \
--mm-attention-backend fa3 \
--keep-mm-feature-on-device \
--node-rank 0 \
--reasoning-parser qwen3 \
--tool-call-parser qwen
# node 1
python3 -m sglang.launch_server \
--model-path internlm/Intern-S1-Pro \
--tp 16 \
--ep 16 \
--mem-fraction-static 0.85 \
--trust-remote-code \
--dist-init-addr ${DIST_ADDR} \
--nnodes 2 \
--attention-backend fa3 \
--mm-attention-backend fa3 \
--keep-mm-feature-on-device \
--node-rank 1 \
--reasoning-parser qwen3 \
--tool-call-parser qwen
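Once both node ranks report ready, node 0 (the rank that owns the HTTP endpoint) accepts traffic on port 30000. A quick verification sketch, assuming SGLang's /health route and the OpenAI-compatible chat endpoint:

import requests

base = "http://localhost:30000"  # node 0 from the commands above

# /health returns 200 once the server is ready to accept requests.
print(requests.get(f"{base}/health").status_code)

payload = {
    "model": "internlm/Intern-S1-Pro",
    "messages": [{"role": "user", "content": "Reply with the single word: ready"}],
}
r = requests.post(f"{base}/v1/chat/completions", json=payload, timeout=600)
print(r.json()["choices"][0]["message"]["content"])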