Instructions for using Skywork/Skywork-R1V-38B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Skywork/Skywork-R1V-38B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Skywork/Skywork-R1V-38B", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Skywork/Skywork-R1V-38B", trust_remote_code=True, dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Skywork/Skywork-R1V-38B with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Skywork/Skywork-R1V-38B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Skywork/Skywork-R1V-38B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/Skywork/Skywork-R1V-38B
```
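Once the server is running, it can also be called from Python. Below is a minimal client sketch, assuming the vLLM server above is listening on localhost:8000 and the official `openai` package is installed; the `api_key` value is a placeholder, since vLLM's OpenAI-compatible endpoint does not require a real key by default.

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="Skywork/Skywork-R1V-38B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The same client works against the SGLang server shown below; only `base_url` changes, to `http://localhost:30000/v1`.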
- SGLang
How to use Skywork/Skywork-R1V-38B with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Skywork/Skywork-R1V-38B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Skywork/Skywork-R1V-38B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Skywork/Skywork-R1V-38B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Skywork/Skywork-R1V-38B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
- Docker Model Runner
How to use Skywork/Skywork-R1V-38B with Docker Model Runner:
```bash
docker model run hf.co/Skywork/Skywork-R1V-38B
```
metadata
```yaml
pipeline_tag: image-text-to-text
library_name: vllm
license: mit
```
Skywork-R1V
📖 Technical Report | 💻 GitHub | 🌐 ModelScope
1. Model Introduction
| Model Name | Vision Encoder | Language Model | HF Link |
|---|---|---|---|
| Skywork-R1V-38B | InternViT-6B-448px-V2_5 | deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | 🤗 Link |
| Skywork-R1V-38B-qwq | InternViT-6B-448px-V2_5 | Qwen/QwQ-32B | - |
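Each variant pairs a vision encoder with a reasoning language model. As a rough illustration of that composition (this is a conceptual sketch, not the repository's actual code; the linear projector and forward pass are assumptions for illustration only):

```python
import torch
import torch.nn as nn

# Conceptual sketch only: shows how a vision encoder's features can be
# projected into an LLM's embedding space. The simple linear projector and
# fusion step here are illustrative, not the repository's implementation.
class VisionLanguageSketch(nn.Module):
    def __init__(self, vision_encoder: nn.Module, language_model: nn.Module,
                 vision_dim: int, llm_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder    # e.g. InternViT-6B-448px-V2_5
        self.projector = nn.Linear(vision_dim, llm_dim)
        self.language_model = language_model    # e.g. DeepSeek-R1-Distill-Qwen-32B

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor):
        image_feats = self.vision_encoder(pixel_values)   # (B, N, vision_dim)
        image_tokens = self.projector(image_feats)        # (B, N, llm_dim)
        # Prepend projected image tokens to the text embeddings, then let the
        # language model reason over the combined sequence.
        fused = torch.cat([image_tokens, text_embeds], dim=1)
        return self.language_model(inputs_embeds=fused)
```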
2. Features
- Visual Chain-of-Thought: Enables multi-step logical reasoning on visual inputs, breaking complex image-based problems into manageable steps (see the sketch after this list).
- Mathematical & Scientific Analysis: Capable of solving visual math problems and interpreting scientific/medical imagery with high precision.
- Cross-Modal Understanding: Seamlessly integrates text and images for richer, context-aware comprehension.
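As a concrete illustration of the visual chain-of-thought feature, here is a hedged sketch that reuses the Transformers pipeline from earlier in this card and simply asks for step-by-step reasoning; the prompt wording is illustrative, not a fixed API.

```python
# Sketch of eliciting step-by-step visual reasoning with the pipeline
# shown earlier in this card.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Skywork/Skywork-R1V-38B", trust_remote_code=True)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            # Asking explicitly for intermediate steps tends to surface the
            # model's chain-of-thought (illustrative prompt, not a fixed API).
            {"type": "text", "text": "Reason step by step: what animal is on the candy, and how can you tell?"},
        ],
    },
]
print(pipe(text=messages))
```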
3. Evaluation
Comparison with Larger-Scale Open-Source and Closed-Source Models
| Category | Benchmark | QwQ-32B-Preview (LLM) | InternVL-2.5-38B (VLM) | VILA 1.5-40B (VLM) | InternVL2-40B (VLM) | Skywork-R1V-38B (VLM) |
|---|---|---|---|---|---|---|
| Reasoning | MATH-500 | 90.6 | - | - | - | 94.0 |
| Reasoning | AIME 2024 | 50.0 | - | - | - | 72.0 |
| Reasoning | GPQA | 54.5 | - | - | - | 61.6 |
| Vision | MathVista(mini) | - | 71.9 | 49.5 | 63.7 | 67.5 |
| Vision | MMMU(Val) | - | 63.9 | 55.1 | 55.2 | 69.0 |
Evaluation results of state-of-the-art LLMs and VLMs
All scores are pass@1; the Vision column indicates whether the model accepts image input.

| Model | Vision | MATH-500 (pass@1) | AIME 2024 (pass@1) | GPQA (pass@1) | MathVista(mini) (pass@1) | MMMU(Val) (pass@1) |
|---|---|---|---|---|---|---|
| Qwen2.5-72B-Instruct | ❌ | 80.0 | 23.3 | 49.0 | - | - |
| Deepseek V3 | ❌ | 90.2 | 39.2 | 59.1 | - | - |
| Deepseek R1 | ❌ | 97.3 | 79.8 | 71.5 | - | - |
| Claude 3.5 Sonnet | ✅ | 78.3 | 16.0 | 65.0 | 65.3 | 66.4 |
| GPT-4o | ✅ | 74.6 | 9.3 | 49.9 | 63.8 | 69.1 |
| Kimi k1.5 | ✅ | 96.2 | 77.5 | - | 74.9 | 70.0 |
| Qwen2.5-VL-72B-Instruct | ✅ | - | - | - | 74.8 | 70.2 |
| LLaVA-Onevision-72B | ✅ | - | - | - | 67.5 | 56.8 |
| InternVL2-Llama3-76B | ✅ | - | - | - | 65.5 | 62.7 |
| InternVL2.5-78B | ✅ | - | - | - | 72.3 | 70.1 |
| Skywork-R1V-38B | ✅ | 94.0 | 72.0 | 61.6 | 67.5 | 69.0 |
4. Usage
1. Clone the Repository
```bash
git clone https://github.com/SkyworkAI/Skywork-R1V.git
cd Skywork-R1V/inference
```
2. Set Up the Environment
```bash
conda create -n r1-v python=3.10
conda activate r1-v
bash setup.sh
```
3. Run the Inference Script
```bash
CUDA_VISIBLE_DEVICES="0,1" python inference_with_transformers.py \
    --model_path path \
    --image_paths image1_path \
    --question "your question"
```
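Alternatively, the checkpoint can be loaded directly with Transformers. A minimal sketch, assuming the `accelerate` package is installed so the 38B weights can be sharded across the visible GPUs:

```python
# Minimal direct-loading sketch (an alternative to the repo's script);
# assumes `accelerate` is installed for device_map="auto" sharding.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    "Skywork/Skywork-R1V-38B",
    trust_remote_code=True,  # the checkpoint ships custom modeling code
    dtype="auto",
    device_map="auto",       # shard layers across CUDA_VISIBLE_DEVICES
)
tokenizer = AutoTokenizer.from_pretrained(
    "Skywork/Skywork-R1V-38B", trust_remote_code=True
)
```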
5. Citation
If you use Skywork-R1V in your research, please cite:
```bibtex
@misc{peng2025skyworkr1vpioneeringmultimodal,
  title={Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought},
  author={Yi Peng and Chris and Xiaokun Wang and Yichen Wei and Jiangbo Pei and Weijie Qiu and Ai Jian and Yunzhuo Hao and Jiachun Pan and Tianyidan Xie and Li Ge and Rongxian Zhuang and Xuchen Song and Yang Liu and Yahui Zhou},
  year={2025},
  eprint={2504.05599},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.05599},
}
```
This project is released under the MIT license.