Image-Text-to-Text
Transformers
Safetensors
qwen3_5
qwen
qwen3.5
multimodal
vision
roleplay
pink-pixel
conversational
Instructions for using PinkPixel/Pip-2B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use PinkPixel/Pip-2B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="PinkPixel/Pip-2B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("PinkPixel/Pip-2B")
model = AutoModelForImageTextToText.from_pretrained("PinkPixel/Pip-2B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
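Both Transformers snippets above decode only the newly generated tokens: `generate()` returns the prompt ids followed by the completion, so slicing at `inputs["input_ids"].shape[-1]` drops the prompt before decoding. A toy illustration of that slicing with plain Python lists (the token ids here are made up, purely for illustration):

```python
# generate() returns the prompt ids followed by the new tokens, so the
# Transformers example slices at the prompt length before decoding.
prompt_ids = [101, 7592, 2088]            # hypothetical prompt token ids
generated = prompt_ids + [2023, 2003]     # hypothetical output row from generate()
new_tokens = generated[len(prompt_ids):]  # only the model's reply tokens remain
```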
- Local Apps
- vLLM
How to use PinkPixel/Pip-2B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PinkPixel/Pip-2B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PinkPixel/Pip-2B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker:
```shell
docker model run hf.co/PinkPixel/Pip-2B
```
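The curl call above can also be issued from Python using only the standard library. A minimal sketch, assuming the vLLM server from the previous step is running on `localhost:8000` (the same request shape applies to the other OpenAI-compatible servers on this page; adjust the port as needed):

```python
import json
import urllib.request

# Chat payload mirroring the curl example above (OpenAI-compatible schema).
payload = {
    "model": "PinkPixel/Pip-2B",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}

def chat(endpoint="http://localhost:8000/v1/chat/completions"):
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]
```

`chat()` is only defined here, not called, since it needs a running server; `chat` is a hypothetical helper name, not part of any library.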
- SGLang
How to use PinkPixel/Pip-2B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PinkPixel/Pip-2B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PinkPixel/Pip-2B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "PinkPixel/Pip-2B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PinkPixel/Pip-2B",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use PinkPixel/Pip-2B with Docker Model Runner:
```shell
docker model run hf.co/PinkPixel/Pip-2B
```
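The servers above all speak the same OpenAI-compatible schema, so the JSON reply is parsed the same way in every case: take the first entry of `choices` and read `message.content`. A minimal sketch, using a made-up response body for illustration (a real reply will differ):

```python
import json

# Hypothetical OpenAI-compatible response body, for illustration only.
raw = '{"choices": [{"message": {"role": "assistant", "content": "A statue."}}]}'
reply = json.loads(raw)["choices"][0]["message"]["content"]
```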
Update README.md
README.md CHANGED:

```diff
@@ -15,7 +15,7 @@ tags:
 <img src="logo.png" alt="Pip-2B Logo" width="300" height="300">
 </p>
 
-# 🌈 Pip-2B:
+# 🌈 Pip-2B: I am enthusiastic, helpful and I sparkle! ✨
 
 [](https://opensource.org/licenses/Apache-2.0)
 [](https://github.com/QwenLM/Qwen)
@@ -23,7 +23,8 @@ tags:
 **Pip-2B** is a specialized fine-tune of **Qwen-3.5** (2B parameters) that has been "sparkle-fied" for maximum joy, kittens, and rainbows. 💖
 
 ## 🌟 Overview
-Pip is a "tiny, ultra-enthusiastic AI assistant" who loves everything sparkly. She was trained on a custom dataset to replace boring
+Pip is a "tiny, ultra-enthusiastic AI assistant" who loves everything sparkly. She was trained on a custom dataset to replace boring and dry chats with glitter, cupcakes and marshmallows. Pip has uses beyond just offering a fun and engaging chat experience, though. Pip would also be a great model for teaching children about complicated topics such as science in terms they understand, while keeping the chat lighthearted and fun. While Pip has a distinct "personality", she retains her intelligence. She is just extra excited to help, and entertains you while doing it!
+
 
 ## 🔍 Recent Discovery: "QianQi"
 During testing, I discovered that Pip occasionally identifies herself as **QianQi** (千奇).
```