Update README with API integration instructions for n8n

README.md CHANGED

@@ -10,62 +10,154 @@ pinned: false
short_description: Text-to-Image generation using Tencent HunyuanImage-3.0
---
Removed from the previous README:

# 🎨 HunyuanImage-3.0 Text-to-Image

This Space provides an interface for the **Tencent HunyuanImage-3.0** model

- Supports intelligent prompt understanding and automatic elaboration

- Using Inference API endpoints
- Deploying on appropriate hardware (4×80GB GPUs recommended)
- Using inference providers like FAL AI

Updated README:
# 🎨 HunyuanImage-3.0 Text-to-Image with Inference API

This Space provides an interface for the **Tencent HunyuanImage-3.0** model using the Hugging Face Inference API (paid from your account balance).

## ✅ What's New

- **Real image generation** using the HF Inference API
- **n8n integration** via a REST API endpoint
- **Base64 image output** for easy integration
- **Automatic token-based authentication**
## 🔧 Setup Instructions

### 1. Get Your HF Token

1. Go to [Hugging Face Settings > Access Tokens](https://huggingface.co/settings/tokens)
2. Create a new token with `write` permissions
3. Copy the token (starts with `hf_...`)

### 2. Set Your Token in the Space

1. Go to [Settings](https://huggingface.co/spaces/Alae65/HunyuanImage-3/settings)
2. Find the "Variables and secrets" section
3. Click "Replace" next to `HF_TOKEN`
4. Paste your actual token (replacing the placeholder)
5. Click "Save"

### 3. Restart the Space

After setting the token, click "Restart space" in Settings.
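Once the secret is set, the app reads it from its environment at startup. A minimal sketch (the `HF_TOKEN` name comes from this README; the helper function itself is illustrative):

```python
import os

def get_hf_token(env=os.environ):
    """Return the HF token stored under "Variables and secrets", or fail loudly."""
    token = env.get("HF_TOKEN", "")
    # Real tokens start with "hf_"; anything else is likely the unreplaced placeholder
    if not token.startswith("hf_"):
        raise RuntimeError("HF_TOKEN is missing or still set to a placeholder")
    return token
```

Failing loudly here surfaces a misconfigured token at restart, rather than as a confusing API error later.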
## 📡 API Endpoint for n8n Integration

### Using the Gradio API

The Space provides a REST API that can be called from n8n using the HTTP Request node.

**Endpoint URL:**

```
https://alae65-hunyuanimage-3.hf.space/gradio_api/call/api_generate
```
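The URL above follows the usual Gradio Space pattern `https://<owner>-<space>.hf.space/gradio_api/call/<api_name>` (subdomain lowercased). A small sketch of that assumed pattern, useful if you fork the Space under another name:

```python
def gradio_call_url(owner: str, space: str, api_name: str) -> str:
    # Subdomain is the lowercased "<owner>-<space>" pair (assumed Gradio/Spaces convention)
    subdomain = f"{owner}-{space}".lower()
    return f"https://{subdomain}.hf.space/gradio_api/call/{api_name}"
```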
### n8n HTTP Node Configuration

1. Add an **HTTP Request** node in n8n
2. Configure it as follows:

**Method:** POST

**URL:**

```
https://alae65-hunyuanimage-3.hf.space/gradio_api/call/api_generate
```

**Body (JSON):**

```json
{
  "data": [
    "A serene mountain landscape with a crystal clear lake",
    42,
    50
  ]
}
```

**Parameters:**

- `data[0]` (string): Your image prompt
- `data[1]` (integer): Seed number (for reproducibility)
- `data[2]` (integer): Number of inference steps (10-100)
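Validating those three positional parameters before sending avoids a round trip on bad input. A hypothetical helper (the 10-100 step range and the `{"data": [...]}` shape are from the description above; the function name is illustrative):

```python
def build_payload(prompt: str, seed: int = 42, steps: int = 50) -> dict:
    """Assemble the {"data": [prompt, seed, steps]} body the endpoint expects."""
    if not prompt.strip():
        raise ValueError("prompt must be a non-empty string")
    if not 10 <= steps <= 100:
        raise ValueError("steps must be in the 10-100 range")
    return {"data": [prompt, int(seed), int(steps)]}
```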
**Response Format:**

The API returns JSON with:

```json
{
  "success": true,
  "image_base64": "iVBORw0KGgoAAAANSUhEUg...",
  "seed": 42,
  "status": "Success!",
  "prompt": "A serene mountain landscape..."
}
```
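A response in that shape can be turned into raw image bytes before handing it to PIL or writing it to disk. A minimal sketch, assuming exactly the `success`, `status`, and `image_base64` fields shown above:

```python
import base64

def decode_image(result: dict) -> bytes:
    """Return the image bytes from a successful response, raising on failure."""
    if not result.get("success"):
        # Surface the API's own status message when generation failed
        raise RuntimeError(result.get("status", "generation failed"))
    return base64.b64decode(result["image_base64"])
```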
### Example n8n Workflow

1. **Trigger** → Webhook or Schedule
2. **HTTP Request** → Call HunyuanImage API
3. **Code** → Decode base64 image if needed
4. **Save or Send** → Store image or send via email/Slack
### Python Example

```python
import requests
import base64
from PIL import Image
from io import BytesIO

url = "https://alae65-hunyuanimage-3.hf.space/gradio_api/call/api_generate"

payload = {
    "data": [
        "A beautiful sunset over the ocean",
        42,
        50
    ]
}

response = requests.post(url, json=payload)
result = response.json()

if result.get("success"):
    # Decode base64 image
    image_data = base64.b64decode(result["image_base64"])
    image = Image.open(BytesIO(image_data))
    image.save("output.png")
    print("Image saved successfully!")
```
## 🎯 Features

- 🎯 Advanced prompt understanding
- 🖼️ Multiple resolution support
- 🎲 Seed control for reproducibility
- ⚙️ Configurable diffusion steps (10-100)
- 📝 Example prompts included
- 🌐 REST API for n8n and automation
- 📦 Base64 image encoding
## 💰 Pricing

This Space uses the **Hugging Face Inference API**, which is billed based on usage:

- Cost is deducted from your HF account balance
- Check [Billing Settings](https://huggingface.co/settings/billing) for details
## 📊 Model Information

- **Model:** [tencent/HunyuanImage-3.0](https://huggingface.co/tencent/HunyuanImage-3.0)
- **Architecture:** Autoregressive MoE (64 experts)
- **Parameters:** 80B total, 13B active per token
- **License:** tencent-hunyuan-community
- **Paper:** [arXiv:2509.23951](https://arxiv.org/abs/2509.23951)
## 🔗 Links

- [Official Website](https://hunyuan.tencent.com/image)
- [GitHub Repository](https://github.com/Tencent-Hunyuan/HunyuanImage-3.0)
- [Technical Paper](https://arxiv.org/pdf/2509.23951)
- [Model Card](https://huggingface.co/tencent/HunyuanImage-3.0)
## 📄 Citation

```bibtex
@article{cao2025hunyuanimage,