Alae65 committed on
Commit
4ba0b46
· verified ·
1 Parent(s): 0f30eec

Update README with API integration instructions for n8n

Files changed (1)
  1. README.md +126 -34
README.md CHANGED
@@ -10,62 +10,154 @@ pinned: false
 short_description: Text-to-Image generation using Tencent HunyuanImage-3.0
 ---

- # 🎨 HunyuanImage-3.0 Text-to-Image Generation

- This Space provides an interface for the **Tencent HunyuanImage-3.0** model, a powerful native multimodal model for image generation.

- ## About HunyuanImage-3.0

- HunyuanImage-3.0 is a groundbreaking model that:
- - Features 80B total parameters with 13B activated per token (MoE architecture)
- - Unifies multimodal understanding and generation in an autoregressive framework
- - Achieves performance comparable to leading closed-source models
- - Supports intelligent prompt understanding and automatic elaboration

- ## ⚠️ Important Notes

- **Hardware Requirements:**
- - Direct inference requires **3×80GB GPU memory** (240GB total)
- - ZeroGPU is insufficient for full model inference
- - For production use, consider:
-   - Using Inference API endpoints
-   - Deploying on appropriate hardware (4×80GB GPUs recommended)
-   - Using inference providers like FAL AI

- **Current Implementation:**
- This Space demonstrates the UI structure and configuration. For actual inference:
- 1. The model needs to be loaded on appropriate hardware
- 2. Or integrated with the Inference API/providers
- 3. Or run with model quantization techniques

- ## Model Information

- - **Model:** [tencent/HunyuanImage-3.0](https://huggingface.co/tencent/HunyuanImage-3.0)
- - **Architecture:** Autoregressive MoE (64 experts)
- - **Parameters:** 80B total, 13B active per token
- - **License:** tencent-hunyuan-community
- - **Paper:** [arXiv:2509.23951](https://arxiv.org/abs/2509.23951)

- ## Features

  - 🎯 Advanced prompt understanding
- - 🖼️ Multiple resolution support (auto, 1024x1024, 1280x768, 768x1280)
  - 🎲 Seed control for reproducibility
- - ⚙️ Configurable diffusion steps
  - 📝 Example prompts included

- ## API Endpoint (Coming Soon)

- This Space will support API endpoints for integration with n8n and other workflow tools.

- ## Links

  - [Official Website](https://hunyuan.tencent.com/image)
  - [GitHub Repository](https://github.com/Tencent-Hunyuan/HunyuanImage-3.0)
  - [Technical Paper](https://arxiv.org/pdf/2509.23951)
  - [Model Card](https://huggingface.co/tencent/HunyuanImage-3.0)

- ## Citation

  ```bibtex
  @article{cao2025hunyuanimage,
 short_description: Text-to-Image generation using Tencent HunyuanImage-3.0
 ---

+ # 🎨 HunyuanImage-3.0 Text-to-Image with Inference API

+ This Space provides an interface for the **Tencent HunyuanImage-3.0** model using the Hugging Face Inference API (billed to your account balance).

+ ## ✅ What's New

+ - **Real image generation** using the HF Inference API
+ - **n8n integration** via a REST API endpoint
+ - **Base64 image output** for easy integration
+ - **Automatic token-based authentication**
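Base64 output means the generated image travels as plain text inside the JSON response, so any HTTP client can consume it without multipart handling. A minimal round-trip sketch (the byte string is a placeholder, not a real image):

```python
import base64

# Placeholder bytes standing in for a generated PNG (not a real image).
image_bytes = b"\x89PNG\r\n\x1a\nplaceholder"

# Encode for transport inside a JSON field such as "image_base64"...
encoded = base64.b64encode(image_bytes).decode("ascii")

# ...and decode on the receiving side (e.g. an n8n Code node or a script).
decoded = base64.b64decode(encoded)
assert decoded == image_bytes
```

Note that base64 inflates payload size by roughly a third, which is worth keeping in mind for large images.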
 
+ ## 🔧 Setup Instructions

+ ### 1. Get Your HF Token
+ 1. Go to [Hugging Face Settings > Access Tokens](https://huggingface.co/settings/tokens)
+ 2. Create a new token with `write` permissions
+ 3. Copy the token (it starts with `hf_...`)

+ ### 2. Set Your Token in the Space
+ 1. Go to [Settings](https://huggingface.co/spaces/Alae65/HunyuanImage-3/settings)
+ 2. Find the "Variables and secrets" section
+ 3. Click "Replace" next to `HF_TOKEN`
+ 4. Paste your actual token (replacing the placeholder)
+ 5. Click "Save"

+ ### 3. Restart the Space
+ After setting the token, click "Restart space" in Settings.
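The Space sees the secret as an environment variable at startup. A quick sanity check you could drop into the app code (assuming the secret is named `HF_TOKEN`, as in step 2 above):

```python
import os

# The secret is exposed as an environment variable; the name HF_TOKEN
# follows the setup steps above (an assumption about this Space).
token = os.environ.get("HF_TOKEN")

if token is None or not token.startswith("hf_"):
    print("HF_TOKEN is missing or looks like a placeholder - check Space settings")
else:
    print("HF_TOKEN is set")
```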

+ ## 📡 API Endpoint for n8n Integration
+
+ ### Using the Gradio API
+
+ The Space exposes a REST API that can be called from n8n using the HTTP Request node.
+
+ **Endpoint URL:**
+ ```
+ https://alae65-hunyuanimage-3.hf.space/gradio_api/call/api_generate
+ ```
+
+ ### n8n HTTP Node Configuration
+
+ 1. Add an **HTTP Request** node in n8n
+ 2. Configure it as follows:
+
+ **Method:** POST
+
+ **URL:**
+ ```
+ https://alae65-hunyuanimage-3.hf.space/gradio_api/call/api_generate
+ ```
+
+ **Body (JSON):**
+ ```json
+ {
+   "data": [
+     "A serene mountain landscape with a crystal clear lake",
+     42,
+     50
+   ]
+ }
+ ```
+
+ **Parameters:**
+ - `data[0]` (string): your image prompt
+ - `data[1]` (integer): seed (for reproducibility)
+ - `data[2]` (integer): number of inference steps (10-100)
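Because the `data` array is positional, a wrong ordering fails silently. A small hypothetical helper (the function name is mine, not part of the Space) that builds and range-checks the payload:

```python
def build_payload(prompt: str, seed: int = 42, steps: int = 50) -> dict:
    """Build the positional `data` payload: [prompt, seed, steps]."""
    if not prompt:
        raise ValueError("prompt must be non-empty")
    if not 10 <= steps <= 100:
        raise ValueError("steps must be in the 10-100 range")
    return {"data": [prompt, seed, steps]}

payload = build_payload("A serene mountain landscape", seed=7, steps=30)
assert payload == {"data": ["A serene mountain landscape", 7, 30]}
```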
+
+ **Response Format:**
+ The API returns a JSON object:
+ ```json
+ {
+   "success": true,
+   "image_base64": "iVBORw0KGgoAAAANSUhEUg...",
+   "seed": 42,
+   "status": "Success!",
+   "prompt": "A serene mountain landscape..."
+ }
+ ```
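Before decoding, it is worth guarding against a failed generation. A minimal sketch using the field names from the response format above (the error-handling behavior is my assumption, not documented by the Space):

```python
import base64

def extract_image(result: dict) -> bytes:
    """Return decoded image bytes, raising if the generation failed."""
    if not result.get("success"):
        raise RuntimeError(f"generation failed: {result.get('status', 'unknown')}")
    return base64.b64decode(result["image_base64"])

# Simulated response in the documented shape (not real model output).
sample = {
    "success": True,
    "image_base64": base64.b64encode(b"png-bytes").decode("ascii"),
    "seed": 42,
}
assert extract_image(sample) == b"png-bytes"
```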
+
+ ### Example n8n Workflow

+ 1. **Trigger** → Webhook or Schedule
+ 2. **HTTP Request** → Call the HunyuanImage API
+ 3. **Code** → Decode the base64 image if needed
+ 4. **Save or Send** → Store the image or send it via email/Slack
+
+ ### Python Example
+
+ ```python
+ import base64
+ from io import BytesIO
+
+ import requests
+ from PIL import Image
+
+ url = "https://alae65-hunyuanimage-3.hf.space/gradio_api/call/api_generate"
+
+ payload = {
+     "data": [
+         "A beautiful sunset over the ocean",  # prompt
+         42,                                   # seed
+         50                                    # inference steps
+     ]
+ }
+
+ response = requests.post(url, json=payload)
+ response.raise_for_status()
+ result = response.json()
+
+ if result.get("success"):
+     # Decode the base64 image and save it to disk
+     image_data = base64.b64decode(result["image_base64"])
+     image = Image.open(BytesIO(image_data))
+     image.save("output.png")
+     print("Image saved successfully!")
+ ```
+
+ ## 🎯 Features

  - 🎯 Advanced prompt understanding
+ - 🖼️ Multiple resolution support
  - 🎲 Seed control for reproducibility
+ - ⚙️ Configurable diffusion steps (10-100)
  - 📝 Example prompts included
+ - 🔌 REST API for n8n and automation
+ - 📦 Base64 image encoding

+ ## 💰 Pricing

+ This Space uses the **Hugging Face Inference API**, which is billed based on usage:
+ - Costs are deducted from your HF account balance
+ - Check [Billing Settings](https://huggingface.co/settings/billing) for details
+
+ ## 📚 Model Information
+
+ - **Model:** [tencent/HunyuanImage-3.0](https://huggingface.co/tencent/HunyuanImage-3.0)
+ - **Architecture:** Autoregressive MoE (64 experts)
+ - **Parameters:** 80B total, 13B active per token
+ - **License:** tencent-hunyuan-community
+ - **Paper:** [arXiv:2509.23951](https://arxiv.org/abs/2509.23951)

+ ## 🔗 Links

  - [Official Website](https://hunyuan.tencent.com/image)
  - [GitHub Repository](https://github.com/Tencent-Hunyuan/HunyuanImage-3.0)
  - [Technical Paper](https://arxiv.org/pdf/2509.23951)
  - [Model Card](https://huggingface.co/tencent/HunyuanImage-3.0)

+ ## 📝 Citation

  ```bibtex
  @article{cao2025hunyuanimage,