Update README.md
README.md CHANGED
@@ -1,3 +1,9 @@
+---
+pipeline_tag: text-to-image
+license_name: tencent-hunyuan-community
+license: other
+license_link: LICENSE
+---
 
 <div align="center">
 
@@ -346,5 +352,4 @@ We extend our heartfelt gratitude to the following open-source projects and communities
 * 🎨 [Diffusers](https://github.com/huggingface/diffusers) - Diffusion models library
 * 🤗 [HuggingFace](https://huggingface.co/) - AI model hub and community
 * ⚡ [FlashAttention](https://github.com/Dao-AILab/flash-attention) - Memory-efficient attention
-* 🚀 [FlashInfer](https://github.com/flashinfer-ai/flashinfer) - Optimized inference engine
-
+* 🚀 [FlashInfer](https://github.com/flashinfer-ai/flashinfer) - Optimized inference engine
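For context on the block added at the top of the file: on the Hugging Face Hub, YAML front matter at the head of README.md is model card metadata. A minimal annotated sketch of the resulting file head, keeping the commit's key order (the comments are explanatory notes, not part of the commit):

```yaml
---
pipeline_tag: text-to-image              # lists the model under the Hub's text-to-image task filter
license_name: tencent-hunyuan-community  # display name for the custom license
license: other                           # license is not one of the Hub's built-in choices
license_link: LICENSE                    # relative path to the license file in this repo
---
```

With `license: other`, the Hub expects `license_name` and `license_link` so the model page can point visitors to the custom license text shipped in the repository.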