Pretty cool. Could use something like that, just a bit bigger, to replace T5-XL and T5-XXL.
Corwin Black (Mescalamba)
AI & ML interests: image denoising
Mescalamba's activity
Gemma 2 2b quantized doesn't work
20 · #1 opened 2 months ago by eeditor1055
When I run in comfyui
4 · #10 opened 2 months ago by layde
Prompt ComfyUI image VS Offical demo
2 · #2 opened 2 months ago by Dakerqi
Any LLM?
I'm not sure it's possible for something like T5 XXL, which is unfortunate, because that thing is used as the text-encoder input for quite a few image diffusion models, and it's censored pretty heavily even as an encoder-only part.
Very good!
#1 opened 4 months ago by Mescalamba
It does seem really cool but..
#9 opened 4 months ago by Mescalamba
Its really good but..
#3 opened 5 months ago by Mescalamba
It's a great experimental project! how can I run the model on ComfyUI and get the best result?
12 · #1 opened 7 months ago by wikeeyang

Possible to work with 8GB VRAM and 16GB RAM?
4 · #7 opened 6 months ago by krigeta
need fp8 for speed
3 · #1 opened 6 months ago by Ai11Ali