Radamés Ajna

radames


radames's activity

replied to Sentdex's post 1 day ago
replied to renyuxi's post 3 days ago

Hi @renyuxi, thanks for sharing this update! 8 steps with CFG and negative prompts is amazing!

posted an update 4 days ago
I've built a custom component that integrates the Rerun web viewer with Gradio, making it easier to share your demos as Gradio apps.

Basic snippet
# pip install gradio_rerun gradio
import gradio as gr
from gradio_rerun import Rerun

# Upload one or more .rrd recordings; the Rerun component renders
# them in an embedded web viewer.
gr.Interface(
    inputs=gr.File(file_count="multiple", type="filepath"),
    outputs=Rerun(height=900),
    fn=lambda file_paths: file_paths,  # pass the uploaded paths straight through
).launch()

More details here radames/gradio_rerun
Source https://github.com/radames/gradio-rerun-viewer

Follow Rerun here https://huggingface.co/rerun
replied to oliveryanzuolu's post 10 days ago
posted an update 11 days ago
posted an update 11 days ago
replied to andrewrreed's post 11 days ago

Very interesting, @andrewrreed, I was completely unaware of this feature! Do you know of any other strategies for grounded generation in models like LLaMA or Mistral?

posted an update 20 days ago
posted an update about 1 month ago
Following up on @vikhyatk 's Moondream2 update and @santiagomed 's implementation on Candle, I quickly put together a WASM module so that you can try running the ~1.5GB quantized model in the browser. Perhaps the next step is to rewrite it using https://github.com/huggingface/ratchet and run it even faster with WebGPU, @FL33TW00D-HF .

radames/Candle-Moondream-2

ps: I have a collection of all Candle WASM demos here radames/candle-wasm-examples-650898dee13ff96230ce3e1f
replied to freddyaboulton's post about 1 month ago

Nice!! Can you set the JPEG quality as well?
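For reference, a quick sketch of how JPEG quality can be set with Pillow (an assumption; the post may use a different image library):

```python
# pip install Pillow
from io import BytesIO
from PIL import Image

# A noisy test image compresses very differently at different qualities.
img = Image.effect_noise((256, 256), 64).convert("RGB")

low, high = BytesIO(), BytesIO()
img.save(low, format="JPEG", quality=30)   # quality is 1-95; lower = smaller file
img.save(high, format="JPEG", quality=90)
print(len(low.getvalue()), len(high.getvalue()))
```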

replied to chansung's post about 1 month ago
replied to Wauplin's post about 1 month ago
posted an update about 1 month ago
Testing the new pix2pix-Turbo in real time, a very interesting GAN architecture that leverages the SD-Turbo model. Here I'm using the edge2image LoRA with single-step inference 🤯

It's very interesting that the quality is comparable to ControlNet Canny, but in a single step. Looking forward to when they release the code: https://github.com/GaParmar/img2img-turbo/issues/1

I've been keeping a list of fast diffusion model pipelines together with this real-time websocket app. Have a look if you want to test it locally, or check out the demo here on Spaces.

radames/real-time-pix2pix-turbo

GitHub app:
https://github.com/radames/Real-Time-Latent-Consistency-Model/

You can also check the authors' img2img sketch model here

gparmar/img2img-turbo-sketch

Refs:
One-Step Image Translation with Text-to-Image Models (2403.12036)

cc @gparmar @junyanz
replied to visheratin's post 2 months ago

Hi @visheratin , do you have any guides on how to train a similar model? Phi-2 + SigLIP vision encoder?

replied to victor's post 4 months ago

Very cool! I just ordered an RPi 5 to run some tests, along with this awesome mic HAT


replied to victor's post 4 months ago

I know it's possible to run real-time Whisper on a Raspberry Pi with whisper.cpp @ggerganov

replied to victor's post 4 months ago

Are you thinking of running it on a device or in the cloud?

replied to abhishek's post 4 months ago