Abubakar Abid (abidlabs)

AI & ML interests

self-supervised learning, applications to medicine & biology, interpretation, reproducibility

abidlabs's activity

posted an update 15 days ago
Open Models vs. Closed APIs for Software Engineers
-----------------------------------------------------------------------

If you're an ML researcher / scientist, you probably don't need much convincing to use open models instead of closed APIs -- open models give you reproducibility and let you deeply investigate the model's behavior.

But what if you are a software engineer building products on top of LLMs? I'd argue that open models are a much better option even if you are using them via APIs, for at least three reasons:

1) The most obvious reason is the reliability of your product. Relying on a closed API means your product has a single point of failure. By contrast, at least 7 different API providers already offer Llama 3 70B, and there are libraries that abstract over these providers so a single request can be routed to whichever one has the best availability or latency (a minimal sketch of this pattern follows this list).

2) Another benefit is consistency if you eventually go local. If your product takes off, it will be more economical and lower latency to run a dedicated inference endpoint in your VPC than to call external APIs. If you started with an open model, you can deploy that same model locally: you don't need to modify prompts or change any surrounding logic to get consistent behavior. Minimize your technical debt from the beginning.

3) Finally, open models give you much more flexibility. Even if you keep using APIs, you might want to trade off latency against cost, or use APIs that support batched inputs, etc. Because different API providers run different infrastructure, you can use the provider that makes the most sense for your product, or even use multiple providers for different users (free vs. paid) or different parts of your product (priority features vs. nice-to-haves).
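
To make the first point concrete, here is a minimal sketch of multi-provider fallback, assuming each provider exposes an OpenAI-compatible endpoint for the same open model. The base URLs, model name, and the localhost entry (your own VPC deployment, per point 2) are all illustrative placeholders, not a specific library's API.

from openai import OpenAI

# Illustrative OpenAI-compatible endpoints, all serving the same open model
PROVIDERS = [
    {"base_url": "https://provider-a.example/v1", "model": "llama-3-70b-instruct"},
    {"base_url": "https://provider-b.example/v1", "model": "llama-3-70b-instruct"},
    {"base_url": "http://localhost:8000/v1", "model": "llama-3-70b-instruct"},  # your VPC endpoint
]

def complete(prompt: str) -> str:
    last_error = None
    for provider in PROVIDERS:
        try:
            client = OpenAI(base_url=provider["base_url"], api_key="YOUR_KEY")
            response = client.chat.completions.create(
                model=provider["model"],
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:  # provider down or rate-limited: try the next one
            last_error = exc
    raise RuntimeError("All providers failed") from last_error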
replied to their post 27 days ago

It looks like there are multiple requirements.txt files in that repo; perhaps you need to change a different one? You can check which version of Gradio an app is running by visiting the iframe URL directly and adding /config, e.g.:

https://aadnk-faster-whisper-webui.hf.space/config
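
For instance, a quick way to script that check (assuming the Gradio version is reported under the "version" key of the config JSON, which is how recent versions expose it):

import requests

# A running Gradio app serves its configuration as JSON at /config
config = requests.get("https://aadnk-faster-whisper-webui.hf.space/config").json()
print(config.get("version"))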

replied to their post 28 days ago

That should be all you need to do. If it's a Space, can you link to it?

posted an update about 1 month ago
Introducing the Gradio API Recorder 🪄

Every Gradio app now includes an API recorder that lets you reconstruct your interaction in a Gradio app as code using the Python or JS clients! Our goal is to make Gradio the easiest way to build ML APIs, not just UIs 🔥
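
For reference, the snippets the recorder produces target the gradio_client library; here is a minimal sketch of what a recorded Python interaction looks like, with a placeholder Space ID, input, and endpoint name:

from gradio_client import Client

# "abidlabs/my-space" and "/predict" are placeholders; the API recorder
# fills in the real Space ID, endpoint name, and arguments for you
client = Client("abidlabs/my-space")
result = client.predict("example input", api_name="/predict")
print(result)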

replied to Molbap's post about 1 month ago
replied to freddyaboulton's post about 2 months ago
replied to dhuynh95's post about 2 months ago
replied to andrewyng's post 2 months ago

Thanks for hosting this course on DeepLearning.ai!

posted an update 3 months ago
Necessity is the mother of invention, and of Gradio components.

Sometimes we realize that we need a Gradio component to build a cool application and demo, so we just build it. For example, we just added a new gr.ParamViewer component because we needed it to display information about Python & JavaScript functions in our documentation.

Of course, our users should be able to do the same thing for their machine learning applications, so that's why Gradio lets you build custom components and publish them to the world 🔥
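
For example, here is roughly what using gr.ParamViewer looks like. The exact schema of the value dict (type/description/default keys) is my best recollection of the component's API, so treat this as a sketch:

import gradio as gr

with gr.Blocks() as demo:
    # Maps each parameter name to its type, description, and default value
    gr.ParamViewer(
        {
            "x": {"type": "int", "description": "The first number", "default": "0"},
            "y": {"type": "int", "description": "The second number", "default": "0"},
        }
    )

demo.launch()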
posted an update 3 months ago
Lots of cool Gradio custom components out there, but this is the most generally useful one I've seen so far: insert a modal into any Gradio app by using the Modal component!

import gradio as gr
from gradio_modal import Modal

with gr.Blocks() as demo:
    gr.Markdown("### Main Page")
    gr.Textbox("lorem ipsum " * 1000, lines=10)

    # The modal overlays the main page and can be dismissed with its close button
    with Modal(visible=True) as modal:
        gr.Markdown("# License Agreement")

demo.launch()
posted an update 3 months ago
Just out: new custom Gradio component specifically designed for code completion models 🔥
replied to their post 3 months ago

Here's a quick example I put together for that video:

import gradio as gr

def hide_tab():
    # Returning a Tab with visible=False hides the tab wired up as the output
    return gr.Tab(visible=False)

with gr.Blocks() as demo:
    with gr.Tabs(selected=1):
        with gr.Tab("First step", id=1) as a:
            gr.Image("bunny.jpeg", height=400, width=500)
            button1 = gr.Button("Next step")
        with gr.Tab("Second step") as b:
            gr.Model3D("Bunny.obj")
            button2 = gr.Button("Next step")
        with gr.Tab("Third step") as c:
            gr.Markdown("All done!")

    # Hiding the current tab moves the user along to the next visible one
    button1.click(hide_tab, None, a)
    button2.click(hide_tab, None, b)

demo.launch()
posted an update 3 months ago
The next version of Gradio will be significantly more efficient (as well as a bit faster) for anyone who uses Gradio's streaming features. Looking at you, chatbot developers @oobabooga @pseudotensor :)

The major change we're making: when you stream data, Gradio used to send the entire payload at each token. That is generally the most robust way to ensure all the data is transmitted correctly, but it's wasteful. We've now switched to sending "diffs": at each time step, we automatically compute the diff against the most recent update and send only the latest token (or whatever the diff may be). Coupled with the fact that we now use SSE, which is a more robust communication protocol than WS (SSE resends packets if any are dropped), we get the best of both worlds: efficient *and* robust streaming.
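
Nothing changes on the app-developer side. In a streaming handler like the sketch below (the handler itself is just an illustration), each successive yield is now transmitted as a diff against the previous one rather than as the full payload:

import gradio as gr
import time

def stream_reply(message, history):
    response = ""
    for token in message.split():
        response += token + " "
        time.sleep(0.05)
        # Previously each yield resent the whole `response`; now Gradio
        # computes and sends only what changed since the last yield
        yield response

gr.ChatInterface(stream_reply).launch()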

Very cool stuff, @aliabid94! PR: https://github.com/gradio-app/gradio/pull/7102
posted an update 3 months ago
replied to their post 3 months ago
replied to their post 3 months ago
replied to their post 3 months ago
replied to osanseviero's post 3 months ago
replied to osanseviero's post 3 months ago

For some reason, being exposed to two very different languages during training seems to help models (just like humans) with all sorts of tasks.

posted an update 3 months ago
Gradio 4.16 introduces a new flow: you can hide/show Tabs or make them interactive/non-interactive.

Really nice for multi-step machine learning demos ⚡️
posted an update 3 months ago
✨ Excited to release Gradio 4.16. New features include:

๐Ÿปโ€โ„๏ธ Native support for Polars Dataframe
๐Ÿ–ผ๏ธ Gallery component can be used as an input
โšก Much faster streaming for low-latency chatbots
๐Ÿ“„ Auto generated docs for custom components

... and much more! This is a HUGE release, so check out everything else in our changelog: https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md
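
A quick sketch of the Polars support, assuming gr.Dataframe accepts a Polars DataFrame directly as the release note suggests (the data here is a placeholder):

import gradio as gr
import polars as pl

# Placeholder data; the point is that no conversion to pandas is needed
df = pl.DataFrame({"name": ["a", "b", "c"], "value": [1, 2, 3]})

with gr.Blocks() as demo:
    gr.Dataframe(value=df)

demo.launch()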
posted an update 4 months ago
๐—›๐—ผ๐˜„ ๐˜„๐—ฒ ๐—บ๐—ฎ๐—ฑ๐—ฒ ๐—š๐—ฟ๐—ฎ๐—ฑ๐—ถ๐—ผ ๐—ณ๐—ฎ๐˜€๐˜๐—ฒ๐—ฟ ๐—ฏ๐˜†... ๐˜€๐—น๐—ผ๐˜„๐—ถ๐—ป๐—ด ๐—ถ๐˜ ๐—ฑ๐—ผ๐˜„๐—ป!

About a month ago, @oobabooga (who built the popular text generation webui) reported an interesting issue to the Gradio team. After upgrading to Gradio 4, @oobabooga noticed that chatbots that streamed very quickly had a lag before their text would show up in the Gradio app.

After some investigation, we determined that the Gradio frontend would receive the updates from the backend immediately, but the browser would lag before rendering the changes on screen. The main difference between Gradio 3 and Gradio 4 was that we had migrated the communication protocol between the backend and frontend from WebSockets (WS) to Server-Sent Events (SSE), but we couldn't figure out why this would affect the browser's ability to render the streaming updates it was receiving.

After diving deep into browser events, @aliabid94 and @pngwn made a realization: most browsers treat WS events (specifically the WebSocket.onmessage function) with a lower priority than SSE events (the EventSource.onmessage function), which allowed the browser to repaint the window between WS messages. With SSE, the streaming updates would stack up in the browser's event queue and be prioritized over any browser repaint. The browser would eventually clear the queue, but going through each update took time, which produced the lag.

We debated different options, but the solution we implemented was to introduce throttling: we slowed down how frequently we push updates to the browser's event queue, to a maximum rate of 20/sec. Although this seemingly "slowed down" Gradio streaming, it actually allows browsers to process updates in real time and provides a much better experience to end users of Gradio apps.
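
The actual fix lives in Gradio's JavaScript frontend, but here is a toy Python sketch of the throttling idea: cap emissions at 20 per second while always keeping the most recent update, so the final state is never lost.

import time

MAX_RATE = 20  # maximum updates pushed per second, matching the cap above

def throttled(stream):
    min_interval = 1 / MAX_RATE
    last_emit = 0.0
    latest = None
    for item in stream:
        latest = item
        now = time.monotonic()
        if now - last_emit >= min_interval:
            yield latest
            last_emit = now
            latest = None
    if latest is not None:
        yield latest  # flush the final update so nothing is dropped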

See the PR here: https://github.com/gradio-app/gradio/pull/7084

Kudos to @aliabid94 and @pngwn for the fix, and to @oobabooga and @pseudotensor for helping us test it out!
replied to s3nh's post 4 months ago

Thank you @s3nh, this is exactly what a friend of mine needed to know! Forwarding him your post.

posted an update 4 months ago
There's a lot of interest in machine learning models that generate 3D objects, so Gradio now supports previewing STL files natively in the Model3D component. Huge thanks to Monius for the contribution 🔥🔥
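
Usage is the same as for the other 3D formats; a minimal sketch with a placeholder file path:

import gradio as gr

with gr.Blocks() as demo:
    # "part.stl" is a placeholder; Model3D now previews STL natively,
    # alongside the .obj and .glb/.gltf formats it already supported
    gr.Model3D(value="part.stl", label="STL preview")

demo.launch()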
posted an update 4 months ago
๐„๐ฆ๐›๐ซ๐š๐œ๐ž๐ ๐›๐ฒ ๐‡๐ฎ๐ ๐ ๐ข๐ง๐  ๐…๐š๐œ๐ž: ๐ญ๐ก๐ž ๐ˆ๐ง๐ฌ๐ข๐๐ž ๐’๐ญ๐จ๐ซ๐ฒ ๐จ๐Ÿ ๐Ž๐ฎ๐ซ ๐’๐ญ๐š๐ซ๐ญ๐ฎ๐ฉโ€™๐ฌ ๐€๐œ๐ช๐ฎ๐ข๐ฌ๐ข๐ญ๐ข๐จ๐ง

In late 2021, our team of five engineers, scattered around the globe, signed the papers to shut down our startup, Gradio. For many founders, this would have been a moment of sadness or even bitter reflection.

But we were celebrating. We were getting acquired by Hugging Face!

We had been working very hard toward this acquisition, but for weeks it had been blocked by a single investor. The more we pressed him, the more he dug in, refusing to sign off on the acquisition. Until, unexpectedly, the investor conceded, allowing us to join Hugging Face.

For the first time since our acquisition, I'm writing down the story in detail, hoping that it may shed some light on the obscure world of startup acquisitions and the decisions founders can make to improve their odds of a successful acquisition.

To understand how we got acquired by Hugging Face, you need to know why we started Gradio.

๐€๐ง ๐ˆ๐๐ž๐š ๐Ÿ๐ซ๐จ๐ฆ ๐ญ๐ก๐ž ๐‡๐ž๐š๐ซ๐ญ

Two years before the acquisition, in early 2019, I was working on a research project at Stanford. It was the third year of my PhD, and my labmates and I had trained a machine learning model that could predict patient biomarkers (such as whether patients had certain diseases or an implanted pacemaker) from an ultrasound image of their heart, about as well as a cardiologist could.

Naturally, cardiologists were skeptical... read the rest of the story here: https://twitter.com/abidlabs/status/1745533306492588303
replied to pharaouk's post 4 months ago