
mohammed arbi

Goodnight7

AI & ML interests

None yet

Recent Activity

updated a Space 1 day ago
Goodnight7/3min-portfolio
updated a Space 1 day ago
Goodnight7/llama3.2_vision
published a Space 1 day ago
Goodnight7/llama3.2_vision

Organizations

SUP4ERNOVA, Tunisia.AI, GDGoC SUP'COM

Goodnight7's activity

published a Space 1 day ago
upvoted an article 4 days ago

Vision Language Models Explained

published a Space 4 days ago
reacted to abidlabs's post with ❤️ 4 days ago
JOURNEY TO 1 MILLION DEVELOPERS

5 years ago, we launched Gradio as a simple Python library to let researchers at Stanford easily demo computer vision models with a web interface.

Today, Gradio is used by >1 million developers each month to build and share AI web apps. This includes some of the most popular open-source projects of all time, like Automatic1111, Fooocus, Oobabooga's Text WebUI, Dall-E Mini, and LLaMA-Factory.

How did we get here? How did Gradio keep growing in the very crowded field of open-source Python libraries? I get this question a lot from folks who are building their own open-source libraries. This post distills some of the lessons that I have learned over the past few years:

1. Invest in good primitives, not high-level abstractions
2. Embed virality directly into your library
3. Focus on a (growing) niche
4. Your only roadmap should be rapid iteration
5. Maximize ways users can consume your library's outputs

1. Invest in good primitives, not high-level abstractions

When we first launched Gradio, we offered only one high-level class (gr.Interface), which created a complete web app from a single Python function. We quickly realized that developers wanted to create other kinds of apps (e.g. multi-step workflows, chatbots, streaming applications), but as we started listing out the apps users wanted to build, we realized what we needed to do:

Read the rest here: https://x.com/abidlabs/status/1907886
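For context, here is a minimal sketch of the original gr.Interface primitive mentioned above, using a hypothetical classify function as a stand-in for a real vision model (the API shape follows Gradio's documented gr.Interface; the function itself is made up for illustration):

import gradio as gr

# Hypothetical stand-in for a computer vision model's prediction.
def classify(image):
    return {"cat": 0.7, "dog": 0.3}

# gr.Interface turns a single Python function into a complete web app,
# as described in the post.
demo = gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label())
demo.launch()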
reacted to jsulz's post with 🔥 4 days ago
Huge week for xet-team as Llama 4 is the first major model on Hugging Face uploaded with Xet providing the backing! Every byte downloaded comes through our infrastructure.

Using Xet on Hugging Face is the fastest way to download and iterate on open source models and we've proved it with Llama 4 giving a boost of ~25% across all models.

We expect builders on the Hub to see even more improvements, helping power innovation across the community.

With the models on our infrastructure, we can peer in and see how well our dedupe performs across the Llama 4 family. On average, we're seeing ~25% dedupe, providing huge savings to the community who iterate on these state-of-the-art models. The attached image shows a few selected models and how they perform on Xet.

Thanks to the meta-llama team for launching on Xet!
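To make the ~25% dedupe figure concrete: it can be read as the share of uploaded bytes whose chunks already exist in storage. The toy sketch below estimates that ratio with fixed-size chunks and SHA-256 hashes; this is an illustration only, not Xet's actual implementation (which uses content-defined chunking), and the file names are hypothetical:

import hashlib

CHUNK_SIZE = 64 * 1024  # toy fixed-size chunks, purely for illustration

def dedupe_ratio(paths):
    seen = set()
    total = duplicate = 0
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).digest()
                total += len(chunk)
                if digest in seen:
                    duplicate += len(chunk)  # bytes that never need storing or transferring again
                else:
                    seen.add(digest)
    return duplicate / total if total else 0.0

# A result around 0.25 would correspond to the ~25% savings mentioned above.
print(dedupe_ratio(["model-00001.safetensors", "model-00002.safetensors"]))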
reacted to as-cle-bert's post with ❤️ 30 days ago
I just released a fully automated evaluation framework for your RAG applications!📈

GitHub 👉 https://github.com/AstraBert/diRAGnosis
PyPi 👉 https://pypi.org/project/diragnosis/

It's called diRAGnosis and is a lightweight framework that helps you diagnose the performance of LLMs and retrieval models in RAG applications.

You can launch it as an application locally (it's Docker-ready!🐋) or, if you want more flexibility, you can integrate it into your code as a Python package📦

The workflow is simple:
🧠 You choose your favorite LLM provider and model (supported, for now, are Mistral AI, Groq, Anthropic, OpenAI and Cohere)
🧠 You pick the embedding model provider and the embedding model you prefer (supported, for now, are Mistral AI, Hugging Face, Cohere and OpenAI)
📄 You prepare and provide your documents
⚙️ Documents are ingested into a Qdrant vector database and transformed into a synthetic question dataset with the help of LlamaIndex
📊 The LLM is evaluated for the faithfulness and relevancy of its retrieval-augmented answers to the questions
📊 The embedding model is evaluated for the hit rate and mean reciprocal rank (MRR) of the retrieved documents (see the sketch after this list)
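For reference, hit rate and MRR are standard retrieval metrics; here is a minimal, framework-agnostic sketch of how they are typically computed (illustrative only, not diRAGnosis's actual code):

def hit_rate(results):
    # results: list of (retrieved_doc_ids, relevant_doc_id) pairs
    return sum(1 for retrieved, relevant in results if relevant in retrieved) / len(results)

def mean_reciprocal_rank(results):
    # Reciprocal of the 1-based rank of the first relevant document, 0 if absent.
    total = 0.0
    for retrieved, relevant in results:
        if relevant in retrieved:
            total += 1.0 / (retrieved.index(relevant) + 1)
    return total / len(results)

# Toy example: two queries with their top-3 retrieved document IDs.
queries = [(["d3", "d1", "d7"], "d1"), (["d2", "d5", "d9"], "d4")]
print(hit_rate(queries))              # 0.5
print(mean_reciprocal_rank(queries))  # (1/2 + 0) / 2 = 0.25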

And the cool thing is that all of this is intuitive and completely automated: you plug it in, and it works!🔌⚡

Even cooler? This is all built on top of LlamaIndex and its integrations: no need for tons of dependencies or fancy workarounds🦙
And if you're a UI lover, Gradio and FastAPI are there to provide you with a seamless backend-to-frontend experience🕶️

So now it's your turn: you can either get diRAGnosis from GitHub 👉 https://github.com/AstraBert/diRAGnosis
or just run a quick and painless:

uv pip install diragnosis


to get the package installed (lightning-fast) in your environment🏃‍♀️

Have fun and feel free to leave feedback and feature/integration requests on GitHub issues✨
updated a Space 3 months ago
published a Space 3 months ago
updated a Space 3 months ago