already seeing some purple on the hub: https://huggingface.co/Systran/faster-whisper-large-v3
Hafedh Hichri
not-lain
AI & ML interests
custom AI models with HF integration, HuggingFace fellow 🤗
Recent Activity
liked a model about 20 hours ago: unsloth/DeepSeek-V3-0324-GGUF
upvoted a paper about 23 hours ago: Sonata: Self-Supervised Learning of Reliable Point Representations
Organizations
not-lain's activity

reacted to AdinaY's post with 🔥 (10 days ago)
RWKV7-G1 0.1B 🔥 Pure RNN reasoning model released by RWKV
Model: BlinkDL/rwkv7-g1
Paper: RWKV-7 "Goose" with Expressive Dynamic State Evolution (2503.14456)
✨ Apache 2.0
✨ Supports 100+ languages
✨ 0.1B runs smoothly on low-power devices
✨ 0.4B/1.5B/2.9B are coming soon!
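For anyone who wants to poke at it locally, a minimal generation sketch using BlinkDL's rwkv pip package (the checkpoint filename is hypothetical, and the RWKV_V7_ON flag assumes a recent rwkv version with v7 support; check the model card for specifics):

import os
os.environ["RWKV_V7_ON"] = "1"  # enable the v7 ("Goose") architecture path

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Checkpoint downloaded from BlinkDL/rwkv7-g1; the loader appends ".pth",
# so pass the path without the extension (filename here is hypothetical).
model = RWKV(model="rwkv7-g1-0.1b", strategy="cpu fp32")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # RWKV "world" tokenizer

out = pipeline.generate(
    "Q: Why can a pure RNN still reason?\nA:",
    token_count=100,
    args=PIPELINE_ARGS(temperature=1.0, top_p=0.7),
)
print(out)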

reacted to Jaward's post (10 days ago)
Nvidia brings Blue (the Star Wars-style droid) to life 🤯, super cute with flawless dexterity and a droid voice. It's the result of their collaborative research with Google DeepMind and Disney, revealed as part of their new open-source physics engine for robotics simulation, Newton, which enables robots to learn how to complete complex tasks with greater precision.
Read more: https://developer.nvidia.com/blog/announcing-newton-an-open-source-physics-engine-for-robotics-simulation?ncid=so-twit-820797-vt48

reacted to csabakecskemeti's post (10 days ago)
New model announcement from Nvidia at GTC:
nvidia/Llama-3_3-Nemotron-Super-49B-v1
GGUFs: DevQuasar/nvidia.Llama-3_3-Nemotron-Super-49B-v1-GGUF
Enjoy!
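To try the GGUFs locally, a minimal llama-cpp-python sketch (the quantization filename glob is an assumption; pick an actual file from the repo):

from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Downloads a matching GGUF from the Hub (the filename pattern is assumed)
llm = Llama.from_pretrained(
    repo_id="DevQuasar/nvidia.Llama-3_3-Nemotron-Super-49B-v1-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a Nemotron model?"}]
)
print(out["choices"][0]["message"]["content"])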

reacted to m-ric's post with 🤗 (13 days ago)
smolagents now supports vLLM! 🥳
As one of the most popular local inference solutions, the community had been asking us to integrate vLLM: after a heavy refactoring of our LLM classes, we've just released smolagents 1.11.0, with a brand new VLLMModel class.
Go try it and tell us what you think!
https://github.com/huggingface/smolagents/blob/45b2c86857b7f7657daaa74e4d17d347e9e2c4a4/src/smolagents/models.py#L497
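A minimal sketch of the new class (the checkpoint id is a placeholder; any model vLLM can serve should work):

from smolagents import CodeAgent, VLLMModel

# VLLMModel loads and runs the checkpoint locally through vLLM
model = VLLMModel(model_id="Qwen/Qwen2.5-7B-Instruct")  # placeholder model id
agent = CodeAgent(tools=[], model=model)
print(agent.run("What is the 10th Fibonacci number?"))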

replied to their own post (16 days ago)
Glad to be of help!

reacted to BrigitteTousi's post with 🤗 (17 days ago)

posted an update (17 days ago)
AraClip is now fully integrated with Hugging Face 🤗
AraClip is a specialized CLIP model created by @pain, optimized for Arabic text-image retrieval tasks 🔥
Try it out:
🤗 Model: Arabic-Clip/araclip
Gradio demo: Arabic-Clip/Araclip-Simplified
Website: https://arabic-clip.github.io/Arabic-CLIP/
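A minimal retrieval sketch, assuming the checkpoint exposes the standard transformers CLIP interface (if the repo ships custom loading code, follow the model card instead):

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumption: Arabic-Clip/araclip loads with the stock CLIP classes
model = CLIPModel.from_pretrained("Arabic-Clip/araclip")
processor = CLIPProcessor.from_pretrained("Arabic-Clip/araclip")

image = Image.open("example.jpg")  # placeholder image
texts = ["قطة على أريكة", "كلب في حديقة"]  # candidate Arabic captions

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity scores
print(logits.softmax(dim=-1))  # probability of each caption matching the image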

reacted to as-cle-bert's post with ❤️ (19 days ago)
I just released a fully automated evaluation framework for your RAG applications! 🚀
GitHub 👉 https://github.com/AstraBert/diRAGnosis
PyPi 👉 https://pypi.org/project/diragnosis/
It's called diRAGnosis and is a lightweight framework that helps you diagnose the performance of LLMs and retrieval models in RAG applications.
You can launch it as an application locally (it's Docker-ready! 🐋) or, if you want more flexibility, you can integrate it in your code as a Python package 📦
The workflow is simple:
🧠 You choose your favorite LLM provider and model (supported, for now, are Mistral AI, Groq, Anthropic, OpenAI and Cohere)
🧠 You pick the embedding model provider and the embedding model you prefer (supported, for now, are Mistral AI, Hugging Face, Cohere and OpenAI)
📄 You prepare and provide your documents
⚙️ Documents are ingested into a Qdrant vector database and transformed into a synthetic question dataset with the help of LlamaIndex
📊 The LLM is evaluated for the faithfulness and relevancy of its retrieval-augmented answers to the questions
📊 The embedding model is evaluated for hit rate and mean reciprocal rank (MRR) of the retrieved documents (a sketch of these two metrics follows after this post)
And the cool thing is that all of this is intuitive and completely automated: you plug it in, and it works! ⚡
Even cooler? This is all built on top of LlamaIndex and its integrations: no need for tons of dependencies or fancy workarounds 🦙
And if you're a UI lover, Gradio and FastAPI are there to provide you a seamless backend-to-frontend experience 🕶️
So now it's your turn: you can either get diRAGnosis from GitHub 👉 https://github.com/AstraBert/diRAGnosis
or just run a quick and painless:
uv pip install diragnosis
to get the package installed (lightning-fast) in your environment 🏃‍♀️
Have fun and feel free to leave feedback and feature/integration requests on GitHub issues ✨
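Since hit rate and MRR do the heavy lifting on the retrieval side, here is what they compute, as a plain-Python sketch (function names are mine, not the diRAGnosis API):

def hit_rate(ranked_ids: list[list[str]], relevant_ids: list[str], k: int = 5) -> float:
    """Fraction of queries whose relevant document shows up in the top-k results."""
    hits = sum(rel in ranked[:k] for ranked, rel in zip(ranked_ids, relevant_ids))
    return hits / len(relevant_ids)

def mrr(ranked_ids: list[list[str]], relevant_ids: list[str]) -> float:
    """Mean reciprocal rank: average of 1/rank of the first relevant hit (0 if absent)."""
    total = 0.0
    for ranked, rel in zip(ranked_ids, relevant_ids):
        if rel in ranked:
            total += 1.0 / (ranked.index(rel) + 1)
    return total / len(relevant_ids)

# Two queries, each with one known-relevant doc id
ranked = [["d3", "d1", "d7"], ["d2", "d9", "d4"]]
relevant = ["d1", "d4"]
print(hit_rate(ranked, relevant, k=3))  # 1.0: both found in top-3
print(mrr(ranked, relevant))            # (1/2 + 1/3) / 2 ≈ 0.417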

reacted to Bils's post (20 days ago)
Spatial sound experience! SonicOrbit features AI beat detection to auto-sync your rhythm.
Bils/SonicOrbit

reacted to daavoo's post (21 days ago)
Hi there 👋! Check this project for mapping features in OpenStreetMap with Computer Vision:
⭐ -> https://github.com/mozilla-ai/osm-ai-helper
And a live demo showing how to map new swimming pools:
🗺️ -> mozilla-ai/osm-ai-helper

reacted to DualityAI-RebekahBogdanoff's post (21 days ago)
🎉 Duality is super excited to announce that our Kaggle competition, the Synthetic-to-Real Object Detection Challenge, is LIVE!
Want to master AI training, learn industry-proven synthetic data workflows, and compete for public recognition and cash prizes?
👉 Join our Synthetic-to-Real Object Detection Challenge on Kaggle! https://www.kaggle.com/competitions/synthetic-2-real-object-detection-challenge/overview
Compete to build the top-performing model capable of detecting real-world objects, trained entirely on synthetic data. Master these industry-proven methods for faster, more targeted, and diverse dataset creation, and set yourself apart, unlocking today's most exciting AI opportunities.
Ready to test your skills? (A training sketch follows after this post.)
🏆 The Challenge
Train an object detection model using synthetic images created with Falcon (Duality AI's cutting-edge digital twin simulation software), then evaluate your model on real-world imagery.
The Twist?
Boost your model's accuracy by creating and refining your own custom synthetic datasets using Falcon! Get access to the tools and double the data by following this link and creating a free account:
https://falcon.duality.ai/secure/documentation/ex-1-objdetection?sidebarMode=learn
Win Cash Prizes & Recognition
🔹 Earn cash and public shout-outs from the Duality AI accounts
Enhance Your Portfolio
🔹 Demonstrate your real-world AI and ML expertise in object detection to prospective employers and collaborators.
Expand Your Network
🔹 Engage, compete, and collaborate with fellow ML engineers, researchers, and students.
🚀 Put your skills to the test and join our Kaggle competition today: https://www.kaggle.com/competitions/synthetic-2-real-object-detection-challenge/overview
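The competition doesn't mandate a framework; as an illustration, a minimal Ultralytics YOLO loop for the synthetic-to-real setup (the dataset YAML names are assumptions, not files shipped by the challenge):

from ultralytics import YOLO  # pip install ultralytics

# Fine-tune a pretrained detector on the synthetic Falcon images
model = YOLO("yolov8n.pt")
model.train(data="falcon_synthetic.yaml", epochs=50, imgsz=640)  # hypothetical YAML

# Evaluate on a real-world split described by a separate, hypothetical YAML
metrics = model.val(data="real_val.yaml")
print(metrics.box.map50)  # mAP@0.5 on real imagery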

reacted to andito's post with 🔥 (24 days ago)
Extremely bullish on @CohereForAI's Aya Vision (8B & 32B) - new SOTA open-weight VLMs
- 8B wins up to 81% of the time in its class, better than Gemini Flash
- 32B beats Llama 3.2 90B!
- Covers 23 languages, excels in image captioning, VQA & more
- Integrated in transformers from Day 0!
Efficient multimodal models are here to stay!! 🔥
Check out their blog! https://huggingface.co/blog/aya-vision
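Day-0 support means the stock transformers pipeline should work; a minimal sketch (the repo id and image URL are placeholders; check the Hub for the released checkpoints):

from transformers import pipeline

# Assumed repo id for the 8B checkpoint
pipe = pipeline("image-text-to-text", model="CohereForAI/aya-vision-8b")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder
        {"type": "text", "text": "Describe this image in one sentence."},
    ],
}]
out = pipe(text=messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])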

reacted to lysandre's post with 🔥❤️ (about 1 month ago)
SmolVLM-2 and SigLIP-2 are now part of transformers in dedicated releases! They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.
This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).
Starting with SmolVLM-2 & SigLIP-2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available for use by installing from the tag itself. These tags will continue to be updated with fixes applied to these models.
Going forward, continue expecting software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements and bug fixes. Accompanying these software releases, we'll release tags offering brand new models as fast as possible, to make them accessible to all immediately.
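For example, to use SmolVLM-2 ahead of the next minor release, a standard pip-from-git install of the tag named above should do it:
pip install git+https://github.com/huggingface/transformers@v4.49.0-SmolVLM-2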