Build datasets for AI on the Hugging Face Hub—10x easier than ever!
Today, I'm excited to share our biggest feature since we joined Hugging Face.
Here’s how it works:
1. Pick a dataset—upload your own or choose from 240K open datasets.
2. Paste the Hub dataset ID into Argilla and set up your labeling interface.
3. Share the URL with your team or the whole community!
And the best part? It’s:
- No code – no Python needed
- Integrated – all within the Hub
- Scalable – from solo labeling to 100s of contributors
I am incredibly proud of the team for shipping this after weeks of work and many quick iterations.
Let's make this sentence obsolete: "Everyone wants to do the model work, not the data work."
Import any dataset from the Hub and configure your labeling tasks without needing any code!
We're really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows. We'd love to hear your feedback and ideas, so don't be shy and reach out 🫶🏽
🚨 We’ve just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉 open-llm-leaderboard/comparator
Want to see how two different versions of LLaMA stack up? Let’s walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇
1/ Load the Models' Results
- Go to the 🤗 Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!
2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you’ll see the performance metrics for each model, color-coded with a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to home in on comparisons for tasks like BBH or MMLU-Pro.
3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you’re comparing apples to apples, head to the Configs tab.
- Review both models’ evaluation configurations: metrics, datasets, prompts, few-shot configs...
- If something looks off, it’s good to know before drawing conclusions! ✅
4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR), then a Subtask (e.g., Murder Mystery), and press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model’s outputs.
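The comparisons in the Results tab ultimately boil down to diffing per-task metrics between two result sets. Here is a minimal sketch in plain Python with made-up scores (these are illustrative values, not real leaderboard numbers):

```python
# Hypothetical per-task scores for two model versions (illustrative only).
results_a = {"BBH": 48.2, "MMLU-Pro": 36.1, "MuSR": 10.5}
results_b = {"BBH": 51.0, "MMLU-Pro": 35.4, "MuSR": 12.9}

def compare(a: dict, b: dict) -> dict:
    """Per-task score delta: positive means model B improved over model A."""
    return {task: round(b[task] - a[task], 2) for task in a if task in b}

deltas = compare(results_a, results_b)
# Print tasks sorted by improvement, biggest gain first.
for task, delta in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task}: {delta:+.2f}")
```

A positive delta maps to the "greener" end of the Comparator's gradient, a negative one to the redder end.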
5/ With this tool, it’s never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you’re a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.
🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
By far the coolest release of the day!
> The Open LLM Leaderboard, the most comprehensive suite for comparing open LLMs across many benchmarks, just released a comparator tool that lets you dig into the details of the differences between any two models.
Here's me checking how the new Llama-3.1-Nemotron-70B that we've heard so much about compares to the original Llama-3.1-70B. 🤔🔎
You can now build a custom text classifier without days of human labeling!
👍 LLMs work reasonably well as text classifiers.
👎 They are expensive to run at scale, and their performance drops in specialized domains.

👍 Purpose-built classifiers have low latency and can potentially run on CPU.
👎 They require labeled training data.
Combine the best of both worlds: the automatic labeling capabilities of LLMs and the high-quality annotations from human experts to train and deploy a specialized model.
Big news! You can now build strong ML models without days of human labeling
You simply:
- Define your dataset, including annotation guidelines, labels, and fields.
- Optionally label some records manually.
- Use an LLM to auto-label your data with a human (you? your team?) in the loop!
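The human-in-the-loop step above can be sketched in plain Python. This is a toy illustration: `llm_label` is a hypothetical stand-in for a real LLM call, and the 0.8 confidence threshold is an arbitrary choice:

```python
def llm_label(text: str) -> tuple[str, float]:
    """Stand-in for a real LLM call: returns (label, confidence).
    Here we fake it with a trivial keyword rule."""
    if "refund" in text.lower():
        return "complaint", 0.92
    return "other", 0.55

def label_dataset(records: list[str], threshold: float = 0.8):
    """Auto-accept confident LLM labels; queue the rest for human review."""
    auto, review_queue = [], []
    for text in records:
        label, confidence = llm_label(text)
        if confidence >= threshold:
            auto.append((text, label))
        else:
            review_queue.append((text, label))  # human confirms or corrects
    return auto, review_queue

auto, queue = label_dataset(["I want a refund!", "Great product"])
print(len(auto), "auto-labeled,", len(queue), "sent to human review")
```

The accepted and human-reviewed records together become the training set for the small, specialized classifier.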
Open-source AI creates healthy competition in a field where natural tendencies lead to extreme concentration of power. Imagine a world where only one or two companies could build software. This is the biggest risk and ethical challenge of them all IMO. Let's fight this!
🌟 Argilla v2.1.0 goes multi-modal: Image Field, Dark Mode, Enhanced Hugging Face Hub imports and more!
🖼 Image Field: Seamlessly work with multimodal datasets
🌓 Dark Mode: Reduce eye strain with our sleek new look
🤗 Enhanced Hugging Face Hub import with the SDK
🇪🇸 Spanish UI: Breaking language barriers
Plus more improvements to supercharge your model curation workflow!
Hugging Face dropped SmolLM 🤏
> Beats MobileLLM, Qwen 0.5B, Phi 1.5B and more!
> 135M, 360M, and 1.7B param model checkpoints
> Trained on 600B high-quality synthetic + FineWeb Edu tokens
> Architecture: Llama + GQA + 2048 ctx length
> Ripe for fine-tuning and on-device deployments.
> Works out of the box with Transformers!
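The GQA mention is the key architectural bit for on-device use: grouped-query attention lets several query heads share one key/value head, which shrinks the KV cache. A back-of-the-envelope sketch with a made-up Llama-style config (these are not SmolLM's actual dimensions):

```python
def kv_cache_bytes(n_kv_heads: int, head_dim: int, ctx_len: int,
                   n_layers: int, bytes_per_elem: int = 2) -> int:
    """KV-cache size = 2 (K and V) * layers * kv_heads * head_dim * ctx * bytes."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical config: 32 query heads, but with GQA only 8 KV heads
# (4 query heads per group), fp16 cache, 2048-token context.
mha = kv_cache_bytes(n_kv_heads=32, head_dim=64, ctx_len=2048, n_layers=24)
gqa = kv_cache_bytes(n_kv_heads=8,  head_dim=64, ctx_len=2048, n_layers=24)
print(f"MHA: {mha / 2**20:.0f} MiB, GQA: {gqa / 2**20:.0f} MiB "
      f"({mha // gqa}x smaller)")
```

The cache shrinks linearly with the query-to-KV head ratio, which is exactly what matters when memory is the bottleneck on a phone or laptop.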
Mistral released Mathstral 7B ∑
> 56.6% on MATH and 63.47% on MMLU
> Same architecture as Mistral 7B
> Works out of the box with Transformers & llama.cpp
> Released under Apache 2.0 license
⚗️ Looking to get started with Synthetic data and AI Feedback?
I created this cool notebook for a workshop @davanstrien and I gave a couple of weeks back. It uses https://distilabel.argilla.io/dev/ and I think it's a good entry point for anyone with a practical interest in the topic.
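If you want the gist before opening the notebook: a synthetic-data-plus-AI-feedback loop pairs a generator model with a judge model that scores candidates. Here is a toy sketch where both `generate` and `judge` are hypothetical stand-ins for real LLM calls (this is not the distilabel API):

```python
def generate(prompt: str, n: int = 3) -> list[str]:
    """Stand-in generator LLM: produce n candidate responses."""
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def judge(response: str) -> int:
    """Stand-in AI-feedback judge: score a response.
    Faked here so that later candidates score higher."""
    return int(response.rsplit(" ", 1)[-1]) + 1

def synthesize(prompts: list[str]) -> list[dict]:
    """For each prompt: generate candidates, score them with AI feedback,
    and keep the best as a (prompt, chosen, score) record."""
    rows = []
    for prompt in prompts:
        scored = [(judge(c), c) for c in generate(prompt)]
        score, chosen = max(scored)
        rows.append({"prompt": prompt, "chosen": chosen, "score": score})
    return rows

print(synthesize(["Explain GQA briefly"]))
```

Swap the two stubs for real model calls and you have the skeleton of a preference-data pipeline; distilabel wraps this pattern in composable, production-ready steps.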