All HF Hub posts

Jofthomas 
posted an update 1 day ago
The new Mistral 3 models are here!

Today, we announce Mistral 3, the next generation of Mistral models. Mistral 3 includes three state-of-the-art small, dense models (14B, 8B, and 3B) and Mistral Large 3 – our most capable model to date – a sparse mixture-of-experts model with 41B active and 675B total parameters.

All models are released under the Apache 2.0 license.

Ministrals:
https://huggingface.co/collections/mistralai/ministral-3

Mistral Large 3:
https://huggingface.co/collections/mistralai/mistral-large-3
flozi00 
posted an update 2 days ago
We recently discussed how Tensor Parallelism slices matrices to reduce latency within a single node. But what happens when you need to scale beyond that, where the bandwidth drops?

That is where Pipeline Parallelism (PP) takes over.

Instead of slicing the operation, PP slices the model depth. It turns your GPU cluster into an assembly line: GPU 0 handles layers 1-12, GPU 1 handles 13-24, and so on.
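
To make the slicing concrete, here is a minimal PyTorch sketch (the 24-layer toy model and the 12/12 split are illustrative assumptions, not taken from the guide):

```python
import torch.nn as nn

# Toy 24-layer model, partitioned by depth rather than by matrix dimension.
layers = [nn.TransformerEncoderLayer(d_model=512, nhead=8) for _ in range(24)]

stage_0 = nn.Sequential(*layers[:12]).to("cuda:0")  # GPU 0: layers 1-12
stage_1 = nn.Sequential(*layers[12:]).to("cuda:1")  # GPU 1: layers 13-24

# The forward pass becomes a relay: only activations cross the device boundary.
# y = stage_1(stage_0(x.to("cuda:0")).to("cuda:1"))
```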

The hardware challenge here isn't the interconnect speed—it is the "Pipeline Bubble." In a naive setup, expensive H100s sit idle for most of the cycle waiting for data to flow through the chain.

My latest guide breaks down the scheduling strategies used to minimize this idle silicon time.

In this deep dive, we cover:

The Hardware Mechanics: Vertical Slicing
Unlike TP, which requires "chatty" All-Reduce operations, PP relies on lightweight point-to-point (Send/Recv) communication. This makes it the only viable strategy for crossing node boundaries over Ethernet or InfiniBand.
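
As a rough sketch of that handoff with torch.distributed's P2P primitives (assuming one rank per stage and an already-initialized process group):

```python
import torch.distributed as dist

def run_stage(stage, buffer, rank, world_size):
    # Receive activations from the previous stage (blocking P2P recv).
    if rank > 0:
        dist.recv(tensor=buffer, src=rank - 1)
    out = stage(buffer)
    # Hand activations to the next stage (blocking P2P send).
    if rank < world_size - 1:
        dist.send(tensor=out, dst=rank + 1)
    return out
```

Note there is no All-Reduce anywhere: each stage only ever talks to its two neighbors.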

Fighting the Bubble: 1F1B vs. GPipe
We analyze the scheduling algorithms that keep the GPUs fed:

GPipe: The "flush and fill" approach. Simple, but memory-intensive.
1F1B (One-Forward-One-Backward): The industry standard. By interleaving forward and backward passes, we aggressively free up memory and reduce the bubble size.
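
Before the math, here is my own simplified sketch of a 1F1B schedule for a single stage (not the article's implementation; "F"/"B" mark forward/backward on micro-batch i):

```python
def one_f_one_b(num_microbatches, rank, num_stages):
    """Yield ("F", i) / ("B", i) actions for pipeline stage `rank`."""
    warmup = min(num_stages - rank - 1, num_microbatches)
    fwd = bwd = 0
    for _ in range(warmup):          # warm-up: forwards fill the pipeline
        yield ("F", fwd)
        fwd += 1
    while fwd < num_microbatches:    # steady state: one forward, one backward
        yield ("F", fwd)
        fwd += 1
        yield ("B", bwd)
        bwd += 1
    while bwd < num_microbatches:    # cool-down: drain remaining backwards
        yield ("B", bwd)
        bwd += 1

# The last stage (rank 3 of 4) alternates immediately: F0 B0 F1 B1 ...
print(list(one_f_one_b(4, rank=3, num_stages=4)))
```
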
The Math of Efficiency
The "Bubble" is a mathematical inevitability. We look at the efficiency formula
M+N−1
M

to understand why you need massive global batch sizes to make PP worth the effort.
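
A quick back-of-the-envelope check in plain Python (stage and batch counts picked purely for illustration):

```python
N = 8  # pipeline stages
for M in (8, 32, 128, 512):  # micro-batches per global batch
    print(f"M={M:4d}: utilization = {M / (M + N - 1):.1%}")
# M=   8: 53.3%   M=  32: 82.1%   M= 128: 94.8%   M= 512: 98.7%
```

The bubble only amortizes as M grows, which is exactly why PP wants huge global batches.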

The article includes a conceptual PyTorch implementation of the 1F1B state machine to illustrate exactly how the data is handed off between stages.

Read the full breakdown here:
https://flozi.net/en/guides/ai/scaling/pipeline_parallel
hesamation 
posted an update 1 day ago
this is big... 50 AI researchers from ByteDance, Alibaba, Tencent, and other labs/universities just published a 300-page paper with surprising lessons about coding models and agents (data, pre- and post-training, etc.).

key highlights:

> small LLMs can beat proprietary giants
RL (specifically RLVR, reinforcement learning with verifiable rewards) gives small open-source models an edge over big models in reasoning. a 14B model trained with RLVR on high-quality verified problems can match the performance of OpenAI's o3.

> models have a hard time learning Python.
mixing programming languages during pre-training is good, but Python behaves differently from statically typed languages. languages with similar syntax (Java and C#, or JavaScript and TypeScript) create high positive synergy. mixing Python heavily into the training data for statically typed languages can actually hurt because of Python's dynamic typing.

> not all languages are equal (coding scaling laws)
the amount of data required to specialize a model on a language depends drastically on the language. the paper argues that languages like C# and Java are easier to learn (less training data required), while languages like Python and JavaScript are actually trickier to learn, ironically (these are the languages you see AI used for most :)

> MoE vs Dense (ability vs stability)
MoE models offer higher capacity but are much more fragile during SFT than dense models. training hyperparameters have a more drastic effect on MoE models, while dense models are more stable. MoE models also require constant learning-rate schedules to avoid routing instability.

> code models are "insecure" by default (duh)
training on public repos makes models learn years of accumulated insecure coding patterns. safety fine-tuning often fails to transfer to code: a model might refuse to write a hate-speech email but will happily generate a SQL-injection-vulnerable function because it "works."
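
a hypothetical illustration of that failure mode (Python/sqlite3; both the insecure pattern and the fix are my own example, not from the paper):

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, name: str):
    # The pattern learned from public repos: SQL built by string formatting.
    # name = "x' OR '1'='1" dumps the whole table -- classic SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver handles escaping of user input.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```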

read the full paper:
From Code Foundation Models to Agents and Applications: A Practical Guide to Code Intelligence (2511.18538)
prithivMLmods 
posted an update 2 days ago
Hello everyone,

The strangerzonehf [HF] Community / Organization Page, which I maintain, has reached 6th place in the Top 10 Developer Pages ranking, contributing 3.4% over the calendar cycle from August 2024 to August 2025. It is also the only South Asian / Indian page on the list. I could not be more proud to be doing things for the community. ❤️🤗

Source: https://www.dataprovenance.org/economies-of-open-intelligence.pdf

It is a pleasure to be a part of it.
Thank you!
@prithivMLmods
sergiopaniego 
posted an update 2 days ago
want to use open models easily through an API?

Inference Providers might be exactly what you're looking for, sooo here's a complete beginner-friendly walkthrough 🧐

https://www.youtube.com/watch?v=oxwsizy1Spw
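
as a taste, a minimal sketch with the huggingface_hub client (the model id is just an illustrative pick):

```python
from huggingface_hub import InferenceClient

client = InferenceClient()  # picks up the HF_TOKEN environment variable
response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",  # any model served by a provider
    messages=[{"role": "user", "content": "Explain Inference Providers in one line."}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```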
ovi054 
posted an update 3 days ago
Introducing Anim Lab AI⚡

My submission for the MCP 1st Birthday Hackathon

Turn any math concept or logic into a clear video explanation instantly using AI.

👉 Try it now: MCP-1st-Birthday/anim-lab-ai

Demo outputs are attached 👇
Kseniase 
posted an update 3 days ago
9 Recent advances in Multi-Agent Systems (all open-source)

The idea of splitting tasks across multiple agents instead of relying on one universal agent is now seen as one of the most effective ways to build an AI stack. Concepts like “agent swarms” were highlighted at the AI Engineer Code Summit in NYC (Nov 20–21) as the winning architecture. And this trend is not only about coding and software: it applies across all AI domains.

So here is some recent research that helps make multi-agent systems (MAS) better and keeps them up-to-date:

1. LatentMAS → Latent Collaboration in Multi-Agent Systems (2511.20639)
AI agents share their hidden "thoughts" directly in latent space instead of talking through text. This makes collaboration and reasoning way faster and more accurate (no extra training needed)

2. Puppeteer → Multi-Agent Collaboration via Evolving Orchestration (2505.19591)
Uses a “puppeteer” LLM that dynamically decides which agents (“puppets”) to call and in what order. By learning this orchestration with reinforcement learning (RL), the system solves complex tasks more efficiently and at lower compute cost

3. MADD → MADD: Multi-Agent Drug Discovery Orchestra (2511.08217)
A MAS with 4 agents for drug discovery. It lets researchers describe a drug discovery task in plain language. Then MADD automatically builds and runs the full hit-identification pipeline, making AI-driven drug design a simple end-to-end workflow

4. Multi-Agent Tool-Integrated Policy Optimization (MATPO) → Multi-Agent Tool-Integrated Policy Optimization (2510.04678)
Lets one LLM act as multiple agents (like a planner and a worker) by using different prompts and training them together with RL. So you get the benefits of a multi-agent system without needing multiple models

If you're interested in how multi-agent systems will shape the future of software development, explore my article on the emerging playbook. This is super interesting → https://www.turingpost.com/p/aisoftwarestack
Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe

Read further below ⬇️
rajkumarrawal 
posted an update 1 day ago
September 2025 LLM Safety & Reliability Benchmarks Report by AI Parivartan Research Lab (AIPRL-LIR)

Monthly LLM Intelligence Reports for AI Decision Makers:

Our "aiprl-llm-intelligence-report" repo establishes the AIPRL-LIR framework for overall evaluation and analysis of Large Language Models through systematic monthly intelligence reports. Unlike typical AI research papers or commercial reports, it provides structured insights into AI model performance, benchmarking methodologies, multi-hosting-provider analysis, industry trends ...

All in one monthly report: Leading Models & Companies, 23 Benchmarks in 6 Categories, Global Hosting Providers, and Research Highlights.

Here's what you'll find inside this month's intelligence report:

Leading Models & Companies

23 Benchmarks in 6 Categories:
With a special focus on Safety & Reliability performance across diverse tasks.

Global Hosting Providers

Research Highlights:
Comparative insights, evaluation methodologies, and industry trends for AI decision makers.

Disclaimer:
This comprehensive Safety & Reliability analysis represents the current state of large language model capabilities as of September 2025. All performance metrics are based on standardized evaluations and may vary based on specific implementation details, hardware configurations, and testing methodologies. Users are advised to consult original research papers and official documentation for detailed technical insights and application guidelines. Individual model performance may differ in real-world scenarios and should be validated accordingly. If there are any discrepancies or updates beyond this report, please refer to the respective model providers for the most current information.

The repository link is in the comments below:

https://huggingface.co/blog/rajkumarrawal/september-2025-aiprl-lir-safety-reliability

AiParivartanResearchLab

ronantakizawa 
posted an update 2 days ago
Introducing the tiktok-trending-hashtags dataset: a compilation of 1,830 unique trending hashtags on TikTok from 2022 to 2025. The dataset captures one-time and seasonal viral moments on TikTok and is perfect for researchers, marketers, and content creators studying viral content patterns on social media.

ronantakizawa/tiktok-trending-hashtags
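
For example, a quick sketch to load it with the datasets library (the "train" split name is an assumption):

```python
from datasets import load_dataset

# Pull the dataset straight from the Hub (assuming a "train" split).
ds = load_dataset("ronantakizawa/tiktok-trending-hashtags", split="train")
print(len(ds), ds[0])
```
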
#tiktok #trends #social-media