Stefano Fiorucci

anakin87

AI & ML interests

Contributing to Haystack LLM framework 🏗️. Language Models: orchestration, post-training, synthetic data...


Organizations

deepset · Blog-explorers · ZeroGPU Explorers · Hugging Face Discord Community

Posts (10)

🐝🐝🐝 𝐀 𝐒𝐰𝐚𝐫𝐦 𝐨𝐟 𝐀𝐠𝐞𝐧𝐭𝐬 𝐰𝐢𝐭𝐡 𝐋𝐥𝐚𝐦𝐚 3.2, 𝐆𝐏𝐓-4𝐨 𝐦𝐢𝐧𝐢 𝐚𝐧𝐝 𝐂𝐥𝐚𝐮𝐝𝐞 3.5 𝐒𝐨𝐧𝐧𝐞𝐭

𝐓𝐋;𝐃𝐑: I reimplemented the Swarm concept using Haystack, but made it work with both open and proprietary models 💫

✍️ blog article: https://haystack.deepset.ai/blog/swarm-of-agents
📓 notebook: https://haystack.deepset.ai/cookbook/swarm


Some time ago, OpenAI published Swarm: an educational framework for building multi-agent systems.

Their approach focuses on two main concepts:
・ 𝐑𝐨𝐮𝐭𝐢𝐧𝐞𝐬: Each agent follows specific 📜 instructions and uses 🛠️ tools to execute them.
・ 𝐇𝐚𝐧𝐝𝐨𝐟𝐟𝐬 🤝: Agents can transfer control to one another using tool/function calling.
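A handoff can be as simple as a tool that returns the name of the next agent. Here's a minimal sketch of the idea (illustrative function names, not the exact code from the blog post):

```python
# A handoff is just a tool whose return value names the next agent.
# The orchestration loop watches tool results: if one matches a known agent
# name, that agent takes over the shared conversation history.
# (Illustrative sketch; see the blog post for the full implementation.)

def transfer_to_sales_agent() -> str:
    """Pass the conversation to the Sales Agent."""
    return "Sales Agent"

def transfer_to_issues_and_repairs_agent() -> str:
    """Pass the conversation to the Issues and Repairs Agent."""
    return "Issues and Repairs Agent"
```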


When I first read these ideas, I thought: 𝘴𝘪𝘮𝘱𝘭𝘦 𝘣𝘶𝘵 𝘱𝘰𝘸𝘦𝘳𝘧𝘶𝘭! And they pair well with the recent unified tool support in Haystack.

🧑‍💻 So, I decided to re-implement these concepts using Haystack, and in just a few lines of code, I had a working prototype.

🆒 Bonus feature: this implementation isn't tied to a single model provider - different agents can be powered by different models!

I replicated the ACME customer service example from the original article, with 3 Agents:
🐝 Triage Agent - Llama 3.2 running on Ollama
🐝 Sales Agent - Anthropic Claude 3.5 Sonnet
🐝 Issues and Repairs Agent - OpenAI GPT-4o mini
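Here's a rough sketch of how such a setup can be wired together. The `Agent` container below is a simplified stand-in (not the class from the blog post), and the imports assume the `ollama-haystack` and `anthropic-haystack` integration packages, the relevant API keys, and a local Ollama server:

```python
# Rough sketch: three agents, three different chat generators.
# `Agent` here is a simplified, hypothetical container, not the blog post's class.
from dataclasses import dataclass, field
from typing import Callable

from haystack.components.generators.chat import OpenAIChatGenerator
from haystack_integrations.components.generators.anthropic import AnthropicChatGenerator
from haystack_integrations.components.generators.ollama import OllamaChatGenerator


@dataclass
class Agent:
    name: str
    instructions: str          # the agent's "routine" (system prompt)
    generator: object          # any Haystack chat generator
    tools: list[Callable] = field(default_factory=list)


triage_agent = Agent(
    name="Triage Agent",
    instructions="Greet the user and route them to the right department.",
    generator=OllamaChatGenerator(model="llama3.2"),   # open model, running locally
    tools=[transfer_to_sales_agent, transfer_to_issues_and_repairs_agent],  # handoffs from the sketch above
)

sales_agent = Agent(
    name="Sales Agent",
    instructions="Help the user buy ACME products.",
    generator=AnthropicChatGenerator(model="claude-3-5-sonnet-20240620"),
    tools=[],  # business tools omitted for brevity
)

issues_and_repairs_agent = Agent(
    name="Issues and Repairs Agent",
    instructions="Help the user with product issues and repairs.",
    generator=OpenAIChatGenerator(model="gpt-4o-mini"),
    tools=[],
)
```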


Want to see the full implementation and give it a try? Check out the blog post and notebook! ✨
---
Ok, you're finally convinced that synthetic data works... ⚗️

𝐍𝐨𝐰 𝐲𝐨𝐮 𝐰𝐚𝐧𝐭 𝐭𝐨 𝐠𝐞𝐧𝐞𝐫𝐚𝐭𝐞 𝐚𝐧 𝐢𝐧𝐬𝐭𝐫𝐮𝐜𝐭𝐢𝐨𝐧 𝐝𝐚𝐭𝐚𝐬𝐞𝐭 𝐟𝐨𝐫 𝐟𝐢𝐧𝐞-𝐭𝐮𝐧𝐢𝐧𝐠 𝐢𝐧 𝐚 𝐥𝐚𝐧𝐠𝐮𝐚𝐠𝐞 𝐨𝐭𝐡𝐞𝐫 𝐭𝐡𝐚𝐧 𝐄𝐧𝐠𝐥𝐢𝐬𝐡.
But how do you get started?

I explore how to do this with Magpie in my new article
https://huggingface.co/blog/anakin87/multilingual-magpie

---

🐦‍⬛ 𝐖𝐡𝐚𝐭 𝐢𝐬 𝐌𝐚𝐠𝐩𝐢𝐞?

It's a recent technique for creating synthetic instruction datasets.

Magpie is based on a simple but ingenious idea 👇
if you prompt an instruction-tuned model with a pre-query template, you can make it generate a plausible user query/instruction

Here's an example:
model: Llama-3-8B-Instruct
pre-query template: "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"
generated user instruction: "What are some of the responsibilities of a commercial pilot?"

You can then feed this instruction back into the same model to get the assistant response.

By repeating this process, it's possible to generate large synthetic datasets with relatively little effort.
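As a rough sketch, those two steps look like this with 🤗 transformers (model choice and sampling settings are my own assumptions; see the article linked above for the full workflow):

```python
# Rough sketch of the Magpie idea with transformers (not the article's exact code).
# Assumes access to the gated meta-llama/Meta-Llama-3-8B-Instruct model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Step 1: prompt with the pre-query template only, so the model completes it
# with a plausible user instruction.
pre_query = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"
inputs = tokenizer(pre_query, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=1.0)
instruction = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()

# Step 2: feed the generated instruction back as a normal user turn
# to get the assistant response.
chat = [{"role": "user", "content": instruction}]
chat_ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(chat_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
response = tokenizer.decode(out[0][chat_ids.shape[1]:], skip_special_tokens=True).strip()

print({"instruction": instruction, "response": response})
```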

🪄 The authors demonstrate that using these datasets for Supervised Fine Tuning (SFT) can yield strong performance, even competitive with the original instruct model.


🧗 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐧𝐠 𝐧𝐨𝐧-𝐄𝐧𝐠𝐥𝐢𝐬𝐡 𝐝𝐚𝐭𝐚

Most Language Models are primarily trained on English texts, so they tend to produce data in English.

How can we overcome this?

Earlier approaches were complex or costly.

Then @mrm8488 found a simple solution: add the target language to the pre-query template.
For Spanish, the template becomes "<|begin_of_text|><|start_header_id|>user<|end_header_id|>spanish:".

This method works for Spanish and German!
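Continuing the sketch above, the only change is the pre-query template:

```python
# Same sketch as before; appending the target language to the pre-query
# template nudges the model to generate the instruction in Spanish.
pre_query_es = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>spanish:"
inputs = tokenizer(pre_query_es, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=1.0)
spanish_instruction = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()
```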

❌ Unfortunately, it does not work well for other languages (🇮🇹, 🇳🇱, ...)

👇