chansung park
🦙🦙 LLaMA Duo project update

Last time, I gave a brief introduction to the LLaMA Duo project with @sayakpaul . It is a simple toolset for aligning an sLLM with a service LLM using a coverage dataset 👉🏻 (https://huggingface.co/posts/chansung/708646454991943).
- A coverage dataset is the set of (instruction, response) pairs we believe are the most important/desired. In systems thinking, each instruction is analogous to a function in traditional programming: we write unit tests and measure the coverage % across all features/functions. Similarly, we need to check what % of the instructions in the coverage dataset our fine-tuned model can handle satisfactorily (hence "coverage dataset").
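The coverage idea above can be sketched as a simple metric. Here, `judge` is a hypothetical callable (in practice an LLM-as-judge) standing in for whatever decides that a response is satisfactory; it is not part of the actual LLaMA Duo API.

```python
# Sketch of the coverage metric described above. `judge` is a hypothetical
# callable that returns True if the model's response to an instruction is
# satisfactory (e.g. an LLM-as-judge behind the scenes).
def coverage(pairs, judge):
    """Fraction of coverage-dataset instructions the model handles well."""
    if not pairs:
        return 0.0
    satisfied = sum(1 for instruction, response in pairs if judge(instruction, response))
    return satisfied / len(pairs)

# Toy usage with a stand-in judge that only checks the response is non-empty.
pairs = [("sort a list", "sorted(xs)"), ("reverse a string", "")]
print(coverage(pairs, lambda i, r: bool(r)))  # 0.5
```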

We tested it with the "Coding" category of the HuggingFaceH4/no_robots dataset, which has about 300 SFT training data points under that category. After fine-tuning the Gemma 7B model on it, the result was very poor: LLaMA Duo's evaluation tool scored below 20% on similarity and preciseness over the test split.
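Pulling out one category like this is a one-liner with 🤗 Datasets (`load_dataset("HuggingFaceH4/no_robots")` plus `.filter(...)`); a self-contained sketch of the same selection on plain rows, assuming each row carries a `category` field as no_robots does:

```python
# Hedged sketch of selecting a single category from no_robots-style rows.
# Each row is assumed to carry a "category" field, as in the actual dataset.
def select_category(rows, category):
    return [row for row in rows if row.get("category") == category]

rows = [
    {"category": "Coding", "prompt": "Write a bash loop"},
    {"category": "Chat", "prompt": "Tell me a joke"},
]
print(len(select_category(rows, "Coding")))  # 1
```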

So we used LLaMA Duo's synthetic data generation tool to generate 60k data points that look similar to the original dataset. We first created ~10k synthetic data points, then created 50k more based on the synthetic dataset itself.
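The two-round bootstrap above (seed data → first synthetic round → second round seeded from the first) can be sketched as follows, with a hypothetical `generate_similar` standing in for the actual Gemini-backed generator:

```python
import random

# Hypothetical stand-in for the synthetic data generator: given a seed
# example, it returns a new record "similar" to it. The real tool calls
# an LLM here.
def generate_similar(example, rng):
    return {"prompt": example["prompt"] + f" (variant {rng.randrange(10**6)})"}

def bootstrap(seed_data, first_round, second_round, rng):
    """Generate synthetic data in two rounds, seeding round 2 from round 1."""
    round1 = [generate_similar(rng.choice(seed_data), rng) for _ in range(first_round)]
    round2 = [generate_similar(rng.choice(round1), rng) for _ in range(second_round)]
    return round1 + round2

rng = random.Random(0)
seed = [{"prompt": "Explain list comprehensions"}]
data = bootstrap(seed, first_round=10, second_round=50, rng=rng)
print(len(data))  # 60
```

In the real run the rounds were ~10k and 50k; the structure is the same.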

After fine-tuning Gemma 7B on the 60k synthetic dataset, the evaluation results rose to 80-90%. Also, when trying the model out in the UI, it tends to give good responses.

It is a good showcase of transitioning from a service LLM to an sLLM, or of keeping a backup sLLM for service LLM failure scenarios. I am going to expand these experiments to all categories of the no_robots dataset, which will generate roughly > 100k data points.

Here are some links:
- LLaMA Duo project repo: https://github.com/deep-diver/llamaduo
- 60k Coding synthetic dataset: chansung/merged_ds_coding
- Fine-tuned Gemma 7B model: chansung/coding_llamaduo_60k_v0.2
💻 Smoothing the Transition from Service LLM to Local LLM

Imagine your go-to LLM service is down, or you need to use it offline – yikes! This project is all about having that "Plan B" ready to go. Here's LLaMA Duo, which I've been building with @sayakpaul :

✨ Fine-tune a smaller LLM: We used Hugging Face's alignment-handbook to teach a smaller LLM to mimic my favorite large language model. Think of it as that super-smart AI assistant getting a capable understudy.

🤖 Batch Inference: Let's get that fine-tuned LLM working! My scripts generate lots of text like a champ, and we've made sure things run smoothly even with bigger workloads.
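The batching part boils down to chunking prompts and handing each chunk to a generator; a minimal sketch, where `generate` is a placeholder for whatever wraps the fine-tuned model:

```python
# Sketch of batched generation: chunk prompts and hand each chunk to a
# `generate` callable (in the real project this would wrap the fine-tuned
# Gemma model).
def batched(items, batch_size):
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def run_batch_inference(prompts, generate, batch_size=8):
    outputs = []
    for batch in batched(prompts, batch_size):
        outputs.extend(generate(batch))
    return outputs

# Toy generator that just upper-cases its inputs, to show the plumbing.
prompts = [f"prompt {i}" for i in range(20)]
outs = run_batch_inference(prompts, lambda batch: [p.upper() for p in batch], batch_size=8)
print(len(outs))  # 20
```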

🧐 Evaluation: How well is my small LLM doing? We integrated with the Gemini API to use it as an expert judge – it compares my model's work to the original. Talk about a tough critic!
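The judge step amounts to prompting Gemini to compare the two answers and then parsing its scores. A hedged sketch, assuming a reply of the form "similarity: 85, preciseness: 90" (the real prompt and reply format in LLaMA Duo may differ):

```python
import re

# Hedged sketch of LLM-as-judge scoring. The judge prompt and the reply
# format below are illustrative assumptions, not the project's actual ones.
JUDGE_TEMPLATE = (
    "Compare the candidate answer to the reference answer.\n"
    "Reference: {reference}\nCandidate: {candidate}\n"
    "Reply as 'similarity: <0-100>, preciseness: <0-100>'."
)

def parse_scores(reply):
    match = re.search(r"similarity:\s*(\d+),\s*preciseness:\s*(\d+)", reply)
    if not match:
        raise ValueError(f"unparseable judge reply: {reply!r}")
    return {"similarity": int(match.group(1)), "preciseness": int(match.group(2))}

print(parse_scores("similarity: 85, preciseness: 90"))
```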

🪄 Synthetic Data Generation: Need to boost that model's performance? Using Gemini's feedback, we can create even more training data, custom-made to make the LLM better.

🧱 Building Blocks: This isn't just a one-time thing – it's a toolkit for all kinds of LLMOps work. Want to change your evaluation metrics? Bring in models trained differently? Absolutely, let's make it happen.

Why this project is awesome:

💪 Reliability: Keep things running no matter what happens to your main LLM source.
🔒 Privacy: Process sensitive information on your own terms.
🗺️ Offline capable: No internet connection? No problem!
🕰️ Version Control: Lock in your favorite LLM's behavior, even if the service model changes.

We're excited to share the code on GitHub. Curious to see what you all think! 👉🏻 https://github.com/deep-diver/llamaduo