Sourab Mangrulkar

smangrul

AI & ML interests

Machine Learning, Deep Learning, Natural Language Processing, Natural Language Generation, Computer Vision, Reinforcement Learning

smangrul's activity

posted an update 11 days ago
Unlocking the Power of locally running Llama-3 8B Model Agents with Chat-UI! 🔥🚀✨

I'm thrilled to share my hackathon-style side project:
1. Finetuned Llama-3 8B for function calling using PEFT QLoRA, since the instruct Llama-3 model doesn't support it out of the box; a minimal sketch of the setup follows this list. The Colab notebook is here: https://lnkd.in/ggJMzqh2. 🛠️
2. The finetuned model, along with its 4-bit quants, is here: https://lnkd.in/gNpFKY6V ✨
3. Cloned the Hugging Face Chat-UI https://lnkd.in/gKBKuUBQ and made it compatible with function calling by building upon the PR https://lnkd.in/gnqFuAd4 for my model and my local-inference use case with Ollama. This was a steep learning curve; I stayed awake the whole night to get it working. 💪🏽
4. On top of the above, I used SerpAPI for web browsing and the MongoDB Atlas free tier for persisting conversations and assistant configs. 🔎
5. More work is required on switching between using tools and responding directly; this is where I see the model break. 🧐
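
For reference, here is a minimal, hedged sketch of the QLoRA setup from step 1; the model id, target modules, and hyperparameters are illustrative assumptions, and the dataset and training loop are omitted.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization (the Q in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # assumption: illustrative base model
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# Attach trainable LoRA adapters to the attention projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical Llama targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()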

How cool is it that we are approaching a ChatGPT-like experience while using a locally hosted agent model running on your laptop! 💻
posted an update about 1 month ago
🤗 PEFT v0.10.0 release! 🔥🚀✨

Some highlights 📝:
1. FSDP+QLoRA and DeepSpeed Stage-3+QLoRA
2. Layer expansion + LoRA
3. DoRA support for Conv2D layers and quantized bitsandbytes layers
4. New LoftQ utility
5. Batched inference for mixed LoRA adapters.

The Answer.AI team, in collaboration with bitsandbytes and Hugging Face 🤗, open sourced code enabling the use of FSDP+QLoRA and explained the whole process in their insightful blogpost https://lnkd.in/g6jgfXyv. This is now integrated into the Hugging Face ecosystem.

For an end-to-end example of FSDP+QLoRA, please refer to https://lnkd.in/gT3yY-Rx.

For an end-to-end example of DeepSpeed Stage-3+QLoRA, please refer to https://lnkd.in/gkt-xZRE.
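
As a hedged illustration of the modeling-side change, the crux is the bnb_4bit_quant_storage argument, which stores the packed 4-bit weights in a dtype that FSDP can shard uniformly (the model id below is an assumption):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Storing the quantized weights in bf16 lets FSDP wrap and shard them like
# ordinary parameters, which is what makes FSDP+QLoRA possible
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",  # assumption: any causal LM works here
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)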

With the PR https://lnkd.in/g5F348MN, these changes are now upstreamed in https://lnkd.in/g5_MxYtY, thanks to Wing Lian! 🚀

Kudos to the Answer.AI team, Titus von Köller, Younes Belkada, Benjamin Bossan, and Zachary Mueller for all the help, without which this couldn't have been possible. 🤗

For efficient depthwise layer expansion, akin to the passthrough method of mergekit but without using additional memory, and with LoRAs attached to the expanded layers, refer to the details here! 🔥 https://lnkd.in/ge95ztjA
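
A minimal sketch of what this looks like via LoraConfig's layer_replication argument; the ranges and target modules below are illustrative:

from peft import LoraConfig

# Depthwise expansion: the adapted model stacks base layers [0, 24) followed
# by [16, 32), sharing the original weights (no extra memory); LoRA modules
# are then attached on top of the expanded stack
config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],
    layer_replication=[(0, 24), (16, 32)],
)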

DoRA is now supported for Conv2D layers as well as bitsandbytes quantized layers ✨. For more details, please refer to the thread below.
https://lnkd.in/gsJbuWPD

Now you can mix different LoRA adapters in a batch during inference, which speeds up inference by avoiding multiple passes through the base model, as would otherwise be needed when serving each adapter separately with batch_size=1! ⚡️
Details below. https://lnkd.in/gD-pcX_B
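
A minimal sketch, assuming a PeftModel with two adapters already loaded and a tokenizer and list of prompts defined; the reserved name "__base__" routes that row of the batch to the base model:

# peft_model has adapters "sql" and "chat" loaded; one adapter name per row
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = peft_model.generate(
    **inputs,
    adapter_names=["__base__", "sql", "chat"],
    max_new_tokens=64,
)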

LoftQ reduces quantization error by appropriately initializing the LoRA adapter weights. Normally, this is a two-step process. Benjamin Bossan added a new utility, replace_lora_weights_loftq, that applies LoftQ on the fly with bnb-quantized models.
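
A minimal usage sketch, assuming peft_model wraps a bnb 4-bit quantized base model with LoRA adapters already attached:

from peft import replace_lora_weights_loftq

# Re-initialize the LoRA weights in place so they compensate for the
# quantization error of the 4-bit base weights, with no separate LoftQ step
replace_lora_weights_loftq(peft_model)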

For more details, refer to the release notes 📝: https://lnkd.in/gg7-AmHA. As always, make sure the losses go down, and be happy watching your model train!
posted an update 2 months ago
🚨 Now you can run Starcoder-2 models locally on your Apple Silicon Mac (M1 Pro) with 16GB memory! 🧑🏽‍💻 ⚡️✨

Below is the UX with the Twinny extension, using bigcode/starcoder2-3b for FIM and codellama/CodeLlama-7b-Instruct-hf for chat. Dev tools show the prompt being sent to the Ollama server.

Starcoder-2 is now supported in llama.cpp https://github.com/ggerganov/llama.cpp/pull/5795!
cd llama.cpp
# Convert the HF checkpoint to GGUF at fp16
python convert-hf-to-gguf.py ../starcoder2-3b/ --outfile models/starcoder2-3b.gguf --outtype "f16"
# Quantize to 4-bit (Q4_K_M) to fit comfortably in 16GB of memory
./quantize models/starcoder2-3b.gguf models/starcoder2-3b-Q4_K_M.gguf Q4_K_M
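
To sanity-check the quantized model from Python, here is a hedged sketch using the llama-cpp-python bindings (the package and the relative path are assumptions, not part of the original workflow):

from llama_cpp import Llama

# Load the 4-bit GGUF produced above and run a short completion
llm = Llama(model_path="models/starcoder2-3b-Q4_K_M.gguf")
out = llm("def fibonacci(n):", max_tokens=64)
print(out["choices"][0]["text"])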

For more details, please go through the following tweet thread: https://x.com/sourab_m/status/1764583139798823235?s=20
posted an update 2 months ago
🚨 New release of 🤗 PEFT!

1. New methods for merging LoRA weights. Refer to this HF post for more details: https://huggingface.co/posts/smangrul/850816632583824

2. AWQ and AQLM support for LoRA. You can now:
- Train adapters on top of 2-bit quantized models with AQLM
- Train adapters on top of powerful AWQ quantized models
Note that for inference you can't merge these LoRA weights into the base model!

3. DoRA support: enabling DoRA is as easy as adding use_dora=True to your LoraConfig; see the sketch after this list. Find out more about this method here: https://arxiv.org/abs/2402.09353

4. Improved documentation, particularly docs regarding PEFT LoRA+DeepSpeed and PEFT LoRA+FSDP! πŸ“„ Check out the docs at https://huggingface.co/docs/peft/index.

5. Full Release Notes: https://github.com/huggingface/peft/releases/tag/v0.9.0
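
A minimal sketch of point 3, assuming a loaded causal LM and illustrative target modules:

from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # illustrative targets
    use_dora=True,  # decompose updates into magnitude and direction
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)  # base_model assumed already loaded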
posted an update 2 months ago
Exciting news for Indic LLMs! 🚀

Sarvam AI just released a high-quality, curated dataset of multi-turn conversations in English, Hindi, and Hinglish! 💎 With a whopping 100K samples! 🤯
Check it out: sarvamai/samvaad-hi-v1
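
A minimal sketch for loading it, assuming the standard datasets API:

from datasets import load_dataset

ds = load_dataset("sarvamai/samvaad-hi-v1")
print(ds)  # inspect the splits and the multi-turn conversation schema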

Who's going to finetune high-quality SFT models on this dataset? ✨
If you are interested in pushing the boundaries of Indic LLMs, join the Discord channel: https://discord.gg/hugging-face-879548962464493619
posted an update 2 months ago
🚀 Exciting news from 🤗 PEFT!

We are introducing new merging methods for LoRA adapters. These methods allow for retaining the unique capabilities of individual LoRAs while enabling them to combine their strengths: https://huggingface.co/blog/peft_merging

We previously explored merging LoRA adapters in the context of a personal code copilot 🚀👾✨. Please go through the thread below on it: https://x.com/sourab_m/status/1718008115726283004?s=20

New merging methods ties, dare, and magnitude_prune are introduced alongside the existing methods cat, linear, and svd. The blogpost details each method. These methods can be applied on the fly at inference time instead of merging offline, enabling a great developer UX. ✨

How do I merge my LoRA adapters?
Easy: use the class method add_weighted_adapter(). For example, below you can see how to combine three LoRA adapters using the ties method. We can observe that the merged adapter retains the capabilities of the individual adapters!
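
A hedged sketch, assuming three adapters named "norobots", "adcopy", and "sql" are already loaded on the PeftModel; the weights and density are illustrative:

# TIES merge: density keeps the top 20% of adapter parameters by magnitude
# before resolving sign conflicts across adapters
model.add_weighted_adapter(
    adapters=["norobots", "adcopy", "sql"],
    weights=[2.0, 1.0, 1.0],
    adapter_name="merged",
    combination_type="ties",
    density=0.2,
)
model.set_adapter("merged")  # activate the merged adapter for inference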

Now that we have seen that a merged adapter can retain the capabilities of individual LoRAs, how about use cases where we need the combined strengths of multiple LoRAs? Below is an application in the text-to-image domain. 🖼️

Kudos to @prateeky2806 (TIES author) and Le Yu (DARE author) for their kind and generous guidance on the PRs! Also, if you want to explore full model merging, refer to super cool projects like https://github.com/arcee-ai/mergekit/tree/main, https://github.com/Gryphe/BlockMerge_Gradient and https://github.com/yule-BUAA/MergeLM/tree/main.

Excited to see what the community creates on top of this! 🚀✨ #LetsBuildTogether