---
base_model: unsloth/llama-3.2-3b-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- davzoku/moecule-kyc
---

# 🫐🥫 kyc_adapter_3b


## Model Details

This is a LoRA adapter for the [Moecule](https://huggingface.co/collections/davzoku/moecule-67dabc6bb469dcd00ad2a7c5) family of MoE models. It is part of [Moecule Ingredients](https://huggingface.co/collections/davzoku/moecule-ingredients-67dac0e6210eb1d95abc6411), where all relevant expert models, LoRA adapters, and datasets can be found.

### Additional Information

- QLoRA 4-bit fine-tuning with Unsloth
- Base Model: `unsloth/llama-3.2-3b-Instruct`

## The Team

- CHOCK Wan Kee
- Farlin Deva Binusha DEVASUGIN MERLISUGITHA
- GOH Bao Sheng
- Jessica LEK Si Jia
- Sinha KHUSHI
- TENG Kok Wai (Walter)

## References

- [Unsloth Tutorial](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama)
- [Unsloth Finetuning Colab Notebook]()
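Since this card describes a LoRA adapter rather than a standalone model, a typical way to use it is to load the base model and attach the adapter with PEFT. The sketch below assumes the adapter is published under the Hub repo id `davzoku/kyc_adapter_3b` (inferred from the card title; the actual path may differ):

```python
# Hypothetical usage sketch: attach this LoRA adapter to its base model
# with PEFT. The adapter repo id below is assumed from the card title.
BASE_MODEL = "unsloth/llama-3.2-3b-Instruct"
ADAPTER_REPO = "davzoku/kyc_adapter_3b"  # assumed Hub repo id

def load_adapter_model():
    """Load the base model, then attach the LoRA adapter weights."""
    # Imports are deferred so the sketch can be read without the libraries installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_REPO)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_adapter_model()
    prompt = "What documents are required for KYC verification?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the merged weights are preferred over runtime attachment, `model.merge_and_unload()` can fold the adapter into the base model after loading.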