
Llama 2 fine-tuned on Intel hardware using PEFT and LoRA

Description: This model is Meta's Llama 2 fine-tuned to convert natural-language instructions into Python code snippets. It has been optimized for efficient deployment on resource-constrained hardware through LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation), which enable 4-bit quantization with minimal loss in output quality. Using optimization libraries such as Hugging Face Accelerate and the Intel Extension for PyTorch (IPEX), it supports streamlined fine-tuning and inference on Intel Xeon Scalable processors.
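
As a rough illustration of the LoRA setup described above, the sketch below attaches low-rank adapters to a Llama 2 base model with the peft library. The base checkpoint, rank, alpha, and target modules are illustrative assumptions, not the exact configuration used to train this model.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base checkpoint is an assumption; the card does not state which Llama 2 size was used
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable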

Usage: To use this model for generating Python code snippets, load it with the Hugging Face Transformers library and follow the prompt template it was trained on: <s> [INST] instruction [/INST] answer </s>. For further fine-tuning, use the Hugging Face Trainer class with your training configuration, leveraging Intel hardware and oneAPI optimization libraries for better performance, as sketched below.
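
The following is a minimal sketch of that workflow: it formats training pairs with the Llama 2 prompt template and runs the Trainer. The base checkpoint, toy dataset, hyperparameters, and output path are illustrative assumptions, and use_ipex enables the Intel Extension for PyTorch only if IPEX is installed.

import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed base
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

def format_example(instruction, answer):
    # Prompt template from above: <s> [INST] instruction [/INST] answer </s>
    return f"<s>[INST] {instruction} [/INST] {answer} </s>"

texts = [format_example("Reverse a list in Python.", "items[::-1]")]  # toy data

class PromptDataset(torch.utils.data.Dataset):
    def __init__(self, texts, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=512)
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = item["input_ids"].clone()  # causal LM: labels mirror inputs
        return item

args = TrainingArguments(
    output_dir="llama2-python-finetune",  # hypothetical path
    per_device_train_batch_size=1,
    num_train_epochs=1,
    use_ipex=True,  # Intel Extension for PyTorch optimizations (requires IPEX)
    bf16=True,      # bfloat16 on recent Xeon processors
)

Trainer(model=model, args=args, train_dataset=PromptDataset(texts, tokenizer)).train()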

Use in Transformers

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Smd-Arshad/Llama-python-finetuned")
model = AutoModelForCausalLM.from_pretrained("Smd-Arshad/Llama-python-finetuned")
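
Once loaded, inference follows the prompt template above; the instruction text and generation settings in this example are illustrative.

# Generate a code snippet for a natural-language instruction
prompt = "<s>[INST] Write a Python function that checks if a number is prime. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))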