---
library_name: peft
widget:
  - text: >-
      Below is an instruction that describes a task, paired with an input that
      provides further context. Write a response that appropriately completes
      the request. ### Instruction: Generate an SQL statement to add a row in
      the customers table where the columns are name, address, and city. ###
      Input: name = John, address = 123 Main Street, city = Winter Park ###
      Response:
inference:
  parameters:
    temperature: 0.1
    max_new_tokens: 1024
base_model: meta-llama/Llama-2-7b-hf
license: llama2
datasets:
  - sahil2801/CodeAlpaca-20k
language:
  - en
tags:
  - code
  - text-generation-inference
  - finetuned
  - llama-2
  - code-llama
---

# Model Card for Llama-2-7b-hf-codealpaca-4bit

## How to Get Started with the Model

To use this adapter:

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

# Load the base model in 4-bit precision
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", load_in_4bit=True)

# Wrap the base model with the pretrained adapter weights
config = PeftConfig.from_pretrained("MaziyarPanahi/Llama-2-7b-hf-codealpaca-4bit")
model = PeftModel.from_pretrained(model, "MaziyarPanahi/Llama-2-7b-hf-codealpaca-4bit", config=config)
```

Prompt template:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
```

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.7.1
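
For a fuller end-to-end illustration, the adapter, the quantization settings listed above, and the prompt template can be combined into a single generation script. The sketch below is illustrative rather than the exact setup used for the hosted widget: the explicit `BitsAndBytesConfig`, the tokenizer loaded from the base repository, `device_map="auto"`, and the sampling settings (temperature 0.1 and max_new_tokens 1024, taken from the widget configuration) are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization settings mirroring the training config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the base model in 4-bit; the tokenizer is assumed to come from the base repository
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach the adapter weights
model = PeftModel.from_pretrained(base_model, "MaziyarPanahi/Llama-2-7b-hf-codealpaca-4bit")

# Fill in the prompt template with the widget example
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Generate an SQL statement to add a row in the customers table where the columns "
    "are name, address, and city.\n\n"
    "### Input:\n"
    "name = John, address = 123 Main Street, city = Winter Park\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Passing an explicit `quantization_config` is equivalent to the shorthand `load_in_4bit=True` used earlier, but it also lets the NF4 and double-quantization settings match those listed under the training procedure.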