Model Overview

A LoRA (Low-Rank Adaptation) adapter decomposed from the weight difference between the base and instruct versions of Llama-3.1-8B.

Model Details

  • Base Model: meta-llama/Llama-3.1-8B
  • Decomposed Against: meta-llama/Llama-3.1-8B-Instruct
  • Adaptation Method: LoRA
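
For background, a LoRA of this kind is typically extracted by taking the per-module weight difference between the instruct and base checkpoints and compressing it with a truncated SVD. The sketch below illustrates the idea only; extract_lora_pair and its variable names are hypothetical, and the actual tool used to produce this adapter may differ.

import torch

def extract_lora_pair(w_base: torch.Tensor, w_instruct: torch.Tensor, r: int = 16):
    """Approximate (w_instruct - w_base) with a rank-r factorization B @ A."""
    delta = (w_instruct - w_base).float()
    # Truncated SVD: keep only the r largest singular values/vectors
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    root_s = s[:r].sqrt()
    lora_a = torch.diag(root_s) @ vh[:r, :]  # shape: (r, in_features)
    lora_b = u[:, :r] @ torch.diag(root_s)   # shape: (out_features, r)
    return lora_a, lora_b

Applying lora_b @ lora_a (scaled by alpha / r) on top of the corresponding base weight then approximates the instruct weight for that module.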

Extraction Configuration

  • Rank (r): 16 -> 16
  • Alpha: 1 -> 16
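
The arrows presumably show the values moving from the extraction tool's defaults to the final adapter settings. In PEFT, the LoRA update B @ A is scaled by alpha / r before being added to the base weights, so with r = 16 and alpha = 16 the adapter is applied at full strength. A quick sanity check (plain Python, illustrative only):

# LoRA scaling factor in PEFT: the update B @ A is multiplied by alpha / r
r, alpha = 16, 16
print(alpha / r)  # 1.0 -> the extracted delta is applied at full strength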

LoRA Configuration

  • Rank (r): 16
  • Alpha: 16
  • Target Modules:
    • q_proj (Query projection)
    • k_proj (Key projection)
    • v_proj (Value projection)
    • o_proj (Output projection)
    • up_proj (MLP up projection)
    • down_proj (MLP down projection)
    • gate_proj (MLP gate projection)
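
For reference, the settings above map onto a peft LoraConfig roughly as follows. This is a sketch for orientation; the adapter_config.json shipped with the adapter is the authoritative source.

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "up_proj", "down_proj", "gate_proj",
    ],
    task_type="CAUSAL_LM",
)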

Usage

This adapter must be used in conjunction with the base meta-llama/Llama-3.1-8B model; applying it approximates the behavior of the instruct model.

Loading the Model

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "path_to_adapter")
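
Once the adapter is loaded, the combined model can be used like any other causal LM. A minimal, illustrative generation example (the prompt text is arbitrary):

# Run a quick generation with the adapted model
inputs = tokenizer("Explain LoRA in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

If you prefer a standalone checkpoint, model.merge_and_unload() folds the adapter weights into the base model so that peft is no longer needed at inference time.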

Limitations and Biases

  • This adapter may inherit limitations and biases present in the underlying Llama-3.1-8B base and instruct models