
Model Card for LoRA Adapter: Llama 3 Instruct Fine-Tuned on Deutsche Bahn FAQ

Model Overview

Model Name: islam-hajosman/llama3_instruct_fine_tuned_bahn_1k_v1_lora_adapter
Architecture: Llama 3 8B Instruct with LoRA adapter
Quantization: 4-bit NF4 with double quantization
Domain-Specific Fine-Tuning Dataset: islam-hajosman/deutsche_bahn_faq_1k

This model card describes a LoRA adapter fine-tuned to improve responses to FAQs from the Deutsche Bahn website. The project is part of a Master's thesis aiming to enhance domain-specific performance.
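
The base model is loaded in 4-bit NF4 with double quantization. Below is a minimal sketch of a matching bitsandbytes setup; the exact arguments are not given in this card, so treat the compute dtype and base checkpoint as assumptions:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit quantization
    bnb_4bit_quant_type="nf4",              # NF4 data type, as stated above
    bnb_4bit_use_double_quant=True,         # double quantization, as stated above
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)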

Fine-Tuning Configuration

LoRA Configuration

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                    # rank of the low-rank update matrices
    lora_alpha=32,           # scaling factor (alpha / r = 2)
    lora_dropout=0.0,
    bias="none",
    target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj'],  # all attention and MLP projections
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
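
PEFT can report the trainable fraction directly, which is how the 0.915% figure under Training Summary can be reproduced:

# Prints trainable vs. total parameter counts (expected: roughly 0.915% trainable)
model.print_trainable_parameters()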

Hardware Used

  • GPU: 1x H100 (80 GB PCIe)
  • CPU: 26 cores
  • RAM: 205.4 GB
  • Storage: 1.1 TB SSD
  • Cost: $2.50 per hour

Training Summary

  • Total Trainable Parameters: 0.915% of the 8B base parameters (≈73M)
  • LoRA Adapter Size: 4.37 GB
  • Training Time and Cost: roughly $2 for ~50 minutes at the hourly rate above
  • Steps per Epoch: 16 (1024 samples, batch size 8, gradient accumulation 8; see the worked check below)
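
These step counts follow directly from the dataset size and batch settings; a quick sanity check:

samples = 1024
per_device_batch = 8
grad_accum = 8
epochs = 30

effective_batch = per_device_batch * grad_accum  # 64 samples per optimizer step
steps_per_epoch = samples // effective_batch     # 1024 / 64 = 16
total_steps = steps_per_epoch * epochs           # 16 * 30 = 480, matching global_step below

print(steps_per_epoch, total_steps)  # 16 480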

Performance Metrics

  • Training Completed (TrainOutput):
    • Global steps: 480 (30 epochs × 16 steps per epoch)
    • Final training loss: 0.2841
    • Train runtime: 3012.8 seconds (~50 minutes)
    • Throughput: 10.197 samples/second, 0.159 steps/second
    • Total FLOPs: 3.87 × 10^17

Weights & Biases Tracking
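
Training metrics were logged to Weights & Biases. A minimal sketch of how that logging is typically enabled with the Hugging Face Trainer (assumed setup; the project and run names are hypothetical):

import wandb
from transformers import TrainingArguments

wandb.init(project="llama3-bahn-faq", name="lora-1k-v1")  # hypothetical project/run names

training_args = TrainingArguments(
    output_dir="./results",
    report_to="wandb",              # stream Trainer logs (loss, lr, throughput) to W&B
    logging_steps=10,               # assumed logging interval
    per_device_train_batch_size=8,  # matches the batch size above
    gradient_accumulation_steps=8,  # matches the accumulation above
    num_train_epochs=30,
)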

Usage

To use this LoRA adapter, first load the base model, then apply the adapter from Hugging Face under the name islam-hajosman/llama3_instruct_fine_tuned_bahn_1k_v1_lora_adapter. The model is optimized for domain-specific answers to Deutsche Bahn FAQs.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Tokenizer shipped with the adapter repository
tokenizer = AutoTokenizer.from_pretrained("islam-hajosman/llama3_instruct_fine_tuned_bahn_1k_v1_lora_adapter")

# Load the base model first, then attach the LoRA adapter on top.
# The base checkpoint is assumed to be Meta-Llama-3-8B-Instruct; adjust if a different one was used.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "islam-hajosman/llama3_instruct_fine_tuned_bahn_1k_v1_lora_adapter")

input_text = "Ihre Frage hier"  # "Your question here"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
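
Llama 3 Instruct models are trained on a chat format, so answers are usually better when the prompt goes through the tokenizer's chat template. A sketch, assuming the adapter keeps the base model's template (the example question is illustrative):

# "How do I cancel a ticket?" – an illustrative Deutsche Bahn FAQ question
messages = [{"role": "user", "content": "Wie kann ich ein Ticket stornieren?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))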