
Uploaded model

  • Developed by: anamikac2708
  • License: cc-by-nc-4.0
  • Finetuned from model: meta-llama/Meta-Llama-3-8B

This Llama model was trained with Hugging Face's TRL library and NEFTune (https://arxiv.org/abs/2310.05914) on the open-source finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset, developed for finance applications by the FinLang team.

The NEFTune paper proposes adding random noise to the embedding vectors of the training data during the forward pass of fine-tuning. As a result, the model overfits less to the specifics of the instruction-tuning dataset, such as formatting details, exact wording, and text length. Instead of collapsing to the exact instruction distribution, the model is more capable of providing answers that incorporate the knowledge and behaviors of the pretrained base model.
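
In TRL, NEFTune is enabled with a single parameter. The sketch below is a minimal illustration, not the exact training script used for this model; the output directory, alpha value, and dataset formatting are assumptions:

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Minimal sketch: fine-tuning with NEFTune via TRL's SFTTrainer.
# neftune_noise_alpha scales the uniform noise added to the embedding
# vectors during training forward passes (alpha value assumed here).
dataset = load_dataset("FinLang/investopedia-instruction-tuning-dataset", split="train")

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="llama3-neftune", neftune_noise_alpha=5),
)
trainer.train()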

How to Get Started with the Model

import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

peft_model_id = 'anamikac2708/Llama3-8b-finetuned-NEFTune-investopedia'

# Load the LoRA adapter together with its Llama-3-8B base model
model = AutoPeftModelForCausalLM.from_pretrained(
    peft_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    # load_in_4bit=True  # uncomment to load with bitsandbytes int4
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example conversation: a system turn carrying the context, the user
# question, and the reference answer from the dataset
example = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n        try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n        CONTEXT:\n        D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}]

# Build the prompt from the system and user turns only; the assistant
# turn is kept aside as the reference answer
prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)

print(f"Query:\n{example[1]['content']}")
print(f"Context:\n{example[0]['content']}")
print(f"Original Answer:\n{example[2]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")

Training Details

Peft Config:

{
 'Technique' : 'QLoRA',
 'rank' : 256,
 'target_modules' : ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
 'lora_alpha' : 128,
 'lora_dropout' : 0,
 'bias' : "none",
}
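
For reference, these settings correspond to a peft LoraConfig roughly like the sketch below; the task_type is an assumption:

from peft import LoraConfig

# Sketch: the QLoRA adapter config above expressed as a peft LoraConfig
peft_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",  # assumed: causal language modeling
)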
    
Hyperparameters:

{
    "epochs": 3,
    "evaluation_strategy": "epoch",
    "gradient_checkpointing": True,
    "max_grad_norm" : 0.3,
    "optimizer" : "adamw_torch_fused",
    "learning_rate" : 2e-4,
    "lr_scheduler_type": "constant",
    "warmup_ratio" : 0.03,
    "per_device_train_batch_size" : 4,  
    "per_device_eval_batch_size" : 4,
    "gradient_accumulation_steps" : 4
}
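
These hyperparameters map onto a transformers TrainingArguments (or TRL SFTConfig) roughly as in the sketch below; output_dir and bf16 are assumptions, and recent transformers versions rename evaluation_strategy to eval_strategy:

from transformers import TrainingArguments

# Sketch: the hyperparameters above as TrainingArguments
training_args = TrainingArguments(
    output_dir="llama3-8b-neftune-investopedia",  # assumed
    num_train_epochs=3,
    evaluation_strategy="epoch",  # eval_strategy in newer transformers
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    optim="adamw_torch_fused",
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    bf16=True,  # assumed from the bfloat16 inference setup
)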

The model was trained on 1x A100 80GB; loss and runtime details are below:

{'eval_loss': 1.0598081350326538, 'eval_runtime': 369.4517, 'eval_samples_per_second': 1.597, 'eval_steps_per_second': 0.401, 'epoch': 3.0}
{'train_runtime': 31215.8079, 'train_samples_per_second': 0.448, 'train_steps_per_second': 0.028, 'train_loss': 0.9325563074660328, 'epoch': 3.0}

Evaluation

We evaluated the model on a 1k-sample test set from https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset. Evaluation was done using proprietary LLMs as a jury on four criteria, Correctness, Faithfulness, Clarity, and Completeness, each on a scale of 1-5 (1 being worst and 5 being best), inspired by the paper Replacing Judges with Juries (https://arxiv.org/abs/2404.18796). The model achieved an average score of 4.78. Average inference speed of the model is 2.06 seconds per query. Human evaluation is in progress to measure the percentage of alignment between human and LLM judgments.
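
As a rough illustration of how such jury scores aggregate into a single average, the sketch below assumes one score dict per test example; the data layout and values are hypothetical, not the actual evaluation harness:

criteria = ["correctness", "faithfulness", "clarity", "completeness"]

def average_score(jury_scores):
    # jury_scores: one dict per test example, mapping criterion -> 1-5 score
    per_example = [sum(s[c] for c in criteria) / len(criteria) for s in jury_scores]
    return sum(per_example) / len(per_example)

# Hypothetical scores for two examples
example_scores = [
    {"correctness": 5, "faithfulness": 5, "clarity": 4, "completeness": 5},
    {"correctness": 4, "faithfulness": 5, "clarity": 5, "completeness": 5},
]
print(average_score(example_scores))  # 4.75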

Bias, Risks, and Limitations

This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We are exploring ways to make the model respect guardrails, allowing deployment in environments that require moderated outputs.

License

Since non-commercial datasets are used for fine-tuning, we release this model under cc-by-nc-4.0.
