---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- finlang
- dora
base_model: mistralai/Mistral-7B-v0.1
---

# Uploaded model

- **Developed by:** anamikac2708
- **License:** cc-by-nc-4.0
- **Finetuned from model :** mistralai/Mistral-7B-v0.1

This Mistral model was trained with Hugging Face's TRL library and DoRA (https://arxiv.org/abs/2402.09353) using the open-sourced finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset, developed for finance applications by the FinLang team.

The DoRA paper proposes Weight-Decomposed Low-Rank Adaptation, which decomposes each pre-trained weight into two components, magnitude and direction, and fine-tunes both, employing LoRA for the directional updates to keep the number of trainable parameters small. This enhances both the learning capacity and training stability of LoRA while avoiding any additional inference overhead.
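For readers who want to replicate this setup, the sketch below shows how a DoRA adapter can be configured with the `peft` library. It is an illustration using the rank, alpha, and target modules listed under Training Details, not the exact script used for this checkpoint.

```python
# Minimal sketch (not the exact training script): configuring a DoRA adapter
# with the peft library. Requires peft >= 0.10, where LoraConfig gained `use_dora`.
# rank / alpha / target_modules mirror the values listed under Training Details.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
    device_map="auto",
)

dora_config = LoraConfig(
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_dora=True,          # decompose each weight into magnitude + direction (DoRA)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, dora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```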

## How to Get Started with the Model

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
peft_model_id = "anamikac2708/Mistral-7B-DORA-finetuned-investopedia-Lora-Adapters"
# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
  peft_model_id,
  device_map="auto",
  torch_dtype=torch.float16,
  #load_in_4bit = True
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
example = [{'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n        try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n        CONTEXT:\n        D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'}, {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'}, {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'}]
prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)
print(f"Query:\n{example[1]['content']}")
print(f"Context:\n{example[0]['content']}")
print(f"Original Answer:\n{example[2]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")
```
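If you prefer to serve the model without the PEFT wrapper, the adapter can typically be merged into the base weights. This is a standard `peft` pattern, not verified against this specific checkpoint, and the output path below is an arbitrary example.

```python
# Optional: fold the DoRA adapter into the base weights for standalone deployment.
merged = model.merge_and_unload()   # returns a plain transformers model with the adapter merged
merged.save_pretrained("mistral-7b-dora-investopedia-merged")
tokenizer.save_pretrained("mistral-7b-dora-investopedia-merged")
```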

## Training Details
```
Peft Config :

{
 'Technique' : 'QLORA',
 'rank': 256,
 'target_modules' : ["q_proj", "k_proj", "v_proj", "o_proj","gate_proj", "up_proj", "down_proj",],
 'lora_alpha' : 128,
 'lora_dropout' : 0, 
 'bias': "none",    
}
    
Hyperparameters:

{
    "epochs": 3,
    "evaluation_strategy": "epoch",
    "gradient_checkpointing": True,
    "max_grad_norm" : 0.3,
    "optimizer" : "adamw_torch_fused",
    "learning_rate" : 2e-5,
    "lr_scheduler_type": "constant",
    "warmup_ratio" : 0.03,
    "per_device_train_batch_size" : 4,  
    "per_device_eval_batch_size" : 4,
    "gradient_accumulation_steps" : 4
}
```
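For orientation, here is a rough sketch of how these hyperparameters map onto `transformers.TrainingArguments` and TRL's `SFTTrainer`. The dataset split names are assumptions, chat-template formatting is omitted, and argument names can vary across TRL versions, so treat this as an approximation rather than the exact training script.

```python
# Approximate training setup reflecting the hyperparameters above (not the exact script used).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Split names are assumptions; check the dataset card for the actual configuration.
ds = load_dataset("FinLang/investopedia-instruction-tuning-dataset")

training_args = TrainingArguments(
    output_dir="mistral-7b-dora-investopedia",  # arbitrary example path
    num_train_epochs=3,
    evaluation_strategy="epoch",
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    optim="adamw_torch_fused",
    learning_rate=2e-5,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
)

trainer = SFTTrainer(
    model=model,               # the DoRA-wrapped model from the earlier sketch
    args=training_args,
    train_dataset=ds["train"],
    eval_dataset=ds.get("test"),
    tokenizer=tokenizer,       # argument name varies across TRL versions
)
trainer.train()
```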

## Training Hardware and Results

The model was trained on 1x A100 80GB. Loss and runtime details below:
{'eval_loss': 0.946821391582489, 'eval_runtime': 840.1526, 'eval_samples_per_second': 0.801, 'eval_steps_per_second': 0.401, 'epoch': 3.0}
{'train_runtime': 64796.4597, 'train_samples_per_second': 0.246, 'train_steps_per_second': 0.031, 'train_loss': 0.709615581515563, 'epoch': 3.0}

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->
We evaluated the model on a 1k-sample test set from https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset. Evaluation was done using proprietary LLMs as a jury on four criteria, Correctness, Faithfulness, Clarity, and Completeness, on a scale of 1-5 (1 being worst and 5 being best), inspired by the paper Replacing Judges with Juries (https://arxiv.org/abs/2404.18796). The model achieved an average score of 4.48.
The average inference speed of the model is 37 seconds per query. Human evaluation is in progress to measure how closely human and LLM judgments align.
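For illustration only, the scoring scheme amounts to averaging the 1-5 jury scores across the four criteria and all test samples. The criterion names and score dictionaries below are hypothetical; the actual judge prompts and models are proprietary.

```python
# Illustrative only: averaging 1-5 jury scores over the four criteria and all test samples.
from statistics import mean

criteria = ["correctness", "faithfulness", "clarity", "completeness"]

def average_score(per_sample_scores: list[dict[str, int]]) -> float:
    """per_sample_scores: one dict of criterion -> score (1-5) per test example."""
    return mean(mean(s[c] for c in criteria) for s in per_sample_scores)

# Example with two hypothetical samples:
print(average_score([
    {"correctness": 5, "faithfulness": 4, "clarity": 5, "completeness": 4},
    {"correctness": 4, "faithfulness": 5, "clarity": 4, "completeness": 4},
]))  # -> 4.375
```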

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We are looking into ways to make the model reliably respect guardrails, allowing deployment in environments that require moderated outputs.

## License

Since non-commercial datasets are used for fine-tuning, we release this model as cc-by-nc-4.0.