anamikac2708 committed
Commit 9b6a7a2
1 Parent(s): e63509e

Update README.md

Files changed (1): README.md (+84 -4)

README.md CHANGED

---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- loftq
base_model: meta-llama/Meta-Llama-3-8B
---

# Uploaded model

- **Developed by:** anamikac2708
- **License:** cc-by-nc-4.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library, using the open-source finance dataset https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset developed for finance applications by the FinLang team.

The model is finetuned using LoftQ (https://arxiv.org/abs/2310.08659), a quantization framework that simultaneously quantizes an LLM and finds a proper low-rank initialization for LoRA fine-tuning. Such an initialization alleviates the discrepancy between the quantized and full-precision model and significantly improves generalization on downstream tasks.
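
This card does not include the full training script; the snippet below is only a minimal sketch of how a LoftQ-initialized LoRA setup is typically configured with the PEFT library, mirroring the adapter settings listed under Training Details. Names such as `base_model_id` are illustrative.

```python
# Minimal sketch (not the exact training script): LoftQ-initialized LoRA adapters via PEFT,
# mirroring the rank/alpha/target-module settings listed under Training Details below.
from transformers import AutoModelForCausalLM
from peft import LoftQConfig, LoraConfig, get_peft_model

base_model_id = "meta-llama/Meta-Llama-3-8B"  # illustrative; gated repo, access required
model = AutoModelForCausalLM.from_pretrained(base_model_id)  # LoftQ needs the full-precision weights

loftq_config = LoftQConfig(loftq_bits=4)  # quantize the backbone to 4 bits while initializing LoRA
lora_config = LoraConfig(
    init_lora_weights="loftq",   # use LoftQ instead of the default LoRA initialization
    loftq_config=loftq_config,
    r=256,
    lora_alpha=128,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)  # ready for LoRA fine-tuning (e.g., with TRL)
```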

## How to Get Started with the Model

```python
import torch
from unsloth import FastLanguageModel
from transformers import pipeline

max_seq_length = 2048

# Load the LoftQ-finetuned LoRA adapters together with the base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "anamikac2708/Llama3-8b-LoftQ-finetuned-investopedia-Lora-Adapters",
    max_seq_length = max_seq_length,
    dtype = torch.bfloat16,
    load_in_4bit = False,
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Example from the Investopedia instruction-tuning dataset:
# a system turn with context, a user question, and the reference answer
example = [
    {'content': 'You are a financial expert and you can answer any questions related to finance. You will be given a context and a question. Understand the given context and\n try to answer. Users will ask you questions in English and you will generate answer based on the provided CONTEXT.\n CONTEXT:\n D. in Forced Migration from the University of the Witwatersrand (Wits) in Johannesburg, South Africa; A postgraduate diploma in Folklore & Cultural Studies at Indira Gandhi National Open University (IGNOU) in New Delhi, India; A Masters of International Affairs at Columbia University; A BA from Barnard College at Columbia University\n', 'role': 'system'},
    {'content': ' In which universities did the individual obtain their academic qualifications?\n', 'role': 'user'},
    {'content': ' University of the Witwatersrand (Wits) in Johannesburg, South Africa; Indira Gandhi National Open University (IGNOU) in New Delhi, India; Columbia University; Barnard College at Columbia University.', 'role': 'assistant'},
]

# Build the prompt from the system and user turns only; the assistant turn is kept as the reference answer
prompt = pipe.tokenizer.apply_chat_template(example[:2], tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_k=50, top_p=0.1,
               eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)

print(f"Query:\n{example[1]['content']}")
print(f"Context:\n{example[0]['content']}")
print(f"Original Answer:\n{example[2]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")
```
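
If Unsloth is not installed, the adapters can likely also be loaded with plain Transformers + PEFT. This is a minimal sketch and has not been verified against this exact repository:

```python
# Minimal sketch, assuming the repository hosts standard PEFT LoRA adapters:
# load them with Transformers + PEFT instead of Unsloth.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "anamikac2708/Llama3-8b-LoftQ-finetuned-investopedia-Lora-Adapters"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
```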

## Training Details

```
Peft Config:

{
  'Technique': 'QLoRA',
  'rank': 256,
  'target_modules': ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
  'lora_alpha': 128,
  'lora_dropout': 0,
  'bias': "none",
}

Hyperparameters:

{
  "epochs": 3,
  "evaluation_strategy": "epoch",
  "gradient_checkpointing": True,
  "max_grad_norm": 0.3,
  "optimizer": "adamw_torch_fused",
  "learning_rate": 2e-5,
  "lr_scheduler_type": "constant",
  "warmup_ratio": 0.03,
  "per_device_train_batch_size": 4,
  "per_device_eval_batch_size": 4,
  "gradient_accumulation_steps": 4
}
```
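
For completeness, here is a minimal sketch of how the hyperparameters above map onto TRL's `SFTTrainer`. It is not the exact training script: it assumes the `model` from the LoftQ/PEFT sketch earlier, and the dataset split names and text-column handling are assumptions.

```python
# Minimal sketch (not the exact training script): wiring the hyperparameters above into
# TRL's SFTTrainer. Assumes `model` from the LoftQ/PEFT sketch earlier; the dataset split
# names and the use of a pre-formatted "text" column are assumptions, not card facts.
from datasets import load_dataset
from transformers import AutoTokenizer, TrainingArguments
from trl import SFTTrainer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
dataset = load_dataset("FinLang/investopedia-instruction-tuning-dataset")

args = TrainingArguments(
    output_dir="llama3-8b-loftq-investopedia",  # illustrative output path
    num_train_epochs=3,
    evaluation_strategy="epoch",
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    optim="adamw_torch_fused",
    learning_rate=2e-5,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset["train"],   # split name assumed
    eval_dataset=dataset["test"],     # split name assumed
    max_seq_length=2048,
    dataset_text_field="text",        # or a formatting_func, depending on the dataset schema
)
trainer.train()
```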

## Training Hardware and Resource Usage

The model was trained on 1x A100 80GB. Loss and memory consumption details:

- {'eval_loss': 0.9598488211631775, 'eval_runtime': 238.8119, 'eval_samples_per_second': 2.722, 'eval_steps_per_second': 0.683, 'epoch': 3.0}
- {'train_runtime': 19338.1608, 'train_samples_per_second': 0.796, 'train_steps_per_second': 0.05, 'train_loss': 0.8229054163673337, 'epoch': 3.0}
- Total training time: 19340.593022108078 seconds
- 19338.1608 seconds (322.3 minutes) used for training
- Peak reserved memory = 45.686 GB
- Peak reserved memory for training = 25.934 GB
- Peak reserved memory % of max memory = 57.72 %
- Peak reserved memory for training % of max memory = 32.765 %
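
The peak-memory lines above look like the readout produced by `torch.cuda` memory statistics (the pattern used in Unsloth's example notebooks); a minimal sketch of how such numbers are typically produced:

```python
# Minimal sketch: reproducing peak-memory readouts like the ones above from torch.cuda
# statistics. The baseline is recorded after model loading, the peak after training.
import torch

gpu = torch.cuda.get_device_properties(0)
max_memory = round(gpu.total_memory / 1024**3, 3)                        # total GPU memory in GB
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)  # record before trainer.train()

# ... trainer.train() runs here ...

used_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)       # peak reserved memory in GB
used_for_training = round(used_memory - start_gpu_memory, 3)
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_for_training} GB.")
print(f"Peak reserved memory % of max memory = {round(used_memory / max_memory * 100, 3)} %.")
print(f"Peak reserved memory for training % of max memory = {round(used_for_training / max_memory * 100, 3)} %.")
```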

## Evaluation

We evaluated the model on a 1k-sample test set from https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset. Evaluation was done using proprietary LLMs as a jury, scoring four criteria (Correctness, Faithfulness, Clarity, Completeness) on a scale of 1-5 (1 being worst, 5 being best), inspired by the paper "Replacing Judges with Juries" (https://arxiv.org/abs/2404.18796). The model received an average score of 4.84.
Average inference time is 14.59 seconds. Human evaluation is in progress to measure the percentage of agreement between human and LLM judgments.
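
The evaluation harness itself is not published in this card. Purely as an illustration of the aggregation step (the judge prompts and jury composition are not specified here), per-criterion jury scores can be averaged like this:

```python
# Illustration of the aggregation step only. The judge prompts and the jury's composition
# are not published in this card; `jury_scores` is made-up example data, not real results.
from statistics import mean

CRITERIA = ["Correctness", "Faithfulness", "Clarity", "Completeness"]

# One dict per (test sample, juror) pair, each criterion scored on a 1-5 scale
jury_scores = [
    {"Correctness": 5, "Faithfulness": 5, "Clarity": 4, "Completeness": 5},
    {"Correctness": 5, "Faithfulness": 4, "Clarity": 5, "Completeness": 5},
]

per_criterion = {c: mean(s[c] for s in jury_scores) for c in CRITERIA}
overall = round(mean(per_criterion.values()), 2)
print(per_criterion)
print("Average score:", overall)
```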

## Bias, Risks, and Limitations

This model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We are exploring ways to make the model respect guardrails so that it can be deployed in environments that require moderated outputs.

## License

Since non-commercial datasets are used for fine-tuning, we release this model under cc-by-nc-4.0.