---
license: gpl
datasets:
- bavest/fin-llama-dataset
tags:
- finance
- llm
- llama
- trading
---

# FIN-LLAMA

> Efficient Finetuning of Quantized LLMs for Finance

[Adapter Weights](https://huggingface.co/bavest/fin-llama)
| [Dataset](https://huggingface.co/datasets/bavest/fin-llama-dataset)

## Installation

To load models in 4-bit with transformers and bitsandbytes, you have to install accelerate, transformers, and peft from source, and make sure you have the latest version of the bitsandbytes library (0.39.0):

```bash
pip3 install -r requirements.txt
pip3 install -q -U bitsandbytes
pip3 install -q -U git+https://github.com/huggingface/transformers.git
pip3 install -q -U git+https://github.com/huggingface/peft.git
pip3 install -q -U git+https://github.com/huggingface/accelerate.git
```
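
To confirm the environment is ready, a quick sanity check (a minimal sketch; it only assumes the packages above installed cleanly):

```python
# Verify the key libraries are importable and recent enough for 4-bit loading.
import bitsandbytes
import transformers

print("bitsandbytes:", bitsandbytes.__version__)   # expect 0.39.0 or newer
print("transformers:", transformers.__version__)
```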

### Other dependencies

If you want to finetune the model on a new instance, you can run `scripts/setup.sh` to install the Python and CUDA packages:

```bash
bash scripts/setup.sh
```

## Finetuning

```bash
# use 4-bit finetuning
bash script/finetune_4bit.sh

# use 8-bit finetuning
bash script/finetune_8bit.sh
```

## Usage

Quantization parameters are controlled via the `BitsAndBytesConfig`:

- Loading in 4-bit is activated through `load_in_4bit`.
- The datatype used for linear-layer computations is set with `bnb_4bit_compute_dtype`.
- Nested quantization is activated through `bnb_4bit_use_double_quant`.
- The datatype used for quantization is specified with `bnb_4bit_quant_type`. Note that there are two supported quantization datatypes, `fp4` (four-bit float) and `nf4` (normal four-bit float). The latter is theoretically optimal for normally distributed weights, so we recommend using `nf4`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = 'bavest/fin-llama'

# Load the model in 4-bit with nested (double) quantization and bfloat16 compute.
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=model_name,
    device_map='auto',
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4'
    ),
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

question = "What is the market cap of apple?"
context = ""  # additional context, if needed

# Alpaca-style instruction prompt.
prompt = f"""A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question.

### Instruction:
{question}

### Input:
{context}

### Response:
"""

input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    generated_ids = model.generate(
        input_ids,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
    )

generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
```
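
The decoded text contains the prompt as well as the completion. A minimal way to isolate the model's answer (a sketch; it simply splits on the `### Response:` marker used in the prompt above):

```python
# The generated text echoes the prompt, so keep only what follows the response marker.
answer = generated_text.split("### Response:")[-1].strip()
print(answer)
```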

## Dataset for FIN-LLAMA

The dataset is released under bigscience-openrail-m.
You can find the dataset used to train FIN-LLAMA models on HF
at [bavest/fin-llama-dataset](https://huggingface.co/datasets/bavest/fin-llama-dataset).
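
To pull the dataset directly from the Hub, the standard loader should work (a minimal sketch, assuming the `datasets` library is installed):

```python
from datasets import load_dataset

# Download the FIN-LLAMA training data from the Hugging Face Hub.
dataset = load_dataset("bavest/fin-llama-dataset")
print(dataset)
```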

## Known Issues and Limitations

Here is a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the problem.
See [QLORA](https://github.com/artidoro/qlora) for any other limitations.

1. 4-bit inference is slow. Currently, our 4-bit inference implementation is not yet integrated with the 4-bit matrix multiplication.
2. Currently, using `bnb_4bit_compute_dtype=torch.float16` can lead to instabilities.
3. Make sure that `tokenizer.bos_token_id = 1` to avoid generation issues (see the check after this list).
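
A quick way to enforce point 3 (a minimal sketch, reusing the `tokenizer` from the usage example above):

```python
# LLaMA tokenizers are expected to use token id 1 as the beginning-of-sequence (BOS) token.
if tokenizer.bos_token_id != 1:
    tokenizer.bos_token_id = 1
```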

## Acknowledgements

We thank Meta for releasing the LLaMA models, without which this work would not have been possible.

This repo builds on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca),
[QLORA](https://github.com/artidoro/qlora), [Chinese-Guanaco](https://github.com/jianzhnie/Chinese-Guanaco/tree/main)
and [LMSYS FastChat](https://github.com/lm-sys/FastChat) repos.

## License and Intended Use

We release the resources associated with QLoRA finetuning in this repository under the GPLv3 license. In addition, we release the FIN-LLAMA model family for base LLaMA model sizes of 7B, 13B, 33B, and 65B. These models are intended for purposes in line with the LLaMA license and require access to the LLaMA models.

## Prompts

### Act as an Accountant

> I want you to act as an accountant and come up with creative ways to manage finances. You'll need to consider budgeting, investment strategies and risk management when creating a financial plan for your client. In some cases, you may also need to provide advice on taxation laws and regulations in order to help them maximize their profits. My first suggestion request is "Create a financial plan for a small business that focuses on cost savings and long-term investments".

## Paged Optimizer

You can access the paged optimizer with the argument `--optim paged_adamw_32bit`, as in the sketch below.
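
The same optimizer can also be selected programmatically. A minimal sketch using the Hugging Face `TrainingArguments` API (the `output_dir` value here is a hypothetical placeholder):

```python
from transformers import TrainingArguments

# "paged_adamw_32bit" selects the bitsandbytes paged AdamW optimizer exposed by transformers.
training_args = TrainingArguments(
    output_dir="./fin-llama-output",  # hypothetical output path
    optim="paged_adamw_32bit",
)
```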

## Cite

```tex
@misc{Fin-LLAMA,
  author = {William Todt and Ramtin Babaei and Pedram Babaei},
  title = {Fin-LLAMA: Efficient Finetuning of Quantized LLMs for Finance},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Bavest/fin-llama}},
}
```