Commit 160290b by msamon (parent: 05a6903): Create README.md
Files changed (1): README.md (+91, -0)
---
language:
- en
license: llama2
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- finance
---

# AdaptLLM finance-chat AWQ
This repo contains the AWQ-quantized version of the finance-chat model released by Microsoft / AdaptLLM.
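
As a quick reference, here is a minimal sketch of loading an AWQ checkpoint like this one with the [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) library; the repo id below is a placeholder, not necessarily the actual id of this repository:

```python
# Minimal sketch: loading an AWQ-quantized checkpoint with AutoAWQ.
# "your-username/finance-chat-awq" is a placeholder repo id; substitute
# the id of this repository.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "your-username/finance-chat-awq"  # placeholder

model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, use_fast=False)
```

Recent versions of `transformers` can also load AWQ checkpoints directly through `AutoModelForCausalLM.from_pretrained` when the `autoawq` package is installed.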

Original model card below:

# Adapt (Large) Language Models to Domains
This repo contains the domain-specific chat model developed from **LLaMA-2-Chat-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).

We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in the biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
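
To make the idea concrete, here is a toy sketch of the transformation; it is our simplified illustration, not the paper's actual task-mining code (see the [LMOps repo](https://github.com/microsoft/LMOps) for that). A raw passage is kept as-is and comprehension-style tasks built from it are appended, so the model practices answering questions about what it just read:

```python
# Toy illustration of turning a raw corpus passage into a
# reading-comprehension training text: the passage is followed by
# comprehension tasks phrased as question-answer turns. This is a
# simplified sketch, not the regex-based mining from the paper.
def to_reading_comprehension(passage: str) -> str:
    tasks = [
        "Please give a brief summary of the text above.",
        "What question would this text be the answer to?",
    ]
    task_block = "\n\n".join(f"Question: {task}\nAnswer:" for task in tasks)
    return f"{passage}\n\n{task_block}"

print(to_reading_comprehension("3M registered its 1.500% Notes due 2026 on the NYSE."))
```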

### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗

**************************** **Updates** ****************************
* 12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/finance-LLM-13B) developed from LLaMA-1-13B.
* 12/8: Released our [chat models](https://huggingface.co/AdaptLLM/finance-chat) developed from LLaMA-2-Chat-7B.
* 9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [base models](https://huggingface.co/AdaptLLM/finance-LLM) developed from LLaMA-1-7B.

## Domain-Specific LLaMA-1
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Hugging Face: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of our AdaptLLM models compared to other domain-specific LLMs is shown below:

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>

### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see whether **our method is similarly effective for larger-scale models**, and the results are consistently positive: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).

## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension texts fit this format perfectly** once transformed into multi-turn conversations. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
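
As a rough sketch of that conversion (our assumption of the mapping, following the linked LLaMA-2 prompt format, not the released preprocessing code), each comprehension question-answer pair becomes one conversation turn:

```python
# Sketch (an assumption, not the official preprocessing): render a passage
# and its comprehension QA pairs in the LLaMA-2-Chat multi-turn format,
# where each turn is "<s>[INST] ... [/INST] ... </s>".
def to_llama2_chat(passage: str, qa_pairs: list[tuple[str, str]]) -> str:
    # The first turn carries the passage together with the first question.
    question, answer = qa_pairs[0]
    text = f"<s>[INST] {passage}\n\n{question} [/INST] {answer} </s>"
    # Every remaining QA pair becomes an additional turn.
    for question, answer in qa_pairs[1:]:
        text += f"<s>[INST] {question} [/INST] {answer} </s>"
    return text
```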

For example, to chat with the finance model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat", use_fast=False)

# Put your input here:
user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
MMM Chicago Stock Exchange, Inc.
1.500% Notes due 2026 MMM26 New York Stock Exchange
1.750% Notes due 2030 MMM30 New York Stock Exchange
1.500% Notes due 2031 MMM31 New York Stock Exchange

Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''

# We use the prompt template of the LLaMA-2-Chat demo
prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"

# Tokenize without adding special tokens, since the template already contains <s>
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=4096)[0]

# Decode only the tokens generated after the prompt
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)

print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```
## Domain-Specific Tasks
To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions for each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
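
For instance, the finance tasks could be pulled with the `datasets` library as sketched below; the configuration name is an assumption, so check the dataset page for the names that are actually available:

```python
# Sketch: load one of the pre-filled finance task sets. The configuration
# name "ConvFinQA" is an assumption; see the dataset page for the exact
# configurations this repo exposes.
from datasets import load_dataset

tasks = load_dataset("AdaptLLM/finance-tasks", "ConvFinQA")
print(tasks)
```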

**Note:** these filled-in instructions are specifically tailored for models before alignment and do NOT fit the data format required by chat models.

## Citation
If you find our work helpful, please cite us:
```bibtex
@article{adaptllm,
  title   = {Adapting Large Language Models via Reading Comprehension},
  author  = {Daixuan Cheng and Shaohan Huang and Furu Wei},
  journal = {CoRR},
  volume  = {abs/2309.09530},
  year    = {2023}
}
```