---
language: en
license: llama3
tags:
- large_language_model
- finance
- sec_data
- continual_pre_training
datasets:
- SEC_filings
---

<img src="https://i.ibb.co/kHtBmDN/w8m6-X4-HCQRa-IR86ar-Cm5gg.webp" width="600" />

# GGUF Quantizations for Llama-3-SEC: A Domain-Specific Chat Agent for SEC Data Analysis

Llama-3-SEC is a state-of-the-art domain-specific large language model trained on a vast corpus of SEC (Securities and Exchange Commission) data. Built upon Meta-Llama-3-70B-Instruct, it is designed to provide insight and analysis capabilities for financial professionals, investors, researchers, and anyone working with SEC filings and related financial data.

GGUF files were converted with [llama.cpp](https://github.com/ggerganov/llama.cpp/) release [b3166](https://github.com/ggerganov/llama.cpp/releases/tag/b3166). The importance matrix (imatrix) used for the quantizations was calibrated with the dataset found [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
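
These GGUF files can be run locally with llama.cpp or any of its bindings. Below is a minimal sketch using the llama-cpp-python bindings; the repository ID and quant filename are illustrative placeholders, so substitute the quant file you actually download:

```python
# Minimal sketch: running a GGUF quant locally via llama-cpp-python
# (pip install llama-cpp-python huggingface_hub). The repo_id and
# filename below are placeholders, not confirmed file names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Llama-3-SEC-GGUF",  # hypothetical repo id
    filename="*Q4_K_M.gguf",               # choose a quant that fits your RAM/VRAM
    n_ctx=8192,                            # context window
    n_gpu_layers=-1,                       # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an expert financial assistant."},
        {"role": "user", "content": "What must a company file with the SEC before an IPO?"},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```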

## Model Details

- **Base Model:** Meta-Llama-3-70B-Instruct
- **Training Data:** 19B tokens of SEC filings data, carefully mixed with 1B tokens of general data from Together AI's [RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) dataset to maintain a balance between domain-specific knowledge and general language understanding.
- **Training Method:** Continual Pre-Training (CPT) using the Megatron-Core framework, followed by merging the CPT model with the base model using the TIES merging technique in the Arcee Mergekit toolkit (a sketch of such a merge config follows this list). The merged model then underwent supervised fine-tuning on an 8xH100 node using [Spectrum](https://arxiv.org/abs/2406.06623), with a mixture of custom domain-specific and general open-source datasets.
- **Training Infrastructure:** AWS SageMaker HyperPod cluster with 4 nodes, each equipped with 32 H100 GPUs, ensuring efficient and scalable training of this massive language model.
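
As an illustration of the TIES merging step, Arcee's mergekit is driven by a small YAML config. The sketch below is hypothetical, not the actual config used for Llama-3-SEC; the CPT checkpoint path, density, and weight values are placeholders:

```yaml
# Hypothetical mergekit TIES config; the checkpoint path and the
# density/weight values are illustrative only.
models:
  - model: ./llama-3-70b-sec-cpt   # CPT checkpoint (placeholder path)
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: meta-llama/Meta-Llama-3-70B-Instruct
parameters:
  normalize: true
dtype: bfloat16
```

Running `mergekit-yaml config.yml ./merged-model` would then produce the merged checkpoint.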

## Use Cases

Llama-3-SEC is designed to assist with a wide range of tasks related to SEC data analysis, including but not limited to:

- In-depth investment analysis and decision support
- Comprehensive risk management and assessment
- Regulatory compliance checks and identification of potential violations
- Analysis of corporate governance practices and transparency
- Market research and tracking of industry trends

The model's deep understanding of SEC filings and related financial data makes it an invaluable tool for anyone working in the financial sector, providing natural language processing capabilities tailored to the specific needs of this domain.

## Evaluation

To ensure the robustness and effectiveness of Llama-3-SEC, the model underwent rigorous evaluation on both domain-specific and general benchmarks. Key evaluations include:

- Domain-specific perplexity, measuring the model's fit to SEC-related text (a minimal sketch of the computation appears at the end of this section)

<img src="https://i.ibb.co/K5d0wMh/Screenshot-2024-06-11-at-10-23-18-PM.png" width="600">

- Extractive numerical reasoning tasks, using subsets of the TAT-QA and ConvFinQA datasets

<img src="https://i.ibb.co/xGHRfLf/Screenshot-2024-06-11-at-10-23-59-PM.png" width="600">

- General benchmarks such as BIG-bench, AGIEval, GPT4All, and TruthfulQA, to assess performance on a wide range of tasks

<img src="https://i.ibb.co/2v6PdDx/Screenshot-2024-06-11-at-10-25-03-PM.png" width="600">

These results demonstrate significant improvements in domain-specific performance while maintaining strong general capabilities, thanks to the use of advanced CPT and model merging techniques.
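
For reference, perplexity is the exponential of the mean negative log-likelihood the model assigns to held-out text; lower is better. The following is a rough sketch of that computation with the Transformers API, not the exact harness behind the numbers above, and the input file name is a placeholder:

```python
# Rough perplexity sketch: exp of the mean token-level negative log-likelihood.
# The input file is a hypothetical held-out SEC excerpt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "arcee-ai/Llama-3-SEC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
model.eval()

text = open("sec_filing_excerpt.txt").read()  # placeholder evaluation text
enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096).to(model.device)

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss
    out = model(**enc, labels=enc["input_ids"])

print(f"Perplexity: {torch.exp(out.loss).item():.2f}")
```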

## Training and Inference

Llama-3-SEC was trained with the ChatML chat template, which lets the model retain its strong conversational abilities while incorporating the domain-specific knowledge acquired during CPT.

To run inference with the Llama-3-SEC model using this chat template, you can use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "arcee-ai/Llama-3-SEC"

# device_map="auto" shards the 70B model across all available GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What are the key regulatory considerations for a company planning to conduct an initial public offering (IPO) in the United States?"
messages = [
    {"role": "system", "content": "You are an expert financial assistant - specializing in governance and regulatory domains."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the
# assistant header so generation starts a fresh reply
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Unpacking model_inputs passes the attention mask along with the input ids
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated reply remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Limitations and Future Work

This release represents the initial checkpoint of the Llama-3-SEC model, trained on 20B tokens (the 19B SEC tokens plus 1B general tokens described above). Additional checkpoints will be released as training on the full 70B-token SEC dataset completes. Future work will focus on improving the CPT data processing layer, exploring advanced model merging techniques, and aligning the CPT model with SFT, DPO, and other alignment methods to further enhance performance and reliability.

## Usage

The model is available for both commercial and non-commercial use under the Llama-3 license. We encourage users to explore the model's capabilities and provide feedback to help us continuously improve its performance and usability. For more information, please see our detailed blog post on Llama-3-SEC.

**Note:** Llama-3-SEC was trained to follow system prompts closely. A default system prompt is included, but if you wish to tailor answers to your specific use case, we recommend writing a system prompt that spells out the desired behavior, as in the sketch below.
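
For example, a more targeted system prompt might look like this (the wording is illustrative, not a prompt shipped with the model); reuse it with the inference code above:

```python
# Illustrative custom system prompt; the wording is hypothetical.
messages = [
    {
        "role": "system",
        "content": (
            "You are an SEC filings analyst. Answer only from the content of "
            "10-K and 10-Q disclosures, cite the relevant filing section, and "
            "flag any forward-looking statements."
        ),
    },
    {"role": "user", "content": "Summarize the main risk factors disclosed in the latest 10-K."},
]
```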

**Disclaimer:** Llama-3-SEC is a large language model (LLM) designed to assist with SEC data analysis. Users are solely responsible for any actions taken as a result of using Llama-3-SEC. Always double-check model responses.

## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{Introducing_SEC_Data_Chat_Agent,
  title={Introducing the Ultimate SEC Data Chat Agent: Revolutionizing Financial Insights},
  author={Shamane Siriwardhana and Luke Mayers and Thomas Gauthier and Jacob Solawetz and Tyler Odenthal and Anneketh Vij and Lucas Atkins and Charles Goddard and Mary MacCarthy and Mark McQuade},
  year={2024},
  note={Contact: firstname@arcee.ai},
  url={URL after published}
}
```

For further information or inquiries, please contact the authors at their respective email addresses (firstname@arcee.ai). We look forward to seeing the exciting applications and research that will emerge from the use of Llama-3-SEC in the financial domain.