Crystalcareai committed
Commit 9022eb4 · verified · 1 Parent(s): 33d0d26

Update README.md

Files changed (1)
  1. README.md +82 -52
README.md CHANGED
@@ -1,57 +1,87 @@
  ---
- base_model:
- - meta-llama/Meta-Llama-3-70B-Instruct
- - meta-llama/Meta-Llama-3-70B
- library_name: transformers
  tags:
- - mergekit
- - merge
-
  ---
- # merged
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * /home/ubuntu/data/cpt
- * [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- merge_method: ties
- base_model: meta-llama/Meta-Llama-3-70B
- models:
-   - model: /home/ubuntu/data/cpt
-     parameters:
-       weight:
-         - filter: mlp
-           value: [0.25, 0.5, 0.5, 0.25]
-         - filter: self_attn
-           value: [0.25, 0.5, 0.5, 0]
-         - value: [0.25, 0.5, 0.5, 0.25]
-       density: 0.75
-   - model: meta-llama/Meta-Llama-3-70B-Instruct
-     parameters:
-       weight:
-         - filter: mlp
-           value: [0.75, 0.5, 0.5, 0.75]
-         - filter: self_attn
-           value: [0.75, 0.5, 0.5, 1]
-         - value: [0.75, 0.5, 0.5, 0.75]
-       density: 1.0
- parameters:
-   normalize: true
-   int8_mask: true
- dtype: bfloat16
  ```
 
 
 
  ---
+ language: en
+ license: llama3
  tags:
+ - large_language_model
+ - finance
+ - sec_data
+ - continual_pre_training
+ - model_merging
+ datasets:
+ - SEC_filings
  ---
+ # Llama-3-SEC: A Domain-Specific Chat Agent for SEC Data Analysis
+
+ Llama-3-SEC is a domain-specific large language model trained on a large corpus of SEC (Securities and Exchange Commission) data. Built on Meta-Llama-3-70B-Instruct, it is designed to provide deep insight and analysis capabilities for financial professionals, investors, researchers, and anyone working with SEC filings and related financial data.
+
+ ## Model Details
+
+ - **Base Model:** Meta-Llama-3-70B-Instruct
+ - **Training Data:** a 70B-token corpus of SEC filings (this initial checkpoint covers the first 20B tokens; see Limitations and Future Work), mixed with 1B tokens of general data from Together AI's RedPajama dataset to balance domain-specific knowledge against general language understanding
+ - **Training Method:** Continual Pre-Training (CPT) using the Megatron-Core framework, followed by merging the CPT checkpoint with the base model using the TIES technique in the Arcee Mergekit toolkit (a configuration sketch follows this list)
+ - **Training Infrastructure:** an AWS SageMaker HyperPod cluster with 4 nodes, each equipped with 32 H100 GPUs
+
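+ As a configuration sketch, the TIES merge can be expressed in Mergekit YAML. The example below is simplified from the exact configuration used (shown in the previous revision of this card, removed above): the CPT checkpoint path is illustrative, and the real configuration varies the weights per layer type with `filter` rules:
+
+ ```yaml
+ # Simplified TIES merge: fold the CPT checkpoint back into the instruct
+ # model, with pre-trained Meta-Llama-3-70B as the common ancestor.
+ merge_method: ties
+ base_model: meta-llama/Meta-Llama-3-70B
+ models:
+   - model: /path/to/cpt-checkpoint   # illustrative path to the CPT output
+     parameters:
+       weight: 0.5
+       density: 0.75   # retain the top 75% of the CPT task vector
+   - model: meta-llama/Meta-Llama-3-70B-Instruct
+     parameters:
+       weight: 0.5
+       density: 1.0    # keep the instruct task vector intact
+ parameters:
+   normalize: true
+   int8_mask: true
+ dtype: bfloat16
+ ```
+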
+ ## Use Cases
+
+ Llama-3-SEC is designed to assist with a wide range of tasks related to SEC data analysis, including but not limited to:
+
+ - In-depth investment analysis and decision support
+ - Comprehensive risk management and assessment
+ - Ensuring regulatory compliance and identifying potential violations
+ - Studying corporate governance practices and promoting transparency
+ - Conducting market research and tracking industry trends
+
+ The model's deep understanding of SEC filings and related financial data makes it an invaluable tool for anyone working in the financial sector, providing powerful natural language processing capabilities tailored to the specific needs of this domain.
+
+ ## Evaluation
+
+ To ensure the robustness and effectiveness of Llama-3-SEC, the model has undergone rigorous evaluation on both domain-specific and general benchmarks. Key evaluation metrics include:
+
+ - Domain-specific perplexity, measuring the model's performance on SEC-related data (a minimal recipe is sketched at the end of this section)
+
+ ![Domain Specific Perplexity of Model Variants](https://i.ibb.co/xGHRfLf/Screenshot-2024-06-11-at-10-23-59-PM.png)
+
+ - Extractive numerical reasoning tasks, using subsets of the TAT-QA and ConvFinQA datasets
+
+ ![Domain Specific Evaluations of Model Variants](https://i.ibb.co/2v6PdDx/Screenshot-2024-06-11-at-10-25-03-PM.png)
+
+ - General evaluation metrics, such as BIG-bench, AGIEval, GPT4All, and TruthfulQA, to assess the model's performance on a wide range of tasks
+
+ ![General Evaluations of Model Variants](https://i.ibb.co/K5d0wMh/Screenshot-2024-06-11-at-10-23-18-PM.png)
+
+ - General perplexity on various datasets, including bigcode/starcoderdata, open-web-math/open-web-math, allenai/peS2o, mattymchen/refinedweb-3m, and Wikitext
+
+ The evaluation results demonstrate significant improvements in domain-specific performance while maintaining strong general capabilities, thanks to the combination of CPT and model merging.
+
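+ As a reference for the perplexity numbers, here is a minimal sliding-window perplexity sketch in `transformers`. The repository id and evaluation file are placeholders, and this is the standard recipe rather than the exact evaluation harness used:
+
+ ```python
+ # Sliding-window perplexity over a held-out text file (placeholder names).
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "arcee-ai/Llama-3-SEC"  # placeholder repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ enc = tokenizer(open("sec_heldout.txt").read(), return_tensors="pt")
+ seq_len = enc.input_ids.size(1)
+ max_length, stride = 4096, 2048
+
+ nlls, prev_end = [], 0
+ for begin in range(0, seq_len, stride):
+     end = min(begin + max_length, seq_len)
+     trg_len = end - prev_end                 # tokens newly scored in this window
+     input_ids = enc.input_ids[:, begin:end].to(model.device)
+     targets = input_ids.clone()
+     targets[:, :-trg_len] = -100             # ignore the re-used context tokens
+     with torch.no_grad():
+         loss = model(input_ids, labels=targets).loss
+     nlls.append(loss * trg_len)              # approximate summed NLL of window
+     prev_end = end
+     if end == seq_len:
+         break
+
+ print("perplexity:", torch.exp(torch.stack(nlls).sum() / prev_end).item())
+ ```
+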
+ ## Training and Inference
+
+ Llama-3-SEC was trained with the llama3 chat template, which allows for efficient and effective fine-tuning on the SEC data. The template lets the model retain its strong conversational abilities while incorporating the domain-specific knowledge acquired during the CPT process.
+
+ To run inference with the Llama-3-SEC model using the llama3 chat template, use the following code:
+
+ <chat_example>
+
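+ Until that example is filled in, here is a minimal sketch of llama3-template chat inference with the `transformers` library (the repository id and prompts are placeholders, not confirmed names):
+
+ ```python
+ # Minimal chat inference sketch: applies the llama3 chat template via
+ # tokenizer.apply_chat_template. The repo id below is a placeholder.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "arcee-ai/Llama-3-SEC"  # placeholder; substitute the real repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ messages = [
+     {"role": "system", "content": "You are an expert analyst of SEC filings."},
+     {"role": "user", "content": "What are the key risk factors to look for in a 10-K?"},
+ ]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ output = model.generate(input_ids, max_new_tokens=512)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+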
+ ## Limitations and Future Work
+
+ This release represents the initial checkpoint of the Llama-3-SEC model, trained on 20B tokens of SEC data. Additional checkpoints will be released as training on the full 70B-token dataset is completed. Future work will focus on improving the CPT data processing layer, exploring advanced model merging techniques, and aligning the CPT models with SFT, DPO, and other alignment methods to further enhance the model's performance and reliability.
+
+ ## Usage
+
+ To use the Llama-3-SEC model, please refer to the detailed instructions provided in the repository. The model is available for both commercial and non-commercial use under the Llama-3 license. We encourage users to explore the model's capabilities and provide feedback to help us continuously improve its performance and usability.
+
+ ## Citation
+
+ If you use this model in your research or applications, please cite:
+
+ ```bibtex
+ @misc{Introducing_SEC_Data_Chat_Agent,
+   title={Introducing the Ultimate SEC Data Chat Agent: Revolutionizing Financial Insights},
+   author={Shamane Siriwardhana and Luke Mayers and Thomas Gauthier and Jacob Solawetz and Tyler Odenthal and Anneketh Vij and Lucas Atkins and Charles Goddard and Mary MacCarthy and Mark McQuade},
+   year={2024},
+   note={Available at: \url{firstname@arcee.ai}},
+   url={URL after published}
+ }
  ```
+
+ For further information or inquiries, please contact the authors at their respective email addresses (firstname@arcee.ai). We look forward to the exciting applications and research that will emerge from the use of Llama-3-SEC in the financial domain.