Rahulholla committed
Commit 071e2f2
1 Parent(s): e2370c6

Update README.md

Files changed (1)
  1. README.md +42 -131
README.md CHANGED
@@ -12,195 +12,106 @@ pipeline_tag: text-generation
 
  # Model Card for Model ID
 
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
  ## Model Details
 
  ### Model Description
 
- <!-- Provide a longer summary of what this model is. -->
-
  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
  - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
 
  ### Model Sources [optional]
 
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
  ## Uses
 
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use
 
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
 
- ### Downstream Use [optional]
 
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
 
  ## Bias, Risks, and Limitations
 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
 
  ### Recommendations
 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
  ## How to Get Started with the Model
 
- Use the code below to get started with the model.
 
- [More Information Needed]
 
- ## Training Details
 
- ### Training Data
 
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
- [More Information Needed]
 
- ### Training Procedure
 
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
- #### Preprocessing [optional]
 
- [More Information Needed]
 
 
  #### Training Hyperparameters
 
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
 
  ## Evaluation
 
- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics
 
  #### Testing Data
 
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
 
  #### Factors
 
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
 
  #### Metrics
 
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
 
  ### Results
 
- [More Information Needed]
 
  #### Summary
 
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
 
  ### Compute Infrastructure
 
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
 
 
  # Model Card for Model ID
 
  ## Model Details
 
  ### Model Description
 
  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
 
+ - **Developed by:** Dehaze
+ - **Funded by [optional]:** Dehaze
+ - **Model type:** Text-generation
+ - **Language(s) (NLP):** English
  - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** Mistral-7B-v0.1
 
  ### Model Sources [optional]
 
+ - **Repository:** DeHazeLabs/llm-case-study/stock-analysis
 
  ## Uses
 
  ### Direct Use
 
+ The model can be used directly to analyze stock option data and provide actionable trading insights based on the input provided. It can help users interpret key metrics such as implied volatility, option prices, and technical indicators in order to make informed trading decisions.
 
+ ### Downstream Use
 
+ Users can fine-tune the model for specific tasks related to stock market analysis or integrate it into larger systems for automated trading strategies, financial advisory services, or sentiment analysis of financial markets.
 
  ## Bias, Risks, and Limitations
 
+ The model's predictions may be influenced by biases present in the training data, such as historical market trends or prevailing market sentiment. Additionally, the model's effectiveness may vary depending on the quality and relevance of the input data provided by users.
 
  ### Recommendations
 
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
+ Users should exercise caution and validate the model's predictions with additional research and analysis before making any trading decisions. It is also recommended to consider multiple sources of information and consult with financial experts when interpreting the model's output.
 
  ## How to Get Started with the Model
 
+ ### Installation
 
+ Ensure that you have the `transformers` library installed. If not, you can install it via pip: `pip install transformers`
 
+ You can load the model either with the high-level `pipeline` API or directly with the `AutoTokenizer` and `AutoModelForCausalLM` classes from the `transformers` library.
+ Once the model is loaded, you can use it for text generation tasks, either through the pipeline interface or by working with the tokenizer and model objects directly.
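Below is a minimal usage sketch of both approaches. The Hub repository id and the prompt are placeholders rather than values taken from this card; substitute the actual repository id of this model and your own stock option data.

```python
# Minimal usage sketch. MODEL_ID is a placeholder for this model's Hub repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

MODEL_ID = "<hub-repo-id>"  # substitute the actual repository id of this model
prompt = "Analyze the following stock option data and suggest a trade: ..."  # illustrative prompt

# Option 1: high-level pipeline interface
generator = pipeline("text-generation", model=MODEL_ID)
print(generator(prompt, max_new_tokens=256)[0]["generated_text"])

# Option 2: work with the tokenizer and model objects directly
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For a 7B-parameter model you will generally want a GPU; if `accelerate` is installed, passing `device_map="auto"` and an appropriate `torch_dtype` to `from_pretrained` is a common way to place the weights, but adjust this to your hardware.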
 
+ ## Training Details
 
+ ### Training Data
 
+ The model was trained on a dataset containing examples of stock option data paired with corresponding trading insights. The dataset includes information such as implied volatility, option prices, technical indicators, and trading recommendations for various stocks.
 
+ ### Training Procedure
 
+ #### Preprocessing
 
+ The input text was tokenized and encoded before training.
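As an illustration only (the dataset field name and sequence length below are assumptions, not taken from the actual training code), preprocessing with the base model's tokenizer might look like:

```python
# Illustrative preprocessing sketch; "text" and max_length=512 are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

def tokenize(example):
    # Tokenize and encode one training example (stock option data + trading insight)
    return tokenizer(example["text"], truncation=True, max_length=512)
```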
 
  #### Training Hyperparameters
 
+ - **Training regime:** bf16 mixed precision
+ - **Warmup steps:** 1
+ - **Per-device train batch size:** 2
+ - **Gradient accumulation steps:** 1
+ - **Max steps:** 500
+ - **Learning rate:** 2.5e-5
+ - **Optimizer:** paged_adamw_8bit
+ - **Logging and saving:** checkpoints logged and saved every 25 steps, with wandb integration
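For reference, these settings map onto `transformers.TrainingArguments` roughly as sketched below; the output directory is a placeholder, and anything not listed above is an assumption rather than a documented value.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./stock-analysis-finetune",  # placeholder path
    bf16=True,                               # bf16 mixed precision
    warmup_steps=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    max_steps=500,
    learning_rate=2.5e-5,
    optim="paged_adamw_8bit",                # requires bitsandbytes
    logging_steps=25,
    save_steps=25,
    report_to="wandb",                       # wandb integration
)
```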
 
  ## Evaluation
 
  ### Testing Data, Factors & Metrics
 
  #### Testing Data
 
+ The testing data consisted of examples similar to the training data, with stock option data and expected trading insights provided.
 
  #### Factors
 
+ Factors considered during evaluation include the quality of the model's predictions, alignment with expected trading recommendations, and consistency across different test cases.
 
  #### Metrics
 
+ Evaluation metrics include accuracy of trading recommendations, relevance of generated insights, and overall coherence of the model's output.
 
  ### Results
 
+ The model demonstrated the ability to provide relevant and actionable trading insights based on the input stock option data.
 
  #### Summary
 
+ ## Technical Specifications
 
  ### Compute Infrastructure
 
+ - 1 x A100 GPU (80 GB VRAM)
+ - 117 GB RAM
+ - 12 vCPUs