boda committed
Commit 3eb8b73
1 Parent(s): 958afc9

Update README.md

Files changed (1):
  README.md +1 -48
README.md CHANGED
@@ -36,9 +36,7 @@ The input to the model is structured as follows:
  - **Language(s) (NLP):** English
  - **Finetuned from model [optional]:** meta-llama/Llama-2-7b-hf

- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
+ ### Model Sources

  - **Repository:** https://github.com/BodaSadalla98/llm-optimized-fintuning

@@ -55,11 +53,6 @@ The model is the result of our AI project. If you intend to use it, please refer

  For improving story generation, you can play with parameters such as temperature, top_p/top_k, repetition_penalty, etc.

- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

@@ -69,20 +62,6 @@ Use the code below to get started with the model.

  GitHub for the dataset: https://github.com/kevalnagda/StoryGeneration

- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

@@ -90,9 +69,6 @@ GitHub for the dataset: https://github.com/kevalnagda/StoryGeneration

  ### Testing Data, Factors & Metrics

- #### Testing Data
-
- <!-- This should link to a Data Card if possible. -->

  Test split of the same dataset.

@@ -108,29 +84,6 @@ Perplexity: 8.0546

  BERTScore: 80.11

- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
-
  ## Training procedure

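The card keeps the advice that story generation improves by tuning sampling parameters (temperature, top_p/top_k, repetition_penalty). A minimal sketch of what that looks like with the transformers `generate` API; the checkpoint id below is a placeholder, since the card does not name the published checkpoint or confirm this exact loading path:

```python
# Sketch only: "your-org/llama2-7b-stories" is a hypothetical id, not the
# actual published checkpoint, which the card does not name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/llama2-7b-stories"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)

# The knobs named in the card: temperature, top_p/top_k, repetition_penalty.
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,         # <1.0 sharpens the token distribution, >1.0 flattens it
    top_p=0.95,              # nucleus sampling: smallest token set covering 95% mass
    top_k=50,                # also cap candidates at the 50 most likely tokens
    repetition_penalty=1.1,  # >1.0 discourages repeating earlier tokens
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```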
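The evaluation figures retained above (Perplexity 8.0546 and BERTScore 80.11 on the test split) correspond to standard recipes, sketched below. This is an assumed reconstruction, not necessarily the script behind the reported numbers:

```python
# Sketch: standard perplexity and BERTScore recipes (assumed, not taken from
# the repo). `model` and `tokenizer` are a causal-LM pair as loaded above.
import math
import torch
from bert_score import score as bert_score

def perplexity(model, tokenizer, text: str) -> float:
    """exp(mean negative log-likelihood) of `text` under the model."""
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

# BERTScore compares generated stories against reference stories; the card's
# 80.11 reads as an F1 of ~0.8011.
candidates = ["<generated story>"]
references = ["<reference story>"]
P, R, F1 = bert_score(candidates, references, lang="en")
print(f"BERTScore F1: {100 * F1.mean().item():.2f}")
```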