mgladden committed
Commit 7091191
Parent: d233b61

Update README.md

Files changed (1)
  1. README.md +8 -16
README.md CHANGED
@@ -12,33 +12,25 @@ pipeline_tag: text-generation
 ---
 
 # ManaGPT-1010
-<img style="float:right; margin:10px; margin-right:30px" src="https://huggingface.co/NeuraXenetica/ManaGPT-1010/resolve/main/ManaGPT_logo_01.png" width="150" height="150">
-This model is a fine-tuned version of GPT-2 that has been trained on a custom dataset of scholarly and popular texts from the field of organizational management that relate to the emerging effects of posthumanizing technologies (e.g., relating to advanced artificial intelligence, social robotics, virtual reality, neuroprosthetics, and cyber-physical systems) on the structure of organizations and human beings’ experience of organizational life.
+<img style="float:right; margin:10px; margin-right:30px" src="https://huggingface.co/NeuraXenetica/ManaGPT-1010/resolve/main/ManaGPT_logo_01.png" width="150" height="150"></img>
+_(Please note that ManaGPT-1010 has been superseded by **[ManaGPT-1020](https://huggingface.co/NeuraXenetica/ManaGPT-1020)**, which has been fine-tuned on a dataset roughly 6.45 times the size of that used to fine-tune ManaGPT-1010.)_
+
+**ManaGPT-1010** is an experimental open-source text-generating AI designed to offer insights on the role of emerging technologies in organizational management.
 
 ## Model description
 
-More information needed
+The model is a fine-tuned version of GPT-2 that has been trained on a custom corpus of scholarly and popular texts from the field of organizational management that relate to ongoing effects of posthumanizing technologies (e.g., relating to advanced artificial intelligence, social robotics, virtual reality, neuroprosthetics, and cyber-physical systems) on the structure of organizations and human beings’ experience of organizational life.
 
 ## Intended uses & limitations
 
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+This model has been designed for experimental research purposes; it isn’t intended for use in a production setting or in any sensitive or potentially hazardous contexts.
 
-### Training hyperparameters
+## Training procedure and hyperparameters
 
-The following hyperparameters were used during training:
+The model was trained using a Tesla T4 with 16GB of GPU memory. The following hyperparameters were used during training:
 - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: float32
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.27.1
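
To accompany the updated card, a minimal usage sketch (not part of the commit itself): it assumes only the repo id `NeuraXenetica/ManaGPT-1010` and the `text-generation` pipeline tag declared in the card's metadata; the prompt and generation arguments are illustrative.

```python
# Minimal sketch: load the checkpoint through the transformers
# text-generation pipeline named in the model card's metadata.
from transformers import pipeline

generator = pipeline("text-generation", model="NeuraXenetica/ManaGPT-1010")

# Illustrative prompt; the generation settings are arbitrary choices,
# not values taken from the model card.
prompt = "In the posthumanized organization, advanced AI will"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```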
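The optimizer entry under "Training procedure and hyperparameters" is a serialized Keras config. A hedged reconstruction of what it describes, assuming the TensorFlow-side `AdamWeightDecay` optimizer shipped with `transformers` and a standard Keras `ExponentialDecay` schedule (a sketch, not code from the commit):

```python
import tensorflow as tf
from transformers import AdamWeightDecay  # TF optimizer bundled with transformers

# Exponentially decaying learning rate, matching the serialized schedule:
# 0.0005 decayed by a factor of 0.95 every 500 steps, without staircasing.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0005,
    decay_steps=500,
    decay_rate=0.95,
    staircase=False,
)

# Adam with decoupled weight decay, matching the remaining fields of the dict.
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
```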