jacobfulano committed
Commit
66aedd3
1 Parent(s): 9580a8e

Update README.md

Files changed (1)
  1. README.md +2 -3
README.md CHANGED
@@ -36,7 +36,6 @@ Apache-2.0 (commercial use permitted)
 
 **SamIAm85**:
 >I want you to come up with a tweet based on this summary of the article:
-
 >"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series.
 >MPT-7B is a transformer trained from scratch on 1T tokens of text and code.
 >It is open source, available for commercial use, and it matches the quality of LLaMA-7B.
@@ -101,8 +100,8 @@ The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.c
 
 ## Training Configuration
 
-This model was finetuned on 440 A100-40GBs for about half a day using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using FSDP.
+This model was finetuned on 440 A100-40GBs for about half a day using the [MosaicML Platform](https://www.mosaicml.com/platform).
 
 ## Acknowledgements
 
-
+This model was finetuned by Sam Havens
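The line removed in this commit mentioned that training used sharded data parallelism via FSDP. For reference, here is a minimal sketch of that strategy using PyTorch's `FullyShardedDataParallel` wrapper; the placeholder model, hyperparameters, and launch setup are illustrative assumptions, not MosaicML's actual training configuration.

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# One process per GPU; launch with: torchrun --nproc_per_node=<num_gpus> train.py
dist.init_process_group(backend="nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

# Small stand-in transformer; the real model in the README is MPT-7B.
model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True),
    num_layers=8,
).cuda()

# FSDP shards parameters, gradients, and optimizer state across ranks,
# so each GPU holds only a slice of the full training state.
model = FSDP(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```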