Vezora Suparious committed on
Commit
905cb0b
1 Parent(s): a9f736b

Fix typo in the name Goddard (#4)


- Fix typo in the name Goddard (5cf30dbe00b9a55da16ed0c861ab3eeeab0727bd)


Co-authored-by: Shaun Prince <Suparious@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -47,7 +47,7 @@ Please note that Mistral-22b is still in a WIP. v0.3 has started training now, w
 
 ## Thank you!
 - Thank you to [Daniel Han](https://twitter.com/danielhanchen), for Unsloth AI which was used to train this model. this led to a 2-3x speed increae and 2-3x decrease in memmory consumption.
-- Thank you to [Charles Coddard](https://twitter.com/chargoddard), for providng me with a script that was nessary to make this model.
+- Thank you to [Charles Goddard](https://twitter.com/chargoddard), for providng me with a script that was nessary to make this model.
 - Thank you to Mistral, for releasing Another Wonderful open source model, under Apache 2.0.
 - Thank you to [Tim Dettmers](https://twitter.com/Tim_Dettmers), for creating QLora
 - Thank you to [Tri Dao](https://twitter.com/tri_dao), for creating Flash Attention