juewang committed on
Commit
95e005c
1 Parent(s): 0f35095

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -196,6 +196,8 @@ widget:
 
 # Model Summary
 We present GPT-JT, a fork of GPT-J (6B), trained for 20,000 steps, that outperforms most 100B+ parameter models at classification and improves on most tasks. GPT-JT was trained with a new decentralized algorithm over a 1Gbps interconnect.
+GPT-JT is a bidirectional dense model, trained with the UL2 objective on NI, P3, COT, and the Pile data.
+Please check out our demo: [TOMA-app](https://huggingface.co/spaces/togethercomputer/TOMA-app).
 
 # Quick Start
 ```python
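# The body of the Quick Start block is not shown here, since the diff hunk
# ends at the opening fence. Below is a minimal, hypothetical sketch of a
# typical quick-start snippet, assuming the Hugging Face transformers
# pipeline API and the model ID "togethercomputer/GPT-JT-6B-v1" (both are
# assumptions, not taken from this commit).
from transformers import pipeline

# Build a text-generation pipeline around GPT-JT (assumed model ID).
generator = pipeline("text-generation", model="togethercomputer/GPT-JT-6B-v1")

# GPT-JT is tuned on instruction data (NI, P3, COT), so a plain
# classification-style prompt typically works.
prompt = (
    "Label the sentiment of the sentence as positive or negative.\n"
    "Sentence: I really enjoyed this movie.\n"
    "Label:"
)
print(generator(prompt, max_new_tokens=2)[0]["generated_text"])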