---
datasets:
- natural_instructions
- the_pile
- cot
- Muennighoff/P3
inference:
parameters:
max_new_tokens: 5
temperature: 1
top_k: 1
language:
- en
pipeline_tag: text-generation
widget:
- example_title: Sentiment Analysis
text: >-
The task is to label the post's emotion as sadness, joy, love, anger,
fear, or surprise.
Input: I'm feeling quite sad and sorry for myself but ill snap out of it
soon.
Output: sadness
Input: I am just feeling cranky and blue.
Output: anger
Input: I can have for a treat or if i am feeling festive.
Output:
- example_title: Country Currency
text: |-
Return the currency of the given country.
Input: Switzerland
Output: Swiss Franc
Input: India
Output:
- example_title: Tweet Eval Hate
text: >-
Label whether the following tweet contains hate speech against either
immigrants or women. Hate Speech (HS) is commonly defined as any
communication that disparages a person or a group on the basis of some
characteristic such as race, color, ethnicity, gender, sexual orientation,
nationality, religion, or other characteristics.
Possible labels:
1. hate speech
2. not hate speech
Tweet: HOW REFRESHING! In South Korea, there is no such thing as
'political correctness" when it comes to dealing with Muslim refugee
wannabes via @user
Label: hate speech
Tweet: New to Twitter-- any men on here know what the process is to get
#verified?
Label: not hate speech
Tweet: Dont worry @user you are and will always be the most hysterical
woman.
Label:
- example_title: Entity Recognition
text: >-
Extract all the names of people, places, and organizations from the
following sentences.
Sentence: Satya Nadella, the CEO of Microsoft, was visiting the Bahamas
last May.
Entities: Satya Nadella, Microsoft, Bahamas
Sentence: Pacific Northwest cities include Seattle and Portland, which I
have visited with Vikash.
Entities:
- example_title: Data Cleaning
text: |-
Format the data into a CSV file:
Input: Jane Doe jane.doe@gmail.com (520) 382 2435
Output: Jane Doe,jane.doe@gmail.com,520-382-2435
Input: Peter Lee (510) 333-2429 email: peter@yahoo.com
Output:
---

# GPT-JT

## Model Summary
We present GPT-JT, a fork of GPT-J (6B) fine-tuned for 20,000 steps, which outperforms most 100B+ parameter models on classification tasks and improves over GPT-J-6B on most other tasks. GPT-JT was trained with a new decentralized algorithm on machines connected by slow (1 Gbps) links. It is a dense model trained with the UL2 objective, which gives it bidirectional context over the prompt, on Natural Instructions (NI), P3, Chain-of-Thought (CoT), and Pile data.

Please check out our Online Demo!
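To picture what bidirectional context over the prompt means in practice, the sketch below builds a prefix-LM style attention mask of the kind the UL2 objective implies: prompt positions attend to each other in both directions, while generated positions remain causal. This is an illustration only, not the actual training or modeling code of GPT-JT.

```python
import torch

def prefix_lm_mask(prompt_len: int, total_len: int) -> torch.Tensor:
    """Boolean attention mask (True = attention allowed): the first
    `prompt_len` positions attend bidirectionally, the rest causally."""
    # Start from a standard causal (lower-triangular) mask.
    mask = torch.tril(torch.ones(total_len, total_len, dtype=torch.bool))
    # Let every prompt position see the whole prompt, including later prompt tokens.
    mask[:prompt_len, :prompt_len] = True
    return mask

# 3 prompt tokens followed by 3 generated tokens.
print(prefix_lm_mask(prompt_len=3, total_len=6).int())
```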
## Quick Start
```python
from transformers import pipeline
pipe = pipeline(model='togethercomputer/GPT-JT-6B-v1')
pipe('''"I do not like this!" Is it positive or negative? A:''')
```
or
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
```
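With the tokenizer and model loaded, generation works as usual via `generate`. The snippet below is a minimal sketch: the prompt reuses the pipeline example above, and `max_new_tokens=5` with greedy decoding mirrors the inference parameters in the metadata at the top of this card. Adjust these settings for your use case.

```python
import torch

prompt = '''"I do not like this!" Is it positive or negative? A:'''
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=5,  # matches the inference parameters above
        do_sample=False,   # top_k=1 is effectively greedy decoding
    )

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```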
## Training Data
We fine-tuned GPT-J-6B on Natural Instructions (NI), P3, Chain-of-Thought (CoT), and Pile data.
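As a rough picture of how such a mixture can be assembled, the sketch below interleaves placeholder datasets with the `datasets` library. The sources and mixing weights here are stand-ins for illustration, not the actual recipe used to train GPT-JT.

```python
from datasets import Dataset, interleave_datasets

# Placeholder datasets standing in for the NI, P3, CoT, and Pile sources
# listed in the metadata at the top of this card.
ni   = Dataset.from_dict({"text": ["<natural-instructions example>"]})
p3   = Dataset.from_dict({"text": ["<P3 example>"]})
cot  = Dataset.from_dict({"text": ["<chain-of-thought example>"]})
pile = Dataset.from_dict({"text": ["<Pile document>"]})

# Mixing weights are illustrative only.
mixture = interleave_datasets([ni, p3, cot, pile],
                              probabilities=[0.25, 0.25, 0.25, 0.25],
                              seed=42)
print(mixture[0]["text"])
```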
## Hyperparameters
We used AdamW with a learning rate of 1e-5 and a global batch size of 64, and trained for 20k steps. We used mixed-precision training, keeping activations in FP16 while optimizer states are kept in FP32. Training used both data parallelism and pipeline parallelism. During training, we truncated input sequences to 2048 tokens; input sequences shorter than 2048 tokens were concatenated with other sequences into one long sequence to improve data efficiency.
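The sequence packing described above can be sketched as follows: tokenized examples are concatenated and then split into fixed 2048-token blocks, so that short sequences do not waste context on padding. This mirrors the description rather than the exact training code.

```python
from itertools import chain

BLOCK_SIZE = 2048  # training context length used above

def pack_sequences(tokenized_examples):
    """Concatenate tokenized examples and cut them into fixed-size blocks."""
    concatenated = list(chain.from_iterable(tokenized_examples))
    # Drop the trailing remainder that does not fill a whole block.
    usable = (len(concatenated) // BLOCK_SIZE) * BLOCK_SIZE
    return [concatenated[i:i + BLOCK_SIZE] for i in range(0, usable, BLOCK_SIZE)]

# Example: three "tokenized" sequences (1500 + 1200 + 900 tokens) pack into one 2048-token block.
blocks = pack_sequences([[1] * 1500, [2] * 1200, [3] * 900])
print(len(blocks), [len(b) for b in blocks])
```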
## Infrastructure
We used the Together Research Computer to conduct training.