How do you use the bidirectional aspect of the model?

#1 opened by BigSalmon

How do you use the bidirectional aspect of the model?

Together org

Hi. It is basically a Prefix-LM, where the prompt phase uses bidirectional attention, and the token generation phase uses causal attention.
We mainly follow the setup of UL2R. Have fun :)

I'm curious about loading this correctly with transformers. If we load it with AutoModelForCausalLM, won't the prompt miss out on bidirectional attention? Is the idea that Prefix-LM is only needed during training, and you go back to causal LM at inference?

Together org

Great question! We achieve this by setting model.transformer.h[i].attn.bias.data[:] = 1
Therefore, during the prompt-encoding phase the causal mask is all ones, so nothing is masked; and during the token-generation phase each generated token naturally only sees the tokens before it, so no special handling is required. :)
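
For concreteness, here is a minimal sketch (not the actual training code, and assuming the GPT-J module layout in the transformers version current at the time of this thread) of what that line does; the index i iterates over the transformer blocks:

# Sketch: overwrite the causal-mask buffer in every attention block so that the
# prompt is encoded with full (bidirectional) attention.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
for block in model.transformer.h:
    # attn.bias is GPT-J's (1, 1, max_positions, max_positions) lower-triangular
    # mask buffer; filling it with ones disables causal masking.
    block.attn.bias.data[:] = 1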

@juewang Now I want to continue finetuning GPT-JT on dialog datasets. Will you also release the finetuning source code or any instructions for it?

So by using this:
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
model.generate(...)
Do I need to set model.transformer.h[i].attn.bias.data[:] = 1 myself before encoding the prompt, or is this already handled automatically in the code? If it's already handled, can you point me to that line of code? Thank you!

Regarding finetuning: we used a fork of DS3Lab/DT-FM, and I think we will release the source code for training GPT-JT soon!

Together org

You don't have to set it manually. After loading, it defaults to all ones.

So in short, the following code should work:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
model.generate(...)
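
For completeness, a fuller usage sketch (the prompt text and generation settings below are illustrative, not taken from this thread):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")

# The prompt is encoded with bidirectional attention (the bias buffer defaults
# to all ones after loading), and generation then proceeds token by token.
inputs = tokenizer("Q: What is the capital of France?\nA:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))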

Thank you for your answer. I also have another question, about padding during training, referring to the paper "Transcending Scaling Laws with 0.1% Extra Compute":
[screenshots of the relevant passages from the paper]
Can you explain more about this prefix optimization, perhaps with examples? Thank you!

Together org

Our setting is slightly different from the UL2R paper -- we didn't add padding to the prefix, so the prefix length is variable.
We manipulated the causal mask to indicate the prefix part and the target part, and we only compute the loss on the target part.
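
As an illustration, here is a minimal PyTorch sketch (not the actual training code) of how a variable-length prefix could be handled, building the attention mask and the loss mask from the prefix length:

import torch

def prefix_lm_masks(seq_len: int, prefix_len: int):
    # Start from the usual causal (lower-triangular) attention mask ...
    attn_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # ... then allow full bidirectional attention within the prefix part.
    attn_mask[:prefix_len, :prefix_len] = True
    # The loss is only computed on the target part (positions after the prefix).
    loss_mask = torch.zeros(seq_len, dtype=torch.bool)
    loss_mask[prefix_len:] = True
    return attn_mask, loss_mask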

For the first 2.6B tokens, we trained with UL2's mixture-of-denoisers objective (see Section 3.3 of "Transcending Scaling Laws with 0.1% Extra Compute").
The training samples look like:

  • [S2S] [prefix] [target]
  • [NLU] [prefix] [target]
  • [NLG] [prefix] [target]

And for the following 0.92B tokens, we trained with pure PrefixLM (sequential denoising in UL2) and dropped the [S2S] tag, since this is the setting we planned to evaluate (and it can be used with HF's transformers out of the box). So the training samples look like:

  • [prefix] [target]
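
To illustrate (the exact tag and whitespace format here is an assumption, not taken from the released data pipeline), a training sample could be assembled like this:

def build_sample(prefix, target, mode_tag=None):
    # mode_tag is "[S2S]", "[NLU]" or "[NLG]" during the mixture-of-denoisers
    # phase, and None during the later pure-PrefixLM phase.
    pieces = ([mode_tag] if mode_tag else []) + [prefix, target]
    return " ".join(pieces)

# e.g. build_sample("Translate to French: Hello.", "Bonjour.", mode_tag="[S2S]")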

Some further questions here!

  1. I noticed there are extra tokens in the tokenizer (e.g. <|extratoken_1|>) that correspond to the infill tokens for the R- and X-denoising parts of the UL2 loss. However, there are no special tokens for the sentinels [S2S], [NLU], and [NLG]. Do you just let the tokenizer treat [S2S] as 5 separate tokens? Or is a different representation used?
  2. For R- and X-denoising, do you use special start/end tokens to bookend the masked part? (see attached screenshot)
  3. What was the weight decay used during UL2 training?
    [attached screenshot]
Together org

Hi @jlli!
Regarding your first question, we handle [S2S], [NLU], and [NLG] as plain text, i.e. each sentinel is tokenized into multiple ordinary tokens.
For the second question, we adopt the strategy outlined in the paper "Transcending Scaling Laws with 0.1% Extra Compute", utilizing the last 100 tokens in the vocabulary as additional identifiers (i.e. the <|extratoken_*|> tokens).
Lastly, regarding your third question, we use the AdamW optimizer with a weight decay of 0.01.
Please let me know if you have any further questions :)
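
As a small sketch of the stated optimizer setting (AdamW with weight decay 0.01; the learning rate below is a placeholder, not from this thread):

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
# Weight decay 0.01 as stated above; the learning rate is illustrative only.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)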
