
PrefixLM finetuning details

#21 by jlli - opened

Some questions about the final PrefixLM finetuning procedure mentioned in the model card (0.92 billion tokens on a mixture of Pile/CoT/NI/P3):

  • How was the mixture sampled and packed for sequence length 2048?
    • Specifically, does a single 2048-length sequence consist of packed examples from a single dataset, unpacked examples from a single dataset, or packed examples individually sampled from the overall mixture?
  • How were NI/P3 examples deduplicated? The original dataset cards mention there are potential duplicate inputs.
  • Were the instruction datasets (CoT, NI, P3) trained with zero-shot or few-shot prompts?
    • A few-shot prompt would include an example in the prefix portion of the input, which seems particularly salient for CoT.

Thanks so much!

Together org

Hi @jlli , those are all very interesting questions!

Regarding your questions:

How was the mixture sampled and packed for sequence length 2048?

For NI, a single 2048-token sequence consists of packed examples from a single dataset. For CoT and P3, the examples packed into a sequence are sampled individually from the overall mixture.
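
For anyone trying to reproduce this, here is a minimal sketch of what such packing could look like. The helper names (`pack_examples`, `sample_from_mixture`), the EOS separator id, and the assumption that examples arrive pre-tokenized are mine, not from the model card.

```python
import random

SEQ_LEN = 2048   # finetuning context length
EOS_ID = 50256   # assumed separator token id (GPT-J end-of-text)

def pack_examples(example_iter, seq_len=SEQ_LEN, eos_id=EOS_ID):
    """Greedily concatenate tokenized examples (lists of token ids),
    separated by EOS, and emit fixed-length chunks of seq_len tokens."""
    buffer = []
    for tokens in example_iter:
        buffer.extend(tokens)
        buffer.append(eos_id)
        while len(buffer) >= seq_len:
            yield buffer[:seq_len]
            buffer = buffer[seq_len:]

def sample_from_mixture(datasets, weights):
    """Yield tokenized examples drawn at random from a weighted mixture
    of datasets (each dataset being a list of tokenized examples)."""
    while True:
        ds = random.choices(datasets, weights=weights, k=1)[0]
        yield random.choice(ds)

# NI: pack each sub-dataset on its own, so a 2048-token sequence only
# contains examples from that one dataset, e.g.
#   ni_seqs = (s for ds in ni_datasets for s in pack_examples(iter(ds)))
# CoT / P3: sample from the whole mixture first, then pack, e.g.
#   mix_seqs = pack_examples(sample_from_mixture([cot, p3], weights=[1, 1]))
```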

How were NI/P3 examples deduplicated? The original dataset cards mention there are potential duplicate inputs.

We used the original NI repo to generate training data. For examples with multiple answers, we randomly sample an answer on the fly, so there should not be many duplicate inputs.
We did not intentionally deduplicate Muennighoff's P3, but since we sample data from the entire mixture, it is unlikely that the same input appears twice in the same training sequence.
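
As a concrete illustration of that on-the-fly answer sampling, the sketch below picks one reference answer per example at load time. The `"output"` field name follows the Natural Instructions data format, but treat the function name and structure as assumptions rather than the actual training code.

```python
import random

def build_ni_target(example):
    """Pick one reference answer at random when an NI example lists
    several acceptable outputs; otherwise return the single answer.
    Sampling at load time means repeated passes over the data rarely
    produce the exact same (input, target) pair."""
    answers = example["output"]
    if isinstance(answers, list):
        return random.choice(answers)
    return answers
```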

Were the instruction datasets (CoT, NI, P3) trained with zero-shot or few-shot prompts?

For the instruction datasets (CoT, NI, P3), each training sequence is randomly split into a prompt (prefix) and its target.
This ensures coverage of both zero-shot and few-shot style prompts over time.
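
A minimal sketch of that random split, assuming each packed sequence is a list of token ids; the helper name and the split bounds are my own choices, not the model's actual preprocessing code.

```python
import random

def random_prefix_split(token_ids):
    """Split a packed sequence into (prefix, target) at a random point.
    With PrefixLM-style training, the loss is computed only on the target
    tokens; a short prefix behaves like a zero-shot prompt, while a long
    prefix that happens to contain whole packed examples behaves few-shot."""
    split = random.randint(1, len(token_ids) - 1)
    return token_ids[:split], token_ids[split:]
```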
