Additional information

#1
by Aspie96 - opened

Hi.

Would you mind documenting this a bit more?

What kind of information is contained in each file? How did you build this dataset? What base datasets did you use?

Off the top of my head, it includes:

  • flan
  • natural instructions
  • open assistant data
  • dolly data
  • searchqa
  • cot collection
  • essays with instructions

I embedded the first 4 datasets with SGPT (the embedding model was, I think, a FLAN fine-tuned LLaMA 2; that took a while), ran a PaCMAP dimensionality reduction, and sampled by clusters. I favored small clusters, about 1.6k records in total, since I was interested in retaining as much "reasoning" variation as possible.
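
Roughly, the pipeline looked like the sketch below. The embedding model name, the cluster count, and the use of k-means are placeholders for illustration, not the exact setup:

```python
# Minimal sketch of the embed -> reduce -> cluster -> sample pipeline.
# The model, cluster count, and k-means choice are illustrative placeholders.
import numpy as np
import pacmap
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def sample_by_clusters(texts, n_clusters=200, per_cluster=8, seed=42):
    # 1. Embed the records (SGPT-style embeddings in my case; any
    #    sentence-transformers model works for the sketch).
    model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-nli-bitfit")
    emb = np.asarray(model.encode(texts, show_progress_bar=True))

    # 2. Reduce dimensionality with PaCMAP before clustering.
    low_dim = pacmap.PaCMAP(n_components=2).fit_transform(emb)

    # 3. Cluster, then take a capped number of records per cluster, so
    #    small clusters (rare "reasoning" styles) survive the sampling.
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(low_dim)
    rng = np.random.default_rng(seed)
    picked = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        picked.extend(rng.choice(idx, size=min(per_cluster, len(idx)), replace=False))
    return [texts[i] for i in picked]
```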
Essays is a small dataset, so I did no sampling there, except that, as with all the datasets, I only picked records with roughly less than 2k tokens so I could fine-tune locally.
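
The length filter was just a per-record token count, something like this (the tokenizer here is a placeholder; any tokenizer gives a rough enough count):

```python
# Sketch of the ~2k-token length filter applied to every source dataset.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder tokenizer

def short_enough(record, max_tokens=2000):
    return len(tokenizer(record["text"])["input_ids"]) <= max_tokens

# e.g. with a Hugging Face dataset: ds = ds.filter(short_enough)
```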
For searchqa and the CoT collection I just used random sampling.
Roughly 2/3 of the data is the FLAN-like part that I embedded; the rest comes from the other sources, sampled equally.
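
In `datasets` terms, the random sampling and the final mix amount to something like this (contents and sizes are toy placeholders, not the real numbers):

```python
# Toy sketch of the random sampling and the ~2/3 FLAN-like mix.
from datasets import Dataset, concatenate_datasets

flan_like = Dataset.from_dict({"text": [f"flan {i}" for i in range(400)]})
searchqa = Dataset.from_dict({"text": [f"searchqa {i}" for i in range(300)]})
cot = Dataset.from_dict({"text": [f"cot {i}" for i in range(300)]})

def random_sample(ds, n, seed=42):
    # Plain random sampling, as used for searchqa and the CoT collection.
    return ds.shuffle(seed=seed).select(range(min(n, len(ds))))

total = 600
mix = concatenate_datasets([
    random_sample(flan_like, total * 2 // 3),  # ~2/3 embedded FLAN-like part
    random_sample(searchqa, total // 6),       # the rest, sampled equally
    random_sample(cot, total // 6),
]).shuffle(seed=42)
```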

To my knowledge, none of these are contaminated with OpenAI data, and all are open-source licensed.

KnutJaegersberg changed discussion status to closed

For the first 4 datasets, I didn't embed all records, but samples of 300k records per dataset.

auton2.csv is the same dataset, but without system prompts.
