This repository is publicly accessible, but you must agree to share your contact information and accept the conditions before accessing its files and content.

1,378,234,368 tokens (Llama tokenizer; roughly 1.18B GPT-4 tokens) drawn from a deduplicated raw shard of The Pile. Documents passed a length filter (len < 896), were scored with Ask-LLM ("How to Train Data-Efficient LLMs") using mistralai/Mistral-7B-Instruct-v0.2, and the top 1/4 by score was kept, as sketched below.
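The Ask-LLM step asks the instruct model whether each document would be useful for pre-training and ranks documents by how strongly the model favors an affirmative answer. The sketch below is one way this could look; the exact prompt wording, the `ask_llm_score` helper, and the interpretation of `pos` as log P("yes") are assumptions for illustration, not taken from this dataset card.

```python
# Minimal Ask-LLM scoring sketch (assumptions: prompt text, helper names,
# and pos == log P("yes") are illustrative, not the exact recipe used here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

ASK_TEMPLATE = (
    "###\n{text}\n###\n"
    "Does the previous paragraph contain informative signal for pre-training "
    "a large language model? An informative datapoint should be well-formatted "
    "and contain some usable knowledge of the world. Answer with yes or no."
)

@torch.no_grad()
def ask_llm_score(text: str) -> float:
    """Return log P('yes') for the quality prompt, used to rank documents."""
    messages = [{"role": "user", "content": ASK_TEMPLATE.format(text=text)}]
    input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    next_token_logits = model(input_ids).logits[0, -1]
    logprobs = torch.log_softmax(next_token_logits, dim=-1)
    yes_id = tokenizer.encode("yes", add_special_tokens=False)[0]
    return logprobs[yes_id].item()

def keep_top_quarter(docs: list[str]) -> list[dict]:
    """Score every document, then keep the top 1/4 by Ask-LLM score."""
    scored = [{"text": d, "pos": ask_llm_score(d)} for d in docs]
    scored.sort(key=lambda r: r["pos"], reverse=True)
    return scored[: len(scored) // 4]
```

Ranking by the model's confidence in "yes" rather than a hard yes/no answer gives a continuous quality signal, which is what makes a "keep top 1/4" cutoff well defined.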

Each example has the following structure, where `pos` is the Ask-LLM score used for ranking:

{
  "text": "Once upon a time...",
  "pos": -5.654354325
}
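Since the repository is gated, you need to accept the conditions and authenticate (e.g. via `huggingface-cli login`) before loading the data. A hypothetical usage sketch, with a placeholder repo id rather than the real one:

```python
# Placeholder repo id; substitute the actual dataset repository.
from datasets import load_dataset

ds = load_dataset("user/ask-llm-filtered-pile", split="train", streaming=True)
for record in ds.take(3):
    print(record["pos"], record["text"][:80])
```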