How to find issues in dolly-15k to improve LLM training

#14 · opened by cmauck10

Applying specific data-centric techniques to the dolly-15k dataset immediately reveals all sorts of issues, even though it was carefully curated by over 5,000 employees: responses that are inaccurate, unhelpful, or poorly written; incomplete or vague instructions; and other problematic language (toxicity, PII, …).

Data that is auto-detected to be bad can be filtered out of the dataset or manually corrected. This is the fastest way to improve the quality of your existing instruction-tuning data and, in turn, your LLMs!
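To make the filtering step concrete, here is a minimal sketch in plain Python. It assumes each example has already been assigned a hypothetical `quality_score` in [0, 1] by some automated data-quality tool (the field name and threshold are illustrative, not part of any specific library's API):

```python
# Hypothetical sketch: drop low-quality examples from an instruction-tuning
# dataset, assuming each example carries a precomputed quality_score in [0, 1].

def filter_low_quality(examples, threshold=0.5):
    """Keep only examples whose quality_score meets the threshold."""
    return [ex for ex in examples if ex["quality_score"] >= threshold]

dataset = [
    {"instruction": "Explain photosynthesis.",
     "response": "Plants convert light, water, and CO2 into glucose and oxygen.",
     "quality_score": 0.92},
    {"instruction": "???",
     "response": "idk",
     "quality_score": 0.11},
]

cleaned = filter_low_quality(dataset)
print(len(cleaned))  # the vague, unhelpful example is removed
```

Instead of dropping flagged rows outright, the same score could be used to route them to a human reviewer for manual correction, as the post suggests.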

Read more details here: https://cleanlab.ai/blog/filter-llm-tuning-data/
