---
license: unlicense
task_categories:
  - text-generation
language:
  - en
tags:
  - art
pretty_name: Drama Llama dataset
size_categories:
  - 10K<n<100K
---

# DramaLlama dataset

![title](title.png)

This is the dataset repository of DramaLlama. It contains the scripts used to gather and prepare the dataset.

Note: This repository builds upon the findings of https://github.com/molbal/llm-text-completion-finetune

## Step 1: Getting novels

We will use Project Gutenberg again to gather novels, this time from several drama-adjacent categories, and I will aim for a larger dataset than before.

I'm running the following scripts:

```
pip install requests
```

```
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "detective fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "crime nonfiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "mystery fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "gothic fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "horror" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "romantic fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "short stories" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "western" --num_records 10000
```
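For context, the acquisition step boils down to querying Project Gutenberg by topic and saving plain-text files into the raw directory. The sketch below shows roughly how such a downloader could work using the public Gutendex API; the endpoint, the `fetch_topic` helper, and the file naming are illustrative assumptions, not the actual `step1-acquire.py`.

```python
# Hypothetical sketch of a topic-based Gutenberg downloader via the Gutendex API.
# The real pipeline/step1-acquire.py may query Gutenberg differently.
import os
import requests

def fetch_topic(topic: str, output_dir: str, num_records: int) -> None:
    os.makedirs(output_dir, exist_ok=True)
    url = "https://gutendex.com/books"
    params = {"topic": topic, "languages": "en"}
    downloaded = 0
    while url and downloaded < num_records:
        page = requests.get(url, params=params, timeout=30).json()
        params = None  # the paginated "next" URL already carries the query string
        for book in page.get("results", []):
            # Prefer a plain-text format if one is available
            text_url = next((u for fmt, u in book["formats"].items()
                             if fmt.startswith("text/plain")), None)
            if not text_url:
                continue
            text = requests.get(text_url, timeout=60).text
            with open(os.path.join(output_dir, f"{book['id']}.txt"), "w", encoding="utf-8") as fh:
                fh.write(text)
            downloaded += 1
            if downloaded >= num_records:
                break
        url = page.get("next")

if __name__ == "__main__":
    fetch_topic("detective fiction", "./training-data/0_raw/", 10)
```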

## Step 2: Preprocessing

### Step 2/a: Stripping header and footer

Now we need to strip the headers and footers of the files. I noticed that some files failed to download; those are the ones without a file extension. This might be caused by a bug in the downloader script, but it was only ~200 errors out of ~4000 downloads for me, so I did not investigate further.

```
python .\pipeline\step2a-strip.py --input_dir "./training-data/0_raw/" --output_dir "./training-data/2a_stripped/"
```
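Project Gutenberg texts wrap the actual novel in license boilerplate delimited by `*** START OF ...` and `*** END OF ...` marker lines; stripping amounts to keeping only what lies between them. A minimal sketch of that idea (the actual `step2a-strip.py` may handle more edge cases):

```python
# Minimal sketch of stripping Project Gutenberg boilerplate.
# Marker handling is simplified; the real script may be more robust.
def strip_gutenberg(text: str) -> str:
    lines = text.splitlines()
    start, end = 0, len(lines)
    for i, line in enumerate(lines):
        if line.startswith("*** START OF"):
            start = i + 1   # body begins after the START marker
        elif line.startswith("*** END OF"):
            end = i         # body ends before the END marker
            break
    return "\n".join(lines[start:end]).strip()
```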

### Step 2/b: Cleaning

We do a bit more cleaning here, driven by two files: a blacklist and a junklist. The blacklist contains expressions we do not want in the training data; I filled it with common ChatGPT phrases. (We do not really need to worry, since our training data predates ChatGPT, but still.) Entries in the junklist are simply removed from the text; these are mostly distribution notes.

Here we split the text into small chunks (~250 each), and if a chunk contains a blacklisted expression, it is sent to our local LLM to be rephrased.

Note: Ollama must be installed in the local environment for this step.

```
ollama pull mistral
pip install nltk ollama
python .\pipeline\step2b-clean.py --input_dir "./training-data/2a_stripped/" --output_dir "./training-data/2b_cleaned/" --llm "mistral"
```

After this, it puts the files back together in the output directory.
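For reference, the core of the cleaning pass might look something like the sketch below: remove junklist entries outright, and hand chunks that match the blacklist to the local model via the `ollama` Python package. The function names, prompt wording, and chunk size are illustrative assumptions, not the actual `step2b-clean.py`.

```python
# Illustrative sketch of the junk-removal / blacklist-rephrase pass.
# Assumes one expression per line in the list files and a running local Ollama.
import ollama
from nltk.tokenize import sent_tokenize  # requires nltk.download("punkt") once

def load_list(path: str) -> list[str]:
    with open(path, encoding="utf-8") as fh:
        return [line.strip() for line in fh if line.strip()]

def chunk_text(text: str, max_words: int = 250) -> list[str]:
    # Group sentences into chunks of roughly max_words words
    chunks, current, count = [], [], 0
    for sentence in sent_tokenize(text):
        words = len(sentence.split())
        if current and count + words > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.append(sentence)
        count += words
    if current:
        chunks.append(" ".join(current))
    return chunks

def clean_chunk(chunk: str, blacklist: list[str], junklist: list[str], llm: str = "mistral") -> str:
    # Junk entries (e.g. distribution notes) are removed outright
    for junk in junklist:
        chunk = chunk.replace(junk, "")
    # Blacklisted expressions trigger a rephrase by the local model
    if any(expr.lower() in chunk.lower() for expr in blacklist):
        response = ollama.generate(
            model=llm,
            prompt=f"Rephrase the following passage, keeping its meaning and style:\n\n{chunk}",
        )
        return response["response"]
    return chunk
```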

## Step 3: Chunking

We now chunk the dataset and save it into a Parquet file.

```
pip install pandas pyarrow
python .\pipeline\step3-chunking.py --source_dir "./training-data/2b_cleaned/" --output_file "./training-data/data.parquet"
```
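Conceptually, this step collects the cleaned text into one row per chunk and writes a single Parquet file. The sketch below shows that shape; the column names and the naive word-based splitter are assumptions about the script's output layout, not the actual `step3-chunking.py`.

```python
# Sketch of collecting cleaned chunks into a Parquet file.
# Column layout ("text", "source") is an assumption for illustration.
import glob
import os
import pandas as pd

def simple_chunks(text: str, max_words: int = 250):
    # Naive word-based splitter, standing in for the real chunking logic
    words = text.split()
    for i in range(0, len(words), max_words):
        yield " ".join(words[i:i + max_words])

def build_parquet(source_dir: str, output_file: str) -> None:
    rows = []
    for path in glob.glob(os.path.join(source_dir, "*.txt")):
        with open(path, encoding="utf-8") as fh:
            text = fh.read()
        for chunk in simple_chunks(text):
            rows.append({"text": chunk, "source": os.path.basename(path)})
    # pandas writes Parquet through the pyarrow engine installed above
    pd.DataFrame(rows).to_parquet(output_file, index=False)
```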

## Step 4: 🤗 dataset upload

We upload the dataset to Hugging Face: https://huggingface.co/datasets/molbal/dramallama-novels
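One straightforward way to push the Parquet file to the Hub is via the `datasets` library, shown as a sketch below; the repo id matches the link above, but the exact upload method actually used may differ.

```python
# Sketch: uploading the Parquet file with the datasets library.
# Requires `pip install datasets` and an authenticated `huggingface-cli login`.
from datasets import Dataset

ds = Dataset.from_parquet("./training-data/data.parquet")
ds.push_to_hub("molbal/dramallama-novels")
```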