---
license: unlicense
task_categories:
- text-generation
language:
- en
tags:
- art
pretty_name: Drama Llama dataset
size_categories:
- 10K<n<100K
---


# DramaLlama dataset

**Hugging Face reports the parquet file as unsafe. I'm looking into why.**

![title.png](title.png)

This is the dataset repository of DramaLlama. It contains the scripts used to gather and prepare the dataset.

Note: This repository builds upon the findings of https://github.com/molbal/llm-text-completion-finetune

## Step 1: Getting novels

We will use Project Gutenberg again to gather novels, this time pulling from several drama-related categories. I am aiming for a larger dataset than last time.

I'm running the following commands:

```bash
pip install requests

python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "detective fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "crime nonfiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "mystery fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "gothic fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "horror" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "romantic fiction" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "short stories" --num_records 10000
python .\pipeline\step1-acquire.py --output_dir "./training-data/0_raw/" --topic "western" --num_records 10000
```
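
The acquisition script itself is not reproduced in this README, but as a rough sketch of the approach: Project Gutenberg can be searched by topic through the public Gutendex API and the plain-text files downloaded one by one. The fields used and the output filenames below are illustrative assumptions, not the actual contents of `step1-acquire.py`:

```python
import os
import requests

def download_topic(topic: str, output_dir: str, num_records: int) -> None:
    """Sketch: fetch up to num_records plain-text books for a topic via Gutendex."""
    os.makedirs(output_dir, exist_ok=True)
    url = "https://gutendex.com/books"
    params = {"topic": topic}
    downloaded = 0
    while url and downloaded < num_records:
        page = requests.get(url, params=params, timeout=30).json()
        params = None  # the "next" URL already carries the query string
        for book in page.get("results", []):
            # Pick any plain-text format that is not a zip archive.
            text_url = next((u for fmt, u in book.get("formats", {}).items()
                             if fmt.startswith("text/plain") and not u.endswith(".zip")), None)
            if not text_url:
                continue
            text = requests.get(text_url, timeout=30).text
            with open(os.path.join(output_dir, f"{book['id']}.txt"), "w", encoding="utf-8") as f:
                f.write(text)
            downloaded += 1
            if downloaded >= num_records:
                break
        url = page.get("next")
```
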
## Step 2: Preprocessing

### Step 2/a: Stripping header and footer
Now we need to strip the headers and footers of the files. I noticed that some files failed to download properly; those are the ones without a file extension. This might be caused by a bug in the downloader script, but it was only ~200 errors out of ~4000 downloads for me, so I did not investigate further.

```bash
python .\pipeline\step2a-strip.py --input_dir "./training-data/0_raw/" --output_dir "./training-data/2a_stripped/"
```
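
Project Gutenberg files wrap the actual book text between `*** START OF ... ***` and `*** END OF ... ***` marker lines. Purely as an illustration of the idea (not the actual `step2a-strip.py`), the stripping can be done roughly like this:

```python
import re

# The marker wording varies slightly between books ("THE" vs. "THIS", extra spaces).
START_RE = re.compile(r"\*\*\*\s*START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.IGNORECASE)
END_RE = re.compile(r"\*\*\*\s*END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.IGNORECASE)

def strip_gutenberg(text: str) -> str:
    """Keep only the body between the Gutenberg START and END markers."""
    start = START_RE.search(text)
    end = END_RE.search(text)
    if start and end and start.end() < end.start():
        return text[start.end():end.start()].strip()
    return text.strip()  # fall back to the whole file if the markers are missing
```
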


### Step 2/b: Cleaning
We do a bit more cleaning here, using two files: a blacklist and a junklist. The blacklist contains expressions that we do not want in the training data; I filled it with common ChatGPT phrasing. (We do not really need to worry, since our training data comes well **before** ChatGPT, but still.) The junklist contains strings that are simply removed from the text, such as distribution notes.

Here we split the text into small chunks (~250), and if a chunk contains a blacklisted sentence, the chunk is sent to our local LLM to be rephrased.

_Note: Ollama needs to be installed in the local environment for this step._

```bash
ollama pull mistral
pip install nltk ollama
python .\pipeline\step2b-clean.py --input_dir "./training-data/2a_stripped/" --output_dir "./training-data/2b_cleaned/" --llm "mistral" 
```

After this, the script puts the chunks back together into files in the output directory.
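
For illustration, the per-chunk logic described above might look roughly like the sketch below. It assumes the `ollama` Python client and NLTK's sentence tokenizer from the install line; the function name, prompt, and data structures are made up for the example and are not taken from `step2b-clean.py`:

```python
import nltk
import ollama
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)  # tokenizer data (newer NLTK versions may need "punkt_tab")

def clean_chunk(chunk: str, blacklist: list[str], junklist: list[str], model: str = "mistral") -> str:
    """Remove junklist strings and rephrase the chunk if it contains blacklisted expressions."""
    # Junklist entries (e.g. distribution notes) are simply cut out of the text.
    for junk in junklist:
        chunk = chunk.replace(junk, "")
    # If any sentence contains a blacklisted expression, ask the local LLM to rephrase the whole chunk.
    flagged = any(term.lower() in sentence.lower()
                  for sentence in sent_tokenize(chunk)
                  for term in blacklist)
    if flagged:
        response = ollama.chat(model=model, messages=[{
            "role": "user",
            "content": f"Rephrase the following passage, keeping its meaning and style:\n\n{chunk}",
        }])
        chunk = response["message"]["content"]
    return chunk
```
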


## Step 3: Chunking
We chunk the dataset now and save it into a parquet file.
```bash
pip install pandas pyarrow
python .\pipeline\step3-chunking.py --source_dir "./training-data/2b_cleaned/" --output_file "./training-data/data.parquet"  
```
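
The chunking script is not reproduced here either, but conceptually it boils down to splitting each cleaned file into fixed-size pieces and writing them as rows of a parquet file. The column name `text` and the word-based chunk size are assumptions for this sketch:

```python
import os
import pandas as pd

def build_parquet(source_dir: str, output_file: str, chunk_size: int = 250) -> None:
    """Sketch: split cleaned files into ~chunk_size-word pieces and save them as one parquet file."""
    rows = []
    for name in os.listdir(source_dir):
        with open(os.path.join(source_dir, name), encoding="utf-8") as f:
            words = f.read().split()
        for i in range(0, len(words), chunk_size):
            rows.append({"text": " ".join(words[i:i + chunk_size])})
    pd.DataFrame(rows).to_parquet(output_file, index=False)  # pyarrow is used as the parquet engine
```
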

## Step 4: 🤗 dataset upload
We upload the dataset to Hugging Face: 
https://huggingface.co/datasets/molbal/dramallama-novels
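
One way to do the upload from Python is with the `datasets` library (assuming you are already logged in via `huggingface-cli login`; split handling is left at its defaults here):

```python
from datasets import Dataset

# Load the parquet file produced in step 3 and push it to the Hugging Face Hub.
ds = Dataset.from_parquet("./training-data/data.parquet")
ds.push_to_hub("molbal/dramallama-novels")
```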