For the process to work properly, it was important not to scrape too many threads per run and to use Firefox instead of Chrome; otherwise the process could fail (very quickly in the case of Chrome). Newer versions of Web Scraper solved some reliability issues with Firefox and made exporting the data to `.csv` format much quicker.
Using Python (pandas) on the exported data, pages from the same threads were grouped together and concatenated with the string `\n<!-- [NEW PAGE STARTS BELOW] -->\n`, without any further processing, obtaining one row of data per thread. The records were then saved into Parquet files (`compression='zstd', compression_level=7, row_group_size=20`).
I had to configure the row group size to a small value to avoid issues when loading the files with pandas or in the HuggingFace dataset viewer (which supports a maximum row group size of 286 MB). As some of the RP threads are huge (more than 10 MB in size), I ultimately settled on 20 rows per group, corresponding to about 200 MB in the worst-case scenario.
## Usage notes
The data is not directly usable for finetuning as-is: you will need to split the threads into messages, extract metadata, clean/convert the HTML, etc. For the NSFW forums it will probably be best to somehow avoid using usernames directly, just to be nice(r).
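
As a starting point, a thread row can be split back into pages on the separator string, and the HTML reduced to text with the standard library. This is only a rough sketch with made-up content; real cleanup will need forum-specific handling:

```python
from html.parser import HTMLParser

SEP = "\n<!-- [NEW PAGE STARTS BELOW] -->\n"

class _TextExtractor(HTMLParser):
    # Collects text nodes only; a real pipeline needs smarter conversion.
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def html_to_text(html: str) -> str:
    p = _TextExtractor()
    p.feed(html)
    return "".join(p.parts).strip()

# Hypothetical thread row content.
thread = "<p>post A</p>" + SEP + "<p>post B</p>"
pages = [html_to_text(page) for page in thread.split(SEP)]
print(pages)  # ['post A', 'post B']
```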
### OOC
Both in-character (IC) and out-of-character (OOC) threads have been included, but in most cases there is no direct way to tell them apart other than by thread title. I strongly advise filtering OOC threads out, as they can cause coherency problems when finetuning models directly on them. For example, users often refer without context (other than posting date) to ongoing events in the corresponding IC threads; while they might know exactly what they're referring to, the LLM would have no clue.
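
A crude title-based filter might look like the sketch below. The keyword pattern and sample titles are guesses, since OOC naming conventions vary per forum:

```python
import re

# Hypothetical thread titles; OOC threads are usually only flagged in the title.
titles = ["The Fallen Kingdom", "The Fallen Kingdom (OOC)", "OOC: Sign-ups"]

OOC_PATTERN = re.compile(r"\bOOC\b", re.IGNORECASE)

ic_titles = [t for t in titles if not OOC_PATTERN.search(t)]
print(ic_titles)  # ['The Fallen Kingdom']
```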
## Forum listing
Only the roleplay sections were scraped. Note that the SFW forums can have censored slurs, for example using asterisks like `****`.