|
# Documentation |
|
|
|
This contains some *really* quick docs and notes on filtering with SuperWIKI NEXT. |
|
|
|
## wikipedia_soup.py |
|
|
|
...is the main module, containing the class that handles the bulk of the filtering.
|
|
|
Each filter has code documentation explaining what each function generally does, so I'd suggest reading that instead.
|
|
|
### Usage for wikipedia_soup.py |
|
|
|
Probably the most important bit.
|
|
|
wikipedia_soup takes in `*.ndjson` files directly from the Wikipedia Enterprise HTML dumps via the `process-root` command.
|
|
|
*Note: there are 3 commands publicly exposed via Typer: `process-root`, `process-folder`, and `process-file`.*
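For reference, this is roughly how commands like these get wired up with Typer. It's only a sketch, not the actual module code; Typer turns the underscores in the function names into the dashes you see in the command names.

```python
# Sketch of the Typer wiring; the real wikipedia_soup.py takes more arguments and does the work.
import typer

app = typer.Typer()

@app.command()
def process_root(root: str):
    """Process every dump folder under `root` (exposed as `process-root`)."""
    ...

@app.command()
def process_folder(folder: str):
    """Process the ndjson chunks inside one dump folder (exposed as `process-folder`)."""
    ...

@app.command()
def process_file(file: str):
    """Process a single *.ndjson chunk (exposed as `process-file`)."""
    ...

if __name__ == "__main__":
    app()
```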
|
|
|
`process-root` is probably what you want to use. It takes in the following folder structure: |
|
|
|
```
dumps                                      <- Input folder for [process-root]
|- afwiki-NS0-20240420-ENTERPRISE-HTML     <- Input folder for [process-folder]
|  |- afwiki_namespace_0_0.ndjson          <- Input file for [process-file]
|  |- afwiki_namespace_0_1.ndjson
|  |- afwiki_namespace_0_2.ndjson
|  ...
|- arwiki-NS0-20240420-ENTERPRISE-HTML
|  |- arwiki_namespace_0_0.ndjson
|  |- arwiki_namespace_0_1.ndjson
|  |- arwiki_namespace_0_2.ndjson
|  ...
... And so on...
```
|
|
|
Downloading and filtering the files is relatively easy. |
|
|
|
1. Get a list of HTTP URLs (whichever way you prefer).

2. Download said list (wget, curl, aria2c, etc.).

3. Extract the tar files into their own folders as shown above.

4. Run the `process-root` command (see the sketch below).
|
5. Patience. |
|
6. ??? |
|
7. Finished! |
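For steps 2–4, here's a minimal Python sketch of the download/extract/run loop. It assumes a plain `urls.txt` with one dump URL per line and that `process-root` takes the root folder as a positional argument; the exact CLI shape may differ.

```python
# download_and_extract.py - hypothetical helper, not part of the repo.
import subprocess
import tarfile
import urllib.request
from pathlib import Path

DUMPS = Path("dumps")
DUMPS.mkdir(exist_ok=True)

# One dump URL per line, e.g. .../afwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
for url in Path("urls.txt").read_text().splitlines():
    url = url.strip()
    if not url:
        continue
    archive = DUMPS / url.rsplit("/", 1)[-1]
    if not archive.exists():
        urllib.request.urlretrieve(url, archive)

    # Extract each tarball into its own folder, matching the layout above.
    target = DUMPS / archive.name.replace(".json.tar.gz", "")
    target.mkdir(exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(target)

# Step 4: hand the whole root folder to wikipedia_soup.py
# (assumes process-root takes the folder as its argument).
subprocess.run(["python", "wikipedia_soup.py", "process-root", str(DUMPS)], check=True)
```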
|
|
|
|
|
|
|
## wikipedia_template.py |
|
|
|
This file contains templates used in Wikipedia articles. |
|
|
|
If you do need to update a template, follow these steps: |
|
|
|
1. Open your web browser.

2. Paste in the following URL, replacing `<ID>` with the relevant Wikidata entry ID:
|
|
|
```
https://www.wikidata.org/w/api.php?action=wbgetentities&ids=<ID>&format=json&props=labels
```
|
|
|
As for the related templates: |
|
|
|
- Stubs: `Q4663261` |
|
- Citation needed: `Q7106262` |
|
- Redirect: `Q6042392` |
|
|
|
**Note:** For Sections, there are currently no templates available. These must be added manually. |
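If you'd rather script that lookup, here is a minimal sketch using the same `wbgetentities` endpoint, with the stub entry `Q4663261` as the example ID (the helper name and User-Agent string are made up):

```python
# fetch_labels.py - hypothetical helper around the wbgetentities URL above.
import json
import urllib.parse
import urllib.request

def fetch_labels(entity_id: str) -> dict[str, str]:
    """Return {language_code: label} for a Wikidata entity, e.g. Q4663261 (stubs)."""
    params = urllib.parse.urlencode({
        "action": "wbgetentities",
        "ids": entity_id,
        "format": "json",
        "props": "labels",
    })
    request = urllib.request.Request(
        f"https://www.wikidata.org/w/api.php?{params}",
        headers={"User-Agent": "SuperWIKI-NEXT-docs-example/0.1"},
    )
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    labels = data["entities"][entity_id]["labels"]
    return {lang: entry["value"] for lang, entry in labels.items()}

if __name__ == "__main__":
    print(fetch_labels("Q4663261"))  # per-language names of the stub template
```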
|
|
|
## mediawiki_soup.py |
|
|
|
This module implements the `MediaWikiSoup` class, written before the introduction of Hugging Face's Datatrove and for the sake of simpler code development.
|
|
|
This class processes HTML content into markdown format and performs additional post-processing steps on the resulting markdown. |
|
|
|
`MediaWikiSoup` leverages a "filter chain" architecture. You can extend its functionalities by adding filter functions using either `add_markdown_filter` (for markdown processing) or `add_soup_filter` (for BeautifulSoup processing). |
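A rough sketch of that filter-chain idea is below. The method names `add_markdown_filter` and `add_soup_filter` come from the class itself; everything else here is illustrative, not the actual implementation.

```python
# Illustrative filter-chain skeleton; not the real MediaWikiSoup implementation.
from typing import Callable
from bs4 import BeautifulSoup

SoupFilter = Callable[[BeautifulSoup], BeautifulSoup]
MarkdownFilter = Callable[[str], str]

class MediaWikiSoup:
    def __init__(self) -> None:
        self.soup_filters: list[SoupFilter] = []
        self.markdown_filters: list[MarkdownFilter] = []

    def add_soup_filter(self, fn: SoupFilter) -> None:
        self.soup_filters.append(fn)

    def add_markdown_filter(self, fn: MarkdownFilter) -> None:
        self.markdown_filters.append(fn)

    def process(self, html: str) -> str:
        # 1. Run the BeautifulSoup filters over the parsed HTML...
        soup = BeautifulSoup(html, "html.parser")
        for soup_filter in self.soup_filters:
            soup = soup_filter(soup)
        # 2. ...convert to markdown (html2markdown.py handles this in the repo)...
        markdown = soup.get_text()  # stand-in for the real HTML -> markdown step
        # 3. ...then run the markdown post-processing filters in order.
        for md_filter in self.markdown_filters:
            markdown = md_filter(markdown)
        return markdown
```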
|
|
|
## html2markdown.py |
|
|
|
Contains a customized markdownify instance. Since this is mainly carried over from 1.5, details on it are a bit hazy. |
|
|
|
For `<a>` elements, I only use the text contained. That is to say, I don't include the href. |
|
|
|
```html
<a href="//example.com">This is an example</a>
```
|
|
|
Will be md'd into: |
|
|
|
```md
This is an example
```
|
|
|
For image elements: |
|
|
|
```html
<img src="//example.com" alt="Alt Text"/>
```
|
|
|
Will be md'd into: |
|
|
|
```md
Alt Text
```
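Purely as an illustration of those two rules, here's a small BeautifulSoup sketch; the actual module customizes markdownify instead, so don't treat this as its implementation.

```python
# Illustration of the <a> and <img> rules above; html2markdown.py customizes markdownify instead.
from bs4 import BeautifulSoup

def simplify_links_and_images(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    # <a href="...">text</a>  ->  text (the href is dropped)
    for a in soup.find_all("a"):
        a.replace_with(a.get_text())
    # <img src="..." alt="Alt Text"/>  ->  Alt Text (only the alt text survives)
    for img in soup.find_all("img"):
        img.replace_with(img.get("alt", ""))
    return soup.get_text()

print(simplify_links_and_images('<a href="//example.com">This is an example</a>'))  # This is an example
print(simplify_links_and_images('<img src="//example.com" alt="Alt Text"/>'))       # Alt Text
```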
|
|
|
For `<li>` elements, I'm unsure what the reason behind the customization was. Now, God/LLM/Model/??? only knows.
|
|
|
## folders2jsonl.py |
|
|
|
...is a simple script that merges the chunked ndjson files into one single file for ease of processing.
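The core of such a merge is tiny. A hedged sketch, with names of my own choosing (the real script may differ):

```python
# merge_ndjson.py - illustrative only; folders2jsonl.py may differ in details.
from pathlib import Path

def merge_folder(folder: Path, output: Path) -> None:
    """Concatenate every *.ndjson chunk in `folder` into one .jsonl file."""
    with output.open("w", encoding="utf-8") as out:
        for chunk in sorted(folder.glob("*.ndjson")):
            with chunk.open("r", encoding="utf-8") as f:
                for line in f:
                    line = line.strip()
                    if line:  # keep one JSON object per line, skip blanks
                        out.write(line + "\n")

merge_folder(Path("dumps/afwiki-NS0-20240420-ENTERPRISE-HTML"), Path("afwiki.jsonl"))
```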
|
|
|
# Tools |
|
|
|
Extra tools not part of the main filtering, but used in some shape or form.
|
|
|
## tools/wikipedia_eligablewiki.py |
|
|
|
As the title says, it impartially selects the Wikipedia editions with enough content. Refer to `Selection of Wikipedia` for how that was computed.
|
|
|
The stats `.json` file can be fetched from the source page here: https://commons.wikimedia.org/w/index.php?title=Data:Wikipedia_statistics/data.tab&action=edit
|
|
|
Copy the JSON from the page source into a `.json` file and you should be good to go.
|
|
|
If you have to filter a list of URLs, it should look like this:
|
|
|
Using mirror.accum.se as the mirror:
|
```txt
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/amiwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/amwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/angwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/anwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
```
|
|
|
Or with the official dumps: |
|
```txt
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/amiwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/amwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/angwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/anwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
```
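If you do end up filtering such a list, here's a hedged sketch of the idea, assuming the eligible wiki codes are already known (in practice they come out of the stats `.json`) and that the URLs sit in a `urls.txt`:

```python
# filter_urls.py - illustrative only; wikipedia_eligablewiki.py does the real selection.
from pathlib import Path

# Wiki codes deemed eligible; in practice these come from the stats .json.
ELIGIBLE = {"amwiki", "anwiki"}

kept = []
for url in Path("urls.txt").read_text().splitlines():
    url = url.strip()
    if not url:
        continue
    # Filenames look like <wikicode>-NS0-<date>-ENTERPRISE-HTML.json.tar.gz
    wiki_code = url.rsplit("/", 1)[-1].split("-", 1)[0]
    if wiki_code in ELIGIBLE:
        kept.append(url)

Path("urls_filtered.txt").write_text("\n".join(kept) + "\n")
```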
|
|
|
## tools/wikipedia_pageview.py |
|
|
|
Not used in NEXT, but included. The idea is to accumulate all pageviews and filter each article based on pageviews. While it's a neat idea, I just didn't use it. |
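For anyone picking the idea back up, a rough sketch of the accumulate-then-threshold step, assuming per-article view counts have already been parsed into `(title, views)` pairs (the parsing itself and the threshold value are out of scope here and made up):

```python
# Illustration of the accumulate-then-filter idea; not the actual wikipedia_pageview.py.
from collections import Counter

def accumulate(pairs) -> Counter:
    """Sum view counts per article title across all pageview files."""
    totals = Counter()
    for title, views in pairs:
        totals[title] += views
    return totals

def keep_popular(totals: Counter, min_views: int = 100) -> set[str]:
    """Keep only articles with at least `min_views` accumulated views."""
    return {title for title, views in totals.items() if views >= min_views}

# Example with made-up numbers:
totals = accumulate([("Earth", 90), ("Earth", 30), ("Obscure_Stub", 3)])
print(keep_popular(totals))  # {'Earth'}
```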
|
|
|
## tools/wikipedia_mediaalias.py |
|
|
|
Pretty sure it's unfinished; I didn't use it in the end. It's similar to the pageview tool, though someone could improvise and make use of it.