---
license: apache-2.0
language:
- en
- zh
---

# TLDR

* website: [theedgemalaysia](https://theedgemalaysia.com/)
* num. of webpages scraped: (inclusive of articles with no text)
* link to dataset: https://huggingface.co/datasets/wanadzhar913/crawl-theedgemalaysia
* last date of scraping:
* status: **incomplete**

# Methodology

For [The Edge Malaysia](https://theedgemalaysia.com/), each article appears to have a unique numeric ID at the end of its URL, e.g. "677590" in "https://theedgemalaysia.com/node/677590". Since we can't enumerate articles by month, page number, etc., we use a **brute force** approach: test every candidate ID and scrape only the URLs that resolve to a valid page.

# Progress

- [x] batch1
- [x] batch2
- [x] batch3
- [x] batch4
- [x] batch5
- [x] batch6
- [x] batch7
- [x] batch8
- [ ] batch9
- [ ] batch10
- [ ] batch11
- [ ] batch12
- [ ] batch13
- [ ] batch14
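The brute-force scan could be sketched as follows. This is a minimal illustration, not the actual crawler used for the dataset: the function and parameter names (`build_url`, `is_valid_article`, `scan_range`) are assumptions for the sake of the example.

```python
# Sketch of a brute-force node-ID scan for theedgemalaysia.com.
# Hypothetical helper names; the real crawler likely adds rate limiting,
# retries, and batching (cf. the batch1..batch14 progress list).
import requests

BASE_URL = "https://theedgemalaysia.com/node/{}"


def build_url(node_id: int) -> str:
    """Construct the candidate article URL for a numeric node ID."""
    return BASE_URL.format(node_id)


def is_valid_article(node_id: int, timeout: float = 10.0) -> bool:
    """Return True if the node ID resolves to a live page (HTTP 200)."""
    try:
        resp = requests.head(
            build_url(node_id), timeout=timeout, allow_redirects=True
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False


def scan_range(start: int, stop: int):
    """Yield URLs in [start, stop) that point to valid articles."""
    for node_id in range(start, stop):
        if is_valid_article(node_id):
            yield build_url(node_id)
```

In practice one would split the ID space into ranges (one per batch), add a polite delay between requests, and only fetch the full HTML for IDs that pass the validity check.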