Commit b1db6b2 (parent: ca4c45d): Update README.md

README.md:
---
license: apache-2.0
language:
- en
- zh
---

# TLDR

* website: [theedgemalaysia](https://theedgemalaysia.com/)
* num. of webpages scraped: (inclusive of articles with no text)
* link to dataset: https://huggingface.co/datasets/wanadzhar913/crawl-theedgemalaysia
* last date of scraping:
* status: **incomplete**

# Methodology

For [The Edge Malaysia](https://theedgemalaysia.com/), each article appears to have a unique numeric ID at the end of its URL, e.g. "677590" in "https://theedgemalaysia.com/node/677590". Since we can't enumerate articles by month, page number, etc., we use a **brute-force** approach: iterate over every candidate ID and scrape only the URLs that resolve to a valid article.
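A minimal sketch of that brute-force loop (the HEAD-request/status-code check, timeout, and range bounds are assumptions for illustration, not the repo's actual scraper):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

BASE_URL = "https://theedgemalaysia.com/node/{}"

def url_exists(article_id):
    """Default checker: HEAD the article URL and treat HTTP 200 as a hit."""
    try:
        req = Request(BASE_URL.format(article_id), method="HEAD")
        with urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except (HTTPError, URLError):
        return False

def find_valid_ids(start, stop, is_valid=url_exists):
    """Brute-force every numeric ID in [start, stop) and keep only the hits."""
    return [i for i in range(start, stop) if is_valid(i)]
```

In practice you'd also want to throttle requests and checkpoint progress so a crashed run can resume mid-range; the `is_valid` hook is injectable so the loop can be tested without hitting the site.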

# Progress

- [x] batch1
- [x] batch2
- [x] batch3
- [x] batch4
- [x] batch5
- [x] batch6
- [x] batch7
- [x] batch8
- [ ] batch9
- [ ] batch10
- [ ] batch11
- [ ] batch12
- [ ] batch13
- [ ] batch14
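The checklist suggests the ID space was partitioned into 14 contiguous batches. One way to sketch that split (the batch count and the upper bound on the ID space are assumptions, not values from the repo):

```python
def make_batches(max_id, n_batches=14):
    """Split the ID space [0, max_id) into contiguous, roughly equal batches.

    Returns a list of (start, stop) half-open ranges covering [0, max_id).
    """
    size = -(-max_id // n_batches)  # ceiling division so no IDs are dropped
    return [(start, min(start + size, max_id)) for start in range(0, max_id, size)]
```

Each `(start, stop)` pair can then be fed to the brute-force scraper independently, which is what makes the per-batch checklist above possible.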