---
license: mit
---

# Data source
This dataset was collected by scraping published articles from the [Medium website](https://medium.com/).

# Data description
Each row in the data is a different article published on Medium. Each article has the following features:
- **title** *[string]*: The title of the article.
- **text** *[string]*: The text content of the article.
- **url** *[string]*: The URL associated with the article.
- **authors** *[list of strings]*: The article authors.
- **timestamp** *[string]*: The publication datetime of the article.
- **tags** *[list of strings]*: The list of tags associated with the article.
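
As a quick way to inspect these fields, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch, assuming the dataset is hosted on the Hub under the id `fabiochiu/medium-articles` (the actual id may differ):

```python
from datasets import load_dataset

# Assumed repository id; replace with the actual dataset path on the Hub.
dataset = load_dataset("fabiochiu/medium-articles", split="train")

# Inspect the features of the first article.
article = dataset[0]
print(article["title"])
print(article["tags"])       # list of strings
print(article["timestamp"])  # publication datetime as a string
```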

# Data analysis
You can find a quick data analysis in this [notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis).

# What can I do with this data?
- Train a multilabel classification model that assigns tags to articles.
- Train a seq2seq model that generates article titles.
- Perform text analysis.
- Fine-tune text generation models on the general domain of Medium, or on specific domains by filtering articles by the appropriate tags (see the sketch below).
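
As an example of the last use case, here is a minimal sketch of filtering articles by tag to build a domain-specific fine-tuning corpus. The repository id and the exact tag string are assumptions:

```python
from datasets import load_dataset

# Assumed repository id; replace with the actual dataset path on the Hub.
dataset = load_dataset("fabiochiu/medium-articles", split="train")

# Keep only articles carrying a tag of interest, e.g. "Machine Learning"
# (the exact tag spelling in the data may differ).
ml_articles = dataset.filter(lambda row: "Machine Learning" in row["tags"])
print(f"Kept {len(ml_articles)} articles for fine-tuning")
```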

# Collection methodology
Scraping was done with Python and the requests library. Starting from a random article on Medium, the next articles to scrape were selected by visiting:
1. The author archive pages.
2. The publication archive pages (if present).
3. The tag archive pages (if present).

The article HTML pages were parsed with the [newspaper Python library](https://github.com/codelucas/newspaper).

Articles were filtered to keep only English ones, using the Python [langdetect library](https://pypi.org/project/langdetect/).
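
Putting these two steps together, here is a minimal sketch of the parse-and-filter logic, not the exact scraper used: the `scrape_article` helper is illustrative, and tag extraction from the page HTML is omitted:

```python
from newspaper import Article
from langdetect import detect

def scrape_article(url):
    """Download a Medium article, parse it, and keep it only if it is in English."""
    article = Article(url)
    article.download()
    article.parse()
    # Filter out non-English articles, as described above.
    if detect(article.text) != "en":
        return None
    return {
        "title": article.title,
        "text": article.text,
        "url": url,
        "authors": article.authors,
        "timestamp": str(article.publish_date),
        "tags": [],  # tag extraction from the page HTML is omitted in this sketch
    }
```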

As a consequence of the collection methodology, the publication dates of the scraped articles are not uniformly distributed: there are articles published from 2016 through 2022, but not in equal numbers per year, with a strong prevalence of articles published in 2020. Have a look at the [accompanying notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis) to see the distribution of the publication dates.
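
For a quick look at the skew without opening the notebook, the publication years can be counted directly; the repository id is again an assumption:

```python
import pandas as pd
from datasets import load_dataset

# Assumed repository id; replace with the actual dataset path on the Hub.
df = load_dataset("fabiochiu/medium-articles", split="train").to_pandas()

# Count articles per publication year to see the non-uniform distribution.
years = pd.to_datetime(df["timestamp"], errors="coerce", utc=True).dt.year
print(years.value_counts().sort_index())
```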