---
license: mit
task_categories:
- text-classification
- summarization
language:
- ta
tags:
- news
- tamil
pretty_name: Hindu Tamil News Dataset
size_categories:
- 10K<n<100K
---


# HinduTamil News Articles Dataset

# Overview
This dataset contains news articles in the Tamil language, scraped from the Hindu Tamil news website. Each article includes its title, author, city, published date, and text.

# Motivation
This dataset was created to provide a comprehensive collection of Tamil news articles for research and analysis purposes.

# Data Sources and Collection Method
The data in this dataset was collected from the Hindu Tamil news website (https://www.hindutamil.in/news/tamilnadu/) using web scraping techniques, following the steps below; a minimal code sketch follows the list.

- Sending HTTP Requests: The requests library in Python was utilized to send HTTP GET requests to the Hindu Tamil news website. These requests fetched the HTML content of each webpage.

- Parsing HTML Content: The BeautifulSoup library parsed the HTML content, enabling the extraction of specific elements from each webpage. This included gathering article URLs, titles, authors, published dates, and main article text.

- Iterative Scraping: Data was scraped iteratively from multiple pages of the website. Each webpage typically displayed a list of articles, and the URLs for each article were extracted. Subsequently, each article URL was visited to extract detailed information.

- Handling Errors and Timeouts: Error and timeout handling was implemented using try-except blocks to ensure smooth operation during the scraping process.
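Below is a minimal sketch of this scraping pipeline, assuming the listing pages link to articles via ordinary anchor tags; the URL filter and the `h1`/`p` selectors are illustrative placeholders, not the exact markup of the Hindu Tamil pages.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE_URL = "https://www.hindutamil.in/news/tamilnadu/"

def get_article_urls(listing_url):
    """Fetch a listing page and return the article URLs found on it."""
    try:
        response = requests.get(listing_url, timeout=10)
        response.raise_for_status()
    except requests.RequestException:
        return []  # skip listing pages that fail or time out
    soup = BeautifulSoup(response.text, "html.parser")
    # Illustrative filter: keep anchors that look like article links.
    return [urljoin(listing_url, a["href"])
            for a in soup.find_all("a", href=True)
            if "/news/" in a["href"]]

def get_article(article_url):
    """Fetch one article page and extract the fields used in this dataset."""
    try:
        response = requests.get(article_url, timeout=10)
        response.raise_for_status()
    except requests.RequestException:
        return None  # drop articles that fail or time out
    soup = BeautifulSoup(response.text, "html.parser")
    title_tag = soup.find("h1")
    paragraphs = soup.find_all("p")
    return {
        "Title": title_tag.get_text(strip=True) if title_tag else None,
        "Text": " ".join(p.get_text(strip=True) for p in paragraphs),
    }

if __name__ == "__main__":
    for url in get_article_urls(BASE_URL):
        article = get_article(url)
        if article:
            print(article["Title"])
```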

# Data Cleaning and Preprocessing

The dataset collected from the Hindu Tamil news website underwent several cleaning and preprocessing steps to make it suitable for analysis and modeling. The steps are described below, followed by a short code sketch.

## Removing Duplicate Entries
- Duplicate entries were identified based on the 'Published' column, as articles with the same published date and time were considered duplicates.
- Duplicate rows were removed using the drop_duplicates() method from the pandas library, ensuring only unique articles remained.

## Handling NaN Values
- Rows containing NaN values were removed depending on the context; for instance, rows with NaN values in the 'City' or 'Author' column were dropped.
- Rows with missing data in essential columns were excluded from the final dataset.

## Filtering Out Irrelevant Information
- Irrelevant information such as authors' comments, footer text, and advertisements was filtered out of the article text.
- Only the main content of the news article was retained.

## Formatting Published Dates
- Published dates were extracted from the article content and formatted into a standardized date-time format to ensure consistency.
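
The following is a minimal pandas sketch of these cleaning steps; the CSV file names are illustrative and not part of the published dataset.

```python
import pandas as pd

# The raw scraped articles are assumed to be in a CSV with the columns
# Title, Author, City, Published, Text.
df = pd.read_csv("hindutamil_raw.csv")

# Duplicates are identified by the 'Published' timestamp, as described above.
df = df.drop_duplicates(subset=["Published"])

# Drop rows with missing values in essential columns such as 'City' and 'Author'.
df = df.dropna(subset=["City", "Author"])

# Standardize the published date into a single datetime format.
df["Published"] = pd.to_datetime(df["Published"], errors="coerce")
df = df.dropna(subset=["Published"])

df.to_csv("hindutamil_clean.csv", index=False)
```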

# Process for Converting Unstructured Data to Structured Format

Unstructured data from the website's HTML content was converted to a structured format using BeautifulSoup.
Relevant information such as the title, author, city, published date, and text was extracted from the HTML tags and organized into a tabular format.
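
A small sketch of this organization step, assuming the extraction produced one dictionary per article (the values shown are placeholders):

```python
import pandas as pd

# 'records' stands for the list of dicts produced by the extraction step.
records = [
    {
        "Title": "...",
        "Author": "...",
        "City": "...",
        "Published": "2024-02-26 17:02:00",
        "Text": "...",
    },
]

df = pd.DataFrame(records, columns=["Title", "Author", "City", "Published", "Text"])
df.to_csv("hindutamil_articles.csv", index=False)
```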

# Dataset Structure
The dataset has the following structure:
- Title: Title of the news article
- Author: Author of the news article
- City: City mentioned in the news article
- Published: Published date and time of the news article
- Text: Main content of the news article

# Sample entries


| Title | Author | City | Published | Text |
| --- | --- | --- | --- | --- |
| ரூ.1.10 கோடி மான நஷ்டஈடு கோரி  | ஆர்.பாலசரவணக்குமார் | சென்னை | 2024-02-26 17:02:00 | சென்னை: அதிமுக முன்னாள் நிர்வாகி...  |
| தமிழ்நாடு காங்கிரஸ் கமிட்டியில்  | செய்திப்பிரிவு | சென்னை | 2024-02-26 16:52:00 | சென்னை: தமிழ்நாடு காங்கிரஸ் கமிட்டிக்கு இரண்டு துணைத்... |
| பரந்தூர் விமான நிலைய எதிர்ப்பு போராட்டம் | இரா.ஜெயப்பிரகாஷ் | காஞ்சிபுரம் | 2024-02-26 16:30:00 | காஞ்சிபுரம்: காஞ்சிபுரம் அருகே பரந்தூர் விமான நிலையத்துக்கு எதிராக... |

# Data Usage
Users can use this dataset for text analysis, natural language processing, and sentiment analysis of Tamil news articles.
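
A minimal loading example with pandas; the file name is illustrative and depends on how you download or export the dataset.

```python
import pandas as pd

# Load the dataset file and peek at a few records.
df = pd.read_csv("hindutamil_articles.csv")
print(df[["Title", "City", "Published"]].head())

# Example starting point for analysis: article counts per city.
print(df["City"].value_counts().head(10))
```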

# License

This dataset is provided under the MIT License. 

MIT License

Copyright (c) 2024 Shweta Sandeep Sukhtankar

Permission is hereby granted, free of charge, to any person obtaining a copy
of this dataset and associated documentation files (the "Dataset"), to deal
in the Dataset without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Dataset, and to permit persons to whom the Dataset is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Dataset.

THE DATASET IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE
DATASET.


# Citation
If you use this dataset, please cite it as:
Sukhtankar, Shweta. (2024). Tamil News Articles Dataset.