---
language:
- ru
- be
- uk
pretty_name: rtlm
license: cc-by-nc-2.0
task_categories:
- text-classification
tags:
- sociology
size_categories:
- 10K<n<100K
dataset_info:
  download_size: 455989186  # total download size in bytes (~456 MB)
---

# TV Channel Transcriptions
This dataset contains transcriptions of live TV channel streams, produced with the Whisper large-v2 model for research purposes.
[Project site](https://rtlm.info/)

## Data Overview
The dataset contains one zip file per channel per year:
* ORT: 2023.11.05 - 2024
* Belarus 1: 2023.11.12 - 2024
* 1+1: 2023.11.12 - 2024
* Russia 1: 2023.11.26 - 2024

Each zip file contains text files, each holding the transcription of a 5-10 minute chunk of the channel's 24/7 stream.
The files hosted here do not cover 2024. To obtain the current 2024 data, download the archives directly from the bucket:
* [2024_ORT](https://storage.googleapis.com/rtlm/2024_ORT.zip)
* [2024_belarusone](https://storage.googleapis.com/rtlm/2024_belarusone.zip)
* [2024_oneplusone](https://storage.googleapis.com/rtlm/2024_oneplusone.zip)
* [2024_russiaone](https://storage.googleapis.com/rtlm/2024_russiaone.zip)

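Each chunk's filename encodes the chunk's start date and time in a `date_time` pattern. The exact pattern (e.g. `2023-11-05_12-30.txt`) is an assumption inferred from the loading script in this README; a hypothetical helper to convert it into a timestamp:

```python
import pandas as pd

def chunk_timestamp(filename: str) -> pd.Timestamp:
    """Parse a transcription chunk filename into a pandas Timestamp."""
    stem = filename.rsplit('.', 1)[0]       # drop the .txt extension
    date_part, time_part = stem.split('_')  # e.g. '2023-11-05', '12-30'
    # The time part uses '-' as a separator, so swap it for ':'
    return pd.to_datetime(f"{date_part} {time_part.replace('-', ':')}")

print(chunk_timestamp('2023-11-05_12-30.txt'))  # → 2023-11-05 12:30:00
```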
## Example: downloading the dataset from the bucket
```python
import datetime
import glob
import os
import shutil
import zipfile

import pandas as pd
import requests


def download_datasets(urls):
    for download_url in urls:
        print(f'downloading {download_url}')
        response = requests.get(download_url)
        if response.status_code == 200:
            file_name = download_url.split('/')[-1]
            with open(file_name, 'wb') as f:
                f.write(response.content)
            print(f'Downloaded {file_name}')
        else:
            print(f'Failed to download {download_url}')


def load_data_to_df(projects):
    current_year = datetime.datetime.now().year
    all_data = []
    for project in projects:
        for year in range(2023, current_year + 1):
            archive_name = f'{year}_{project}.zip'
            # Assumes the archive was downloaded to the current working directory
            with zipfile.ZipFile(archive_name, 'r') as z:
                z.extractall('temp_data')
            for filename in glob.glob(f'temp_data/data/transcriptions/{project}/*.txt'):
                with open(filename, 'r', encoding='utf-8') as file:
                    text = file.read()
                # The filename encodes the chunk's start date and time
                basename = os.path.basename(filename)
                datetime_str = basename.split('.')[0]  # drop the file extension
                # Split into date and time components; the time part uses '-'
                # as a separator, so replace it with ':'
                date_part, time_part = datetime_str.split('_')
                datetime_str_formatted = f"{date_part} {time_part.replace('-', ':')}"
                all_data.append({'project': project, 'date': datetime_str_formatted, 'text': text})
        # Clean up the files extracted for this project
        shutil.rmtree(f'temp_data/data/transcriptions/{project}')
    # Clean up the remaining temporary directory
    shutil.rmtree('temp_data')
    return pd.DataFrame(all_data)


current_year = datetime.datetime.now().year
projects = ['ORT', 'belarusone', 'oneplusone', 'russiaone']
urls = [
    f'https://storage.googleapis.com/rtlm/{year}_{project}.zip'
    for year in range(2023, current_year + 1)
    for project in projects
]
print(urls)
download_datasets(urls)

# Build the DataFrame and save it to CSV
df = load_data_to_df(projects)
print(df.head(2))
df.to_csv('rtlm.csv', index=False)
```
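Once loaded, the DataFrame can be summarized with ordinary pandas operations. A minimal sketch on toy rows (the values are illustrative, not real data; column names follow the loading script) counting transcribed chunks per channel and month:

```python
import pandas as pd

# Toy rows standing in for the DataFrame produced by the loading script
# (columns: project, date, text); values are illustrative only
df = pd.DataFrame({
    'project': ['ORT', 'ORT', 'russiaone'],
    'date': ['2023-11-05 12:30', '2023-12-01 08:00', '2023-11-26 09:10'],
    'text': ['...', '...', '...'],
})
df['date'] = pd.to_datetime(df['date'])

# Number of transcribed chunks per channel and month
counts = df.groupby(['project', df['date'].dt.to_period('M')]).size()
print(counts)
```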

## Known issues
* The first part of the Belarus 1 stream was recorded by two instances, so it contains some duplicate files.
* Due to technical issues or channel restrictions, some periods were not transcribed.
* Some transcriptions may contain hallucinations, particularly during periods of silence. These hallucinations, however, have stable signatures that make them easier to detect.

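The duplicates mentioned above can be dropped with a pandas dedup once the data is in a DataFrame. A minimal sketch on toy rows (column names follow the loading script in this README; values are illustrative):

```python
import pandas as pd

# Toy frame with one duplicated chunk, standing in for the loaded dataset
df = pd.DataFrame({
    'project': ['belarusone', 'belarusone', 'ORT'],
    'date': ['2023-11-12 10:00', '2023-11-12 10:00', '2023-11-05 12:30'],
    'text': ['same chunk', 'same chunk', 'other chunk'],
})

# Keep only one copy of each (project, date, text) combination
deduped = df.drop_duplicates(subset=['project', 'date', 'text'])
print(len(deduped))  # → 2
```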
## Disclaimer
The dataset is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the dataset or the use or other dealings in the dataset.

End users of the dataset are solely responsible for ensuring that their use complies with all applicable laws and copyrights. The dataset is based on transcriptions of open live streams of various TV channels and should be used in accordance with the Creative Commons Attribution-NonCommercial (CC BY-NC) license, respecting the non-commercial constraint and the need for attribution.

Please note that the use of this dataset may be subject to additional legal and ethical considerations, and it is the end user's responsibility to determine whether their use of the dataset adheres to them. The authors of this dataset make no representations or guarantees regarding the legality or ethicality of third-party use of the dataset.