---
language:
  - ru
  - be
  - uk
pretty_name: rtlm
license: cc-by-nc-2.0
task_categories:
  - text-classification
tags:
  - sociology
size_categories:
  - 10K<n<100K
dataset_info:
  download_size: 455989186  # Total download size in bytes
---

# TV Channel Transcriptions

This dataset contains transcriptions of TV channel live streams, produced with the whisper-large-v2 model for research purposes.
Project site
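
For reference, here is a minimal sketch of how a single stream chunk could be transcribed with whisper-large-v2, assuming the open-source `openai-whisper` package and a hypothetical local chunk file; the authors' actual transcription pipeline is not documented here and may differ.

```python
# Minimal sketch, assuming the open-source `openai-whisper` package.
# "chunk.mp4" is a hypothetical local file; the authors' actual
# transcription pipeline is not documented here.
import whisper

model = whisper.load_model("large-v2")
result = model.transcribe("chunk.mp4", language="ru")
print(result["text"])
```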

## Data Overview

The dataset contains one zip file per channel per year. Coverage per channel:

- ORT: 2023.11.05 - 2024
- Belarus 1: 2023.11.12 - 2024
- 1+1: 2023.11.12 - 2024
- Russia 1: 2023.11.26 - 2024

Inside each zip file are plain-text files, one per chunk: the 24/7 streams are split into 5-10 minute chunks, and each text file holds the transcription of one chunk.
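
For example, a downloaded archive can be inspected without fully extracting it. The inner path `data/transcriptions/<project>/` follows the loader script below; the filename pattern `YYYY-MM-DD_HH-MM-SS.txt` is an assumption inferred from that script.

```python
# Minimal inspection sketch. The inner path and the filename pattern
# (YYYY-MM-DD_HH-MM-SS.txt) are inferred from the loader script below.
import zipfile
from datetime import datetime

with zipfile.ZipFile("2023_ORT.zip") as z:
    chunks = [n for n in z.namelist() if n.endswith(".txt")]

print(f"{len(chunks)} chunks, e.g. {chunks[0]}")

# Parse the chunk's start timestamp from its filename.
stem = chunks[0].rsplit("/", 1)[-1].split(".")[0]
date_part, time_part = stem.split("_")
start = datetime.strptime(f"{date_part} {time_part}", "%Y-%m-%d %H-%M-%S")
print(start)
```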
The zip files hosted here do not include the 2024 data. To obtain the current 2024 data, download the files directly from the bucket:

## Example: downloading the current data from the bucket

```python
import pandas as pd
import datetime
import os
import zipfile
import glob
import requests
import shutil

def download_datasets(urls):
    # Download each archive into the current working directory.
    for download_url in urls:
        print(f'downloading {download_url}')
        response = requests.get(download_url)
        if response.status_code == 200:
            file_name = download_url.split('/')[-1]
            with open(file_name, 'wb') as f:
                f.write(response.content)
            print(f"Downloaded {file_name}")
        else:
            print(f"Failed to download {download_url}")

def load_data_to_df(projects):
    current_year = datetime.datetime.now().year
    all_data = []
    for project in projects:
        for year in range(2023, current_year + 1):
            archive_name = f"{year}_{project}.zip"
            # Skip year/project combinations that failed to download.
            if not os.path.exists(archive_name):
                continue
            with zipfile.ZipFile(archive_name, 'r') as z:
                z.extractall("temp_data")
            for filename in glob.glob(f"temp_data/data/transcriptions/{project}/*.txt"):
                with open(filename, 'r', encoding='utf-8') as file:
                    text = file.read()
                # Extract the date and time from a filename such as
                # 2023-11-05_12-30-00.txt.
                basename = os.path.basename(filename)
                datetime_str = basename.split('.')[0]  # Remove the file extension

                # Split into date and time components and format the
                # time part correctly (replace '-' with ':').
                date_part, time_part = datetime_str.split('_')
                time_part_formatted = time_part.replace('-', ':')

                # Combine date and time with a space.
                datetime_str_formatted = f"{date_part} {time_part_formatted}"

                all_data.append({"project": project, "date": datetime_str_formatted, "text": text})

            # Clean up the extracted files.
            shutil.rmtree("temp_data")

    return pd.DataFrame(all_data)

current_year = datetime.datetime.now().year
projects = ['ORT', 'belarusone', 'oneplusone', 'russiaone']
urls = [
    f"https://storage.googleapis.com/rtlm/{year}_{project}.zip"
    for year in range(2023, current_year + 1)
    for project in projects
]
print(urls)
download_datasets(urls)

# Build the DataFrame and save it to CSV.
df = load_data_to_df(projects)
print(df.head(2))
df.to_csv('rtlm.csv', index=False)
```
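
Once exported, `rtlm.csv` can be reloaded with parsed timestamps for a quick sanity check, e.g.:

```python
# Quick check of the CSV produced by the script above.
import pandas as pd

df = pd.read_csv("rtlm.csv", parse_dates=["date"])
print(df["project"].value_counts())             # chunks per channel
print(df["date"].min(), "-", df["date"].max())  # covered time range
```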

## Known issues

- The first part of the Belarus 1 channel was recorded by two instances, so it contains some duplicate files.
- Due to technical issues or channel restrictions, some periods were not transcribed.
- Some transcriptions may contain hallucinations, particularly during silent periods. These hallucinations have stable signatures, however, so they can be filtered out (see the sketch below).
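
A minimal cleanup sketch for the issues above. The signature strings here are hypothetical placeholders, not the dataset's documented hallucination signatures; inspect your own copy to find the actual recurring lines before filtering.

```python
# Minimal cleanup sketch. SIGNATURES below are hypothetical placeholders,
# not the dataset's documented hallucination signatures.
import pandas as pd

df = pd.read_csv("rtlm.csv")

# Drop exact duplicates (e.g. from the doubled Belarus 1 instance).
df = df.drop_duplicates(subset=["project", "date", "text"])

# Drop chunks matching a known silence-hallucination signature.
SIGNATURES = ["Продолжение следует", "Субтитры сделал"]  # placeholders
df = df[~df["text"].str.contains("|".join(SIGNATURES), regex=True)]
print(f"{len(df)} chunks after cleanup")
```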

## Disclaimer

The dataset is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the dataset or the use or other dealings in the dataset.

End users of the dataset are solely responsible for ensuring that their use complies with all applicable laws and copyright restrictions. The dataset is based on transcriptions from open live streams of various TV channels and should be used in accordance with the Creative Commons Attribution-NonCommercial (CC BY-NC) license, respecting the non-commercial constraints and the need for attribution.

Please note that the use of this dataset might be subject to additional legal and ethical considerations, and it is the end user’s responsibility to determine whether their use of the dataset adheres to these considerations. The authors of this dataset make no representations or guarantees regarding the legality or ethicality of the dataset's use by third parties.