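# Build speaker-disjoint train/dev/test splits of the RixVox speech corpus,
# package the audio clips into .tar.gz shards, and write one metadata
# parquet file per split.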
import tarfile
import os
import multiprocessing as mp
from functools import partial
from pathlib import Path

import pandas as pd

df = pd.read_parquet("df_train_v2.parquet")
df["filename_full"] = "/home/fatrek/data_network/faton/riksdagen_anforanden/data/rixvox_v2/" + df["filename"]
df = df.rename(columns={"sex": "gender"})

# Compute the total recorded hours per (speaker, party) pair, attach it to every
# row, then collapse to one row per pair and sort by total hours
df["speaker_total_hours"] = df.groupby(["speaker", "party"])["duration"].transform("sum") / 3600
df_hours = df.groupby(["speaker", "party"]).first().sort_values("speaker_total_hours", ascending=False).reset_index()
df_hours = df_hours.sample(frac=1, random_state=1337)  # Shuffle the rows

# Mark rows as train until the cumulative sum of speaker_total_hours reaches 98% of the total
df_hours["train"] = df_hours["speaker_total_hours"].cumsum() / df_hours["speaker_total_hours"].sum() < 0.98
# Of the remaining rows, mark as valid until their cumulative sum reaches 1% of the total
df_hours["valid"] = False
df_hours.loc[~df_hours["train"], "valid"] = (
    df_hours[~df_hours["train"]]["speaker_total_hours"].cumsum() / df_hours["speaker_total_hours"].sum() < 0.01
)
df_hours["test"] = ~df_hours["train"] & ~df_hours["valid"]  # The rest is test

# Create speaker-disjoint splits: each (speaker, party) pair appears in exactly one split
df_train = pd.merge(df, df_hours.loc[df_hours["train"], ["speaker", "party"]], on=["speaker", "party"], how="inner")
df_valid = pd.merge(df, df_hours.loc[df_hours["valid"], ["speaker", "party"]], on=["speaker", "party"], how="inner")
df_test = pd.merge(df, df_hours.loc[df_hours["test"], ["speaker", "party"]], on=["speaker", "party"], how="inner")


def split_creator(df, observations_per_shard, shard_name):
    """Label every observations_per_shard consecutive rows with a shard name like 'train_0'."""
    df["shard"] = range(0, len(df))
    df["shard"] = df["shard"] // observations_per_shard
    df["shard"] = shard_name + "_" + df["shard"].astype(str)
    return df["shard"]


df_train["shard"] = split_creator(df_train, 6500, "train")
df_valid["shard"] = split_creator(df_valid, 6500, "dev")
df_test["shard"] = split_creator(df_test, 6500, "test")

df_train["nr_words"] = df_train["text"].str.split().str.len()
df_train = df_train[df_train["nr_words"] <= 160].reset_index(drop=True)
df_train = df_train.drop(columns="nr_words")
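# Note: this filter runs after the shard labels were assigned, so train shards
# may end up containing fewer than 6500 clips each.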


def create_tar(df, data_folder="/home/fatrek/data_network/faton/rixvox/data"):
    """Package all audio clips of one shard dataframe into <data_folder>/<split>/<shard>.tar.gz."""
    shard_filename = df["shard"].reset_index(drop=True).values[0]
    shard_filename = shard_filename + ".tar.gz"
    split = df["shard"].reset_index(drop=True).str.extract(r"(.*)_")[0][0]  # train_0 -> train
    os.makedirs(os.path.join(data_folder, split), exist_ok=True)

    print(f"Creating tarfile: {os.path.join(data_folder, split, shard_filename)}")
    with tarfile.open(os.path.join(data_folder, split, shard_filename), "w:gz") as tar:
        for filename in df["filename_full"].values:
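            # arcname keeps only the last directory level, so each clip is
            # stored as <parent_dir>/<filename> inside the archive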
            tar.add(Path(filename), arcname=Path(filename).relative_to(Path(filename).parent.parent), recursive=False)


# Split each dataframe into a list of per-shard dataframes for pool.map
df_train_list = [group for _, group in df_train.groupby("shard")]
df_valid_list = [group for _, group in df_valid.groupby("shard")]
df_test_list = [group for _, group in df_test.groupby("shard")]


data_folder = "/home/fatrek/data_network/faton/rixvox/data"
# Bind data_folder explicitly so the path above is actually used by the workers
tar_shard = partial(create_tar, data_folder=data_folder)

# Serial alternative, useful for debugging:
# for shard in df_train_list:
#     create_tar(shard, data_folder)

with mp.Pool(16) as pool:
    pool.map(tar_shard, df_train_list)

# The dev and test splits are small, so a single worker is enough
with mp.Pool(1) as pool:
    pool.map(tar_shard, df_valid_list)
    pool.map(tar_shard, df_test_list)
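
# Optional check (sketch): inspect one finished shard to verify the archive layout.
# with tarfile.open(os.path.join(data_folder, "train", "train_0.tar.gz")) as tar:
#     print(tar.getnames()[:5])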


# Drop internal columns before writing the released metadata
df_train = df_train.drop(columns=["shard", "filename_full", "file_size"])
df_valid = df_valid.drop(columns=["shard", "filename_full", "file_size"])
df_test = df_test.drop(columns=["shard", "filename_full", "file_size"])

os.makedirs("data", exist_ok=True)
df_train.to_parquet(os.path.join("data", "train_metadata.parquet"), index=False)
df_valid.to_parquet(os.path.join("data", "dev_metadata.parquet"), index=False)
df_test.to_parquet(os.path.join("data", "test_metadata.parquet"), index=False)