---
language:
- tr
thumbnail:
tags:
- dataset
- turkish
- ted-multi
- cleaned

license: apache-2.0
datasets:
- ted-multi

---

# Turkish TED Talk Translations

Created from the `ted_multi` dataset.

The processing steps are included below in case you want to build the same dataset for another language.


```python
# Using Turkish as the target language; change this for your target language.
target_lang = "tr"

import re

from datasets import load_dataset

# ted_multi is a multilingual translation dataset: it is curated, not too big,
# and fits our case with only some simple processing.
dataset = load_dataset("ted_multi")
dataset.cleanup_cache_files()

# Original regex from Patrick's notebook; change it to the characters ignored
# by your fine-tuned model:
# chars_to_ignore_regex = '[,?.!\-\;\:\"“%‘”�—’…–]'

# We will use cahya/wav2vec2-base-turkish-artificial-cv, so we look inside the
# model repository to find which characters it removed (there is no run.sh).
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
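# Quick sanity check of the cleaning regex on a made-up sample sentence
# (hypothetical, not taken from the dataset):
assert re.sub(chars_to_ignore_regex, "", "Merhaba, dünya!".lower()) == "merhaba dünya"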

def extract_target_lang_entries(batch):
    # ted_multi keeps every translation of a talk in one row, and the position
    # of a given language can shift between rows, so look up the target
    # language's index for each row.
    try:
        target_index = batch["translations"]["language"].index(target_lang)
    except ValueError:
        # The target language is missing for this talk; mark the row with None
        # so it can be filtered out later.
        batch["text"] = None
        return batch

    text = batch["translations"]["translation"][target_index]
    batch["text"] = re.sub(chars_to_ignore_regex, "", text.lower())
    return batch


# The dataset has extra columns that are no longer needed once the target
# text has been extracted.
cols_to_remove = ["translations", "talk_name"]
dataset = dataset.map(extract_target_lang_entries, remove_columns=cols_to_remove)

# During processing we tagged rows without the target language as None;
# drop them now. Each remaining row has a single "text" column with the
# lowercased, punctuation-stripped translation.
dataset_cleaned = dataset.filter(lambda x: x["text"] is not None)
dataset_cleaned  # inspect the remaining splits and row counts

# Log in to the Hub, then push the cleaned dataset under your account.
from huggingface_hub import notebook_login

notebook_login()

dataset_cleaned.push_to_hub(f"{target_lang}_ted_talk_translated")

```
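
Once the push finishes, the dataset can be loaded straight back from the Hub. A minimal sketch, where `your-username` is a placeholder for the account the dataset was pushed to:

```python
from datasets import load_dataset

# "your-username" is a placeholder; replace it with your Hub account name.
reloaded = load_dataset("your-username/tr_ted_talk_translated")
print(reloaded)                      # splits and row counts
print(reloaded["train"][0]["text"])  # first cleaned Turkish sentence
```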