vanmanhnew committed (verified)
Commit cebc40e · 1 parent: 3cd2322

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. .gitignore +2 -0
  2. .gitmodules +3 -0
  3. LICENSE +201 -0
  4. README.md +118 -0
  5. docs/source/_static/pipeline.png +3 -0
  6. pipeline/convert_transcribe/--lang +0 -0
  7. pipeline/convert_transcribe/--model-size +0 -0
  8. pipeline/convert_transcribe/--root-dir +0 -0
  9. pipeline/convert_transcribe/--save-dir +0 -0
  10. pipeline/convert_transcribe/--use-faster-whisper +0 -0
  11. pipeline/convert_transcribe/--workers +0 -0
  12. pipeline/convert_transcribe/README.md +51 -0
  13. pipeline/convert_transcribe/convert_and_transcribe.py +244 -0
  14. pipeline/crawler/README.md +28 -0
  15. pipeline/crawler/cookies.txt +12 -0
  16. pipeline/crawler/download_from_youtube_channels.sh +23 -0
  17. pipeline/crawler/downloadgs2/xinchaotoilavanvo/Tây Du Ký (Phần 3)#UCbEGdsixsjZJCKYE8rergSw#OgD3IyAzX_E_972.webm +3 -0
  18. pipeline/crawler/kenh.txt +1 -0
  19. pipeline/crawler/log/audios_xinchaotoilavanvo.txt +24 -0
  20. pipeline/crawler/log/xinchaotoilavanvo.log +252 -0
  21. pipeline/force_alignment/README.md +18 -0
  22. pipeline/force_alignment/calculate_precision.py +70 -0
  23. pipeline/force_alignment/force_align.sh +18 -0
  24. pipeline/force_alignment/force_align_from_list.sh +18 -0
  25. pipeline/segmentation/README.md +41 -0
  26. pipeline/segmentation/filter_manifest.py +103 -0
  27. pipeline/segmentation/segment.sh +13 -0
  28. pipeline/segmentation/segment_from_list.sh +12 -0
  29. pipeline/segmentation/segment_from_manifests.py +52 -0
  30. pipeline/utils/force_alignment/README.md +49 -0
  31. pipeline/utils/force_alignment/align.py +189 -0
  32. pipeline/utils/force_alignment/align_utils.py +194 -0
  33. pipeline/utils/force_alignment/norm_config.py +277 -0
  34. pipeline/utils/force_alignment/punctuations.lst +188 -0
  35. pipeline/utils/force_alignment/text_normalization.py +92 -0
  36. pipeline/utils/textgrid2jsonl.py +22 -0
  37. pipeline/utils/uroman/.gitignore +35 -0
  38. pipeline/utils/uroman/LICENSE.txt +11 -0
  39. pipeline/utils/uroman/README.md +165 -0
  40. pipeline/utils/uroman/README.txt +141 -0
  41. pipeline/utils/uroman/bin/de-accent.pl +201 -0
  42. pipeline/utils/uroman/bin/string-distance.pl +99 -0
  43. pipeline/utils/uroman/bin/uroman-quick.pl +58 -0
  44. pipeline/utils/uroman/bin/uroman-tsv.sh +28 -0
  45. pipeline/utils/uroman/bin/uroman.pl +138 -0
  46. pipeline/utils/uroman/bin/uroman.py +0 -0
  47. pipeline/utils/uroman/data/Chinese_to_Pinyin.txt +0 -0
  48. pipeline/utils/uroman/data/NumProps.jsonl +0 -0
  49. pipeline/utils/uroman/data/Scripts.txt +174 -0
  50. pipeline/utils/uroman/data/UnicodeData.txt +0 -0
.gitignore ADDED
@@ -0,0 +1,2 @@
+ __pycache__
+ .DS_Store
.gitmodules ADDED
@@ -0,0 +1,3 @@
+ [submodule "pipeline/utils/uroman"]
+ path = pipeline/utils/uroman
+ url = https://github.com/isi-nlp/uroman.git
LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md ADDED
@@ -0,0 +1,118 @@
+ # GigaSpeech 2
+ [![arXiv](https://img.shields.io/badge/arXiv-Paper-COLOR.svg)](https://arxiv.org/pdf/2406.11546) [![hf](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-Dataset-yellow)](https://huggingface.co/datasets/speechcolab/gigaspeech2) [![GitHub](https://img.shields.io/badge/GitHub-Repo-green)](https://github.com/SpeechColab/GigaSpeech2) [![demo](https://img.shields.io/badge/WebPage-Demo-red)](https://huggingface.co/spaces/k2-fsa/automatic-speech-recognition)
+
+ This is the official repository of the GigaSpeech 2 dataset. For details of how we created the dataset, please refer to our [arXiv preprint](https://arxiv.org/pdf/2406.11546).
+
+ GigaSpeech 2 version: 2.0 (2024/06/19)
+
+ <div align="left">
+ <p><img src="https://github.com/yfyeung/GigaSpeech2/blob/main/docs/source/_static/pipeline.png" width=800></p>
+ </div>
+
+ ## Download
+ * The dataset is available at [HuggingFace](https://huggingface.co/datasets/speechcolab/gigaspeech2) and [ModelScope](https://modelscope.cn/datasets/AI-ModelScope/gigaspeech2).
+ * Pre-trained models are available for [Thai](https://huggingface.co/yfyeung/icefall-asr-gigaspeech2-th-zipformer-2024-06-20) and [Vietnamese](https://huggingface.co/zzasdf/icefall-asr-gigaspeech2-vi-zipformer).
+
+ ## Leaderboard
+
+ | **Contributor** | **Toolkit** | **Train Recipe** | **Train Data** | **Inference** | **Test CER/WER** |
+ |:---------------|:------------------|:------------------|:------------------|:------------------|:------------------:|
+ | <em>Baseline</em> | [Icefall](https://github.com/k2-fsa/icefall) | Zipformer/Stateless pruned RNN-T | GigaSpeech 2.0 th | TODO | 12.46 |
+ | <em>Baseline</em> | [Icefall](https://github.com/k2-fsa/icefall) | Zipformer/Stateless pruned RNN-T | GigaSpeech 2.0 id | TODO | 14.92 |
+ | <em>Baseline</em> | [Icefall](https://github.com/k2-fsa/icefall) | Zipformer/Stateless pruned RNN-T | GigaSpeech 2.0 vi | TODO | 12.83 |
+ | <em>Baseline</em> | [ESPNet](https://github.com/espnet/espnet) | Conformer/Transformer CTC/AED | GigaSpeech 2.0 th | TODO | 13.70 |
+ | <em>Baseline</em> | [ESPNet](https://github.com/espnet/espnet) | Conformer/Transformer CTC/AED | GigaSpeech 2.0 id | TODO | 15.50 |
+ | <em>Baseline</em> | [ESPNet](https://github.com/espnet/espnet) | Conformer/Transformer CTC/AED | GigaSpeech 2.0 vi | TODO | 14.60 |
+
+ ## Dataset
+
+ ### Audio Source
+ * Languages: Thai, Indonesian, Vietnamese
+ * GigaSpeech 2 raw: 30,000 hours of automatically transcribed speech across Thai, Indonesian, and Vietnamese.
+ * GigaSpeech 2 refined: 10,000 hours of Thai and 6,000 hours each of Indonesian and Vietnamese.
+ * GigaSpeech 2 DEV & TEST: 10 hours each of DEV and TEST per language, **transcribed by professional human annotators**; challenging and realistic.
+
+ ### Training Subsets
+ | | Thai (hours) | Indonesian (hours) | Vietnamese (hours) |
+ |:--------------------:|:------------:|:------------------:|:------------------:|
+ | GigaSpeech 2 raw | 12901.8 | 8112.9 | 7324.0 |
+ | GigaSpeech 2 refined | 10262.0 | 5714.0 | 6039.0 |
+
+ GigaSpeech 2 raw contains all the data in GigaSpeech 2 refined.
+
+ ### Evaluation Subsets
+ | | Thai (hours) | Indonesian (hours) | Vietnamese (hours) |
+ |:--------------------:|:------------:|:------------------:|:------------------:|
+ | GigaSpeech 2 DEV | 10.0 | 10.0 | 10.2 |
+ | GigaSpeech 2 TEST | 10.0 | 10.0 | 11.0 |
+
+ Evaluation subsets are **annotated by professional human annotators**.
+
+ ### Preparation Scripts
+ Soon available at [Lhotse](https://github.com/lhotse-speech/lhotse).
+
+ ### Metadata Walkthrough
+ Soon available.
+
+ ### Audio Processing
+ GigaSpeech 2 audio files are resampled to 16 kHz and converted to single-channel WAV format. For detailed implementation, refer to [pipeline/convert_transcribe/convert_and_transcribe.py](https://github.com/yfyeung/GigaSpeech2/blob/main/pipeline/convert_transcribe/convert_and_transcribe.py#L45).
+
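As a sketch of this conversion step (my own illustration of the ffmpeg flags the pipeline's `ffmpeg_convert` helper uses; names here are not part of the repo), the argument list for 16 kHz mono 16-bit PCM output can be built like so:

```python
def build_ffmpeg_cmd(src: str, dst: str) -> list:
    """Build an ffmpeg argv that produces 16 kHz, mono, 16-bit PCM WAV."""
    return [
        "ffmpeg", "-loglevel", "warning", "-y",
        "-i", src,
        "-ac", "1",              # downmix to a single channel
        "-ar", "16000",          # resample to 16 kHz
        "-acodec", "pcm_s16le",  # 16-bit little-endian PCM
        dst,
    ]

print(build_ffmpeg_cmd("clip.webm", "clip.wav")[:4])  # → ['ffmpeg', '-loglevel', 'warning', '-y']
```

In practice the returned list would be passed to `subprocess.call` (ffmpeg must be installed); building the argv as a list avoids shell-quoting issues with filenames.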
+ ### Text Pre-Processing
+ Transcripts are normalized by applying NFKC, converting all characters to uppercase, removing punctuation, and mapping Arabic numerals to words in the respective languages.
+
+ ### Text Post-Processing (before scoring)
+ Before CER/WER scoring, we standardize both hypothesis and reference text by applying NFKC, converting all characters to uppercase, removing punctuation, and merging consecutive whitespace (or removing all whitespace, for Thai). This ensures apples-to-apples comparisons across different toolkits and commercial services.
+
+ We also provide the following code snippet, which is used in all experiments reported in our paper and on the leaderboard.
+
+ ```python
+ import re
+ import string
+ import unicodedata
+
+ def text_post_processing(text):
+     text = unicodedata.normalize("NFKC", text)  # apply NFKC
+     text = text.upper()  # convert to uppercase
+     text = text.replace("-", " ")  # replace hyphens with spaces
+     text = re.sub("[{}]".format(string.punctuation), "", text)  # remove punctuation
+     text = re.sub(r"\s+", "", text)  # remove all whitespace (Thai)
+     return text
+ ```
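As a quick sanity check (my own usage example, reproducing the post-processing function with the `re` import it needs), the function behaves as follows on an English string; removing all whitespace suits unspaced scripts such as Thai, while WER-scored languages would merge whitespace instead:

```python
import re
import string
import unicodedata

def text_post_processing(text):
    text = unicodedata.normalize("NFKC", text)                 # apply NFKC
    text = text.upper()                                        # convert to uppercase
    text = text.replace("-", " ")                              # replace hyphens with spaces
    text = re.sub("[{}]".format(string.punctuation), "", text) # remove punctuation
    text = re.sub(r"\s+", "", text)                            # remove all whitespace (Thai)
    return text

print(text_post_processing("hello, cruel-world!"))  # → HELLOCRUELWORLD
```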
+
+ ## Collaboration
+ We are a group of volunteers trying to make speech technologies easier to use, and we welcome contributions of any kind. We are currently exploring the following directions. If you are interested in any of them and think you can help, please contact gigaspeech@speechcolab.org.
+
+ * Inference architecture for different pre-trained models
+ * Adding diverse audio sources
+ * Benchmarking speech algorithms/services
+ * Building and releasing pre-trained models
+ * Supporting more languages
+ * Making new datasets with permissive licenses
+
+ ## Institutional Contributors
+ | Institution | Contribution |
+ |:------|:-----|
+ | [Shanghai Jiao Tong University](https://www.seiee.sjtu.edu.cn/) | Computing power; Data host; Researchers |
+ | [The Chinese University of Hong Kong](https://www.cuhk.edu.hk/chinese/index.html) | Researchers |
+ | [Tsinghua University](https://www.ee.tsinghua.edu.cn/en/) | Researchers |
+ | [Seasalt AI](https://seasalt.ai/) | Researchers |
+ | [Birch AI](https://birch.ai/) | Researchers |
+ | [Peng Cheng Laboratory](https://data-starcloud.pcl.ac.cn/) | Researchers; Computing power |
+ | [Dataocean AI](https://en.haitianruisheng.com/) | Evaluation data annotation |
+
+ ## Citation
+ Please cite our paper if you find this work useful:
+ ```
+ @article{gigaspeech2,
+   title={GigaSpeech 2: An Evolving, Large-Scale and Multi-domain ASR Corpus for Low-Resource Languages with Automated Crawling, Transcription and Refinement},
+   author={Yifan Yang and Zheshu Song and Jianheng Zhuo and Mingyu Cui and Jinpeng Li and Bo Yang and Yexing Du and Ziyang Ma and Xunying Liu and Ziyuan Wang and Ke Li and Shuai Fan and Kai Yu and Wei-Qiang Zhang and Guoguo Chen and Xie Chen},
+   journal={arXiv preprint arXiv:2406.11546},
+   year={2024},
+ }
+ ```
+
+ ## Contact
+ If you have any questions or concerns, please contact gigaspeech@speechcolab.org.
+
+ ## Metadata Changelog
+ - 2024/06/19 v2.0: Initial release.
docs/source/_static/pipeline.png ADDED
Git LFS Details
  • SHA256: bf13de1827089aaaf1ed8104da8931d2ace5f41430ab1839c659a176634ed2a7
  • Pointer size: 132 Bytes
  • Size of remote file: 1.15 MB
pipeline/convert_transcribe/--lang ADDED
File without changes
pipeline/convert_transcribe/--model-size ADDED
File without changes
pipeline/convert_transcribe/--root-dir ADDED
File without changes
pipeline/convert_transcribe/--save-dir ADDED
File without changes
pipeline/convert_transcribe/--use-faster-whisper ADDED
File without changes
pipeline/convert_transcribe/--workers ADDED
File without changes
pipeline/convert_transcribe/README.md ADDED
@@ -0,0 +1,51 @@
+ ## Installation
+ ```shell
+ conda install ffmpeg
+ pip install ffmpeg-python
+ ```
+
+ ### Option 1: Standard Whisper
+
+ ```shell
+ pip install git+https://github.com/openai/whisper.git
+ ```
+
+ ### Option 2: Faster Whisper
+
+ ```shell
+ pip install faster-whisper
+ ```
+
+ ## Usage
+ Refer to the language codes in the [Whisper repository](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10-L111).
+
+ ```shell
+ # Standard Whisper
+ python convert_and_transcribe.py \
+   --lang [whisper language code] \
+   --root-dir [downloaded audio directory] \
+   --save-dir [output directory]
+
+ # Faster Whisper
+ python convert_and_transcribe.py \
+   --lang [whisper language code] \
+   --root-dir [downloaded audio directory] \
+   --save-dir [output directory] \
+   --use-faster-whisper True
+ ```
+
+ For example:
+ ```shell
+ # Standard Whisper
+ python convert_and_transcribe.py \
+   --lang zh \
+   --root-dir ./download \
+   --save-dir ./output_trans
+
+ # Faster Whisper
+ python convert_and_transcribe.py \
+   --lang zh \
+   --root-dir ./download \
+   --save-dir ./output_trans \
+   --use-faster-whisper True
+ ```
pipeline/convert_transcribe/convert_and_transcribe.py ADDED
@@ -0,0 +1,244 @@
1
+ import argparse
2
+ import os
3
+ import shutil
4
+ import struct
5
+ import subprocess
6
+ from concurrent.futures import ProcessPoolExecutor
7
+ from functools import partial
8
+
9
+ from tqdm import tqdm
10
+
11
+
12
+ def read_wav_info(filename):
13
+ with open(filename, "rb") as wav_file:
14
+ a = wav_file.read(28)
15
+ sr = struct.unpack("i", a[24:28])[0]
16
+ channel = struct.unpack("h", a[22:24])[0]
17
+ length = (struct.unpack("i", a[4:8])[0] - 70) / channel / 2 / sr
18
+ return sr, length
19
+
20
+
21
+ def ffmpeg_convert(file_from, file_to):
22
+ try:
23
+ subprocess.call(
24
+ [
25
+ "ffmpeg",
26
+ "-loglevel",
27
+ "warning",
28
+ "-y",
29
+ "-i",
30
+ file_from,
31
+ "-ac",
32
+ "1",
33
+ "-ar",
34
+ "16000",
35
+ "-acodec",
36
+ "pcm_s16le",
37
+ file_to,
38
+ ]
39
+ )
40
+ except Exception as e:
41
+ print(file_to, e)
42
+ return
43
+
44
+
45
+ def video2wav(args):
46
+ video_format = {".mov", ".avi", ".flv", ".ogg", ".mp4", ".mkv", "webm"}
47
+ if not os.path.exists(args.wav_dir):
48
+ os.makedirs(args.wav_dir, exist_ok=True)
49
+
50
+ # Start multiprocess
51
+ executor = ProcessPoolExecutor(max_workers=args.workers)
52
+ print(f"> Using {args.workers} workers!")
53
+ futures = []
54
+ files = os.listdir(args.root_dir)
55
+ files = set([x for x in files if x[-4:] in video_format])
56
+ print("Total videos to convert: ", len(files))
57
+
58
+ for file_name in files:
59
+ file_raw = os.path.join(args.root_dir, file_name)
60
+ file_to = os.path.join(
61
+ args.wav_dir, file_name.split(".")[-2][-11:] + args.format
62
+ )
63
+ if os.path.exists(file_to):
64
+ continue
65
+ futures.append(executor.submit(partial(ffmpeg_convert, file_raw, file_to)))
66
+
67
+ result_list = [future.result() for future in tqdm(futures)]
68
+ print(len(result_list), "wavs resampled.")
69
+
70
+
71
+ def wav2whisper(args):
72
+ # remove exsit files
73
+ wav_dir = (
74
+ os.path.join(args.wav_dir, args.section)
75
+ if args.section is not None
76
+ else args.wav_dir
77
+ )
78
+ wavs = os.listdir(wav_dir)
79
+ exist_files = set([x.split(".")[0] for x in os.listdir(args.sub_dir)])
80
+ error_files = os.path.join(args.list_dir, "error_files.txt")
81
+ if os.path.exists(error_files):
82
+ lines = set(open(error_files, "r", encoding="utf-8").readlines())
83
+ open(error_files, "w", encoding="utf-8").write("".join(list(lines)))
84
+ lines = set(x.strip().split("/")[-1][:-4] for x in lines)
85
+ print(
86
+ "Recognized: ",
87
+ len(exist_files),
88
+ "Language mismatched:",
89
+ len(lines),
90
+ "Total skip:",
91
+ len(exist_files | lines),
92
+ )
93
+ exist_files |= lines
94
+ wavs = [x for x in wavs if x.split(".")[0] not in exist_files]
95
+
96
+ if args.use_faster_whisper:
97
+ from faster_whisper import WhisperModel
98
+
99
+ model = WhisperModel(args.model_size, device="cpu", compute_type="int8")
100
+ for wav in tqdm(wavs):
101
+ audio_path = os.path.join(wav_dir, wav)
102
+ segments, info = model.transcribe(audio_path, language=args.lang, beam_size=5)
103
+ if info.language != args.lang:
104
+ print(f"Expect {args.lang} Detected: {info.language} {audio_path}")
105
+ with open(error_files, "a") as f:
106
+ f.write(
107
+ f"Expect {args.lang} Detected: {info.language} {audio_path}\n"
108
+ )
109
+ continue
110
+ result_path = os.path.join(args.sub_dir, wav.replace("wav", "txt"))
111
+ with open(result_path, "w", encoding="utf-8") as f:
112
+ for segment in segments:
113
+ f.write(segment.text + "\n")
114
+ else:
115
+ import whisper
116
+ from whisper.utils import get_writer
117
+
118
+ def detect_language(audio_path):
119
+ # load audio and pad/trim it to fit 30 seconds (start from middle part)
120
+ audio = whisper.load_audio(audio_path)
121
+ audio = audio[int(len(audio) / 2) :]
122
+ audio = whisper.pad_or_trim(audio)
123
+
124
+ # make log-Mel spectrogram and move to the same device as the model
125
+ mel = whisper.log_mel_spectrogram(audio, n_mels=n_mels).to(model.device)
126
+
127
+ # detect the spoken language
128
+ _, probs = model.detect_language(mel)
129
+ return max(probs, key=probs.get)
130
+
131
+ model = whisper.load_model(args.model_size)
132
+ n_mels = 128 if "large" in args.model_size else 80
133
+ writer = get_writer("txt", args.sub_dir)
134
+ for wav in tqdm(wavs):
135
+ audio_path = os.path.join(wav_dir, wav)
136
+ lang = detect_language(audio_path)
137
+ if lang != args.lang:
138
+ print(f"Expect {args.lang} Detected: {lang} {audio_path}")
139
+ with open(error_files, "a", encoding="utf-8") as f:
140
+ f.write(f"Expect {args.lang} Detected: {lang} {audio_path}\n")
141
+ continue
142
+ subtitle = model.transcribe(audio_path, language=lang, beam_size=5)
143
+ writer(subtitle, audio_path)
144
+
145
+
146
+ def construct_corpus(args):
147
+ wav_dir = (
148
+ os.path.join(args.wav_dir, args.section)
149
+ if args.section is not None
150
+ else args.wav_dir
151
+ )
152
+ wavs = os.listdir(wav_dir)
153
+     wavs = set([x.split(".")[0] for x in wavs if x[-3:] == "wav"])
+     txts = os.listdir(args.sub_dir)
+     txts = set([x.split(".")[0] for x in txts if x[-3:] == "txt"])
+     corpus = os.listdir(args.corpus_dir)
+     corpus = set([x.split(".")[0] for x in corpus])
+     move_list = wavs & txts
+
+     for x in move_list:
+         if x in corpus:
+             continue
+         try:
+             read_wav_info(os.path.join(wav_dir, x + ".wav"))
+         except Exception as e:
+             print(x + ".wav", e)
+             continue
+         shutil.move(
+             os.path.join(wav_dir, x + ".wav"), os.path.join(args.corpus_dir, x + ".wav")
+         )
+         shutil.move(
+             os.path.join(args.sub_dir, x + ".txt"),
+             os.path.join(args.corpus_dir, x + ".txt"),
+         )
+     print("Total files in corpus:", len(os.listdir(args.corpus_dir)))
+
+
+ def main():
+     parser = argparse.ArgumentParser()
+     parser.add_argument(
+         "--lang",
+         type=str,
+         help="See ISO 639-1 codes for supported languages (total: 97).",
+     )
+     parser.add_argument(
+         "--format",
+         type=str,
+         default=".wav",
+         help="Set audio format, find other options in ffmpeg documentation.",
+     )
+     parser.add_argument(
+         "--list-dir", type=str, default="./list", help="Path to save channel lists."
+     )
+     parser.add_argument(
+         "--root-dir",
+         type=str,
+         default="./download",
+         help="Directory path of downloaded videos.",
+     )
+     parser.add_argument(
+         "--save-dir",
+         type=str,
+         default="./data",
+         help="Directory path to save audio files.",
+     )
+     parser.add_argument(
+         "--section", type=str, default=None, help="Section to transcribe."
+     )
+     parser.add_argument(
+         "--model-size",
+         type=str,
+         default="large-v3",
+         help="Whisper model size (large, medium, small, base, tiny).",
+     )
+     parser.add_argument(
+         "--use-faster-whisper",
+         type=bool,
+         default=False,
+         help="Whether to use faster-whisper.",
+     )
+     parser.add_argument("--workers", type=int, default=16, help="Multiprocess workers.")
+     args = parser.parse_args()
+
+     if not os.path.exists(args.save_dir):
+         os.makedirs(args.save_dir, exist_ok=True)
+     args.sub_dir = os.path.join(args.save_dir, "whisper")
+     args.wav_dir = os.path.join(args.save_dir, "audios")
+     args.corpus_dir = os.path.join(args.save_dir, "corpus")
+     os.makedirs(args.sub_dir, exist_ok=True)
+     os.makedirs(args.wav_dir, exist_ok=True)
+     os.makedirs(args.corpus_dir, exist_ok=True)
+
+     # Convert video to wav
+     video2wav(args)
+
+     # Process subtitle and video info
+     wav2whisper(args)
+
+     # Pair wav and txt files
+     construct_corpus(args)
+
+
+ if __name__ == "__main__":
+     main()
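The pairing step in `construct_corpus` above boils down to a set intersection over file stems, minus the stems already moved into the corpus. A minimal sketch of that logic with made-up file names:

```python
# Sketch of the stem-pairing logic used by construct_corpus():
# keep only ids that have both a .wav and a .txt, minus ids already in corpus.
wavs = {"a.wav", "b.wav", "c.wav"}  # hypothetical listing of the wav directory
txts = {"a.txt", "b.txt", "d.txt"}  # hypothetical listing of the whisper directory
corpus = {"b"}                      # stems already present in the corpus directory

wav_stems = {x.split(".")[0] for x in wavs if x.endswith("wav")}
txt_stems = {x.split(".")[0] for x in txts if x.endswith("txt")}
move_list = (wav_stems & txt_stems) - corpus

print(sorted(move_list))  # ['a']
```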
pipeline/crawler/README.md ADDED
@@ -0,0 +1,28 @@
+ ## Installation
+ ```shell
+ pip install git+https://github.com/yt-dlp/yt-dlp.git
+ ```
+
+ ## Usage
+ ### Create a list of YouTube channels
+ You need to create a list of YouTube channel names from which you intend to download audio. Save this list in a text file (e.g., `zh_channels.txt`) using the format `[channel name]\t@[channel id]`.
+
+ For example:
+ ```
+ Youth With You	@iQIYIYouthWithYou
+ KUN	@kun_global
+ Kun's Official Channel	@kunsofficialchannel6831
+ ```
+
+ ### Get the cookies from the browser
+ You need to save the cookies in a text file (e.g., `cookies.txt`) using the Netscape format.
+
+ ### Start the download process
+ ```shell
+ ./download_from_youtube_channels.sh [channels list file] [download directory]
+ ```
+
+ For example:
+ ```shell
+ ./download_from_youtube_channels.sh zh_channels.txt ./download
+ ```
pipeline/crawler/cookies.txt ADDED
@@ -0,0 +1,12 @@
+ # Netscape HTTP Cookie File
+ # This file is generated by yt-dlp.  Do not edit.
+
+ .youtube.com	TRUE	/	FALSE	0	PREF	hl=en&tz=UTC
+ .youtube.com	TRUE	/	TRUE	0	SOCS	CAI
+ .youtube.com	TRUE	/	TRUE	1742631660	GPS	1
+ .youtube.com	TRUE	/	TRUE	0	YSC	hBRthzI-Dbc
+ .youtube.com	TRUE	/	TRUE	1758181861	__Secure-ROLLOUT_TOKEN	CIGJ2PGoofq90QEQ4vPOjpqdjAMYoNz4jpqdjAM%3D
+ .youtube.com	TRUE	/	TRUE	1758182110	VISITOR_INFO1_LIVE	cRUSCnYG5HQ
+ .youtube.com	TRUE	/	TRUE	1758182110	VISITOR_PRIVACY_METADATA	CgJWThIEGgAgSg%3D%3D
+ .youtube.com	TRUE	/	TRUE	1805702110	__Secure-YT_TVFAS	t=484063&s=2
+ .youtube.com	TRUE	/	TRUE	1758182110	DEVICE_INFO	ChxOelE0TkRVek9ESTJPREk1TmpnNE9EVTFOUT09EN7Z+b4GGObX+b4G
pipeline/crawler/download_from_youtube_channels.sh ADDED
@@ -0,0 +1,23 @@
+ #! /usr/bin/bash
+
+ echo "Read channels from file: $1, save to: $2"
+
+ log_dir=log
+
+ if [ ! -d $log_dir ]; then
+     mkdir $log_dir
+ fi
+
+ while read rows
+ do
+     echo "Read line: $rows"
+     channel=`echo "$rows" | awk -F"\t" '{print $2}'`
+     channel_name=`echo ${channel:1}`
+     echo "Processing channel: $channel, channel name: $channel_name"
+     yt-dlp -f 'ba' \
+         --download-archive $log_dir/audios_$channel_name.txt \
+         --cookies cookies.txt \
+         https://www.youtube.com/${channel}/videos -o ${2}/$channel_name'/%(title).20s#%(channel_id)s#%(id)s_%(duration)s.%(ext)s' \
+         > $log_dir/$channel_name.log
+ done < $1
+ echo "Finished."
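The `-o` template packs the (20-character-truncated) title, channel id, video id, and duration into each filename with `#` and `_` separators, and later pipeline stages can split them back out. A sketch of that parsing (hypothetical helper, pure Python; `rsplit` is used so a `#` in the title or a `_` in the video id does not break the split):

```python
import os

def parse_download_name(filename):
    """Split 'title#channel_id#videoid_duration.ext' back into its fields."""
    stem, ext = os.path.splitext(filename)
    # Split from the right: the last two '#'-separated fields are fixed-format.
    title, channel_id, id_and_dur = stem.rsplit("#", 2)
    # The duration is the part after the last '_' (video ids may contain '_').
    video_id, duration = id_and_dur.rsplit("_", 1)
    return title, channel_id, video_id, int(duration), ext.lstrip(".")

fields = parse_download_name(
    "Tây Du Ký (Phần 3)#UCbEGdsixsjZJCKYE8rergSw#OgD3IyAzX_E_972.webm"
)
print(fields)
```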
pipeline/crawler/downloadgs2/xinchaotoilavanvo/Tây Du Ký (Phần 3)#UCbEGdsixsjZJCKYE8rergSw#OgD3IyAzX_E_972.webm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f198bf0316115cb24d03be935e6effb5464dd88a5beab841779fcdd3ba74fa24
+ size 15827494
pipeline/crawler/kenh.txt ADDED
@@ -0,0 +1 @@
+ Văn Vở	@xinchaotoilavanvo
pipeline/crawler/log/audios_xinchaotoilavanvo.txt ADDED
@@ -0,0 +1,24 @@
+ youtube 5WOvQG9NaxQ
+ youtube h9VS1gdSwzY
+ youtube bv-xny0u2fQ
+ youtube nJDb-HyMOXo
+ youtube B7Pjc4C_ogA
+ youtube 9B1YeDReuGE
+ youtube iEoTNGXQL2I
+ youtube YBf-tAL8EAs
+ youtube FG0z-xkHNps
+ youtube 5GTaloeY3EQ
+ youtube rXfV60LJiZU
+ youtube gRR4NrXSWPo
+ youtube HTCV5axjk_s
+ youtube 2P6dfJldpsE
+ youtube egWDv6pRhDY
+ youtube 1MTVpK1zQpE
+ youtube IvoxwgtWWv8
+ youtube -qwS3q_Lu3c
+ youtube X9ldeKXTGRg
+ youtube P0ezv6GBoSU
+ youtube z_vn3fv-EF4
+ youtube OgD3IyAzX_E
+ youtube 4673DtpQkkU
+ youtube zechn7_mUrE
pipeline/crawler/log/xinchaotoilavanvo.log ADDED
@@ -0,0 +1,252 @@
+ [youtube:tab] Extracting URL: https://www.youtube.com/@xinchaotoilavanvo/videos
+ [youtube:tab] @xinchaotoilavanvo/videos: Downloading webpage
+ [download] Downloading playlist: Vn V - Videos
+ [youtube:tab] UCbEGdsixsjZJCKYE8rergSw page 1: Downloading API JSON
+ [youtube:tab] Playlist Vn V - Videos: Downloading 56 items of 56
+ [download] Downloading item 1 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=5WOvQG9NaxQ
+ [youtube] 5WOvQG9NaxQ: Downloading webpage
+ [youtube] 5WOvQG9NaxQ: Downloading tv client config
+ [youtube] 5WOvQG9NaxQ: Downloading player 69f581a5
+ [youtube] 5WOvQG9NaxQ: Downloading tv player API JSON
+ [youtube] 5WOvQG9NaxQ: Downloading ios player API JSON
+ [youtube] 5WOvQG9NaxQ: Downloading m3u8 information
+ [info] 5WOvQG9NaxQ: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Nh�n qu theo quan n#UCbEGdsixsjZJCKYE8rergSw#5WOvQG9NaxQ_1156.webm
+
+ [download] Downloading item 2 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=h9VS1gdSwzY
+ [youtube] h9VS1gdSwzY: Downloading webpage
+ [youtube] h9VS1gdSwzY: Downloading tv client config
+ [youtube] h9VS1gdSwzY: Downloading tv player API JSON
+ [youtube] h9VS1gdSwzY: Downloading ios player API JSON
+ [youtube] h9VS1gdSwzY: Downloading m3u8 information
+ [info] h9VS1gdSwzY: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Harry Potter v� ph�n#UCbEGdsixsjZJCKYE8rergSw#h9VS1gdSwzY_2345.webm
+
+ [download] Downloading item 3 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=bv-xny0u2fQ
+ [youtube] bv-xny0u2fQ: Downloading webpage
+ [youtube] bv-xny0u2fQ: Downloading tv client config
+ [youtube] bv-xny0u2fQ: Downloading tv player API JSON
+ [youtube] bv-xny0u2fQ: Downloading ios player API JSON
+ [youtube] bv-xny0u2fQ: Downloading m3u8 information
+ [info] bv-xny0u2fQ: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Thi i anh h�ng tr#UCbEGdsixsjZJCKYE8rergSw#bv-xny0u2fQ_1572.webm
+
+ [download] Downloading item 4 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=nJDb-HyMOXo
+ [youtube] nJDb-HyMOXo: Downloading webpage
+ [youtube] nJDb-HyMOXo: Downloading tv client config
+ [youtube] nJDb-HyMOXo: Downloading tv player API JSON
+ [youtube] nJDb-HyMOXo: Downloading ios player API JSON
+ [youtube] nJDb-HyMOXo: Downloading m3u8 information
+ [info] nJDb-HyMOXo: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Thi i anh h�ng tr#UCbEGdsixsjZJCKYE8rergSw#nJDb-HyMOXo_1813.webm
+
+ [download] Downloading item 5 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=B7Pjc4C_ogA
+ [youtube] B7Pjc4C_ogA: Downloading webpage
+ [youtube] B7Pjc4C_ogA: Downloading tv client config
+ [youtube] B7Pjc4C_ogA: Downloading tv player API JSON
+ [youtube] B7Pjc4C_ogA: Downloading ios player API JSON
+ [youtube] B7Pjc4C_ogA: Downloading m3u8 information
+ [info] B7Pjc4C_ogA: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Nhng c�u chuyn hay#UCbEGdsixsjZJCKYE8rergSw#B7Pjc4C_ogA_2264.webm
+
+ [download] Downloading item 6 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=9B1YeDReuGE
+ [youtube] 9B1YeDReuGE: Downloading webpage
+ [youtube] 9B1YeDReuGE: Downloading tv client config
+ [youtube] 9B1YeDReuGE: Downloading tv player API JSON
+ [youtube] 9B1YeDReuGE: Downloading ios player API JSON
+ [youtube] 9B1YeDReuGE: Downloading m3u8 information
+ [info] 9B1YeDReuGE: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Gii th�ch to�n b v#UCbEGdsixsjZJCKYE8rergSw#9B1YeDReuGE_2932.webm
+
+ [download] Downloading item 7 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=iEoTNGXQL2I
+ [youtube] iEoTNGXQL2I: Downloading webpage
+ [youtube] iEoTNGXQL2I: Downloading tv client config
+ [youtube] iEoTNGXQL2I: Downloading tv player API JSON
+ [youtube] iEoTNGXQL2I: Downloading ios player API JSON
+ [youtube] iEoTNGXQL2I: Downloading m3u8 information
+ [info] iEoTNGXQL2I: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\S thi Gilgamesh ph#UCbEGdsixsjZJCKYE8rergSw#iEoTNGXQL2I_1655.webm
+
+ [download] Downloading item 8 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=YBf-tAL8EAs
+ [youtube] YBf-tAL8EAs: Downloading webpage
+ [youtube] YBf-tAL8EAs: Downloading tv client config
+ [youtube] YBf-tAL8EAs: Downloading tv player API JSON
+ [youtube] YBf-tAL8EAs: Downloading ios player API JSON
+ [youtube] YBf-tAL8EAs: Downloading m3u8 information
+ [info] YBf-tAL8EAs: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\S thi Gilgamesh ph#UCbEGdsixsjZJCKYE8rergSw#YBf-tAL8EAs_1390.webm
+
+ [download] Downloading item 9 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=FG0z-xkHNps
+ [youtube] FG0z-xkHNps: Downloading webpage
+ [youtube] FG0z-xkHNps: Downloading tv client config
+ [youtube] FG0z-xkHNps: Downloading tv player API JSON
+ [youtube] FG0z-xkHNps: Downloading ios player API JSON
+ [youtube] FG0z-xkHNps: Downloading m3u8 information
+ [info] FG0z-xkHNps: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\To�n b v 81 kip n#UCbEGdsixsjZJCKYE8rergSw#FG0z-xkHNps_2348.webm
+
+ [download] Downloading item 10 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=5GTaloeY3EQ
+ [youtube] 5GTaloeY3EQ: Downloading webpage
+ [youtube] 5GTaloeY3EQ: Downloading tv client config
+ [youtube] 5GTaloeY3EQ: Downloading tv player API JSON
+ [youtube] 5GTaloeY3EQ: Downloading ios player API JSON
+ [youtube] 5GTaloeY3EQ: Downloading m3u8 information
+ [info] 5GTaloeY3EQ: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\H�n S Tranh H�ng ph#UCbEGdsixsjZJCKYE8rergSw#5GTaloeY3EQ_1538.webm
+
+ [download] Downloading item 11 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=rXfV60LJiZU
+ [youtube] rXfV60LJiZU: Downloading webpage
+ [youtube] rXfV60LJiZU: Downloading tv client config
+ [youtube] rXfV60LJiZU: Downloading tv player API JSON
+ [youtube] rXfV60LJiZU: Downloading ios player API JSON
+ [youtube] rXfV60LJiZU: Downloading m3u8 information
+ [info] rXfV60LJiZU: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\H�n S Tranh H�ng ph#UCbEGdsixsjZJCKYE8rergSw#rXfV60LJiZU_1411.webm
+
+ [download] Downloading item 12 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=gRR4NrXSWPo
+ [youtube] gRR4NrXSWPo: Downloading webpage
+ [youtube] gRR4NrXSWPo: Downloading tv client config
+ [youtube] gRR4NrXSWPo: Downloading tv player API JSON
+ [youtube] gRR4NrXSWPo: Downloading ios player API JSON
+ [youtube] gRR4NrXSWPo: Downloading m3u8 information
+ [info] gRR4NrXSWPo: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Xu�n Thu - Chin Qu#UCbEGdsixsjZJCKYE8rergSw#gRR4NrXSWPo_5090.webm
+
+ [download] Downloading item 13 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=HTCV5axjk_s
+ [youtube] HTCV5axjk_s: Downloading webpage
+ [youtube] HTCV5axjk_s: Downloading tv client config
+ [youtube] HTCV5axjk_s: Downloading tv player API JSON
+ [youtube] HTCV5axjk_s: Downloading ios player API JSON
+ [youtube] HTCV5axjk_s: Downloading m3u8 information
+ [info] HTCV5axjk_s: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Thi Chin Quc tron#UCbEGdsixsjZJCKYE8rergSw#HTCV5axjk_s_2593.webm
+
+ [download] Downloading item 14 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=2P6dfJldpsE
+ [youtube] 2P6dfJldpsE: Downloading webpage
+ [youtube] 2P6dfJldpsE: Downloading tv client config
+ [youtube] 2P6dfJldpsE: Downloading tv player API JSON
+ [youtube] 2P6dfJldpsE: Downloading ios player API JSON
+ [youtube] 2P6dfJldpsE: Downloading m3u8 information
+ [info] 2P6dfJldpsE: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Gii th�ch Thn Tho#UCbEGdsixsjZJCKYE8rergSw#2P6dfJldpsE_4086.webm
+
+ [download] Downloading item 15 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=egWDv6pRhDY
+ [youtube] egWDv6pRhDY: Downloading webpage
+ [youtube] egWDv6pRhDY: Downloading tv client config
+ [youtube] egWDv6pRhDY: Downloading tv player API JSON
+ [youtube] egWDv6pRhDY: Downloading ios player API JSON
+ [youtube] egWDv6pRhDY: Downloading m3u8 information
+ [info] egWDv6pRhDY: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Gii th�ch Thi Xu�n#UCbEGdsixsjZJCKYE8rergSw#egWDv6pRhDY_2579.webm
+
+ [download] Downloading item 16 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=1MTVpK1zQpE
+ [youtube] 1MTVpK1zQpE: Downloading webpage
+ [youtube] 1MTVpK1zQpE: Downloading tv client config
+ [youtube] 1MTVpK1zQpE: Downloading tv player API JSON
+ [youtube] 1MTVpK1zQpE: Downloading ios player API JSON
+ [youtube] 1MTVpK1zQpE: Downloading m3u8 information
+ [info] 1MTVpK1zQpE: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Thn Thoi Bc �u (P#UCbEGdsixsjZJCKYE8rergSw#1MTVpK1zQpE_1551.webm
+
+ [download] Downloading item 17 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=IvoxwgtWWv8
+ [youtube] IvoxwgtWWv8: Downloading webpage
+ [youtube] IvoxwgtWWv8: Downloading tv client config
+ [youtube] IvoxwgtWWv8: Downloading tv player API JSON
+ [youtube] IvoxwgtWWv8: Downloading ios player API JSON
+ [youtube] IvoxwgtWWv8: Downloading m3u8 information
+ [info] IvoxwgtWWv8: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Gii th�ch to�n b T#UCbEGdsixsjZJCKYE8rergSw#IvoxwgtWWv8_3079.webm
+
+ [download] Downloading item 18 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=-qwS3q_Lu3c
+ [youtube] -qwS3q_Lu3c: Downloading webpage
+ [youtube] -qwS3q_Lu3c: Downloading tv client config
+ [youtube] -qwS3q_Lu3c: Downloading tv player API JSON
+ [youtube] -qwS3q_Lu3c: Downloading ios player API JSON
+ [youtube] -qwS3q_Lu3c: Downloading m3u8 information
+ [info] -qwS3q_Lu3c: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Harry Potter v� h�n #UCbEGdsixsjZJCKYE8rergSw#-qwS3q_Lu3c_1496.webm
+
+ [download] Downloading item 19 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=X9ldeKXTGRg
+ [youtube] X9ldeKXTGRg: Downloading webpage
+ [youtube] X9ldeKXTGRg: Downloading tv client config
+ [youtube] X9ldeKXTGRg: Downloading tv player API JSON
+ [youtube] X9ldeKXTGRg: Downloading ios player API JSON
+ [youtube] X9ldeKXTGRg: Downloading m3u8 information
+ [info] X9ldeKXTGRg: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Tng quan v thn th#UCbEGdsixsjZJCKYE8rergSw#X9ldeKXTGRg_1317.webm
+
+ [download] Downloading item 20 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=P0ezv6GBoSU
+ [youtube] P0ezv6GBoSU: Downloading webpage
+ [youtube] P0ezv6GBoSU: Downloading tv client config
+ [youtube] P0ezv6GBoSU: Downloading tv player API JSON
+ [youtube] P0ezv6GBoSU: Downloading ios player API JSON
+ [youtube] P0ezv6GBoSU: Downloading m3u8 information
+ [info] P0ezv6GBoSU: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Thn Thoi Bc �u (P#UCbEGdsixsjZJCKYE8rergSw#P0ezv6GBoSU_1141.webm
+
+ [download] Downloading item 21 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=z_vn3fv-EF4
+ [youtube] z_vn3fv-EF4: Downloading webpage
+ [youtube] z_vn3fv-EF4: Downloading tv client config
+ [youtube] z_vn3fv-EF4: Downloading tv player API JSON
+ [youtube] z_vn3fv-EF4: Downloading ios player API JSON
+ [youtube] z_vn3fv-EF4: Downloading m3u8 information
+ [info] z_vn3fv-EF4: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\Thn Thoi Bc �u (P#UCbEGdsixsjZJCKYE8rergSw#z_vn3fv-EF4_1329.webm
+
+ [download] Downloading item 22 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=OgD3IyAzX_E
+ [youtube] OgD3IyAzX_E: Downloading webpage
+ [youtube] OgD3IyAzX_E: Downloading tv client config
+ [youtube] OgD3IyAzX_E: Downloading tv player API JSON
+ [youtube] OgD3IyAzX_E: Downloading ios player API JSON
+ [youtube] OgD3IyAzX_E: Downloading m3u8 information
+ [info] OgD3IyAzX_E: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\T�y Du K� (Phn 3)#UCbEGdsixsjZJCKYE8rergSw#OgD3IyAzX_E_972.webm
+
+ [download] Downloading item 23 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=4673DtpQkkU
+ [youtube] 4673DtpQkkU: Downloading webpage
+ [youtube] 4673DtpQkkU: Downloading tv client config
+ [youtube] 4673DtpQkkU: Downloading tv player API JSON
+ [youtube] 4673DtpQkkU: Downloading ios player API JSON
+ [youtube] 4673DtpQkkU: Downloading m3u8 information
+ [info] 4673DtpQkkU: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\T�y Du K� (Phn 2)#UCbEGdsixsjZJCKYE8rergSw#4673DtpQkkU_1013.webm
+
+ [download] Downloading item 24 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=zechn7_mUrE
+ [youtube] zechn7_mUrE: Downloading webpage
+ [youtube] zechn7_mUrE: Downloading tv client config
+ [youtube] zechn7_mUrE: Downloading tv player API JSON
+ [youtube] zechn7_mUrE: Downloading ios player API JSON
+ [youtube] zechn7_mUrE: Downloading m3u8 information
+ [info] zechn7_mUrE: Downloading 1 format(s): 251
+ [download] Destination: downloadgs2\xinchaotoilavanvo\To�n b v Thy H t#UCbEGdsixsjZJCKYE8rergSw#zechn7_mUrE_4505.webm
+
+ [download] Downloading item 25 of 56
+ [youtube] Extracting URL: https://www.youtube.com/watch?v=72Fot34ipYk
+ [youtube] 72Fot34ipYk: Downloading webpage
+ [youtube] 72Fot34ipYk: Downloading tv client config
+ [youtube] 72Fot34ipYk: Downloading tv player API JSON
+ [youtube] 72Fot34ipYk: Downloading ios player API JSON
pipeline/force_alignment/README.md ADDED
@@ -0,0 +1,18 @@
+ ## Installation
+ ```shell
+ git submodule update --init --recursive
+ pip install torchaudio==2.1.0 # >= 2.1.0
+ pip install sox
+ pip install dataclasses
+ ```
+
+ ## Usage
+ You need to specify the [ISO 639-2 language code](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes).
+ ```shell
+ ./force_align.sh [transcription directory] [output directory] [ISO 639-2 language code]
+ ```
+
+ For example:
+ ```shell
+ ./force_align.sh ./output_trans ./output_force_align zho
+ ```
pipeline/force_alignment/calculate_precision.py ADDED
@@ -0,0 +1,70 @@
+ import argparse
+ import json
+ from pathlib import Path
+
+ import textgrid
+
+
+ def parse_args():
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--hyp-dir", type=Path, required=True)
+     parser.add_argument("--ref-dir", type=Path, required=True)
+     return parser.parse_args()
+
+
+ def calculate_overlap_time(start1, end1, start2, end2):
+     return max(0, min(end1, end2) - max(start1, start2))
+
+
+ def calculate_precision(hyp_intervals, ref_intervals):
+     total_true_positive_time = 0
+     total_predicted_time = 0
+
+     for hyp_start, hyp_end in hyp_intervals:
+         hyp_duration = hyp_end - hyp_start
+         total_predicted_time += hyp_duration
+         overlap_time_with_all_refs = 0
+
+         for ref_start, ref_end in ref_intervals:
+             overlap = calculate_overlap_time(hyp_start, hyp_end, ref_start, ref_end)
+             overlap_time_with_all_refs += overlap
+
+         total_true_positive_time += overlap_time_with_all_refs
+
+     precision = total_true_positive_time / total_predicted_time
+
+     return precision
+
+
+ def main():
+     args = parse_args()
+
+     for ref_path in args.ref_dir.rglob("*.TextGrid"):
+         file_id = ref_path.stem
+         hyp_path = args.hyp_dir / (file_id + "_manifest.jsonl")
+
+         assert hyp_path.exists(), f"{hyp_path} does not exist."
+
+         tg = textgrid.TextGrid.fromFile(ref_path)
+         ref_intervals = []
+         for interval in tg[0]:
+             if len(interval.mark) > 0:
+                 start = interval.minTime
+                 end = interval.maxTime
+                 ref_intervals.append((start, end))
+
+         hyp_intervals = []
+         with open(hyp_path, "r") as f:
+             for line in f:
+                 data = json.loads(line)
+                 if len(data["text"]) > 0:
+                     start = data["audio_start_sec"]
+                     duration = data["duration"]
+                     end = start + duration
+                     hyp_intervals.append((start, end))
+
+         print(file_id, calculate_precision(hyp_intervals, ref_intervals))
+
+
+ if __name__ == "__main__":
+     main()
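A quick sanity check of the time-based precision above, restated compactly with hand-picked toy intervals (the helper names here are shorthand, not part of the script):

```python
def overlap(s1, e1, s2, e2):
    # Same overlap rule as calculate_overlap_time() above.
    return max(0, min(e1, e2) - max(s1, s2))

def precision(hyp, ref):
    # Fraction of predicted (hypothesis) time that overlaps the references.
    tp = sum(overlap(hs, he, rs, re) for hs, he in hyp for rs, re in ref)
    predicted = sum(he - hs for hs, he in hyp)
    return tp / predicted

# One 2 s hypothesis with half of it inside the reference -> precision 0.5.
print(precision([(0.0, 2.0)], [(1.0, 3.0)]))  # 0.5
```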
pipeline/force_alignment/force_align.sh ADDED
@@ -0,0 +1,18 @@
+ #! /usr/bin/bash
+
+ corpus_dir=${1%/}
+ output_dir=${2%/}
+ lang=$3
+
+ for text in ${corpus_dir}/*.txt; do
+     id=$(basename "${text}" .txt)
+     echo "Process $id"
+     wav="${corpus_dir}/${id}.wav"
+
+     python /content/drive/MyDrive/GigaSpeech2/pipeline/utils/force_alignment/align.py \
+         -a $wav \
+         -t $text \
+         --lang $lang \
+         --output-dir $output_dir \
+         --uroman /content/drive/MyDrive/GigaSpeech2/pipeline/utils/uroman/bin
+ done
pipeline/force_alignment/force_align_from_list.sh ADDED
@@ -0,0 +1,18 @@
+ #! /usr/bin/bash
+
+ corpus_dir=${2%/}
+ output_dir=${3%/}
+ lang=$4
+
+ while IFS= read -r text; do
+     id=$(basename "${text}" .txt)
+     echo "Process $id"
+     wav="${corpus_dir}/${id}.wav"
+
+     python ../utils/force_alignment/align.py \
+         -a $wav \
+         -t $text \
+         --lang $lang \
+         --output-dir $output_dir \
+         --uroman ../utils/uroman/bin
+ done < $1
pipeline/segmentation/README.md ADDED
@@ -0,0 +1,41 @@
+ ## Usage
+ ### Installation
+ ```shell
+ pip install fasttext
+ ```
+
+ ### Download the language identification model
+ ```shell
+ wget -P /tmp https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
+ ```
+
+ ### Modify the filtering rules
+ ```shell
+ vim filter_manifest.py
+ ```
+
+ ### Filter the segments
+ ```shell
+ python filter_manifest.py \
+     --input-dir [forced-aligned manifests directory] \
+     --output-dir [output directory] \
+     --lid-model-path [path to lid.176.bin]
+ ```
+
+ For example:
+ ```shell
+ python filter_manifest.py \
+     --input-dir ./output_force_align \
+     --output-dir ./output_filter \
+     --lid-model-path ./lid.176.bin
+ ```
+
+ ### Segment the audio files
+ ```shell
+ ./segment.sh [filtered manifests directory] [output directory]
+ ```
+
+ For example:
+ ```shell
+ ./segment.sh ./output_filter ./output_segment
+ ```
pipeline/segmentation/filter_manifest.py ADDED
@@ -0,0 +1,103 @@
+ import argparse
+ import json
+ import re
+ from abc import ABC, abstractmethod
+ from pathlib import Path
+
+ import fasttext
+ from tqdm import tqdm
+
+
+ class FilterStrategy(ABC):
+     @abstractmethod
+     def apply(self, line):
+         pass
+
+
+ class CharsetFilter(FilterStrategy):
+     def __init__(self):
+         thai_chars = r"\u0E00-\u0E7F"
+         digits = r"\u0030-\u0039"
+         blank_symbol = r"\s"
+         valid_symbols = thai_chars + digits + blank_symbol
+         self.valid_pattern = re.compile(f"[^{valid_symbols}]")
+
+     def apply(self, line):
+         return not self.valid_pattern.search(line["text"])
+
+
+ class LanguageConfidenceFilter(FilterStrategy):
+     def __init__(self, model_path, confidence_threshold=0.95):
+         self.model = fasttext.load_model(model_path)
+         self.confidence_threshold = confidence_threshold
+
+     def apply(self, line):
+         labels, probabilities = self.model.predict(line["text"], k=1)
+         return probabilities[0] >= self.confidence_threshold
+
+
+ class AudioDurationFilter(FilterStrategy):
+     def __init__(self, min_keep_duration=1, max_keep_duration=30):
+         self.min_keep_duration = min_keep_duration
+         self.max_keep_duration = max_keep_duration
+
+     def apply(self, line):
+         return (
+             line["duration"] >= self.min_keep_duration
+             and line["duration"] <= self.max_keep_duration
+         )
+
+
+ class ContentFilter:
+     def __init__(self, strategies):
+         self.strategies = strategies
+
+     def __call__(self, line):
+         for strategy in self.strategies:
+             if not strategy.apply(line):
+                 return False
+         return True
+
+
+ def parse_args():
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--input-dir", type=Path, required=True)
+     parser.add_argument("--output-dir", type=Path, required=True)
+     parser.add_argument("--lid-model-path", type=str, required=True)
+     return parser.parse_args()
+
+
+ def filter_manifests(input_dir, output_dir, content_filter):
+     total_cnt = 0
+     valid_cnt = 0
+     for manifest_path in tqdm(input_dir.rglob("*.jsonl"), desc="Filtering manifests"):
+         filtered_manifest_path = output_dir / ("filtered_" + manifest_path.name)
+
+         with open(manifest_path, "r", encoding="utf-8") as reader, open(
+             filtered_manifest_path, "w", encoding="utf-8"
+         ) as writer:
+             for line in reader:
+                 line = json.loads(line)
+                 total_cnt += 1
+                 if content_filter(line):
+                     writer.write(json.dumps(line) + "\n")
+                     valid_cnt += 1
+
+     print(
+         f"total segments: {total_cnt}, valid segments: {valid_cnt}, filtered rate: {1 - valid_cnt / total_cnt}"
+     )
+
+
+ def main():
+     args = parse_args()
+     strategies = [
+         CharsetFilter(),
+         LanguageConfidenceFilter(args.lid_model_path, 0.99),
+         AudioDurationFilter(2, 30),
+     ]
+     content_filter = ContentFilter(strategies)
+     filter_manifests(args.input_dir, args.output_dir, content_filter)
+
+
+ if __name__ == "__main__":
+     main()
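New filtering rules plug in by subclassing `FilterStrategy` and adding an instance to the `strategies` list in `main()`. A hypothetical word-count rule, shown self-contained (the ABC is restated here so the sketch runs on its own; `WordCountFilter` is not part of the pipeline):

```python
from abc import ABC, abstractmethod

class FilterStrategy(ABC):  # mirrors the base class defined above
    @abstractmethod
    def apply(self, line):
        ...

class WordCountFilter(FilterStrategy):
    """Hypothetical rule: keep segments whose text has 2-100 words."""

    def __init__(self, min_words=2, max_words=100):
        self.min_words = min_words
        self.max_words = max_words

    def apply(self, line):
        # Returns True to keep the segment, matching the apply() contract.
        n = len(line["text"].split())
        return self.min_words <= n <= self.max_words

f = WordCountFilter()
print(f.apply({"text": "hello world"}))  # True
print(f.apply({"text": "hi"}))           # False
```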
pipeline/segmentation/segment.sh ADDED
@@ -0,0 +1,13 @@
+ #! /usr/bin/bash
+
+ manifest_dir=${1%/}
+ output_dir=${2%/}
+
+ for manifest in ${manifest_dir}/*.jsonl; do
+     id=$(basename "${manifest}" .jsonl)
+     echo "Process $id"
+
+     python segment_from_manifests.py \
+         -m $manifest \
+         -o $output_dir
+ done
pipeline/segmentation/segment_from_list.sh ADDED
@@ -0,0 +1,12 @@
+ #! /usr/bin/bash
+
+ output_dir=${2%/}
+
+ while IFS= read -r manifest; do
+     id=$(basename "${manifest}" .jsonl)
+     echo "Process $id"
+
+     python segment_from_manifests.py \
+         -m $manifest \
+         -o $output_dir
+ done < $1
pipeline/segmentation/segment_from_manifests.py ADDED
@@ -0,0 +1,52 @@
+ import argparse
+ import json
+ import os
+
+ import sox
+
+
+ def main(args):
+     os.makedirs(args.output_dir, exist_ok=True)
+     audio_id = (
+         os.path.basename(args.manifest_filepath)
+         .replace("_manifest", "")
+         .replace("filtered_", "")
+         .split(".")[0]
+     )
+     segment_dir = os.path.join(args.output_dir, audio_id)
+     os.makedirs(segment_dir, exist_ok=True)
+     text_file_path = os.path.join(segment_dir, f"{audio_id}.trans.txt")
+
+     with open(args.manifest_filepath, "r") as reader, open(
+         text_file_path, "w"
+     ) as writer:
+         for i, line in enumerate(reader):
+             segment_id = f"{audio_id}-{i}"
+             line = json.loads(line)
+
+             audio_filepath = line["audio_filepath"]
+             audio_start_sec = line["audio_start_sec"]
+             audio_end_sec = audio_start_sec + line["duration"]
+
+             output_file = os.path.join(segment_dir, f"{segment_id}.wav")
+             tfm = sox.Transformer()
+             tfm.trim(audio_start_sec, audio_end_sec)
+             tfm.build_file(audio_filepath, output_file)
+
+             text = line["text"]
+             writer.write(segment_id + " " + text + "\n")
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser(description="Segment long audio files")
+     parser.add_argument(
+         "-m", "--manifest-filepath", type=str, help="Path to manifest file"
+     )
+     parser.add_argument(
+         "-o",
+         "--output-dir",
+         type=str,
+         help="Output directory to store segmented audio files",
+     )
+     args = parser.parse_args()
+     main(args)
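Each manifest line fully determines one segment: `sox` trims the span from `audio_start_sec` to `audio_start_sec + duration`, and the transcript line is keyed by `<audio_id>-<line index>`. A sketch of that bookkeeping on a made-up manifest line (no sox needed for the arithmetic):

```python
import json

# A sample JSONL record shaped like the manifests above (values made up).
line = json.loads(
    '{"audio_filepath": "audio.wav", "audio_start_sec": 6.8, '
    '"duration": 5.3, "text": "gone steadily on"}'
)
start = line["audio_start_sec"]
end = round(start + line["duration"], 2)  # the trim end point handed to sox
segment_id = "audio-0"                    # "<audio_id>-<line index>"
print(segment_id, start, end)             # audio-0 6.8 12.1
```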
pipeline/utils/force_alignment/README.md ADDED
@@ -0,0 +1,49 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ # Force Alignment for GigaSpeech 2
+ Mainly adapted from [fairseq](https://github.com/facebookresearch/fairseq/tree/main/examples/mms/data_prep), modified for forced alignment in GigaSpeech 2.
+ # Data Preparation
+
+ We describe the process of aligning long audio files with their transcripts and generating shorter audio segments below.
+
+ - Step 1: Download and install torchaudio>=2.1.0. We have open sourced the CTC forced alignment algorithm described in our paper via [torchaudio](https://github.com/pytorch/audio/pull/3348).
+ ```
+ pip install --pre torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
+ ```
+
+ - Step 2: Download [uroman](https://github.com/isi-nlp/uroman) from GitHub. It is a universal romanizer which converts text in any script to the Latin alphabet. Use [this link](https://www.isi.edu/~ulf/uroman.html) to try their web interface.
+ ```
+ git clone git@github.com:isi-nlp/uroman.git
+ ```
+
+ - Step 3: Install a few other dependencies:
+ ```
+ pip install sox
+ pip install dataclasses
+ ```
+
+ - Step 4: Create a text file containing the transcript for a (long) audio file. Each line in the text file will correspond to a separate audio segment generated by the alignment.
+
+ Example content of the input text file:
+ ```
+ Text of the desired first segment
+ Text of the desired second segment
+ Text of the desired third segment
+ ```
+
+ - Step 5: Run forced alignment and segment the audio file into shorter segments:
+ ```
+ python align_and_segment.py --audio /path/to/audio.wav --textfile /path/to/textfile --lang <iso> --outdir /path/to/output --uroman /path/to/uroman/bin
+ ```
+
+ The above command will generate the audio segments under the output directory based on the content of each line in the input text file, along with a `manifest.json` file listing the segmented audio filepaths and their corresponding transcripts.
+
+ ```
+ > head /path/to/output/manifest.json
+
+ {"audio_start_sec": 0.0, "audio_filepath": "/path/to/output/segment1.flac", "duration": 6.8, "text": "she wondered afterwards how she could have spoken with that hard serenity how she could have", "normalized_text": "she wondered afterwards how she could have spoken with that hard serenity how she could have", "uroman_tokens": "s h e w o n d e r e d a f t e r w a r d s h o w s h e c o u l d h a v e s p o k e n w i t h t h a t h a r d s e r e n i t y h o w s h e c o u l d h a v e"}
+ {"audio_start_sec": 6.8, "audio_filepath": "/path/to/output/segment2.flac", "duration": 5.3, "text": "gone steadily on with story after story poem after poem till", "normalized_text": "gone steadily on with story after story poem after poem till", "uroman_tokens": "g o n e s t e a d i l y o n w i t h s t o r y a f t e r s t o r y p o e m a f t e r p o e m t i l l"}
+ {"audio_start_sec": 12.1, "audio_filepath": "/path/to/output/segment3.flac", "duration": 5.9, "text": "allan's grip on her hands relaxed and he fell into a heavy tired sleep", "normalized_text": "allan's grip on her hands relaxed and he fell into a heavy tired sleep", "uroman_tokens": "a l l a n ' s g r i p o n h e r h a n d s r e l a x e d a n d h e f e l l i n t o a h e a v y t i r e d s l e e p"}
+ ```
+
+ To visualize the segmented audio files, the [Speech Data Explorer](https://github.com/NVIDIA/NeMo/tree/main/tools/speech_data_explorer) tool from the NeMo toolkit can be used.
+
+ As our alignment model outputs uroman tokens for input audio in any language, it also works with non-English audio and their corresponding transcripts.
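
The manifest produced in Step 5 is plain JSON Lines, so downstream tooling can consume it with the standard library alone. A minimal sketch, using two illustrative records that mirror the fields shown above (the filepaths and texts here are placeholders, not real output):

```python
import json

# Two illustrative manifest records, mirroring the fields shown above.
manifest_lines = [
    '{"audio_start_sec": 0.0, "audio_filepath": "/path/to/output/segment1.flac", "duration": 6.8, "text": "she wondered afterwards"}',
    '{"audio_start_sec": 6.8, "audio_filepath": "/path/to/output/segment2.flac", "duration": 5.3, "text": "gone steadily on"}',
]

# Parse each line independently, as with any JSONL file.
segments = [json.loads(line) for line in manifest_lines]

# Total segmented speech duration in seconds.
total_duration = sum(seg["duration"] for seg in segments)
print(round(total_duration, 1))  # 12.1
```

The same loop works unchanged on the `*_manifest.jsonl` files written by `align.py` below, since each record is one JSON object per line.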
pipeline/utils/force_alignment/align.py ADDED
@@ -0,0 +1,189 @@
+ import argparse
+ import json
+ import os
+
+ import sox
+ import torch
+ import torchaudio
+ import torchaudio.functional as F
+
+ from align_utils import (get_spans, get_uroman_tokens, load_model_dict,
+                          merge_repeats, time_to_frame)
+ from text_normalization import text_normalize
+
+ SAMPLING_FREQ = 16000
+ EMISSION_INTERVAL = 30
+ DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+
+ def generate_emissions(model, audio_file):
+     waveform, _ = torchaudio.load(audio_file)  # waveform: channels X T
+     waveform = waveform.to(DEVICE)
+     total_duration = sox.file_info.duration(audio_file)
+
+     audio_sf = sox.file_info.sample_rate(audio_file)
+     assert audio_sf == SAMPLING_FREQ
+
+     emissions_arr = []
+     with torch.inference_mode():
+         i = 0
+         while i < total_duration:
+             segment_start_time, segment_end_time = (i, i + EMISSION_INTERVAL)
+
+             context = EMISSION_INTERVAL * 0.1
+             input_start_time = max(segment_start_time - context, 0)
+             input_end_time = min(segment_end_time + context, total_duration)
+             waveform_split = waveform[
+                 :,
+                 int(SAMPLING_FREQ * input_start_time) : int(
+                     SAMPLING_FREQ * (input_end_time)
+                 ),
+             ]
+
+             model_outs, _ = model(waveform_split)
+             emissions_ = model_outs[0]
+             emission_start_frame = time_to_frame(segment_start_time)
+             emission_end_frame = time_to_frame(segment_end_time)
+             offset = time_to_frame(input_start_time)
+
+             emissions_ = emissions_[
+                 emission_start_frame - offset : emission_end_frame - offset, :
+             ]
+             emissions_arr.append(emissions_)
+             i += EMISSION_INTERVAL
+
+     emissions = torch.cat(emissions_arr, dim=0).squeeze()
+     emissions = torch.log_softmax(emissions, dim=-1)
+
+     stride = float(waveform.size(1) * 1000 / emissions.size(0) / SAMPLING_FREQ)
+
+     return emissions, stride
+
+
+ def get_alignments(
+     audio_file,
+     tokens,
+     model,
+     dictionary,
+     use_star,
+ ):
+     # Generate emissions
+     emissions, stride = generate_emissions(model, audio_file)
+     T, N = emissions.size()
+     if use_star:
+         emissions = torch.cat([emissions, torch.zeros(T, 1).to(DEVICE)], dim=1)
+
+     # Force Alignment
+     if tokens:
+         token_indices = [
+             dictionary[c] for c in " ".join(tokens).split(" ") if c in dictionary
+         ]
+     else:
+         print(f"Empty transcript!!!!! for audio file {audio_file}")
+         token_indices = []
+
+     blank = dictionary["<blank>"]
+
+     targets = torch.tensor(token_indices, dtype=torch.int32).to(DEVICE)
+
+     input_lengths = torch.tensor(emissions.shape[0]).unsqueeze(-1)
+     target_lengths = torch.tensor(targets.shape[0]).unsqueeze(-1)
+     path, _ = F.forced_align(
+         emissions.unsqueeze(0),
+         targets.unsqueeze(0),
+         input_lengths,
+         target_lengths,
+         blank=blank,
+     )
+     path = path.squeeze().to("cpu").tolist()
+
+     segments = merge_repeats(path, {v: k for k, v in dictionary.items()})
+     return segments, stride
+
+
+ def main(args):
+     raw_transcripts = []
+     with open(args.text_filepath) as f:
+         raw_transcripts = [line.strip() for line in f]
+     print("Read {} lines from {}".format(len(raw_transcripts), args.text_filepath))
+
+     transcripts = []
+     norm_transcripts = []
+     for line in raw_transcripts:
+         transcript, norm_transcript = text_normalize(line.strip(), args.lang)
+         if len(norm_transcript) > 0:
+             transcripts.append(transcript)
+             norm_transcripts.append(norm_transcript)
+     tokens = get_uroman_tokens(norm_transcripts, args.uroman_path, args.lang)
+
+     model, dictionary = load_model_dict()
+     model = model.to(DEVICE)
+     if args.use_star:
+         dictionary["<star>"] = len(dictionary)
+         tokens = ["<star>"] + tokens
+         transcripts = ["<star>"] + transcripts
+         norm_transcripts = ["<star>"] + norm_transcripts
+
+     segments, stride = get_alignments(
+         args.audio_filepath,
+         tokens,
+         model,
+         dictionary,
+         args.use_star,
+     )
+     # Get spans of each line in input text file
+     spans = get_spans(tokens, segments, stride)
+
+     audio_id = os.path.basename(args.audio_filepath).split(".")[0]
+     os.makedirs(args.output_dir, exist_ok=True)
+     with open(f"{args.output_dir}/{audio_id}_manifest.jsonl", "w") as f:
+         for i, t in enumerate(norm_transcripts):
+             span = spans[i]
+             seg_start_idx = span[0].start
+             seg_end_idx = span[-1].end
+
+             audio_start_sec = seg_start_idx * stride / 1000
+             audio_end_sec = seg_end_idx * stride / 1000
+
+             sample = {
+                 "audio_filepath": args.audio_filepath,
+                 "audio_start_sec": audio_start_sec,
+                 "duration": audio_end_sec - audio_start_sec,
+                 "text": transcripts[i],
+             }
+             f.write(json.dumps(sample) + "\n")
+
+     return segments, stride
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser(description="Align and segment long audio files")
+     parser.add_argument(
+         "-a", "--audio-filepath", type=str, help="Path to input audio file"
+     )
+     parser.add_argument(
+         "-t", "--text-filepath", type=str, help="Path to input text file"
+     )
+     parser.add_argument(
+         "-l", "--lang", type=str, default="eng", help="ISO code of the language"
+     )
+     parser.add_argument(
+         "-u", "--uroman-path", type=str, default="eng", help="Location to uroman/bin"
+     )
+     parser.add_argument(
+         "-s",
+         "--use-star",
+         action="store_true",
+         help="Use star at the start of transcript",
+     )
+     parser.add_argument(
+         "-o",
+         "--output-dir",
+         type=str,
+         help="Output directory to store segmented audio files",
+     )
+     print("Using torch version:", torch.__version__)
+     print("Using torchaudio version:", torchaudio.__version__)
+     print("Using device: ", DEVICE)
+     args = parser.parse_args()
+     main(args)
pipeline/utils/force_alignment/align_utils.py ADDED
@@ -0,0 +1,194 @@
+ import math
+ import os
+ import re
+ import tempfile
+ from dataclasses import dataclass
+
+ import torch
+ from torchaudio.models import wav2vec2_model
+
+ # iso codes with specialized rules in uroman
+ special_isos_uroman = "ara, bel, bul, deu, ell, eng, fas, grc, ell, eng, heb, kaz, kir, lav, lit, mkd, mkd2, oss, pnt, pus, rus, srp, srp2, tur, uig, ukr, yid".split(
+     ","
+ )
+ special_isos_uroman = [i.strip() for i in special_isos_uroman]
+
+
+ def normalize_uroman(text):
+     text = text.lower()
+     text = re.sub("([^a-z' ])", " ", text)
+     text = re.sub(" +", " ", text)
+     return text.strip()
+
+
+ def get_uroman_tokens(norm_transcripts, uroman_root_dir, iso=None):
+     tf = tempfile.NamedTemporaryFile()
+     tf2 = tempfile.NamedTemporaryFile()
+     with open(tf.name, "w") as f:
+         for t in norm_transcripts:
+             f.write(t + "\n")
+
+     assert os.path.exists(f"{uroman_root_dir}/uroman.pl"), "uroman not found"
+     cmd = f"perl {uroman_root_dir}/uroman.pl"
+     if iso in special_isos_uroman:
+         cmd += f" -l {iso} "
+     cmd += f" < {tf.name} > {tf2.name}"
+     os.system(cmd)
+     outtexts = []
+     with open(tf2.name) as f:
+         for line in f:
+             line = " ".join(line.strip())
+             line = re.sub(r"\s+", " ", line).strip()
+             outtexts.append(line)
+     assert len(outtexts) == len(norm_transcripts)
+     uromans = []
+     for ot in outtexts:
+         uromans.append(normalize_uroman(ot))
+     return uromans
+
+
+ @dataclass
+ class Segment:
+     label: str
+     start: int
+     end: int
+
+     def __repr__(self):
+         return f"{self.label}: [{self.start:5d}, {self.end:5d})"
+
+     @property
+     def length(self):
+         return self.end - self.start
+
+
+ def merge_repeats(path, idx_to_token_map):
+     i1, i2 = 0, 0
+     segments = []
+     while i1 < len(path):
+         while i2 < len(path) and path[i1] == path[i2]:
+             i2 += 1
+         segments.append(Segment(idx_to_token_map[path[i1]], i1, i2 - 1))
+         i1 = i2
+     return segments
+
+
+ def time_to_frame(time):
+     stride_msec = 20
+     frames_per_sec = 1000 / stride_msec
+     return int(time * frames_per_sec)
+
+
+ def load_model_dict():
+     model_path_name = "/tmp/ctc_alignment_mling_uroman_model.pt"
+
+     print("Downloading model and dictionary...")
+     if os.path.exists(model_path_name):
+         print("Model path already exists. Skipping downloading....")
+     else:
+         torch.hub.download_url_to_file(
+             "https://dl.fbaipublicfiles.com/mms/torchaudio/ctc_alignment_mling_uroman/model.pt",
+             model_path_name,
+         )
+     assert os.path.exists(model_path_name)
+     state_dict = torch.load(model_path_name, map_location="cpu")
+
+     model = wav2vec2_model(
+         extractor_mode="layer_norm",
+         extractor_conv_layer_config=[
+             (512, 10, 5),
+             (512, 3, 2),
+             (512, 3, 2),
+             (512, 3, 2),
+             (512, 3, 2),
+             (512, 2, 2),
+             (512, 2, 2),
+         ],
+         extractor_conv_bias=True,
+         encoder_embed_dim=1024,
+         encoder_projection_dropout=0.0,
+         encoder_pos_conv_kernel=128,
+         encoder_pos_conv_groups=16,
+         encoder_num_layers=24,
+         encoder_num_heads=16,
+         encoder_attention_dropout=0.0,
+         encoder_ff_interm_features=4096,
+         encoder_ff_interm_dropout=0.1,
+         encoder_dropout=0.0,
+         encoder_layer_norm_first=True,
+         encoder_layer_drop=0.1,
+         aux_num_out=31,
+     )
+     model.load_state_dict(state_dict)
+     model.eval()
+
+     dict_path_name = "/tmp/ctc_alignment_mling_uroman_model.dict"
+     if os.path.exists(dict_path_name):
+         print("Dictionary path already exists. Skipping downloading....")
+     else:
+         torch.hub.download_url_to_file(
+             "https://dl.fbaipublicfiles.com/mms/torchaudio/ctc_alignment_mling_uroman/dictionary.txt",
+             dict_path_name,
+         )
+     assert os.path.exists(dict_path_name)
+     dictionary = {}
+     with open(dict_path_name) as f:
+         dictionary = {l.strip(): i for i, l in enumerate(f.readlines())}
+
+     return model, dictionary
+
+
+ def get_spans(tokens, segments, stride):
+     ltr_idx = 0
+     tokens_idx = 0
+     intervals = []
+     start, end = (0, 0)
+     sil = "<blank>"
+     for seg_idx, seg in enumerate(segments):
+         if tokens_idx == len(tokens):
+             assert seg_idx == len(segments) - 1
+             assert seg.label == "<blank>"
+             continue
+         cur_token = tokens[tokens_idx].split(" ")
+         ltr = cur_token[ltr_idx]
+         if seg.label == "<blank>":
+             continue
+         assert seg.label == ltr
+         if (ltr_idx) == 0:
+             start = seg_idx
+         if ltr_idx == len(cur_token) - 1:
+             ltr_idx = 0
+             tokens_idx += 1
+             intervals.append((start, seg_idx))
+             while tokens_idx < len(tokens) and len(tokens[tokens_idx]) == 0:
+                 intervals.append((seg_idx, seg_idx))
+                 tokens_idx += 1
+         else:
+             ltr_idx += 1
+     spans = []
+     for idx, (start, end) in enumerate(intervals):
+         span = segments[start : end + 1]
+         # don't need the segments to be connected
+         if start > 0:
+             prev_seg = segments[start - 1]
+             if prev_seg.label == sil:
+                 pad_start = (
+                     prev_seg.start
+                     if (idx == 0)
+                     else int((prev_seg.start + prev_seg.end) / 2)
+                 )
+                 pad_start = max(
+                     pad_start, math.floor(prev_seg.end - 0.1 * 1000 / stride)
+                 )
+                 span = [Segment(sil, pad_start, span[0].start)] + span
+         if end + 1 < len(segments):
+             next_seg = segments[end + 1]
+             if next_seg.label == sil:
+                 pad_end = (
+                     next_seg.end
+                     if (idx == len(intervals) - 1)
+                     else math.floor((next_seg.start + next_seg.end) / 2)
+                 )
+                 pad_end = min(pad_end, math.floor(next_seg.start + 0.5 * 1000 / stride))
+                 span = span + [Segment(sil, span[-1].end, pad_end)]
+         spans.append(span)
+     return spans
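
The `merge_repeats` helper above collapses a framewise CTC path into per-token segments. A self-contained toy run (the function body is duplicated here so the sketch executes standalone; the token map and path are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    label: str
    start: int
    end: int

def merge_repeats(path, idx_to_token_map):
    # Collapse consecutive repeats of the same index into one segment,
    # mirroring align_utils.merge_repeats.
    i1, i2 = 0, 0
    segments = []
    while i1 < len(path):
        while i2 < len(path) and path[i1] == path[i2]:
            i2 += 1
        segments.append(Segment(idx_to_token_map[path[i1]], i1, i2 - 1))
        i1 = i2
    return segments

# Toy framewise path: blank(0), token 1 held for 3 frames, blank, token 2.
idx_to_token = {0: "<blank>", 1: "a", 2: "b"}
path = [0, 1, 1, 1, 0, 2]
segs = merge_repeats(path, idx_to_token)
print([(s.label, s.start, s.end) for s in segs])
# [('<blank>', 0, 0), ('a', 1, 3), ('<blank>', 4, 4), ('b', 5, 5)]
```

`get_spans` then groups these per-character segments back into per-line spans, padding each span into the neighbouring `<blank>` regions.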
pipeline/utils/force_alignment/norm_config.py ADDED
@@ -0,0 +1,277 @@
+ import os
+ import re
+
+ colon = ":"
+ comma = ","
+ exclamation_mark = "!"
+ period = re.escape(".")
+ question_mark = re.escape("?")
+ semicolon = ";"
+
+ dash = re.escape("-")
+ underscore = "_"
+
+ left_curly_bracket = "{"
+ right_curly_bracket = "}"
+ quotation_mark = '"'
+ single_quotation_mark = "'"
+
+ basic_punc = (
+     period
+     + question_mark
+     + comma
+     + colon
+     + exclamation_mark
+     + left_curly_bracket
+     + right_curly_bracket
+     + dash
+     + underscore
+     + single_quotation_mark
+ )
+
+ # General punc unicode block (0x2000-0x206F)
+ zero_width_space = r"\u200B"
+ zero_width_nonjoiner = r"\u200C"
+ left_to_right_mark = r"\u200E"
+ right_to_left_mark = r"\u200F"
+ left_to_right_embedding = r"\u202A"
+ pop_directional_formatting = r"\u202C"
+
+ # Here are some commonly ill-typed versions of apostrophe
+ right_single_quotation_mark = r"\u2019"
+ left_single_quotation_mark = r"\u2018"
+
+ # Language specific definitions
+ # Spanish
+ inverted_exclamation_mark = r"\u00A1"
+ inverted_question_mark = r"\u00BF"
+
+
+ # Hindi
+ hindi_danda = "\u0964"
+
+ # Egyptian Arabic
+ # arabic_percent = r"\u066A"
+ arabic_comma = r"\u060C"
+ arabic_question_mark = r"\u061F"
+ arabic_semicolon = r"\u061B"
+ arabic_diacritics = r"\u064B-\u0652"
+
+
+ arabic_subscript_alef_and_inverted_damma = r"\u0656-\u0657"
+
+
+ # Chinese
+ full_stop = r"\u3002"
+ full_comma = r"\uFF0C"
+ full_exclamation_mark = r"\uFF01"
+ full_question_mark = r"\uFF1F"
+ full_semicolon = r"\uFF1B"
+ full_colon = r"\uFF1A"
+ full_parentheses = r"\uFF08\uFF09"
+ quotation_mark_horizontal = r"\u300C-\u300F"
+ quotation_mark_vertical = r"\uFF41-\uFF44"
+ title_marks = r"\u3008-\u300B"
+ wavy_low_line = r"\uFE4F"
+ ellipsis = r"\u22EF"
+ enumeration_comma = r"\u3001"
+ hyphenation_point = r"\u2027"
+ forward_slash = r"\uFF0F"
+ wavy_dash = r"\uFF5E"
+ box_drawings_light_horizontal = r"\u2500"
+ fullwidth_low_line = r"\uFF3F"
+ chinese_punc = (
+     full_stop
+     + full_comma
+     + full_exclamation_mark
+     + full_question_mark
+     + full_semicolon
+     + full_colon
+     + full_parentheses
+     + quotation_mark_horizontal
+     + quotation_mark_vertical
+     + title_marks
+     + wavy_low_line
+     + ellipsis
+     + enumeration_comma
+     + hyphenation_point
+     + forward_slash
+     + wavy_dash
+     + box_drawings_light_horizontal
+     + fullwidth_low_line
+ )
+
+ # Armenian
+ armenian_apostrophe = r"\u055A"
+ emphasis_mark = r"\u055B"
+ exclamation_mark = r"\u055C"
+ armenian_comma = r"\u055D"
+ armenian_question_mark = r"\u055E"
+ abbreviation_mark = r"\u055F"
+ armenian_full_stop = r"\u0589"
+ armenian_punc = (
+     armenian_apostrophe
+     + emphasis_mark
+     + exclamation_mark
+     + armenian_comma
+     + armenian_question_mark
+     + abbreviation_mark
+     + armenian_full_stop
+ )
+
+ lesser_than_symbol = r"&lt;"
+ greater_than_symbol = r"&gt;"
+
+ lesser_than_sign = r"\u003c"
+ greater_than_sign = r"\u003e"
+
+ nbsp_written_form = r"&nbsp"
+
+ # Quotation marks
+ left_double_quotes = r"\u201c"
+ right_double_quotes = r"\u201d"
+ left_double_angle = r"\u00ab"
+ right_double_angle = r"\u00bb"
+ left_single_angle = r"\u2039"
+ right_single_angle = r"\u203a"
+ low_double_quotes = r"\u201e"
+ low_single_quotes = r"\u201a"
+ high_double_quotes = r"\u201f"
+ high_single_quotes = r"\u201b"
+
+ all_punct_quotes = (
+     left_double_quotes
+     + right_double_quotes
+     + left_double_angle
+     + right_double_angle
+     + left_single_angle
+     + right_single_angle
+     + low_double_quotes
+     + low_single_quotes
+     + high_double_quotes
+     + high_single_quotes
+     + right_single_quotation_mark
+     + left_single_quotation_mark
+ )
+ mapping_quotes = (
+     "["
+     + high_single_quotes
+     + right_single_quotation_mark
+     + left_single_quotation_mark
+     + "]"
+ )
+
+
+ # Digits
+ english_digits = r"\u0030-\u0039"
+ bengali_digits = r"\u09e6-\u09ef"
+ khmer_digits = r"\u17e0-\u17e9"
+ devanagari_digits = r"\u0966-\u096f"
+ oriya_digits = r"\u0b66-\u0b6f"
+ extended_arabic_indic_digits = r"\u06f0-\u06f9"
+ kayah_li_digits = r"\ua900-\ua909"
+ fullwidth_digits = r"\uff10-\uff19"
+ malayam_digits = r"\u0d66-\u0d6f"
+ myanmar_digits = r"\u1040-\u1049"
+ roman_numeral = r"\u2170-\u2179"
+ nominal_digit_shapes = r"\u206f"
+
+ # Load punctuations from MMS-lab data
+ with open(f"{os.path.dirname(__file__)}/punctuations.lst", "r") as punc_f:
+     punc_list = punc_f.readlines()
+
+ punct_pattern = r""
+ for punc in punc_list:
+     # the first character in the tab separated line is the punc to be removed
+     punct_pattern += re.escape(punc.split("\t")[0])
+
+ shared_digits = (
+     english_digits
+     + bengali_digits
+     + khmer_digits
+     + devanagari_digits
+     + oriya_digits
+     + extended_arabic_indic_digits
+     + kayah_li_digits
+     + fullwidth_digits
+     + malayam_digits
+     + myanmar_digits
+     + roman_numeral
+     + nominal_digit_shapes
+ )
+
+ shared_punc_list = (
+     basic_punc
+     + all_punct_quotes
+     + greater_than_sign
+     + lesser_than_sign
+     + inverted_question_mark
+     + full_stop
+     + semicolon
+     + armenian_punc
+     + inverted_exclamation_mark
+     + arabic_comma
+     + enumeration_comma
+     + hindi_danda
+     + quotation_mark
+     + arabic_semicolon
+     + arabic_question_mark
+     + chinese_punc
+     + punct_pattern
+ )
+
+ shared_mapping = {
+     lesser_than_symbol: "",
+     greater_than_symbol: "",
+     nbsp_written_form: "",
+     r"(\S+)" + mapping_quotes + r"(\S+)": r"\1'\2",
+ }
+
+ shared_deletion_list = (
+     left_to_right_mark
+     + zero_width_nonjoiner
+     + arabic_subscript_alef_and_inverted_damma
+     + zero_width_space
+     + arabic_diacritics
+     + pop_directional_formatting
+     + right_to_left_mark
+     + left_to_right_embedding
+ )
+
+ norm_config = {
+     "*": {
+         "lower_case": True,
+         "punc_set": shared_punc_list,
+         "del_set": shared_deletion_list,
+         "mapping": shared_mapping,
+         "digit_set": shared_digits,
+         "unicode_norm": "NFKC",
+         "rm_diacritics": False,
+     }
+ }
+
+ # =============== Mongolian ===============#
+ norm_config["mon"] = norm_config["*"].copy()
+ # add soft hyphen to punc list to match with fleurs
+ norm_config["mon"]["del_set"] += r"\u00AD"
+
+ norm_config["khk"] = norm_config["mon"].copy()
+
+ # =============== Hebrew ===============#
+ norm_config["heb"] = norm_config["*"].copy()
+ # add "HEBREW POINT" symbols to match with fleurs
+ norm_config["heb"]["del_set"] += r"\u05B0-\u05BF\u05C0-\u05CF"
+
+ # =============== Thai ===============#
+ norm_config["tha"] = norm_config["*"].copy()
+ # add "Zero width joiner" symbols to match with fleurs
+ norm_config["tha"]["punc_set"] += r"\u200D"
+
+ # =============== Arabic ===============#
+ norm_config["ara"] = norm_config["*"].copy()
+ norm_config["ara"]["mapping"]["ٱ"] = "ا"
+ norm_config["arb"] = norm_config["ara"].copy()
+
+ # =============== Javanese ===============#
+ norm_config["jav"] = norm_config["*"].copy()
+ norm_config["jav"]["rm_diacritics"] = True
pipeline/utils/force_alignment/punctuations.lst ADDED
@@ -0,0 +1,188 @@
+  7355 INVALID UNICODE 0x81
+  5265 INVALID UNICODE 0x90
+  75 INVALID UNICODE 0x8
+  31 INVALID UNICODE 0x8d
+ ” 3 INVALID UNICODE 0x94
+  2 INVALID UNICODE 0x8f
+  2 INVALID UNICODE 0x1a
+  1 INVALID UNICODE 0x9d
+ “ 1 INVALID UNICODE 0x93
+ ’ 1 INVALID UNICODE 0x92
+  8647 INVALID UNICODE 0xe295
+  6650 INVALID UNICODE 0xf21d
+  6234 INVALID UNICODE 0xf62d
+  4815 INVALID UNICODE 0xf173
+  4789 INVALID UNICODE 0xe514
+  4409 INVALID UNICODE 0xe293
+  3881 INVALID UNICODE 0xf523
+  3788 INVALID UNICODE 0xe233
+  2448 INVALID UNICODE 0xf50f
+  2177 INVALID UNICODE 0xe232
+  1955 INVALID UNICODE 0xea7b
+  1926 INVALID UNICODE 0xf172
+  973 INVALID UNICODE 0xe290
+  972 INVALID UNICODE 0xf519
+  661 INVALID UNICODE 0xe292
+  591 INVALID UNICODE 0xe328
+  509 INVALID UNICODE 0xe2fa
+  458 INVALID UNICODE 0xe234
+  446 INVALID UNICODE 0xe043
+  419 INVALID UNICODE 0xe040
+  399 INVALID UNICODE 0xe2fb
+  387 INVALID UNICODE 0xe32b
+  381 INVALID UNICODE 0xe236
+  374 INVALID UNICODE 0xf511
+  314 INVALID UNICODE 0xe517
+  296 INVALID UNICODE 0xe2fe
+  293 INVALID UNICODE 0xe492
+  291 INVALID UNICODE 0xf52d
+  289 INVALID UNICODE 0xe2fc
+  195 INVALID UNICODE 0xf521
+  190 INVALID UNICODE 0xe516
+  182 INVALID UNICODE 0xe041
+  178 INVALID UNICODE 0xf529
+  113 INVALID UNICODE 0xe2f9
+  87 INVALID UNICODE 0xe2d9
+  78 INVALID UNICODE 0xe32a
+  76 INVALID UNICODE 0xe291
+  74 INVALID UNICODE 0xe296
+  66 INVALID UNICODE 0xe518
+  52 INVALID UNICODE 0xe32c
+  46 INVALID UNICODE 0xe2db
+  41 INVALID UNICODE 0xe231
+  34 INVALID UNICODE 0xf522
+  33 INVALID UNICODE 0xf518
+  32 INVALID UNICODE 0xf513
+  27 INVALID UNICODE 0xe32d
+  25 INVALID UNICODE 0xe32e
+  23 INVALID UNICODE 0xe06b
+  15 INVALID UNICODE 0xea01
+  12 INVALID UNICODE 0xe294
+  11 INVALID UNICODE 0xe203
+  8 INVALID UNICODE 0xf218
+  7 INVALID UNICODE 0xe070
+  7 INVALID UNICODE 0xe013
+  5 INVALID UNICODE 0xe2de
+  4 INVALID UNICODE 0xe493
+  3 INVALID UNICODE 0xf7e8
+  3 INVALID UNICODE 0xf7d0
+  3 INVALID UNICODE 0xe313
+  2 INVALID UNICODE 0xe329
+  2 INVALID UNICODE 0xe06d
+  2 INVALID UNICODE 0xe003
+  1 INVALID UNICODE 0xf50e
+  1 INVALID UNICODE 0xf171
+  1 INVALID UNICODE 0xe01d
+  71 NOMINAL DIGIT SHAPES 0x206f
+ ⁠ 3 WORD JOINER 0x2060
+ ― 126545 HORIZONTAL BAR 0x2015
+ ־ 1028 HEBREW PUNCTUATION MAQAF 0x5be
+ ) 98429 RIGHT PARENTHESIS 0x29
+ ] 27108 RIGHT SQUARE BRACKET 0x5d
+ ⌋ 1567 RIGHT FLOOR 0x230b
+ 〕 97 RIGHT TORTOISE SHELL BRACKET 0x3015
+ 】 36 RIGHT BLACK LENTICULAR BRACKET 0x3011
+ ﴾ 14 ORNATE LEFT PARENTHESIS 0xfd3e
+ & 170517 AMPERSAND 0x26
+ ། 106330 TIBETAN MARK SHAD 0xf0d
+ ። 90203 ETHIOPIC FULL STOP 0x1362
+ ፥ 60484 ETHIOPIC COLON 0x1365
+ ༌ 60464 TIBETAN MARK DELIMITER TSHEG BSTAR 0xf0c
+ ။ 51567 MYANMAR SIGN SECTION 0x104b
+ / 46929 SOLIDUS 0x2f
+ ၊ 38042 MYANMAR SIGN LITTLE SECTION 0x104a
+ · 37985 MIDDLE DOT 0xb7
+ ‸ 36310 CARET 0x2038
+ * 34793 ASTERISK 0x2a
+ ۔ 32432 ARABIC FULL STOP 0x6d4
+ ፤ 31906 ETHIOPIC SEMICOLON 0x1364
+ ၏ 21519 MYANMAR SYMBOL GENITIVE 0x104f
+ ។ 20834 KHMER SIGN KHAN 0x17d4
+ ꓾ 15773 LISU PUNCTUATION COMMA 0xa4fe
+ ᙮ 13473 CANADIAN SYLLABICS FULL STOP 0x166e
+ ꤯ 12892 KAYAH LI SIGN SHYA 0xa92f
+ ⵰ 11478 TIFINAGH SEPARATOR MARK 0x2d70
+ ꓿ 11118 LISU PUNCTUATION FULL STOP 0xa4ff
+ ॥ 10763 DEVANAGARI DOUBLE DANDA 0x965
+ ؞ 10403 ARABIC TRIPLE DOT PUNCTUATION MARK 0x61e
+ ၍ 8936 MYANMAR SYMBOL COMPLETED 0x104d
+ · 8431 GREEK ANO TELEIA 0x387
+ † 7477 DAGGER 0x2020
+ ၌ 6632 MYANMAR SYMBOL LOCATIVE 0x104c
+ ፣ 5719 ETHIOPIC COMMA 0x1363
+ ៖ 5528 KHMER SIGN CAMNUC PII KUUH 0x17d6
+ ꤮ 4791 KAYAH LI SIGN CWI 0xa92e
+ ※ 3439 REFERENCE MARK 0x203b
+ ፦ 2727 ETHIOPIC PREFACE COLON 0x1366
+ • 1749 BULLET 0x2022
+ ¶ 1507 PILCROW SIGN 0xb6
+ ၎ 1386 MYANMAR SYMBOL AFOREMENTIONED 0x104e
+ ﹖ 1224 SMALL QUESTION MARK 0xfe56
+ ; 975 GREEK QUESTION MARK 0x37e
+ … 827 HORIZONTAL ELLIPSIS 0x2026
+ % 617 PERCENT SIGN 0x25
+ ・ 468 KATAKANA MIDDLE DOT 0x30fb
+ ༎ 306 TIBETAN MARK NYIS SHAD 0xf0e
+ ‡ 140 DOUBLE DAGGER 0x2021
+ # 137 NUMBER SIGN 0x23
+ @ 125 COMMERCIAL AT 0x40
+ ፡ 121 ETHIOPIC WORDSPACE 0x1361
+ ៚ 55 KHMER SIGN KOOMUUT 0x17da
+ ៕ 49 KHMER SIGN BARIYOOSAN 0x17d5
+ ﹐ 10 SMALL COMMA 0xfe50
+ ༅ 6 TIBETAN MARK CLOSING YIG MGO SGAB MA 0xf05
+ ༄ 6 TIBETAN MARK INITIAL YIG MGO MDUN MA 0xf04
+ . 2 FULLWIDTH FULL STOP 0xff0e
+ ﹗ 2 SMALL EXCLAMATION MARK 0xfe57
+ ﹕ 2 SMALL COLON 0xfe55
+ ‰ 2 PER MILLE SIGN 0x2030
+ ・ 1 HALFWIDTH KATAKANA MIDDLE DOT 0xff65
+ ( 98504 LEFT PARENTHESIS 0x28
+ [ 27245 LEFT SQUARE BRACKET 0x5b
+ ⌊ 1567 LEFT FLOOR 0x230a
+ 〔 95 LEFT TORTOISE SHELL BRACKET 0x3014
+ 【 36 LEFT BLACK LENTICULAR BRACKET 0x3010
+ ﴿ 14 ORNATE RIGHT PARENTHESIS 0xfd3f
+ _ 4851 LOW LINE 0x5f
+ $ 72 DOLLAR SIGN 0x24
+ € 14 EURO SIGN 0x20ac
+ £ 2 POUND SIGN 0xa3
+ ~ 27462 TILDE 0x7e
+ = 11450 EQUALS SIGN 0x3d
+ | 8430 VERTICAL LINE 0x7c
+ − 3971 MINUS SIGN 0x2212
+ ≫ 1904 MUCH GREATER-THAN 0x226b
+ ≪ 1903 MUCH LESS-THAN 0x226a
+ + 1450 PLUS SIGN 0x2b
+ < 345 FULLWIDTH LESS-THAN SIGN 0xff1c
+ > 344 FULLWIDTH GREATER-THAN SIGN 0xff1e
+ ¬ 5 NOT SIGN 0xac
+ × 4 MULTIPLICATION SIGN 0xd7
+ → 2 RIGHTWARDS ARROW 0x2192
+ ᙭ 537 CANADIAN SYLLABICS CHI SIGN 0x166d
+ ° 499 DEGREE SIGN 0xb0
+ ႟ 421 MYANMAR SYMBOL SHAN EXCLAMATION 0x109f
+ � 192 REPLACEMENT CHARACTER 0xfffd
+ ⌟ 54 BOTTOM RIGHT CORNER 0x231f
+ ⌞ 54 BOTTOM LEFT CORNER 0x231e
+ © 2 COPYRIGHT SIGN 0xa9
+   40 NARROW NO-BREAK SPACE 0x202f
+   1 SIX-PER-EM SPACE 0x2006
+ ˜ 40261 SMALL TILDE 0x2dc
+ ^ 6469 CIRCUMFLEX ACCENT 0x5e
+ ¯ 20 MACRON 0xaf
+ ˇ 191442 CARON 0x2c7
+ ⁿ 38144 SUPERSCRIPT LATIN SMALL LETTER N 0x207f
+ ـ 9440 ARABIC TATWEEL 0x640
+ ๆ 6766 THAI CHARACTER MAIYAMOK 0xe46
+ ៗ 3310 KHMER SIGN LEK TOO 0x17d7
+ 々 678 IDEOGRAPHIC ITERATION MARK 0x3005
+ ໆ 430 LAO KO LA 0xec6
+ ー 319 KATAKANA-HIRAGANA PROLONGED SOUND MARK 0x30fc
+ ⁱ 137 SUPERSCRIPT LATIN SMALL LETTER I 0x2071
+ ৷ 11056 BENGALI CURRENCY NUMERATOR FOUR 0x9f7
+ ⅓ 26 VULGAR FRACTION ONE THIRD 0x2153
+ ½ 26 VULGAR FRACTION ONE HALF 0xbd
+ ¼ 4 VULGAR FRACTION ONE QUARTER 0xbc
+ ⅟ 1 FRACTION NUMERATOR ONE 0x215f
+ ⁄ 57 FRACTION SLASH 0x2044
pipeline/utils/force_alignment/text_normalization.py ADDED
@@ -0,0 +1,92 @@
1
+ import json
2
+ import re
3
+ import unicodedata
4
+
5
+ from norm_config import norm_config
6
+
7
+
8
+ def text_normalize(
9
+ text, iso_code, lower_case=True, remove_numbers=True, remove_brackets=False
10
+ ):
11
+ """Given a text, normalize it by changing to lower case, removing punctuations, removing words that only contain digits and removing extra spaces
12
+
13
+ Args:
14
+ text: The string to be normalized
15
+ remove_numbers: Boolean flag to specify if words containing only digits should be removed
16
+
17
+ Returns:
18
+ normalized_text: the string after all normalization
19
+
20
+ """
21
+
22
+ config = norm_config.get(iso_code, norm_config["*"])
23
+
24
+ for field in [
25
+ "lower_case",
26
+ "punc_set",
27
+ "del_set",
28
+ "mapping",
29
+ "digit_set",
30
+ "unicode_norm",
31
+ ]:
32
+ if field not in config:
33
+ config[field] = norm_config["*"][field]
34
+
35
+ text = unicodedata.normalize(config["unicode_norm"], text)
36
+
37
+ # Convert to lower case
38
+ if config["lower_case"] and lower_case:
39
+ text = text.lower()
40
+
41
+ # brackets
42
+ # Always remove text inside brackets that contain digits; these are usually verse citations such as "(Sam 23:17)"
43
+ text = re.sub(r"\([^\)]*\d[^\)]*\)", " ", text)
44
+ if remove_brackets:
45
+ text = re.sub(r"\([^\)]*\)", " ", text)
46
+
47
+ # Apply mappings
48
+ for old, new in config["mapping"].items():
49
+ text = re.sub(old, new, text)
50
+
51
+ # Replace punctuation with space
52
+ punct_pattern = r"[" + config["punc_set"]
53
+ punct_pattern += "]"
54
+ text = re.sub(punct_pattern, " ", text)
55
+
56
+ # Remove characters in the delete set
57
+ delete_pattern = r"[" + config["del_set"] + "]"
58
+ text = re.sub(delete_pattern, "", text)
59
+
60
+ # Remove words containing only digits
61
+ # We check for 3 cases: (a) the text starts with a number, (b) a number appears mid-text, (c) the text ends with a number
62
+ # For each case we use a lookaround regex to verify that the digit run is preceded and followed by whitespace; only then do we replace it with a space
63
+ # The lookaround enables overlapping pattern matches to be replaced
64
+
65
+ normalized_text = text
+ if remove_numbers:
66
+ digits_pattern = "[" + config["digit_set"]
67
+ digits_pattern += "]+"
68
+ complete_digit_pattern = (
69
+ r"^"
70
+ + digits_pattern
71
+ r"(?=\s)|(?<=\s)"
72
+ + digits_pattern
73
+ r"(?=\s)|(?<=\s)"
74
+ + digits_pattern
75
+ + "$"
76
+ )
77
+ normalized_text = re.sub(complete_digit_pattern, " ", text)
78
+
79
+ if config.get("rm_diacritics", False):
80
+ from unidecode import unidecode
81
+
82
+ normalized_text = unidecode(normalized_text)
83
+
84
+ # Remove extra spaces
85
+ text = re.sub(r"\s+", " ", text).strip()
86
+ normalized_text = re.sub(r"\s+", " ", normalized_text).strip()
87
+
88
+ # Collapse runs of five or more identical characters into a single character
89
+ repeat_pattern = r"(.)\1{4,}"
90
+ normalized_text = re.sub(repeat_pattern, r"\1", normalized_text)
91
+
92
+ return text, normalized_text
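The digit-word removal step above can be sketched standalone — a minimal re-implementation assuming an ASCII digit set, whereas the real function builds `digit_set` per language from `norm_config`:

```python
import re

def remove_digit_words(text, digit_set="0123456789"):
    # Drop whitespace-delimited tokens made up entirely of digits,
    # using the same lookaround pattern as text_normalize above.
    digits_pattern = "[" + digit_set + "]+"
    complete_digit_pattern = (
        r"^" + digits_pattern + r"(?=\s)"
        r"|(?<=\s)" + digits_pattern + r"(?=\s)"
        r"|(?<=\s)" + digits_pattern + r"$"
    )
    text = re.sub(complete_digit_pattern, " ", text)
    # Collapse the spaces left behind.
    return re.sub(r"\s+", " ", text).strip()
```

The lookarounds keep digits embedded inside words (e.g. "ps23alm") intact; only standalone digit tokens are dropped.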
pipeline/utils/textgrid2jsonl.py ADDED
@@ -0,0 +1,22 @@
1
+ import json
2
+ import os
3
+ import textgrid
4
+ from lhotse.utils import add_durations
5
+
6
+
7
+ # Convert each Praat TextGrid in ./textgrid into a NeMo-style JSONL manifest under ./mms
+ for i in os.listdir("textgrid"):
8
+ tg_path = os.path.join("textgrid", i)
9
+ tg = textgrid.TextGrid()
10
+ tg.read(tg_path)
11
+
12
+ manifest_path = i.replace(".TextGrid", "_manifest.jsonl")
13
+ with open("mms/" + manifest_path, "w") as f:
14
+ for interval in tg.tiers[0]:
15
+ if len(interval.mark) == 0:
16
+ continue
17
+ line = {}
18
+ line["audio_filepath"] = os.path.join("/data/shared/Thai_test_merge/wav", i.replace(".TextGrid", ".wav"))
19
+ line["audio_start_sec"] = interval.minTime
20
+ line["duration"] = add_durations(interval.maxTime, -interval.minTime, sampling_rate=16000)
21
+ line["text"] = interval.mark
22
+ f.write(json.dumps(line) + "\n")
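Each line the script emits is a self-contained JSON object; a hypothetical example of one manifest entry, with path and timing values invented for illustration:

```python
import json

# One manifest line in the JSONL format produced by the script above;
# all field values here are made up for illustration.
line = {
    "audio_filepath": "/data/shared/Thai_test_merge/wav/example.wav",
    "audio_start_sec": 1.25,
    "duration": 3.5,
    "text": "example segment text",
}
serialized = json.dumps(line)
```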
pipeline/utils/uroman/.gitignore ADDED
@@ -0,0 +1,35 @@
1
+ !Build/
2
+ .last_cover_stats
3
+ /META.yml
4
+ /META.json
5
+ /MYMETA.*
6
+ *.o
7
+ *.pm.tdy
8
+ *.bs
9
+
10
+ # Devel::Cover
11
+ cover_db/
12
+
13
+ # Devel::NYTProf
14
+ nytprof.out
15
+
16
+ # Dist::Zilla
17
+ /.build/
18
+
19
+ # Module::Build
20
+ _build/
21
+ Build
22
+ Build.bat
23
+
24
+ # Module::Install
25
+ inc/
26
+
27
+ # ExtUtils::MakeMaker
28
+ /blib/
29
+ /_eumm/
30
+ /*.gz
31
+ /Makefile
32
+ /Makefile.old
33
+ /MANIFEST.bak
34
+ /pm_to_blib
35
+ /*.zip
pipeline/utils/uroman/LICENSE.txt ADDED
@@ -0,0 +1,11 @@
1
+ Copyright (C) 2015-2020 Ulf Hermjakob, USC Information Sciences Institute
2
+
3
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
4
+
5
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
6
+
7
+ Any publication of projects using uroman shall acknowledge its use: "This project uses the universal romanizer software 'uroman' written by Ulf Hermjakob, USC Information Sciences Institute (2015-2020)".
8
+ Bibliography: Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018. Out-of-the-box universal romanization tool uroman. In Proceedings of the 56th Annual Meeting of Association for Computational Linguistics, Demo Track.
9
+
10
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
11
+
pipeline/utils/uroman/README.md ADDED
@@ -0,0 +1,165 @@
1
+ # uroman
2
+
3
+ *uroman* is a *universal romanizer*. It converts text in any script to the Latin alphabet.
4
+
5
+ Version: 1.2.8
6
+ Release date: April 23, 2021
7
+ Author: Ulf Hermjakob, USC Information Sciences Institute
8
+
9
+
10
+ ### Usage
11
+ ```bash
12
+ $ uroman.pl [-l <lang-code>] [--chart] [--no-cache] < STDIN
13
+ where the optional <lang-code> is a 3-letter language code, e.g. ara, bel, bul, deu, ell, eng, fas,
14
+ grc, ell, eng, heb, kaz, kir, lav, lit, mkd, mkd2, oss, pnt, pus, rus, srp, srp2, tur, uig, ukr, yid.
15
+ --chart specifies chart output (in JSON format) to represent alternative romanizations.
16
+ --no-cache disables caching.
17
+ ```
18
+ ### Examples
19
+ ```bash
20
+ $ bin/uroman.pl < text/zho.txt
21
+ $ bin/uroman.pl -l tur < text/tur.txt
22
+ $ bin/uroman.pl -l heb --chart < text/heb.txt
23
+ $ bin/uroman.pl < test/multi-script.txt > test/multi-script.uroman.txt
24
+ ```
25
+
26
+ Identifying the input as Arabic, Belarusian, Bulgarian, English, Farsi, German,
27
+ Ancient Greek, Modern Greek, Pontic Greek, Hebrew, Kazakh, Kyrgyz, Latvian,
28
+ Lithuanian, North Macedonian, Russian, Serbian, Turkish, Ukrainian, Uyghur or
29
+ Yiddish will improve romanization for those languages as some letters in those
30
+ languages have different sound values from other languages using the same script
31
+ (French, Russian, Hebrew respectively).
32
+ No effect for other languages in this version.
33
+
34
+ ### Bibliography
35
+ Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018. Out-of-the-box universal romanization tool uroman. In Proceedings of the 56th Annual Meeting of Association for Computational Linguistics, Demo Track. ACL-2018 Best Demo Paper Award. [Paper in ACL Anthology](https://www.aclweb.org/anthology/P18-4003) | [Poster](https://www.isi.edu/~ulf/papers/poster-uroman-acl2018.pdf) | [BibTex](https://www.aclweb.org/anthology/P18-4003.bib)
36
+
37
+ ### Change History
38
+ Changes in version 1.2.8
39
+ * Updated to Unicode 13.0 (2021), which supports several new scripts (10% larger UnicodeData.txt).
40
+ * Improved support for Georgian.
41
+ * Preserve various symbols (as opposed to mapping to the symbols' names).
42
+ * Various small improvements.
43
+
44
+ Changes in version 1.2.7
45
+ * Improved support for Pashto.
46
+
47
+ Changes in version 1.2.6
48
+ * Improved support for Ukrainian, Russian and Ogham (ancient Irish script).
49
+ * Added support for English Braille.
50
+ * Added alternative Romanization for North Macedonian and Serbian (mkd2/srp2)
51
+ reflecting a casual style that many native speakers of those languages use
52
+ when writing text in Latin script, e.g. non-accented single letters (e.g. "s")
53
+ rather than phonetically motivated combinations of letters (e.g. "sh").
54
+ * When a line starts with "::lcode xyz ", the new uroman version will switch to
55
+ that language for that line. This is used for the new reference test file.
56
+ * Various small improvements.
57
+
58
+ Changes in version 1.2.5
59
+ * Improved support for Armenian and eight languages using Cyrillic scripts.
60
+ -- For Serbian and Macedonian, which are often written in both Cyrillic
61
+ and Latin scripts, uroman will map both official versions to the same
62
+ romanized text, e.g. both "Ниш" and "Niš" will be mapped to "Nish" (which
63
+ properly reflects the pronunciation of the city's name).
64
+ For both Serbian and Macedonian, casual writers often use a simplified
65
+ Latin form without diacritics, e.g. "s" to represent not only Cyrillic "с"
66
+ and Latin "s", but also "ш" or "š", even if this conflates "s" and "sh" and
67
+ other such pairs. The casual romanization can be simulated by using
68
+ alternative uroman language codes "srp2" and "mkd2", which romanize
69
+ both "Ниш" and "Niš" to "Nis" to reflect the casual Latin spelling.
70
+ * Various small improvements.
71
+
72
+ Changes in version 1.2.4
73
+ * Bug-fix that generated two empty lines for each empty line in cache mode.
74
+
75
+ Changes in version 1.2
76
+ * Run-time improvement based on (1) token-based caching and (2) shortcut
77
+ romanization (identity) of ASCII strings for default 1-best (non-chart)
78
+ output. Speed-up by a factor of 10 for Bengali and Uyghur on medium and
79
+ large size texts.
80
+ * Incremental improvements for Farsi, Amharic, Russian, Hebrew and related
81
+ languages.
82
+ * Richer lattice structure (more alternatives) for "Romanization" of English
83
+ to support better matching to romanizations of other languages.
84
+ Changes output only when --chart option is specified. No change in output for
85
+ default 1-best output, which for ASCII characters is always the input string.
86
+
87
+ Changes in version 1.1 (major upgrade)
88
+ * Offers chart output (in JSON format) to represent alternative romanizations.
89
+ -- Location of first character is defined to be "line: 1, start:0, end:0".
90
+ * Incremental improvements of Hebrew and Greek romanization; Chinese numbers.
91
+ * Improved web-interface at http://www.isi.edu/~ulf/uroman.html
92
+ -- Shows corresponding original and romanization text in red
93
+ when hovering over a text segment.
94
+ -- Shows alternative romanizations when hovering over romanized text
95
+ marked by dotted underline.
96
+ -- Added right-to-left script detection and improved display for right-to-left
97
+ script text (as determined line by line).
98
+ -- On-page support for some scripts that are often not pre-installed on users'
99
+ computers (Burmese, Egyptian, Klingon).
100
+
101
+ Changes in version 1.0 (major upgrade)
102
+ * Upgraded principal internal data structure from string to lattice.
103
+ * Improvements mostly in vowelization of South and Southeast Asian languages.
104
+ * Vocalic 'r' more consistently treated as vowel (no additional vowel added).
105
+ * Repetition signs (Japanese/Chinese/Thai/Khmer/Lao) are mapped to superscript 2.
106
+ * Japanese Katakana middle dots now mapped to ASCII space.
107
+ * Tibetan intersyllabic mark now mapped to middle dot (U+00B7).
108
+ * Some corrections regarding analysis of Chinese numbers.
109
+ * Many more foreign diacritics and punctuation marks dropped or mapped to ASCII.
110
+ * Zero-width characters dropped, except line/sentence-initial byte order marks.
111
+ * Spaces normalized to ASCII space.
112
+ * Fixed bug that in some cases mapped signs (such as dagger or bullet) to their verbal descriptions.
113
+ * Tested against previous version of uroman with a new uroman visual diff tool.
114
+ * Almost an order of magnitude faster.
115
+
116
+ Changes in version 0.7 (minor upgrade)
117
+ * Added script uroman-quick.pl for Arabic script languages, incl. Uyghur.
118
+ Much faster, pre-caching mapping of Arabic to Latin characters, simple greedy processing.
119
+ Will not convert material from non-Arabic blocks such as any (somewhat unusual) Cyrillic
120
+ or Chinese characters in Uyghur texts.
121
+
122
+ Changes in version 0.6 (minor upgrade)
123
+ * Added support for two letter characters used in Uzbek:
124
+ (1) character "ʻ" ("modifier letter turned comma", which modifies preceding "g" and "u" letters)
125
+ (2) character "ʼ" ("modifier letter apostrophe", which Uzbek uses to mark a glottal stop).
126
+ Both are now mapped to "'" (plain ASCII apostrophe).
127
+ * Added support for Uyghur vowel characters such as "ې" (Arabic e) and "ۆ" (Arabic oe)
128
+ even when they are not preceded by "ئ" (yeh with hamza above).
129
+ * Added support for Arabic semicolon "؛", Arabic ligature forms for phrases such as "ﷺ"
130
+ ("sallallahou alayhe wasallam" = "prayer of God be upon him and his family and peace")
131
+ * Added robustness for Arabic letter presentation forms (initial/medial/final/isolated).
132
+ However, it is strongly recommended to normalize any presentation form Arabic letters
133
+ to their non-presentation form before calling uroman.
134
+ * Added force flush directive ($|=1;).
135
+
136
+ Changes in version 0.5 (minor upgrade)
137
+ * Improvements for Uyghur (make sure to use language option: -l uig)
138
+
139
+ Changes in version 0.4 (minor upgrade)
140
+ * Improvements for Thai (special cases for vowel/consonant reordering, e.g. for "sara o"; dropped some aspiration 'h's)
141
+ * Minor change for Arabic (added "alef+fathatan" = "an")
142
+
143
+ New features in version 0.3
144
+ * Covers Mandarin (Chinese)
145
+ * Improved romanization for numerous languages
146
+ * Preserves capitalization (e.g. from Latin, Cyrillic, Greek scripts)
147
+ * Maps from native digits to Western numbers
148
+ * Faster for South Asian languages
149
+
150
+ ### Other features
151
+ * Web interface: http://www.isi.edu/~ulf/uroman.html
152
+ * Vowelization is provided when locally computable, e.g. for many South Asian languages and Tibetan.
153
+
154
+ ### Limitations
155
+ * The current version of uroman has a few limitations, some of which we plan to address in future versions.
156
+ For Japanese, *uroman* currently romanizes hiragana and katakana as expected, but kanji are interpreted as Chinese characters and romanized as such.
157
+ For Egyptian hieroglyphs, only single-sound phonetic characters and numbers are currently romanized.
158
+ For Linear B, only phonetic syllabic characters are romanized.
159
+ For some other extinct scripts such as cuneiform, no romanization is provided.
160
+ * A romanizer is not a full transliterator. For example, this version of
161
+ uroman does not vowelize text that lacks explicit vowelization such as
162
+ normal text in Arabic and Hebrew (without diacritics/points).
163
+
164
+ ### Acknowledgments
165
+ This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116, and by research sponsored by Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, Air Force Laboratory, DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
pipeline/utils/uroman/README.txt ADDED
@@ -0,0 +1,141 @@
1
+ uroman version 1.2.8
2
+ Release date: April 23, 2021
3
+ Author: Ulf Hermjakob, USC Information Sciences Institute
4
+
5
+ uroman is a universal romanizer. It converts text in any script to the Latin alphabet.
6
+
7
+ Usage: uroman.pl [-l <lang-code>] [--chart] [--no-cache] < STDIN
8
+ where the optional <lang-code> is a 3-letter language code, e.g. ara, bel, bul, deu, ell, eng, fas,
9
+ grc, ell, eng, heb, kaz, kir, lav, lit, mkd, mkd2, oss, pnt, pus, rus, srp, srp2, tur, uig, ukr, yid.
10
+ --chart specifies chart output (in JSON format) to represent alternative romanizations.
11
+ --no-cache disables caching.
12
+ Examples: bin/uroman.pl < text/zho.txt
13
+ bin/uroman.pl -l tur < text/tur.txt
14
+ bin/uroman.pl -l heb --chart < text/heb.txt
15
+ bin/uroman.pl < test/multi-script.txt > test/multi-script.uroman.txt
16
+
17
+ Identifying the input as Arabic, Belarusian, Bulgarian, English, Farsi, German,
18
+ Ancient Greek, Modern Greek, Pontic Greek, Hebrew, Kazakh, Kyrgyz, Latvian,
19
+ Lithuanian, North Macedonian, Russian, Serbian, Turkish, Ukrainian, Uyghur or Yiddish
20
+ will improve romanization for those languages as some letters in those languages
21
+ have different sound values from other languages using the same script.
22
+ No effect for other languages in this version.
23
+
24
+ Bibliography: Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018. Out-of-the-box universal romanization tool uroman. In Proceedings of the 56th Annual Meeting of Association for Computational Linguistics, Demo Track. [Best Demo Paper Award]
25
+
26
+ Changes in version 1.2.8
27
+ * Improved support for Georgian.
28
+ * Updated UnicodeData.txt to version 13 (2021) with several new scripts (10% larger).
29
+ * Preserve various symbols (as opposed to mapping to the symbols' names).
30
+ * Various small improvements.
31
+ Changes in version 1.2.7
32
+ * Improved support for Pashto.
33
+ Changes in version 1.2.6
34
+ * Improved support for Ukrainian, Russian and Ogham (ancient Irish script).
35
+ * Added support for English Braille.
36
+ * Added alternative Romanization for North Macedonian and Serbian (mkd2/srp2)
37
+ reflecting a casual style that many native speakers of those languages use
38
+ when writing text in Latin script, e.g. non-accented single letters (e.g. "s")
39
+ rather than phonetically motivated combinations of letters (e.g. "sh").
40
+ * When a line starts with "::lcode xyz ", the new uroman version will switch to
41
+ that language for that line. This is used for the new reference test file.
42
+ * Various small improvements.
43
+ Changes in version 1.2.5
44
+ * Improved support for Armenian and eight languages using Cyrillic scripts.
45
+ -- For Serbian and Macedonian, which are often written in both Cyrillic
46
+ and Latin scripts, uroman will map both official versions to the same
47
+ romanized text, e.g. both "Ниш" and "Niš" will be mapped to "Nish" (which
48
+ properly reflects the pronunciation of the city's name).
49
+ For both Serbian and Macedonian, casual writers often use a simplified
50
+ Latin form without diacritics, e.g. "s" to represent not only Cyrillic "с"
51
+ and Latin "s", but also "ш" or "š", even if this conflates "s" and "sh" and
52
+ other such pairs. The casual romanization can be simulated by using
53
+ alternative uroman language codes "srp2" and "mkd2", which romanize
54
+ both "Ниш" and "Niš" to "Nis" to reflect the casual Latin spelling.
55
+ * Various small improvements.
56
+ Changes in version 1.2.4
57
+ * Added support for Tifinagh (a script used for Berber languages).
58
+ * Bug-fix that generated two empty lines for each empty line in cache mode.
59
+ Changes in version 1.2.3
60
+ * Exclude emojis, dingbats, many other pictographs from being romanized (e.g. to "face")
61
+ Changes in version 1.2
62
+ * Run-time improvement based on (1) token-based caching and (2) shortcut
63
+ romanization (identity) of ASCII strings for default 1-best (non-chart)
64
+ output. Speed-up by a factor of 10 for Bengali and Uyghur on medium and
65
+ large size texts.
66
+ * Incremental improvements for Farsi, Amharic, Russian, Hebrew and related
67
+ languages.
68
+ * Richer lattice structure (more alternatives) for "Romanization" of English
69
+ to support better matching to romanizations of other languages.
70
+ Changes output only when --chart option is specified. No change in output for
71
+ default 1-best output, which for ASCII characters is always the input string.
72
+ Changes in version 1.1 (major upgrade)
73
+ * Offers chart output (in JSON format) to represent alternative romanizations.
74
+ -- Location of first character is defined to be "line: 1, start:0, end:0".
75
+ * Incremental improvements of Hebrew and Greek romanization; Chinese numbers.
76
+ * Improved web-interface at http://www.isi.edu/~ulf/uroman.html
77
+ -- Shows corresponding original and romanization text in red
78
+ when hovering over a text segment.
79
+ -- Shows alternative romanizations when hovering over romanized text
80
+ marked by dotted underline.
81
+ -- Added right-to-left script detection and improved display for right-to-left
82
+ script text (as determined line by line).
83
+ -- On-page support for some scripts that are often not pre-installed on users'
84
+ computers (Burmese, Egyptian, Klingon).
85
+ Changes in version 1.0 (major upgrade)
86
+ * Upgraded principal internal data structure from string to lattice.
87
+ * Improvements mostly in vowelization of South and Southeast Asian languages.
88
+ * Vocalic 'r' more consistently treated as vowel (no additional vowel added).
89
+ * Repetition signs (Japanese/Chinese/Thai/Khmer/Lao) are mapped to superscript 2.
90
+ * Japanese Katakana middle dots now mapped to ASCII space.
91
+ * Tibetan intersyllabic mark now mapped to middle dot (U+00B7).
92
+ * Some corrections regarding analysis of Chinese numbers.
93
+ * Many more foreign diacritics and punctuation marks dropped or mapped to ASCII.
94
+ * Zero-width characters dropped, except line/sentence-initial byte order marks.
95
+ * Spaces normalized to ASCII space.
96
+ * Fixed bug that in some cases mapped signs (such as dagger or bullet) to their verbal descriptions.
97
+ * Tested against previous version of uroman with a new uroman visual diff tool.
98
+ * Almost an order of magnitude faster.
99
+ Changes in version 0.7 (minor upgrade)
100
+ * Added script uroman-quick.pl for Arabic script languages, incl. Uyghur.
101
+ Much faster, pre-caching mapping of Arabic to Latin characters, simple greedy processing.
102
+ Will not convert material from non-Arabic blocks such as any (somewhat unusual) Cyrillic
103
+ or Chinese characters in Uyghur texts.
104
+ Changes in version 0.6 (minor upgrade)
105
+ * Added support for two letter characters used in Uzbek:
106
+ (1) character "ʻ" ("modifier letter turned comma", which modifies preceding "g" and "u" letters)
107
+ (2) character "ʼ" ("modifier letter apostrophe", which Uzbek uses to mark a glottal stop).
108
+ Both are now mapped to "'" (plain ASCII apostrophe).
109
+ * Added support for Uyghur vowel characters such as "ې" (Arabic e) and "ۆ" (Arabic oe)
110
+ even when they are not preceded by "ئ" (yeh with hamza above).
111
+ * Added support for Arabic semicolon "؛", Arabic ligature forms for phrases such as "ﷺ"
112
+ ("sallallahou alayhe wasallam" = "prayer of God be upon him and his family and peace")
113
+ * Added robustness for Arabic letter presentation forms (initial/medial/final/isolated).
114
+ However, it is strongly recommended to normalize any presentation form Arabic letters
115
+ to their non-presentation form before calling uroman.
116
+ * Added force flush directive ($|=1;).
117
+ Changes in version 0.5 (minor upgrade)
118
+ * Improvements for Uyghur (make sure to use language option: -l uig)
119
+ Changes in version 0.4 (minor upgrade)
120
+ * Improvements for Thai (special cases for vowel/consonant reordering, e.g. for "sara o"; dropped some aspiration 'h's)
121
+ * Minor change for Arabic (added "alef+fathatan" = "an")
122
+ New features in version 0.3
123
+ * Covers Mandarin (Chinese)
124
+ * Improved romanization for numerous languages
125
+ * Preserves capitalization (e.g. from Latin, Cyrillic, Greek scripts)
126
+ * Maps from native digits to Western numbers
127
+ * Faster for South Asian languages
128
+
129
+ Other features
130
+ * Web interface: http://www.isi.edu/~ulf/uroman.html
131
+ * Vowelization is provided when locally computable, e.g. for many South Asian
132
+ languages and Tibetan.
133
+
134
+ Limitations
135
+ * This version of uroman assumes all CJK ideographs to be Mandarin (Chinese).
136
+ This means that Japanese kanji are incorrectly romanized; however, Japanese
137
+ hiragana and katakana are properly romanized.
138
+ * A romanizer is not a full transliterator. For example, this version of
139
+ uroman does not vowelize text that lacks explicit vowelization such as
140
+ normal text in Arabic and Hebrew (without diacritics/points).
141
+
pipeline/utils/uroman/bin/de-accent.pl ADDED
@@ -0,0 +1,201 @@
1
+ #!/usr/bin/perl -w
2
+
3
+ sub print_version {
4
+ print STDERR "$0 version 1.1\n";
5
+ print STDERR " Author: Ulf Hermjakob\n";
6
+ print STDERR " Last changed: March 14, 2011\n";
7
+ }
8
+
9
+ sub print_usage {
10
+ print STDERR "$0 [options] < with_accents.txt > without_accents.txt\n";
11
+ print STDERR " -h or -help\n";
12
+ print STDERR " -v or -version\n";
13
+ }
14
+
15
+ sub de_accent_string {
16
+ local($s) = @_;
17
+
18
+ # $s =~ tr/A-Z/a-z/;
19
+ unless (0) {
20
+ # Latin-1
21
+ if ($s =~ /\xC3[\x80-\xBF]/) {
22
+ $s =~ s/(À|Á|Â|Ã|Ä|Å)/A/g;
23
+ $s =~ s/Æ/Ae/g;
24
+ $s =~ s/Ç/C/g;
25
+ $s =~ s/Ð/D/g;
26
+ $s =~ s/(È|É|Ê|Ë)/E/g;
27
+ $s =~ s/(Ì|Í|Î|Ï)/I/g;
28
+ $s =~ s/Ñ/N/g;
29
+ $s =~ s/(Ò|Ó|Ô|Õ|Ö|Ø)/O/g;
30
+ $s =~ s/(Ù|Ú|Û|Ü)/U/g;
31
+ $s =~ s/Þ/Th/g;
32
+ $s =~ s/Ý/Y/g;
33
+ $s =~ s/(à|á|â|ã|ä|å)/a/g;
34
+ $s =~ s/æ/ae/g;
35
+ $s =~ s/ç/c/g;
36
+ $s =~ s/(è|é|ê|ë)/e/g;
37
+ $s =~ s/(ì|í|î|ï)/i/g;
38
+ $s =~ s/ð/d/g;
39
+ $s =~ s/ñ/n/g;
40
+ $s =~ s/(ò|ó|ô|õ|ö)/o/g;
41
+ $s =~ s/ß/ss/g;
42
+ $s =~ s/þ/th/g;
43
+ $s =~ s/(ù|ú|û|ü)/u/g;
44
+ $s =~ s/(ý|ÿ)/y/g;
45
+ }
46
+ # Latin Extended-A
47
+ if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) {
48
+ $s =~ s/(Ā|Ă|Ą)/A/g;
49
+ $s =~ s/(ā|ă|ą)/a/g;
50
+ $s =~ s/(Ć|Ĉ|Ċ|Č)/C/g;
51
+ $s =~ s/(ć|ĉ|ċ|č)/c/g;
52
+ $s =~ s/(Ď|Đ)/D/g;
53
+ $s =~ s/(ď|đ)/d/g;
54
+ $s =~ s/(Ē|Ĕ|Ė|Ę|Ě)/E/g;
55
+ $s =~ s/(ē|ĕ|ė|ę|ě)/e/g;
56
+ $s =~ s/(Ĝ|Ğ|Ġ|Ģ)/G/g;
57
+ $s =~ s/(ĝ|ğ|ġ|ģ)/g/g;
58
+ $s =~ s/(Ĥ|Ħ)/H/g;
59
+ $s =~ s/(ĥ|ħ)/h/g;
60
+ $s =~ s/(Ĩ|Ī|Ĭ|Į|İ)/I/g;
61
+ $s =~ s/(ĩ|ī|ĭ|į|ı)/i/g;
62
+ $s =~ s/IJ/Ij/g;
63
+ $s =~ s/ij/ij/g;
64
+ $s =~ s/Ĵ/J/g;
65
+ $s =~ s/ĵ/j/g;
66
+ $s =~ s/Ķ/K/g;
67
+ $s =~ s/(ķ|ĸ)/k/g;
68
+ $s =~ s/(Ĺ|Ļ|Ľ|Ŀ|Ł)/L/g;
69
+ $s =~ s/(ļ|ľ|ŀ|ł)/l/g;
70
+ $s =~ s/(Ń|Ņ|Ň|Ŋ)/N/g;
71
+ $s =~ s/(ń|ņ|ň|ʼn|ŋ)/n/g;
72
+ $s =~ s/(Ō|Ŏ|Ő)/O/g;
73
+ $s =~ s/(ō|ŏ|ő)/o/g;
74
+ $s =~ s/Œ/Oe/g;
75
+ $s =~ s/œ/oe/g;
76
+ $s =~ s/(Ŕ|Ŗ|Ř)/R/g;
77
+ $s =~ s/(ŕ|ŗ|ř)/r/g;
78
+ $s =~ s/(Ś|Ŝ|Ş|Š)/S/g;
79
+ $s =~ s/(ś|ŝ|ş|š|ſ)/s/g;
80
+ $s =~ s/(Ţ|Ť|Ŧ)/T/g;
81
+ $s =~ s/(ţ|ť|ŧ)/t/g;
82
+ $s =~ s/(Ũ|Ū|Ŭ|Ů|Ű|Ų)/U/g;
83
+ $s =~ s/(ũ|ū|ŭ|ů|ű|ų)/u/g;
84
+ $s =~ s/Ŵ/W/g;
85
+ $s =~ s/ŵ/w/g;
86
+ $s =~ s/(Ŷ|Ÿ)/Y/g;
87
+ $s =~ s/ŷ/y/g;
88
+ $s =~ s/(Ź|Ż|Ž)/Z/g;
89
+ $s =~ s/(ź|ż|ž)/z/g;
90
+ }
91
+ # Latin Extended Additional
92
+ if ($s =~ /\xE1[\xB8-\xBF][\x80-\xBF]/) {
93
+ $s =~ s/(ḁ|ạ|ả|ấ|ầ|ẩ|ẫ|ậ|ắ|ằ|ẳ|ẵ|ặ|ẚ)/a/g;
94
+ $s =~ s/(ḃ|ḅ|ḇ)/b/g;
95
+ $s =~ s/(ḉ)/c/g;
96
+ $s =~ s/(ḋ|ḍ|ḏ|ḑ|ḓ)/d/g;
97
+ $s =~ s/(ḕ|ḗ|ḙ|ḛ|ḝ|ẹ|ẻ|ẽ|ế|ề|ể|ễ|ệ)/e/g;
98
+ $s =~ s/(ḟ)/f/g;
99
+ $s =~ s/(ḡ)/g/g;
100
+ $s =~ s/(ḣ|ḥ|ḧ|ḩ|ḫ)/h/g;
101
+ $s =~ s/(ḭ|ḯ|ỉ|ị)/i/g;
102
+ $s =~ s/(ḱ|ḳ|ḵ)/k/g;
103
+ $s =~ s/(ḷ|ḹ|ḻ|ḽ)/l/g;
104
+ $s =~ s/(ḿ|ṁ|ṃ)/m/g;
105
+ $s =~ s/(ṅ|ṇ|ṉ|ṋ)/n/g;
106
+ $s =~ s/(ọ|ỏ|ố|ồ|ổ|ỗ|ộ|ớ|ờ|ở|ỡ|ợ|ṍ|ṏ|ṑ|ṓ)/o/g;
107
+ $s =~ s/(ṕ|ṗ)/p/g;
108
+ $s =~ s/(ṙ|ṛ|ṝ|ṟ)/r/g;
109
+ $s =~ s/(ṡ|ṣ|ṥ|ṧ|ṩ|ẛ)/s/g;
110
+ $s =~ s/(ṫ|ṭ|ṯ|ṱ)/t/g;
111
+ $s =~ s/(ṳ|ṵ|ṷ|ṹ|ṻ|ụ|ủ|ứ|ừ|ử|ữ|ự)/u/g;
112
+ $s =~ s/(ṽ|ṿ)/v/g;
113
+ $s =~ s/(ẁ|ẃ|ẅ|ẇ|ẉ|ẘ)/w/g;
114
+ $s =~ s/(ẋ|ẍ)/x/g;
115
+ $s =~ s/(ẏ|ỳ|ỵ|ỷ|ỹ|ẙ)/y/g;
116
+ $s =~ s/(ẑ|ẓ|ẕ)/z/g;
117
+ $s =~ s/(Ḁ|Ạ|Ả|Ấ|Ầ|Ẩ|Ẫ|Ậ|Ắ|Ằ|Ẳ|Ẵ|Ặ)/A/g;
118
+ $s =~ s/(Ḃ|Ḅ|Ḇ)/B/g;
119
+ $s =~ s/(Ḉ)/C/g;
120
+ $s =~ s/(Ḋ|Ḍ|Ḏ|Ḑ|Ḓ)/D/g;
121
+ $s =~ s/(Ḕ|Ḗ|Ḙ|Ḛ|Ḝ|Ẹ|Ẻ|Ẽ|Ế|Ề|Ể|Ễ|Ệ)/E/g;
122
+ $s =~ s/(Ḟ)/F/g;
123
+ $s =~ s/(Ḡ)/G/g;
124
+ $s =~ s/(Ḣ|Ḥ|Ḧ|Ḩ|Ḫ)/H/g;
125
+ $s =~ s/(Ḭ|Ḯ|Ỉ|Ị)/I/g;
126
+ $s =~ s/(Ḱ|Ḳ|Ḵ)/K/g;
127
+ $s =~ s/(Ḷ|Ḹ|Ḻ|Ḽ)/L/g;
128
+ $s =~ s/(Ḿ|Ṁ|Ṃ)/M/g;
129
+ $s =~ s/(Ṅ|Ṇ|Ṉ|Ṋ)/N/g;
130
+ $s =~ s/(Ṍ|Ṏ|Ṑ|Ṓ|Ọ|Ỏ|Ố|Ồ|Ổ|Ỗ|Ộ|Ớ|Ờ|Ở|Ỡ|Ợ)/O/g;
131
+ $s =~ s/(Ṕ|Ṗ)/P/g;
132
+ $s =~ s/(Ṙ|Ṛ|Ṝ|Ṟ)/R/g;
133
+ $s =~ s/(Ṡ|Ṣ|Ṥ|Ṧ|Ṩ)/S/g;
134
+ $s =~ s/(Ṫ|Ṭ|Ṯ|Ṱ)/T/g;
135
+ $s =~ s/(Ṳ|Ṵ|Ṷ|Ṹ|Ṻ|Ụ|Ủ|Ứ|Ừ|Ử|Ữ|Ự)/U/g;
136
+ $s =~ s/(Ṽ|Ṿ)/V/g;
137
+ $s =~ s/(Ẁ|Ẃ|Ẅ|Ẇ|Ẉ)/W/g;
138
+ $s =~ s/(Ẍ)/X/g;
139
+ $s =~ s/(Ẏ|Ỳ|Ỵ|Ỷ|Ỹ)/Y/g;
140
+ $s =~ s/(Ẑ|Ẓ|Ẕ)/Z/g;
141
+ }
142
+ # Greek letters
143
+ if ($s =~ /\xCE[\x86-\xAB]/) {
144
+ $s =~ s/ά/α/g;
145
+ $s =~ s/έ/ε/g;
146
+ $s =~ s/ί/ι/g;
147
+ $s =~ s/ϊ/ι/g;
148
+ $s =~ s/ΐ/ι/g;
149
+ $s =~ s/ό/ο/g;
150
+ $s =~ s/ύ/υ/g;
151
+ $s =~ s/ϋ/υ/g;
152
+ $s =~ s/ΰ/υ/g;
153
+ $s =~ s/ώ/ω/g;
154
+ $s =~ s/Ά/Α/g;
155
+ $s =~ s/Έ/Ε/g;
156
+ $s =~ s/Ή/Η/g;
157
+ $s =~ s/Ί/Ι/g;
158
+ $s =~ s/Ϊ/Ι/g;
159
+ $s =~ s/Ύ/Υ/g;
160
+ $s =~ s/Ϋ/Υ/g;
161
+ $s =~ s/Ώ/Ω/g;
162
+ }
163
+ # Cyrillic letters
164
+ if ($s =~ /\xD0[\x80-\xAF]/) {
165
+ $s =~ s/Ѐ/Е/g;
166
+ $s =~ s/Ё/Е/g;
167
+ $s =~ s/Ѓ/Г/g;
168
+ $s =~ s/Ќ/К/g;
169
+ $s =~ s/Ѝ/И/g;
170
+ $s =~ s/Й/И/g;
171
+ $s =~ s/ѐ/е/g;
172
+ $s =~ s/ё/е/g;
173
+ $s =~ s/ѓ/г/g;
174
+ $s =~ s/ќ/к/g;
175
+ $s =~ s/ѝ/и/g;
176
+ $s =~ s/й/и/g;
177
+ }
178
+ }
179
+ return $s;
180
+ }
181
+
182
+ while (@ARGV) {
183
+ $arg = shift @ARGV;
184
+ if ($arg =~ /^-*(h|help)$/i) {
185
+ &print_usage;
186
+ exit 1;
187
+ } elsif ($arg =~ /^-*(v|version)$/i) {
188
+ &print_version;
189
+ exit 1;
190
+ } else {
191
+ print STDERR "Ignoring unrecognized argument $arg\n";
192
+ }
193
+ }
194
+
195
+ $line_number = 0;
196
+ while (<>) {
197
+ $line_number++;
198
+ print &de_accent_string($_);
199
+ }
200
+ exit 0;
201
+
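For Latin, Greek and Cyrillic letters that decompose in Unicode, the same de-accenting can be approximated in Python via NFD decomposition — a sketch, not equivalent to the Perl tables above, since it leaves non-decomposable letters such as "ø", "æ" and "ß" untouched, which de-accent.pl maps explicitly:

```python
import unicodedata

def de_accent(s):
    # Decompose to NFD, drop combining marks, then recompose.
    decomposed = unicodedata.normalize("NFD", s)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)
```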
pipeline/utils/uroman/bin/string-distance.pl ADDED
@@ -0,0 +1,99 @@
+ #!/usr/bin/perl -w
+
+ # Author: Ulf Hermjakob
+ # Release date: October 13, 2019
+
+ # Usage: string-distance.pl {-lc1 <language-code>} {-lc2 <language-code>} < STDIN > STDOUT
+ # Example: string-distance.pl -lc1 rus -lc2 ukr < STDIN > STDOUT
+ # Example: string-distance.pl < ../test/string-similarity-test-input.txt
+ # Input format: two strings per line (tab-separated, in Latin script)
+ # Strings in non-Latin scripts should first be romanized. (Recommended script: uroman.pl)
+ # Output format: repetition of the two input strings, plus the string distance between them (tab-separated).
+ # Additional output meta info lines at the top are marked with an initial #.
+ #
+ # The script uses data from a string-distance-cost-rules file that lists costs,
+ # where the default cost is "1" with lower costs for differences in vowels,
+ # duplicate consonants, "f" vs. "ph" etc.
+ # Language cost rules can be language-specific and context-sensitive.
+
+ $|=1;
+
+ use FindBin;
+ use Cwd "abs_path";
+ use File::Basename qw(dirname);
+ use File::Spec;
+
+ my $bin_dir = abs_path(dirname($0));
+ my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir());
+ my $data_dir = File::Spec->catfile($root_dir, "data");
+ my $lib_dir = File::Spec->catfile($root_dir, "lib");
+
+ use lib "$FindBin::Bin/../lib";
+ use List::Util qw(min max);
+ use NLP::utilities;
+ use NLP::stringDistance;
+ $util = NLP::utilities;
+ $sd = NLP::stringDistance;
+ $verbose = 0;
+ $separator = "\t";
+
+ $cost_rule_filename = File::Spec->catfile($data_dir, "string-distance-cost-rules.txt");
+
+ $lang_code1 = "eng";
+ $lang_code2 = "eng";
+ %ht = ();
+
+ while (@ARGV) {
+ $arg = shift @ARGV;
+ if ($arg =~ /^-+lc1$/) {
+ $lang_code_candidate = shift @ARGV;
+ $lang_code1 = $lang_code_candidate if $lang_code_candidate =~ /^[a-z]{3,3}$/;
+ } elsif ($arg =~ /^-+lc2$/) {
+ $lang_code_candidate = shift @ARGV;
+ $lang_code2 = $lang_code_candidate if $lang_code_candidate =~ /^[a-z]{3,3}$/;
+ } elsif ($arg =~ /^-+(v|verbose)$/) {
+ $verbose = shift @ARGV;
+ } else {
+ print STDERR "Ignoring unrecognized arg $arg\n";
+ }
+ }
+
+ $sd->load_string_distance_data($cost_rule_filename, *ht, $verbose);
+ print STDERR "Loaded resources.\n" if $verbose;
+
+ my $chart_id = 0;
+ my $line_number = 0;
+ print "# Lang-code-1: $lang_code1 Lang-code-2: $lang_code2\n";
+ while (<>) {
+ $line_number++;
+ if ($verbose) {
+ if ($line_number =~ /000$/) {
+ if ($line_number =~ /0000$/) {
+ print STDERR $line_number;
+ } else {
+ print STDERR ".";
+ }
+ }
+ }
+ my $line = $_;
+ $line =~ s/^\xEF\xBB\xBF//;
+ next if $line =~ /^\s*(\#.*)?$/;
+ my $s1;
+ my $s2;
+ if (($s1, $s2) = ($line =~ /^("(?:\\"|[^"])*"|\S+)$separator("(?:\\"|[^"])*"|\S+)\s*$/)) {
+ $s1 = $util->dequote_string($s1);
+ $s2 = $util->dequote_string($s2);
+ } elsif ($line =~ /^\s*(#.*)$/) {
+ } else {
+ print STDERR "Could not process line $line_number: $line" if $verbose;
+ print "\n";
+ next;
+ }
+
+ $cost = $sd->quick_romanized_string_distance_by_chart($s1, $s2, *ht, "", $lang_code1, $lang_code2);
+ print "$s1\t$s2\t$cost\n";
+ }
+ print STDERR "\n" if $verbose;
+
+ exit 0;
+
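The header comments above document the input as two Latin-script strings per line, tab-separated. A minimal sketch of preparing such a file (the file name `pairs.tsv` and the string pairs are illustrative, not from the repository):

```shell
# Build a tab-separated two-column input of the kind string-distance.pl reads.
printf 'kyiv\tkiev\nodesa\todessa\n' > pairs.tsv
# Sanity-check the columns; cut splits on tab by default.
cut -f1 pairs.tsv   # first strings:  kyiv, odesa
cut -f2 pairs.tsv   # second strings: kiev, odessa
```

The file would then be fed to the script via stdin, e.g. `string-distance.pl -lc1 rus -lc2 ukr < pairs.tsv`, per the usage comment.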
pipeline/utils/uroman/bin/uroman-quick.pl ADDED
@@ -0,0 +1,58 @@
+ #!/usr/bin/perl -w
+
+ # uroman Nov. 12, 2015 - July 25, 2016
+ # version v0.7
+ # Author: Ulf Hermjakob
+
+ # Usage: uroman-quick.pl {-l [tur|uig|ukr|yid]} < STDIN
+ # currently only for Arabic script languages, incl. Uyghur
+
+ $|=1;
+
+ use FindBin;
+ use Cwd "abs_path";
+ use File::Basename qw(dirname);
+ use File::Spec;
+
+ my $bin_dir = abs_path(dirname($0));
+ my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir());
+ my $data_dir = File::Spec->catfile($root_dir, "data");
+ my $lib_dir = File::Spec->catfile($root_dir, "lib");
+
+ use lib "$FindBin::Bin/../lib";
+ use NLP::Romanizer;
+ use NLP::UTF8;
+ $romanizer = NLP::Romanizer;
+ %ht = ();
+ $lang_code = "";
+
+ while (@ARGV) {
+ $arg = shift @ARGV;
+ if ($arg =~ /^-+(l|lc|lang-code)$/) {
+ $lang_code = lc (shift @ARGV || "")
+ } else {
+ print STDERR "Ignoring unrecognized arg $arg\n";
+ }
+ }
+
+ $romanization_table_arabic_block_filename = File::Spec->catfile($data_dir, "romanization-table-arabic-block.txt");
+ $romanization_table_filename = File::Spec->catfile($data_dir, "romanization-table.txt");
+
+ $romanizer->load_romanization_table(*ht, $romanization_table_arabic_block_filename);
+ $romanizer->load_romanization_table(*ht, $romanization_table_filename);
+
+ $line_number = 0;
+ while (<>) {
+ $line_number++;
+ my $line = $_;
+ print $romanizer->quick_romanize($line, $lang_code, *ht) . "\n";
+ if ($line_number =~ /0000$/) {
+ print STDERR $line_number;
+ } elsif ($line_number =~ /000$/) {
+ print STDERR ".";
+ }
+ }
+ print STDERR "\n";
+
+ exit 0;
+
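The progress indicator above matches the line counter against `/0000$/` and `/000$/`: the full count is printed at every 10,000th line, and a dot at other multiples of 1,000. A stand-alone sketch of that pattern-matching trick in shell (the sample counts are illustrative):

```shell
# Emit a progress marker the way uroman-quick.pl does: the number itself
# at multiples of 10,000, a dot at other multiples of 1,000, else nothing.
for n in 500 1000 5000 10000; do
  case $n in
    *0000) printf '%s' "$n" ;;   # ends in four zeros -> print the count
    *000)  printf '.' ;;         # ends in three zeros -> print a dot
  esac
done
printf '\n'   # prints: ..10000
```

Testing the suffix of the decimal representation avoids any modulo arithmetic, which is why the Perl version uses a regex rather than `%`.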
pipeline/utils/uroman/bin/uroman-tsv.sh ADDED
@@ -0,0 +1,28 @@
+ #!/usr/bin/env bash
+ # Created by Thamme Gowda on June 17, 2019
+
+ DIR=$(dirname "${BASH_SOURCE[0]}") # get the directory name
+ # DIR=$(realpath "${DIR}") # resolve its full path if need be
+
+ if [[ $# -lt 1 || $# -gt 2 ]]; then
+ >&2 echo "ERROR: invalid args"
+ >&2 echo "Usage: <input.tsv> [<output.tsv>]"
+ exit 2
+ fi
+
+ INP=$1
+ OUT=$2
+
+ CMD=$DIR/uroman.pl
+
+ function romanize(){
+ paste <(cut -f1 $INP) <(cut -f2 $INP | $CMD)
+ }
+
+ if [[ -n $OUT ]]; then
+ romanize > $OUT
+ else
+ romanize
+ fi
+
+
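The `paste <(cut -f1 …) <(cut -f2 … | …)` pattern above keeps the first TSV column untouched while piping only the second column through the romanizer, then re-joins the columns. A runnable sketch of the same pattern, with `tr` standing in for `uroman.pl` so it works without the uroman distribution (the file name `in.tsv` and its rows are illustrative):

```shell
# Column 1 (an id) passes through unchanged; column 2 is transformed
# (here: upper-cased by tr, in place of romanization by uroman.pl).
printf 'id1\thello\nid2\tworld\n' > in.tsv
paste <(cut -f1 in.tsv) <(cut -f2 in.tsv | tr 'a-z' 'A-Z')
# id1	HELLO
# id2	WORLD
```

Note this relies on bash process substitution (`<(...)`), which is why the script uses `#!/usr/bin/env bash` rather than plain `sh`; it also assumes the transformer emits exactly one output line per input line, which uroman.pl does.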
pipeline/utils/uroman/bin/uroman.pl ADDED
@@ -0,0 +1,138 @@
+ #!/usr/bin/perl -w
+
+ # uroman Nov. 12, 2015 - Apr. 23, 2021
+ $version = "v1.2.8";
+ # Author: Ulf Hermjakob
+
+ # Usage: uroman.pl {-l [ara|bel|bul|deu|ell|eng|fas|grc|heb|kaz|kir|lav|lit|mkd|mkd2|oss|pnt|rus|srp|srp2|tur|uig|ukr|yid]} {--chart|--offset-mapping} {--no-cache} {--workset} < STDIN
+ # Example: cat workset.txt | uroman.pl --offset-mapping --workset
+
+ $|=1;
+
+ use FindBin;
+ use Cwd "abs_path";
+ use File::Basename qw(dirname);
+ use File::Spec;
+
+ my $bin_dir = abs_path(dirname($0));
+ my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir());
+ my $data_dir = File::Spec->catfile($root_dir, "data");
+ my $lib_dir = File::Spec->catfile($root_dir, "lib");
+
+ use lib "$FindBin::Bin/../lib";
+ use NLP::Chinese;
+ use NLP::Romanizer;
+ use NLP::UTF8;
+ use NLP::utilities;
+ use JSON;
+ $chinesePM = NLP::Chinese;
+ $romanizer = NLP::Romanizer;
+ $util = NLP::utilities;
+ %ht = ();
+ %pinyin_ht = ();
+ $lang_code = "";
+ $return_chart_p = 0;
+ $return_offset_mappings_p = 0;
+ $workset_p = 0;
+ $cache_rom_tokens_p = 1;
+
+ $script_data_filename = File::Spec->catfile($data_dir, "Scripts.txt");
+ $unicode_data_overwrite_filename = File::Spec->catfile($data_dir, "UnicodeDataOverwrite.txt");
+ $unicode_data_filename = File::Spec->catfile($data_dir, "UnicodeData.txt");
+ $romanization_table_filename = File::Spec->catfile($data_dir, "romanization-table.txt");
+ $chinese_tonal_pinyin_filename = File::Spec->catfile($data_dir, "Chinese_to_Pinyin.txt");
+
+ while (@ARGV) {
+ $arg = shift @ARGV;
+ if ($arg =~ /^-+(l|lc|lang-code)$/) {
+ $lang_code = lc (shift @ARGV || "")
+ } elsif ($arg =~ /^-+chart$/i) {
+ $return_chart_p = 1;
+ } elsif ($arg =~ /^-+workset$/i) {
+ $workset_p = 1;
+ } elsif ($arg =~ /^-+offset[-_]*map/i) {
+ $return_offset_mappings_p = 1;
+ } elsif ($arg =~ /^-+unicode[-_]?data/i) {
+ $filename = shift @ARGV;
+ if (-r $filename) {
+ $unicode_data_filename = $filename;
+ } else {
+ print STDERR "Ignoring invalid UnicodeData filename $filename\n";
+ }
+ } elsif ($arg =~ /^-+(no-tok-cach|no-cach)/i) {
+ $cache_rom_tokens_p = 0;
+ } else {
+ print STDERR "Ignoring unrecognized arg $arg\n";
+ }
+ }
+
+ $romanizer->load_script_data(*ht, $script_data_filename);
+ $romanizer->load_unicode_data(*ht, $unicode_data_filename);
+ $romanizer->load_unicode_overwrite_romanization(*ht, $unicode_data_overwrite_filename);
+ $romanizer->load_romanization_table(*ht, $romanization_table_filename);
+ $chinese_to_pinyin_not_yet_loaded_p = 1;
+ $current_date = $util->datetime("dateTtime");
+ $lang_code_clause = ($lang_code) ? " \"lang-code\":\"$lang_code\",\n" : "";
+
+ print "{\n \"romanizer\":\"uroman $version (Ulf Hermjakob, USC/ISI)\",\n \"date\":\"$current_date\",\n$lang_code_clause \"romanization\": [\n" if $return_chart_p;
+ my $line_number = 0;
+ my $chart_result = "";
+ while (<>) {
+ $line_number++;
+ my $line = $_;
+ my $snt_id = "";
+ if ($workset_p) {
+ next if $line =~ /^#/;
+ if (($i_value, $s_value) = ($line =~ /^(\S+\.\d+)\s(.*)$/)) {
+ $snt_id = $i_value;
+ $line = "$s_value\n";
+ } else {
+ next;
+ }
+ }
+ if ($chinese_to_pinyin_not_yet_loaded_p && $chinesePM->string_contains_utf8_cjk_unified_ideograph_p($line)) {
+ $chinesePM->read_chinese_tonal_pinyin_files(*pinyin_ht, $chinese_tonal_pinyin_filename);
+ $chinese_to_pinyin_not_yet_loaded_p = 0;
+ }
+ if ($return_chart_p) {
+ print $chart_result;
+ *chart_ht = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return chart", $line_number);
+ $chart_result = $romanizer->chart_to_json_romanization_elements(0, $chart_ht{N_CHARS}, *chart_ht, $line_number);
+ } elsif ($return_offset_mappings_p) {
+ ($best_romanization, $offset_mappings) = $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "return offset mappings", $line_number, 0);
+ print "::snt-id $snt_id\n" if $workset_p;
+ print "::orig $line";
+ print "::rom $best_romanization\n";
+ print "::align $offset_mappings\n\n";
+ } elsif ($cache_rom_tokens_p) {
+ print $romanizer->romanize_by_token_with_caching($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n";
+ } else {
+ print $romanizer->romanize($line, $lang_code, "", *ht, *pinyin_ht, 0, "", $line_number) . "\n";
+ }
+ }
+ $chart_result =~ s/,(\s*)$/$1/;
+ print $chart_result;
+ print " ]\n}\n" if $return_chart_p;
+
+ $dev_test_p = 0;
+ if ($dev_test_p) {
+ $n_suspicious_code_points = 0;
+ $n_instances = 0;
+ foreach $char_name (sort { hex($ht{UTF_NAME_TO_UNICODE}->{$a}) <=> hex($ht{UTF_NAME_TO_UNICODE}->{$b}) }
+ keys %{$ht{SUSPICIOUS_ROMANIZATION}}) {
+ $unicode_value = $ht{UTF_NAME_TO_UNICODE}->{$char_name};
+ $utf8_string = $ht{UTF_NAME_TO_CODE}->{$char_name};
+ foreach $romanization (sort keys %{$ht{SUSPICIOUS_ROMANIZATION}->{$char_name}}) {
+ $count = $ht{SUSPICIOUS_ROMANIZATION}->{$char_name}->{$romanization};
+ $s = ($count == 1) ? "" : "s";
+ print STDERR "*** Suspiciously lengthy romanization:\n" unless $n_suspicious_code_points;
+ print STDERR "::s $utf8_string ::t $romanization ::comment $char_name (U+$unicode_value)\n";
+ $n_suspicious_code_points++;
+ $n_instances += $count;
+ }
+ }
+ print STDERR " *** Total of $n_suspicious_code_points suspicious code points ($n_instances instance$s)\n" if $n_suspicious_code_points;
+ }
+
+ exit 0;
+
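In `--workset` mode, the loop above skips `#` comments and accepts only lines matching `/^(\S+\.\d+)\s(.*)$/`, i.e. a sentence id of the form `<name>.<number>` followed by whitespace and the text. A quick shape check of that format with `grep` (the ids and text here are illustrative):

```shell
# Keep only lines shaped "<id>.<n> <text>", as uroman.pl --workset does;
# the comment line and the malformed line fall through.
printf 'snt.1 hello world\n# a comment\nno-id-here\n' \
  | grep -cE '^[^[:space:]]+\.[0-9]+ '
# prints: 1
```

`grep -c` counts matching lines, so only `snt.1 hello world` survives the same filter the Perl regex applies.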
pipeline/utils/uroman/bin/uroman.py ADDED
The diff for this file is too large to render. See raw diff
 
pipeline/utils/uroman/data/Chinese_to_Pinyin.txt ADDED
The diff for this file is too large to render. See raw diff
 
pipeline/utils/uroman/data/NumProps.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
pipeline/utils/uroman/data/Scripts.txt ADDED
@@ -0,0 +1,174 @@
+ ::script-name Adlam
+ ::script-name Aegean
+ ::script-name Ahom
+ ::script-name Anatolian Hieroglyph
+ ::script-name Arabic ::direction right-to-left
+ ::script-name Arabic-Indic
+ ::script-name Armenian
+ ::script-name Avestan
+ ::script-name Balinese
+ ::script-name Bamum
+ ::script-name Bassa Vah
+ ::script-name Batak
+ ::script-name Bengali ::abugida-default-vowel a
+ ::script-name Bhaiksuki
+ ::script-name Bopomofo ::language Chinese
+ ::script-name Brahmi ::abugida-default-vowel a
+ ::script-name Braille
+ ::script-name Buginese
+ ::script-name Buhid
+ ::script-name Canadian Syllabics
+ ::script-name Carian
+ ::script-name Caucasian Albanian
+ ::script-name Chakma
+ ::script-name Cham
+ ::script-name Cherokee
+ ::script-name Chorasmian
+ ::script-name Coptic
+ ::script-name Cuneiform
+ ::script-name Cypro-Minoan
+ ::script-name Cypriot
+ ::script-name Cyrillic
+ ::script-name CJK ::alt-script-name Chinese, Kanji ::language Chinese, Japanese, Korean, Mandarin
+ ::script-name Deseret
+ ::script-name Devanagari ::abugida-default-vowel a
+ ::script-name Dives Akuru
+ ::script-name Dogra
+ ::script-name Duployan
+ ::script-name Egyptian Hieroglyph ::alt-script-name Egyptian
+ ::script-name Elbasan
+ ::script-name Elymaic
+ ::script-name Ethiopic
+ ::script-name Extended Arabic-Indic
+ ::script-name Georgian
+ ::script-name Glagolitic
+ ::script-name Gothic
+ ::script-name Grantha
+ ::script-name Greek
+ ::script-name Greek Acrophonic
+ ::script-name Gujarati ::abugida-default-vowel a
+ ::script-name Gunjala Gondi
+ ::script-name Gurmukhi ::abugida-default-vowel a
+ ::script-name Hangul ::language Korean
+ ::script-name Hangzhou
+ ::script-name Hanifi Rohingya
+ ::script-name Hanunoo
+ ::script-name Hatran
+ ::script-name Hebrew ::direction right-to-left
+ ::script-name Hiragana ::language Japanese
+ ::script-name Indic Siyaq
+ ::script-name Imperial Aramaic
+ ::script-name Inscriptional Pahlavi
+ ::script-name Inscriptional Parthian
+ ::script-name Javanese
+ ::script-name Kaithi
+ ::script-name Kannada ::abugida-default-vowel a
+ ::script-name Katakana ::language Japanese
+ ::script-name Kawi
+ ::script-name Kayah Li
+ ::script-name Kharoshthi
+ ::script-name Khitan Small Script
+ ::script-name Khmer ::abugida-default-vowel a, o
+ ::script-name Khojki
+ ::script-name Khudawadi
+ ::script-name Klingon
+ ::script-name Lao
+ ::script-name Lepcha
+ ::script-name Latin
+ ::script-name Limbu
+ ::script-name Linear A
+ ::script-name Linear B
+ ::script-name Lisu
+ ::script-name Lycian
+ ::script-name Lydian
+ ::script-name Mahajani
+ ::script-name Makasar
+ ::script-name Malayalam ::abugida-default-vowel a
+ ::script-name Mandaic
+ ::script-name Manichaean
+ ::script-name Marchen
+ ::script-name Masaram Gondi
+ ::script-name Mayan
+ ::script-name Medefaidrin
+ ::script-name Meetei Mayek
+ ::script-name Mende Kikakui
+ ::script-name Meroitic Cursive
+ ::script-name Meroitic Hieroglyphic
+ ::script-name Miao
+ ::script-name Modi ::abugida-default-vowel a
+ ::script-name Mongolian
+ ::script-name Mro
+ ::script-name Multani
+ ::script-name Myanmar ::alt-script-name Burmese ::abugida-default-vowel a
+ ::script-name Nabataean
+ ::script-name Nag Mundari
+ ::script-name Nandinagari
+ ::script-name New Tai Lue
+ ::script-name Newa
+ ::script-name Nko ::direction right-to-left
+ ::script-name North Indic
+ ::script-name Nushu
+ ::script-name Nyiakeng Puachue Hmong
+ ::script-name Ogham
+ ::script-name Ol Chiki
+ ::script-name Old Hungarian
+ ::script-name Old Italic
+ ::script-name Old Permic
+ ::script-name Old Persian
+ ::script-name Old North Arabian
+ ::script-name Old Sogdian
+ ::script-name Old South Arabian
+ ::script-name Old Turkic
+ ::script-name Old Uyghur
+ ::script-name Oriya ::alt-script-name Odia ::abugida-default-vowel a
+ ::script-name Osage
+ ::script-name Osmanya
+ ::script-name Ottoman Siyaq
+ ::script-name Pahawh Hmong
+ ::script-name Palmyrene
+ ::script-name Pau Cin Hau
+ ::script-name Phags-Pa
+ ::script-name Phaistos Disc
+ ::script-name Phoenician
+ ::script-name Psalter Pahlavi
+ ::script-name Rejang
+ ::script-name Rumi
+ ::script-name Runic
+ ::script-name Samaritan
+ ::script-name Saurashtra
+ ::script-name Sharada
+ ::script-name Shavian
+ ::script-name Siddham
+ ::script-name SignWriting
+ ::script-name Sinhala ::abugida-default-vowel a
+ ::script-name Sogdian
+ ::script-name Sora Sompeng
+ ::script-name Soyombo
+ ::script-name Sundanese ::abugida-default-vowel a
+ ::script-name Syloti Nagri
+ ::script-name Syriac
+ ::script-name Tagalog
+ ::script-name Tagbanwa
+ ::script-name Tai Le
+ ::script-name Tai Tham
+ ::script-name Tai Viet
+ ::script-name Takri
+ ::script-name Tamil ::abugida-default-vowel a
+ ::script-name Tangsa
+ ::script-name Tangut
+ ::script-name Telugu ::abugida-default-vowel a
+ ::script-name Thaana ::direction right-to-left
+ ::script-name Thai
+ ::script-name Tibetan ::abugida-default-vowel a
+ ::script-name Tifinagh
+ ::script-name Tirhuta
+ ::script-name Toto
+ ::script-name Ugaritic
+ ::script-name Vai
+ ::script-name Vedic
+ ::script-name Vithkuqi
+ ::script-name Wancho
+ ::script-name Warang Citi
+ ::script-name Yezidi
+ ::script-name Yi
+ ::script-name Zanabazar Square
pipeline/utils/uroman/data/UnicodeData.txt ADDED
The diff for this file is too large to render. See raw diff