drewThomasson committed
Commit
2732f79
1 Parent(s): ca3c2f4

Upload 22 files

Notebooks/Kaggel Archive Code/4.wav ADDED
Binary file (543 kB)
 
Notebooks/Kaggel Archive Code/LICENSE ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2024 Drew Thomasson

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Notebooks/Kaggel Archive Code/README.md ADDED
@@ -0,0 +1,118 @@
# This is a sample for running on Kaggle, and it may not be updated frequently

# ebook2audiobook Kaggle edition
Generates an audiobook with chapters and eBook metadata using Calibre and XTTS from Coqui TTS, with optional voice cloning and support for multiple languages.

# Import this notebook into Kaggle
https://github.com/Rihcus/ebook2audiobookXTTS/blob/main/kaggle-ebook2audiobook-demo.ipynb

## Features

- Converts eBooks to text format using Calibre's `ebook-convert` tool.
- Splits the eBook into chapters for structured audio conversion.
- Uses XTTS from Coqui TTS for high-quality text-to-speech conversion.
- Optional voice cloning feature using a provided voice file.
- Supports different languages for text-to-speech conversion, with English as the default.
- Confirmed to run with only 4 GB of RAM.

## Requirements

- Python 3.x
- `coqui-tts` Python package
- Calibre (for eBook conversion)
- FFmpeg (for audiobook file creation)
- Optional: Custom voice file for voice cloning

### Installation Instructions for Dependencies

Install Python 3.x from [Python.org](https://www.python.org/downloads/).

Install Calibre:
- Ubuntu: `sudo apt-get install -y calibre`
- macOS: `brew install calibre`
- Windows (PowerShell in Administrator mode): `choco install calibre`

Install FFmpeg:
- Ubuntu: `sudo apt-get install -y ffmpeg`
- macOS: `brew install ffmpeg`
- Windows (PowerShell in Administrator mode): `choco install ffmpeg`

Install MeCab (optional, for TTS support in non-Latin-script languages):
- Ubuntu: `sudo apt-get install -y mecab libmecab-dev mecab-ipadic-utf8`
- macOS: `brew install mecab`, `brew install mecab-ipadic`
- Windows (PowerShell in Administrator mode; there is no easy install of mecab-ipadic on Windows, so no Japanese support there): `choco install mecab`

Install Python packages:
```bash
pip install tts pydub nltk beautifulsoup4 ebooklib tqdm
```
(Optional, for TTS support in non-Latin-script languages:)
```bash
pip install mecab mecab-python3 unidic
python -m unidic download
```

### Supported Languages

The script supports the following languages for text-to-speech conversion:

English (en),
Spanish (es),
French (fr),
German (de),
Italian (it),
Portuguese (pt),
Polish (pl),
Turkish (tr),
Russian (ru),
Dutch (nl),
Czech (cs),
Arabic (ar),
Chinese (zh-cn),
Japanese (ja),
Hungarian (hu),
Korean (ko)

Specify the language code when running the script to use these languages.

### Usage

Navigate to the script's directory in the terminal and execute the following command.
If you have any trouble getting it to run on Windows, it should run fine in WSL2.

Basic usage (all parameters are mandatory when calling the script):

```bash
python ebook2audiobook.py <path_to_ebook_file> [path_to_voice_file] [language_code]
```
Replace `<path_to_ebook_file>` with the path to your eBook file, include `[path_to_voice_file]` for voice cloning, and include `[language_code]` to specify the language, e.g. `python ebook2audiobook.py mybook.epub my_voice.wav en`.

## Demo

https://github.com/DrewThomasson/ebook2audiobookXTTS/assets/126999465/bccd7240-f967-4d27-a87d-445034db7d21

### Supported eBook File Types:
.epub, .pdf, .mobi, .txt, .html, .rtf, .chm, .lit, .pdb, .fb2, .odt, .cbr, .cbz, .prc, .lrf, .pml, .snb, .cbc, .rb, and .tcr
(Best results come from using .epub or .mobi, which allow automatic chapter detection.)

### Outputs an .m4b with all book metadata and chapters; example output file in an audiobook player app:
![Example_of_output_in_audiobook_program](https://github.com/DrewThomasson/VoxNovel/blob/dc5197dff97252fa44c391dc0596902d71278a88/readme_files/example_in_app.jpeg)

A special thanks to the creators of:

- Coqui TTS: https://github.com/coqui-ai/TTS

- Calibre: https://calibre-ebook.com
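
The `Basic usage` command above can also be driven from Python, for example to batch-convert several books; this is a minimal sketch, and the book and voice file names here are made up for illustration:

```python
# Illustrative batch driver for the CLI documented above (file names are hypothetical).
import subprocess
import sys

for book in ["book_one.epub", "book_two.mobi"]:
    subprocess.run(
        [sys.executable, "ebook2audiobook.py", book, "my_voice.wav", "en"],
        check=True,  # stop the batch if a conversion fails
    )
```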
Notebooks/Kaggel Archive Code/Worker_2T4.sh ADDED
@@ -0,0 +1,59 @@
#!/bin/bash

workers=$1

# Clean up operator directory
rm -rf "./Operator"
rm -rf "./Chapter_wav_files"
mkdir "./Operator"
mkdir "./Chapter_wav_files"


# Make appropriate temp directories
for i in $(seq 1 $workers); do
    mkdir "./Operator/$i"
    mkdir "./Operator/$i/temp"
    mkdir "./Operator/$i/temp_ebook"
done

echo "Created $workers directories"

# Divide the chapters round-robin between the workers
share=1
for FILE in ./Working_files/temp_ebook/*; do
    cp "$FILE" "./Operator/$share/temp_ebook/"
    if [ $share -lt $workers ];
    then
        share=$((share+1))
    else
        share=1
    fi
done

echo "Split chapters into operator"

# Run audio generation
#for i in $(seq 1 $workers); do
#    echo "Starting Worker $i"
#    python p2a_worker.py $i &
#done

gpu=1
for i in $(seq 1 $workers); do
    if [ $gpu -lt 2 ];
    then
        echo "Starting Worker $i on GPU 1"
        python p2a_worker_gpu1.py $i & # Run audio generation on GPU 1 (T4)
        gpu=2 # switch to GPU 2 on the next loop
    else
        echo "Starting Worker $i on GPU 2"
        python p2a_worker_gpu2.py $i & # Run audio generation on GPU 2 (T4)
        gpu=1 # switch to GPU 1 on the next loop
    fi
done



echo "All workers started, waiting for completion..."
wait
echo "Done!"
Notebooks/Kaggel Archive Code/default_voice.wav ADDED
Binary file (291 kB)
 
Notebooks/Kaggel Archive Code/demo_mini_story_chapters_Drew.epub ADDED
Binary file (415 kB)
 
Notebooks/Kaggel Archive Code/ebook2audiobook.py ADDED
@@ -0,0 +1,462 @@
print("starting...")

import os
import shutil
import subprocess
import re
import tempfile
from pydub import AudioSegment
import nltk
from nltk.tokenize import sent_tokenize
nltk.download('punkt')  # Make sure to download the necessary models

def is_folder_empty(folder_path):
    if os.path.exists(folder_path) and os.path.isdir(folder_path):
        # List directory contents
        if not os.listdir(folder_path):
            return True  # The folder is empty
        else:
            return False  # The folder is not empty
    else:
        print(f"The path {folder_path} is not a valid folder.")
        return None  # The path is not a valid folder

def remove_folder_with_contents(folder_path):
    try:
        shutil.rmtree(folder_path)
        print(f"Successfully removed {folder_path} and all of its contents.")
    except Exception as e:
        print(f"Error removing {folder_path}: {e}")


def wipe_folder(folder_path):
    # Check if the folder exists
    if not os.path.exists(folder_path):
        print(f"The folder {folder_path} does not exist.")
        return

    # Iterate over all the items in the given folder
    for item in os.listdir(folder_path):
        item_path = os.path.join(folder_path, item)
        # If it's a file, remove it and print a message
        if os.path.isfile(item_path):
            os.remove(item_path)
            print(f"Removed file: {item_path}")
        # If it's a directory, remove it recursively and print a message
        elif os.path.isdir(item_path):
            shutil.rmtree(item_path)
            print(f"Removed directory and its contents: {item_path}")

    print(f"All contents wiped from {folder_path}.")


# Example usage
# folder_to_wipe = 'path_to_your_folder'
# wipe_folder(folder_to_wipe)


def create_m4b_from_chapters(input_dir, ebook_file, output_dir):
    # Function to sort chapters based on their numeric order
    def sort_key(chapter_file):
        numbers = re.findall(r'\d+', chapter_file)
        return int(numbers[0]) if numbers else 0

    # Extract metadata and cover image from the eBook file
    def extract_metadata_and_cover(ebook_path):
        try:
            cover_path = ebook_path.rsplit('.', 1)[0] + '.jpg'
            subprocess.run(['ebook-meta', ebook_path, '--get-cover', cover_path], check=True)
            if os.path.exists(cover_path):
                return cover_path
        except Exception as e:
            print(f"Error extracting eBook metadata or cover: {e}")
        return None

    # Combine WAV files into a single file
    def combine_wav_files(chapter_files, output_path):
        # Initialize an empty audio segment
        combined_audio = AudioSegment.empty()

        # Sequentially append each file to the combined_audio
        for chapter_file in chapter_files:
            audio_segment = AudioSegment.from_wav(chapter_file)
            combined_audio += audio_segment
        # Export the combined audio to the output file path
        combined_audio.export(output_path, format='wav')
        print(f"Combined audio saved to {output_path}")

    # Function to generate metadata for M4B chapters
    def generate_ffmpeg_metadata(chapter_files, metadata_file):
        with open(metadata_file, 'w') as file:
            file.write(';FFMETADATA1\n')
            start_time = 0
            for index, chapter_file in enumerate(chapter_files):
                duration_ms = len(AudioSegment.from_wav(chapter_file))
                file.write(f'[CHAPTER]\nTIMEBASE=1/1000\nSTART={start_time}\n')
                file.write(f'END={start_time + duration_ms}\ntitle=Chapter {index + 1}\n')
                start_time += duration_ms

    # Generate the final M4B file using ffmpeg
    def create_m4b(combined_wav, metadata_file, cover_image, output_m4b):
        # Ensure the output directory exists
        os.makedirs(os.path.dirname(output_m4b), exist_ok=True)

        ffmpeg_cmd = ['ffmpeg', '-i', combined_wav, '-i', metadata_file]
        if cover_image:
            ffmpeg_cmd += ['-i', cover_image, '-map', '0:a', '-map', '2:v']
        else:
            ffmpeg_cmd += ['-map', '0:a']

        ffmpeg_cmd += ['-map_metadata', '1', '-c:a', 'aac', '-b:a', '192k']
        if cover_image:
            ffmpeg_cmd += ['-c:v', 'png', '-disposition:v', 'attached_pic']
        ffmpeg_cmd += [output_m4b]

        subprocess.run(ffmpeg_cmd, check=True)


    # Main logic
    chapter_files = sorted([os.path.join(input_dir, f) for f in os.listdir(input_dir) if f.endswith('.wav')], key=sort_key)
    temp_dir = tempfile.gettempdir()
    temp_combined_wav = os.path.join(temp_dir, 'combined.wav')
    metadata_file = os.path.join(temp_dir, 'metadata.txt')
    cover_image = extract_metadata_and_cover(ebook_file)
    output_m4b = os.path.join(output_dir, os.path.splitext(os.path.basename(ebook_file))[0] + '.m4b')

    combine_wav_files(chapter_files, temp_combined_wav)
    generate_ffmpeg_metadata(chapter_files, metadata_file)
    create_m4b(temp_combined_wav, metadata_file, cover_image, output_m4b)

    # Cleanup
    if os.path.exists(temp_combined_wav):
        os.remove(temp_combined_wav)
    if os.path.exists(metadata_file):
        os.remove(metadata_file)
    if cover_image and os.path.exists(cover_image):
        os.remove(cover_image)

# Example usage
# create_m4b_from_chapters('path_to_chapter_wavs', 'path_to_ebook_file', 'path_to_output_dir')
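
# For reference, with a cover image the command assembled by create_m4b above
# expands to roughly the following (file names illustrative):
#   ffmpeg -i combined.wav -i metadata.txt -i cover.jpg -map 0:a -map 2:v \
#          -map_metadata 1 -c:a aac -b:a 192k -c:v png -disposition:v attached_pic output.m4b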



# This block isn't the text-extraction code itself; it runs beforehand so that
# Calibre can be used to create the chapter-labeled book (some systems can't
# seem to manage that, so it's kept here just in case). The text-extraction
# code using BookNLP comes after this.
import ebooklib
from ebooklib import epub
from bs4 import BeautifulSoup
import csv

# Only run the main script if Value is True
def create_chapter_labeled_book(ebook_file_path):
    # Function to ensure the existence of a directory
    def ensure_directory(directory_path):
        if not os.path.exists(directory_path):
            os.makedirs(directory_path)
            print(f"Created directory: {directory_path}")

    ensure_directory(os.path.join(".", 'Working_files', 'Book'))

    def convert_to_epub(input_path, output_path):
        # Convert the ebook to EPUB format using Calibre's ebook-convert
        try:
            subprocess.run(['ebook-convert', input_path, output_path], check=True)
        except subprocess.CalledProcessError as e:
            print(f"An error occurred while converting the eBook: {e}")
            return False
        return True

    def save_chapters_as_text(epub_path):
        # Create the directory if it doesn't exist
        directory = os.path.join(".", "Working_files", "temp_ebook")
        ensure_directory(directory)

        # Open the EPUB file
        book = epub.read_epub(epub_path)

        previous_chapter_text = ''
        previous_filename = ''
        chapter_counter = 0

        # Iterate through the items in the EPUB file
        for item in book.get_items():
            if item.get_type() == ebooklib.ITEM_DOCUMENT:
                # Use BeautifulSoup to parse HTML content
                soup = BeautifulSoup(item.get_content(), 'html.parser')
                text = soup.get_text()

                # Check if the text is not empty
                if text.strip():
                    if len(text) < 2300 and previous_filename:
                        # Append text to the previous chapter if it's short
                        with open(previous_filename, 'a', encoding='utf-8') as file:
                            file.write('\n' + text)
                    else:
                        # Create a new chapter file and increment the counter
                        previous_filename = os.path.join(directory, f"chapter_{chapter_counter}.txt")
                        chapter_counter += 1
                        with open(previous_filename, 'w', encoding='utf-8') as file:
                            file.write(text)
                            print(f"Saved chapter: {previous_filename}")

    # Example usage
    input_ebook = ebook_file_path  # Replace with your eBook file path
    output_epub = os.path.join(".", "Working_files", "temp.epub")

    if os.path.exists(output_epub):
        os.remove(output_epub)
        print(f"File {output_epub} has been removed.")
    else:
        print(f"The file {output_epub} does not exist.")

    if convert_to_epub(input_ebook, output_epub):
        save_chapters_as_text(output_epub)

    # Download the necessary NLTK data (if not already present)
    nltk.download('punkt')

    def process_chapter_files(folder_path, output_csv):
        with open(output_csv, 'w', newline='', encoding='utf-8') as csvfile:
            writer = csv.writer(csvfile)
            # Write the header row
            writer.writerow(['Text', 'Start Location', 'End Location', 'Is Quote', 'Speaker', 'Chapter'])

            # Process each chapter file
            chapter_files = sorted(os.listdir(folder_path), key=lambda x: int(x.split('_')[1].split('.')[0]))
            for filename in chapter_files:
                if filename.startswith('chapter_') and filename.endswith('.txt'):
                    chapter_number = int(filename.split('_')[1].split('.')[0])
                    file_path = os.path.join(folder_path, filename)

                    try:
                        with open(file_path, 'r', encoding='utf-8') as file:
                            text = file.read()
                            # Insert "NEWCHAPTERABC" at the beginning of each chapter's text
                            if text:
                                text = "NEWCHAPTERABC" + text
                            sentences = nltk.tokenize.sent_tokenize(text)
                            for sentence in sentences:
                                start_location = text.find(sentence)
                                end_location = start_location + len(sentence)
                                writer.writerow([sentence, start_location, end_location, 'True', 'Narrator', chapter_number])
                    except Exception as e:
                        print(f"Error processing file {filename}: {e}")

    # Example usage
    folder_path = os.path.join(".", "Working_files", "temp_ebook")
    output_csv = os.path.join(".", "Working_files", "Book", "Other_book.csv")

    process_chapter_files(folder_path, output_csv)

    def sort_key(filename):
        """Extract chapter number for sorting."""
        match = re.search(r'chapter_(\d+)\.txt', filename)
        return int(match.group(1)) if match else 0

    def combine_chapters(input_folder, output_file):
        # Create the output folder if it doesn't exist
        os.makedirs(os.path.dirname(output_file), exist_ok=True)

        # List all txt files and sort them by chapter number
        files = [f for f in os.listdir(input_folder) if f.endswith('.txt')]
        sorted_files = sorted(files, key=sort_key)

        with open(output_file, 'w', encoding='utf-8') as outfile:  # Specify UTF-8 encoding here
            for i, filename in enumerate(sorted_files):
                with open(os.path.join(input_folder, filename), 'r', encoding='utf-8') as infile:  # And here
                    outfile.write(infile.read())
                # Add the marker unless it's the last file
                if i < len(sorted_files) - 1:
                    outfile.write("\nNEWCHAPTERABC\n")

    # Paths
    input_folder = os.path.join(".", 'Working_files', 'temp_ebook')
    output_file = os.path.join(".", 'Working_files', 'Book', 'Chapter_Book.txt')


    # Combine the chapters
    combine_chapters(input_folder, output_file)

    ensure_directory(os.path.join(".", "Working_files", "Book"))


# create_chapter_labeled_book()



import sys
import torchaudio

# Check if Calibre's ebook-convert tool is installed
def calibre_installed():
    try:
        subprocess.run(['ebook-convert', '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        return True
    except FileNotFoundError:
        print("Calibre is not installed. Please install Calibre for this functionality.")
        return False


import torch
from TTS.api import TTS

default_target_voice_path = "default_voice.wav"  # Ensure this is a valid path
default_language_code = "en"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def combine_wav_files(input_directory, output_directory, file_name):
    # Ensure that the output directory exists, create it if necessary
    os.makedirs(output_directory, exist_ok=True)

    # Specify the output file path
    output_file_path = os.path.join(output_directory, file_name)

    # Initialize an empty audio segment
    combined_audio = AudioSegment.empty()

    # Get a list of all .wav files in the specified input directory and sort them
    input_file_paths = sorted(
        [os.path.join(input_directory, f) for f in os.listdir(input_directory) if f.endswith(".wav")],
        key=lambda f: int(''.join(filter(str.isdigit, f)))
    )

    # Sequentially append each file to the combined_audio
    for input_file_path in input_file_paths:
        audio_segment = AudioSegment.from_wav(input_file_path)
        combined_audio += audio_segment

    # Export the combined audio to the output file path
    combined_audio.export(output_file_path, format='wav')

    print(f"Combined audio saved to {output_file_path}")

# Function to split long strings into parts
def split_long_sentence(sentence, max_length=249, max_pauses=10):
    """
    Splits a sentence into parts based on length or number of pauses without recursion.

    :param sentence: The sentence to split.
    :param max_length: Maximum allowed length of a sentence.
    :param max_pauses: Maximum allowed number of pauses in a sentence.
    :return: A list of sentence parts that meet the criteria.
    """
    parts = []
    while len(sentence) > max_length or sentence.count(',') + sentence.count(';') + sentence.count('.') > max_pauses:
        possible_splits = [i for i, char in enumerate(sentence) if char in ',;.' and i < max_length]
        if possible_splits:
            # Find the best place to split the sentence, preferring the last possible split to keep parts longer
            split_at = possible_splits[-1] + 1
        else:
            # If there is no punctuation to split on within max_length, split at max_length
            split_at = max_length

        # Split the sentence and add the first part to the list
        parts.append(sentence[:split_at].strip())
        sentence = sentence[split_at:].strip()

    # Add the remaining part of the sentence
    parts.append(sentence)
    return parts

"""
if 'tts' not in locals():
    tts = TTS(selected_tts_model, progress_bar=True).to(device)
"""
from tqdm import tqdm

# Convert chapters to audio using XTTS
def convert_chapters_to_audio(chapters_dir, output_audio_dir, target_voice_path=None, language=None):
    selected_tts_model = "tts_models/multilingual/multi-dataset/xtts_v2"
    tts = TTS(selected_tts_model, progress_bar=False).to(device)  # Set progress_bar to False to avoid nested progress bars

    if not os.path.exists(output_audio_dir):
        os.makedirs(output_audio_dir)

    for chapter_file in sorted(os.listdir(chapters_dir)):
        if chapter_file.endswith('.txt'):
            # Extract the chapter number from the filename
            match = re.search(r"chapter_(\d+)\.txt", chapter_file)
            if match:
                chapter_num = int(match.group(1))
            else:
                print(f"Skipping file {chapter_file} as it does not match the expected format.")
                continue

            chapter_path = os.path.join(chapters_dir, chapter_file)
            output_file_name = f"audio_chapter_{chapter_num}.wav"
            output_file_path = os.path.join(output_audio_dir, output_file_name)
            temp_audio_directory = os.path.join(".", "Working_files", "temp")
            os.makedirs(temp_audio_directory, exist_ok=True)
            temp_count = 0

            with open(chapter_path, 'r', encoding='utf-8') as file:
                chapter_text = file.read()
                # Use the specified language model for sentence tokenization
                sentences = sent_tokenize(chapter_text, language='italian' if language == 'it' else 'english')
                for sentence in tqdm(sentences, desc=f"Chapter {chapter_num}"):
                    fragments = []
                    if language == "en":
                        fragments = split_long_sentence(sentence, max_length=249, max_pauses=10)
                    if language == "it":
                        fragments = split_long_sentence(sentence, max_length=213, max_pauses=10)
                    for fragment in fragments:
                        if fragment != "":  # a hotfix to avoid blank fragments
                            print(f"Generating fragment: {fragment}...")
                            fragment_file_path = os.path.join(temp_audio_directory, f"{temp_count}.wav")
                            speaker_wav_path = target_voice_path if target_voice_path else default_target_voice_path
                            language_code = language if language else default_language_code
                            tts.tts_to_file(text=fragment, file_path=fragment_file_path, speaker_wav=speaker_wav_path, language=language_code)
                            temp_count += 1

            combine_wav_files(temp_audio_directory, output_audio_dir, output_file_name)
            wipe_folder(temp_audio_directory)
            print(f"Converted chapter {chapter_num} to audio.")



# Main execution flow
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python ebook2audiobook.py <ebook_file_path> [target_voice_file_path] [language_code]")
        sys.exit(1)

    ebook_file_path = sys.argv[1]
    target_voice = sys.argv[2] if len(sys.argv) > 2 else None
    language = sys.argv[3] if len(sys.argv) > 3 else None

    if not calibre_installed():
        sys.exit(1)

    working_files = os.path.join(".", "Working_files", "temp_ebook")
    full_folder_working_files = os.path.join(".", "Working_files")
    chapters_directory = os.path.join(".", "Working_files", "temp_ebook")
    output_audio_directory = os.path.join(".", 'Chapter_wav_files')

    print("Wiping and removing Working_files folder...")
    remove_folder_with_contents(full_folder_working_files)

    print("Wiping and removing chapter_wav_files folder...")
    remove_folder_with_contents(output_audio_directory)

    create_chapter_labeled_book(ebook_file_path)
    audiobook_output_path = os.path.join(".", "Audiobooks")
    print(f"{chapters_directory}||||{output_audio_directory}|||||{target_voice}")
    convert_chapters_to_audio(chapters_directory, output_audio_directory, target_voice, language)
    create_m4b_from_chapters(output_audio_directory, ebook_file_path, audiobook_output_path)
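
To make the fragment-splitting behavior concrete, here is a small demo of `split_long_sentence` from the script above, run with tightened limits so the effect is visible; the sample sentence and the limits are made up for illustration:

```python
# Demo of split_long_sentence (assumes the function above is in scope).
# With max_length=20 and max_pauses=1, the sentence is cut at the last
# ',', ';' or '.' that falls before index 20, keeping each part as long as possible.
sentence = "First clause, second clause; third clause, and a long tail."
for part in split_long_sentence(sentence, max_length=20, max_pauses=1):
    print(repr(part))
# Expected output:
# 'First clause,'
# 'second clause;'
# 'third clause,'
# 'and a long tail.'
```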
Notebooks/Kaggel Archive Code/kaggle-ebook2audiobook-demo.ipynb ADDED
@@ -0,0 +1 @@
{"metadata":{"kernelspec":{"language":"python","display_name":"Python 3","name":"python3"},"language_info":{"name":"python","version":"3.10.13","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"},"kaggle":{"accelerator":"nvidiaTeslaT4","dataSources":[],"dockerImageVersionId":30733,"isInternetEnabled":true,"language":"python","sourceType":"notebook","isGpuEnabled":true}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"markdown","source":"Install dependencies","metadata":{}},{"cell_type":"code","source":"#!DEBIAN_FRONTEND=noninteractive\n!sudo apt-get update # && sudo apt-get -y upgrade\n!sudo apt-get -y install libegl1 \n!sudo apt-get -y install libopengl0\n!sudo apt-get -y install libxcb-cursor0\n!sudo -v && wget -nv -O- https://download.calibre-ebook.com/linux-installer.sh | sudo sh /dev/stdin\n!sudo apt-get install -y ffmpeg\n!pip install tts pydub nltk beautifulsoup4 ebooklib tqdm\n!pip install numpy==1.26.4","metadata":{"_uuid":"8f2839f25d086af736a60e9eeb907d3b93b6e0e5","_cell_guid":"b1076dfc-b9ad-4769-8c92-a6c4dae69d19","execution":{"iopub.status.busy":"2024-06-17T21:17:43.474429Z","iopub.execute_input":"2024-06-17T21:17:43.474679Z","iopub.status.idle":"2024-06-17T21:20:20.992799Z","shell.execute_reply.started":"2024-06-17T21:17:43.474655Z","shell.execute_reply":"2024-06-17T21:20:20.991791Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"Download the modified ebook2audiobookXTTS\nhttps://github.com/Rihcus/ebook2audiobookXTTS\n\nOriginal unmodified version\nhttps://github.com/DrewThomasson/ebook2audiobookXTTS","metadata":{}},{"cell_type":"code","source":"!git clone https://github.com/Rihcus/ebook2audiobookXTTS","metadata":{"execution":{"iopub.status.busy":"2024-03-25T23:22:24.156772Z","iopub.execute_input":"2024-03-25T23:22:24.157618Z","iopub.status.idle":"2024-03-25T23:22:26.202486Z","shell.execute_reply.started":"2024-03-25T23:22:24.157577Z","shell.execute_reply":"2024-03-25T23:22:26.201179Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"(Optional) Uploading your own epub book.\n\nBy default this notebook will use a sample epub book for testing/demo.\n\nIf you want to use your own book you will need to create a private Kaggle dataset, upload your epub to it, attach it to this notebook, uncomment the two lines of code below, and update the dataset path","metadata":{}},{"cell_type":"code","source":"# !cp -r /kaggle/input/<name of your attached dataset>/*.epub /kaggle/working/ebook2audiobookXTTS #copy your custom book\n# !rm /kaggle/working/ebook2audiobookXTTS/demo_mini_story_chapters_Drew.epub #remove default sample book","metadata":{},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"This installs the xtts_v2 model","metadata":{}},{"cell_type":"code","source":"import os\nos.environ[\"COQUI_TOS_AGREED\"] = \"1\"\n\n!cd /kaggle/working/ebook2audiobookXTTS && tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 --text \"test\" --speaker_wav ./4.wav --language_idx en --use_cuda true","metadata":{"execution":{"iopub.status.busy":"2024-03-25T23:23:15.626677Z","iopub.execute_input":"2024-03-25T23:23:15.627585Z","iopub.status.idle":"2024-03-25T23:27:40.712856Z","shell.execute_reply.started":"2024-03-25T23:23:15.627548Z","shell.execute_reply":"2024-03-25T23:27:40.711852Z"},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"This is a modified version of ebook2audiobookXTTS.\n\n- p1.py only runs the first part of ebook2audiobookXTTS and generates the chapter txts (the other parts are commented out)\n - https://github.com/Rihcus/ebook2audiobookXTTS/blob/main/p1.py\n- Worker_2T4.sh is a basic attempt at multi-GPU support; the requested number of ebook2audiobook worker processes (4 here) are run in parallel\n - Worker_2T4 will try to divide the chapters into even groups based on the number of workers (e.g. 4 groups for 4 workers)\n - It will try to divvy up the work between Kaggle's two T4 GPUs\n - I'm not sure how much of a difference it makes, given Kaggle's CPU limitations\n \nhttps://github.com/Rihcus/ebook2audiobookXTTS/blob/main/Worker_2T4.sh\n\nhttps://github.com/Rihcus/ebook2audiobookXTTS/blob/main/p2a_worker_gpu1.py\n\nhttps://github.com/Rihcus/ebook2audiobookXTTS/blob/main/p2a_worker_gpu2.py","metadata":{}},{"cell_type":"code","source":"!cd /kaggle/working/ebook2audiobookXTTS && python p1.py \"$(ls ./*.epub)\" \"4.wav\" \"en\"\n!cd /kaggle/working/ebook2audiobookXTTS && bash Worker_2T4.sh 4","metadata":{},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"p3.py runs the final ffmpeg command (ffmpeg has been a bit buggy)\nhttps://github.com/Rihcus/ebook2audiobookXTTS/blob/main/p3.py","metadata":{}},{"cell_type":"code","source":"!cd /kaggle/working/ebook2audiobookXTTS && python p3.py \"$(ls ./*.epub)\" \"4.wav\" \"en\"","metadata":{},"execution_count":null,"outputs":[]}]}
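
The two worker scripts the notebook launches differ mainly in which T4 they target. A minimal sketch of pinning the XTTS model to a specific card is shown below; the wrapper function itself is illustrative (only the model name and the `TTS(...).to(device)` pattern come from this commit):

```python
# Illustrative sketch: pin the XTTS model to one of Kaggle's two T4s.
# gpu_index 0 -> "cuda:0" (worker 1), gpu_index 1 -> "cuda:1" (worker 2).
import torch
from TTS.api import TTS

def load_xtts_on_gpu(gpu_index: int) -> TTS:
    device = torch.device(f"cuda:{gpu_index}" if torch.cuda.is_available() else "cpu")
    return TTS("tts_models/multilingual/multi-dataset/xtts_v2",
               progress_bar=False).to(device)
```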
Notebooks/Kaggel Archive Code/p1.py ADDED
@@ -0,0 +1,462 @@
print("starting...")

import os
import shutil
import subprocess
import re
import tempfile
from pydub import AudioSegment
import nltk
from nltk.tokenize import sent_tokenize
nltk.download('punkt')  # Make sure to download the necessary models

def is_folder_empty(folder_path):
    if os.path.exists(folder_path) and os.path.isdir(folder_path):
        # List directory contents
        if not os.listdir(folder_path):
            return True  # The folder is empty
        else:
            return False  # The folder is not empty
    else:
        print(f"The path {folder_path} is not a valid folder.")
        return None  # The path is not a valid folder

def remove_folder_with_contents(folder_path):
    try:
        shutil.rmtree(folder_path)
        print(f"Successfully removed {folder_path} and all of its contents.")
    except Exception as e:
        print(f"Error removing {folder_path}: {e}")


def wipe_folder(folder_path):
    # Check if the folder exists
    if not os.path.exists(folder_path):
        print(f"The folder {folder_path} does not exist.")
        return

    # Iterate over all the items in the given folder
    for item in os.listdir(folder_path):
        item_path = os.path.join(folder_path, item)
        # If it's a file, remove it and print a message
        if os.path.isfile(item_path):
            os.remove(item_path)
            print(f"Removed file: {item_path}")
        # If it's a directory, remove it recursively and print a message
        elif os.path.isdir(item_path):
            shutil.rmtree(item_path)
            print(f"Removed directory and its contents: {item_path}")

    print(f"All contents wiped from {folder_path}.")


# Example usage
# folder_to_wipe = 'path_to_your_folder'
# wipe_folder(folder_to_wipe)


def create_m4b_from_chapters(input_dir, ebook_file, output_dir):
    # Function to sort chapters based on their numeric order
    def sort_key(chapter_file):
        numbers = re.findall(r'\d+', chapter_file)
        return int(numbers[0]) if numbers else 0

    # Extract metadata and cover image from the eBook file
    def extract_metadata_and_cover(ebook_path):
        try:
            cover_path = ebook_path.rsplit('.', 1)[0] + '.jpg'
            subprocess.run(['ebook-meta', ebook_path, '--get-cover', cover_path], check=True)
            if os.path.exists(cover_path):
                return cover_path
        except Exception as e:
            print(f"Error extracting eBook metadata or cover: {e}")
        return None

    # Combine WAV files into a single file
    def combine_wav_files(chapter_files, output_path):
        # Initialize an empty audio segment
        combined_audio = AudioSegment.empty()

        # Sequentially append each file to the combined_audio
        for chapter_file in chapter_files:
            audio_segment = AudioSegment.from_wav(chapter_file)
            combined_audio += audio_segment
        # Export the combined audio to the output file path
        combined_audio.export(output_path, format='wav')
        print(f"Combined audio saved to {output_path}")

    # Function to generate metadata for M4B chapters
    def generate_ffmpeg_metadata(chapter_files, metadata_file):
        with open(metadata_file, 'w') as file:
            file.write(';FFMETADATA1\n')
            start_time = 0
            for index, chapter_file in enumerate(chapter_files):
                duration_ms = len(AudioSegment.from_wav(chapter_file))
                file.write(f'[CHAPTER]\nTIMEBASE=1/1000\nSTART={start_time}\n')
                file.write(f'END={start_time + duration_ms}\ntitle=Chapter {index + 1}\n')
                start_time += duration_ms

    # Generate the final M4B file using ffmpeg
    def create_m4b(combined_wav, metadata_file, cover_image, output_m4b):
        # Ensure the output directory exists
        os.makedirs(os.path.dirname(output_m4b), exist_ok=True)

        ffmpeg_cmd = ['ffmpeg', '-i', combined_wav, '-i', metadata_file]
        if cover_image:
            ffmpeg_cmd += ['-i', cover_image, '-map', '0:a', '-map', '2:v']
        else:
            ffmpeg_cmd += ['-map', '0:a']

        ffmpeg_cmd += ['-map_metadata', '1', '-c:a', 'aac', '-b:a', '192k']
        if cover_image:
            ffmpeg_cmd += ['-c:v', 'png', '-disposition:v', 'attached_pic']
        ffmpeg_cmd += [output_m4b]

        subprocess.run(ffmpeg_cmd, check=True)


    # Main logic
    chapter_files = sorted([os.path.join(input_dir, f) for f in os.listdir(input_dir) if f.endswith('.wav')], key=sort_key)
    temp_dir = tempfile.gettempdir()
    temp_combined_wav = os.path.join(temp_dir, 'combined.wav')
    metadata_file = os.path.join(temp_dir, 'metadata.txt')
    cover_image = extract_metadata_and_cover(ebook_file)
    output_m4b = os.path.join(output_dir, os.path.splitext(os.path.basename(ebook_file))[0] + '.m4b')

    combine_wav_files(chapter_files, temp_combined_wav)
    generate_ffmpeg_metadata(chapter_files, metadata_file)
    create_m4b(temp_combined_wav, metadata_file, cover_image, output_m4b)

    # Cleanup
    if os.path.exists(temp_combined_wav):
        os.remove(temp_combined_wav)
    if os.path.exists(metadata_file):
        os.remove(metadata_file)
    if cover_image and os.path.exists(cover_image):
        os.remove(cover_image)

# Example usage
# create_m4b_from_chapters('path_to_chapter_wavs', 'path_to_ebook_file', 'path_to_output_dir')



# This block isn't the text-extraction code itself; it runs beforehand so that
# Calibre can be used to create the chapter-labeled book (some systems can't
# seem to manage that, so it's kept here just in case). The text-extraction
# code using BookNLP comes after this.
import ebooklib
from ebooklib import epub
from bs4 import BeautifulSoup
import csv

# Only run the main script if Value is True
def create_chapter_labeled_book(ebook_file_path):
    # Function to ensure the existence of a directory
    def ensure_directory(directory_path):
        if not os.path.exists(directory_path):
            os.makedirs(directory_path)
            print(f"Created directory: {directory_path}")

    ensure_directory(os.path.join(".", 'Working_files', 'Book'))

    def convert_to_epub(input_path, output_path):
        # Convert the ebook to EPUB format using Calibre's ebook-convert
        try:
            subprocess.run(['ebook-convert', input_path, output_path], check=True)
        except subprocess.CalledProcessError as e:
            print(f"An error occurred while converting the eBook: {e}")
            return False
        return True

    def save_chapters_as_text(epub_path):
        # Create the directory if it doesn't exist
        directory = os.path.join(".", "Working_files", "temp_ebook")
        ensure_directory(directory)

        # Open the EPUB file
        book = epub.read_epub(epub_path)

        previous_chapter_text = ''
        previous_filename = ''
        chapter_counter = 0

        # Iterate through the items in the EPUB file
        for item in book.get_items():
            if item.get_type() == ebooklib.ITEM_DOCUMENT:
                # Use BeautifulSoup to parse HTML content
                soup = BeautifulSoup(item.get_content(), 'html.parser')
                text = soup.get_text()

                # Check if the text is not empty
                if text.strip():
                    if len(text) < 2300 and previous_filename:
                        # Append text to the previous chapter if it's short
                        with open(previous_filename, 'a', encoding='utf-8') as file:
                            file.write('\n' + text)
                    else:
                        # Create a new chapter file and increment the counter
                        previous_filename = os.path.join(directory, f"chapter_{chapter_counter}.txt")
                        chapter_counter += 1
                        with open(previous_filename, 'w', encoding='utf-8') as file:
                            file.write(text)
                            print(f"Saved chapter: {previous_filename}")

    # Example usage
    input_ebook = ebook_file_path  # Replace with your eBook file path
    output_epub = os.path.join(".", "Working_files", "temp.epub")

    if os.path.exists(output_epub):
        os.remove(output_epub)
        print(f"File {output_epub} has been removed.")
    else:
        print(f"The file {output_epub} does not exist.")

    if convert_to_epub(input_ebook, output_epub):
        save_chapters_as_text(output_epub)

    # Download the necessary NLTK data (if not already present)
    nltk.download('punkt')

    def process_chapter_files(folder_path, output_csv):
        with open(output_csv, 'w', newline='', encoding='utf-8') as csvfile:
            writer = csv.writer(csvfile)
            # Write the header row
            writer.writerow(['Text', 'Start Location', 'End Location', 'Is Quote', 'Speaker', 'Chapter'])

            # Process each chapter file
            chapter_files = sorted(os.listdir(folder_path), key=lambda x: int(x.split('_')[1].split('.')[0]))
            for filename in chapter_files:
                if filename.startswith('chapter_') and filename.endswith('.txt'):
                    chapter_number = int(filename.split('_')[1].split('.')[0])
                    file_path = os.path.join(folder_path, filename)

                    try:
                        with open(file_path, 'r', encoding='utf-8') as file:
                            text = file.read()
                            # Insert "NEWCHAPTERABC" at the beginning of each chapter's text
                            if text:
                                text = "NEWCHAPTERABC" + text
                            sentences = nltk.tokenize.sent_tokenize(text)
                            for sentence in sentences:
                                start_location = text.find(sentence)
                                end_location = start_location + len(sentence)
                                writer.writerow([sentence, start_location, end_location, 'True', 'Narrator', chapter_number])
                    except Exception as e:
                        print(f"Error processing file {filename}: {e}")

    # Example usage
    folder_path = os.path.join(".", "Working_files", "temp_ebook")
    output_csv = os.path.join(".", "Working_files", "Book", "Other_book.csv")

    process_chapter_files(folder_path, output_csv)

    def sort_key(filename):
        """Extract chapter number for sorting."""
        match = re.search(r'chapter_(\d+)\.txt', filename)
        return int(match.group(1)) if match else 0

    def combine_chapters(input_folder, output_file):
        # Create the output folder if it doesn't exist
        os.makedirs(os.path.dirname(output_file), exist_ok=True)

        # List all txt files and sort them by chapter number
        files = [f for f in os.listdir(input_folder) if f.endswith('.txt')]
        sorted_files = sorted(files, key=sort_key)

        with open(output_file, 'w', encoding='utf-8') as outfile:  # Specify UTF-8 encoding here
            for i, filename in enumerate(sorted_files):
                with open(os.path.join(input_folder, filename), 'r', encoding='utf-8') as infile:  # And here
                    outfile.write(infile.read())
                # Add the marker unless it's the last file
                if i < len(sorted_files) - 1:
                    outfile.write("\nNEWCHAPTERABC\n")

    # Paths
    input_folder = os.path.join(".", 'Working_files', 'temp_ebook')
    output_file = os.path.join(".", 'Working_files', 'Book', 'Chapter_Book.txt')


    # Combine the chapters
    combine_chapters(input_folder, output_file)

    ensure_directory(os.path.join(".", "Working_files", "Book"))


# create_chapter_labeled_book()



import sys
import torchaudio

# Check if Calibre's ebook-convert tool is installed
def calibre_installed():
    try:
        subprocess.run(['ebook-convert', '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        return True
    except FileNotFoundError:
        print("Calibre is not installed. Please install Calibre for this functionality.")
        return False


import torch
from TTS.api import TTS

default_target_voice_path = "default_voice.wav"  # Ensure this is a valid path
default_language_code = "en"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def combine_wav_files(input_directory, output_directory, file_name):
    # Ensure that the output directory exists, create it if necessary
    os.makedirs(output_directory, exist_ok=True)

    # Specify the output file path
    output_file_path = os.path.join(output_directory, file_name)

    # Initialize an empty audio segment
    combined_audio = AudioSegment.empty()

    # Get a list of all .wav files in the specified input directory and sort them
    input_file_paths = sorted(
        [os.path.join(input_directory, f) for f in os.listdir(input_directory) if f.endswith(".wav")],
        key=lambda f: int(''.join(filter(str.isdigit, f)))
    )

    # Sequentially append each file to the combined_audio
    for input_file_path in input_file_paths:
        audio_segment = AudioSegment.from_wav(input_file_path)
        combined_audio += audio_segment

    # Export the combined audio to the output file path
    combined_audio.export(output_file_path, format='wav')

    print(f"Combined audio saved to {output_file_path}")

# Function to split long strings into parts
def split_long_sentence(sentence, max_length=249, max_pauses=10):
    """
    Splits a sentence into parts based on length or number of pauses without recursion.

    :param sentence: The sentence to split.
    :param max_length: Maximum allowed length of a sentence.
    :param max_pauses: Maximum allowed number of pauses in a sentence.
    :return: A list of sentence parts that meet the criteria.
    """
    parts = []
    while len(sentence) > max_length or sentence.count(',') + sentence.count(';') + sentence.count('.') > max_pauses:
        possible_splits = [i for i, char in enumerate(sentence) if char in ',;.' and i < max_length]
        if possible_splits:
            # Find the best place to split the sentence, preferring the last possible split to keep parts longer
            split_at = possible_splits[-1] + 1
        else:
            # If there is no punctuation to split on within max_length, split at max_length
            split_at = max_length

        # Split the sentence and add the first part to the list
        parts.append(sentence[:split_at].strip())
        sentence = sentence[split_at:].strip()

    # Add the remaining part of the sentence
    parts.append(sentence)
    return parts

"""
if 'tts' not in locals():
    tts = TTS(selected_tts_model, progress_bar=True).to(device)
"""
from tqdm import tqdm

# Convert chapters to audio using XTTS
def convert_chapters_to_audio(chapters_dir, output_audio_dir, target_voice_path=None, language=None):
    selected_tts_model = "tts_models/multilingual/multi-dataset/xtts_v2"
    tts = TTS(selected_tts_model, progress_bar=False).to(device)  # Set progress_bar to False to avoid nested progress bars

    if not os.path.exists(output_audio_dir):
        os.makedirs(output_audio_dir)

    for chapter_file in sorted(os.listdir(chapters_dir)):
        if chapter_file.endswith('.txt'):
            # Extract the chapter number from the filename
            match = re.search(r"chapter_(\d+)\.txt", chapter_file)
            if match:
                chapter_num = int(match.group(1))
            else:
                print(f"Skipping file {chapter_file} as it does not match the expected format.")
                continue

            chapter_path = os.path.join(chapters_dir, chapter_file)
            output_file_name = f"audio_chapter_{chapter_num}.wav"
            output_file_path = os.path.join(output_audio_dir, output_file_name)
            temp_audio_directory = os.path.join(".", "Working_files", "temp")
            os.makedirs(temp_audio_directory, exist_ok=True)
            temp_count = 0

            with open(chapter_path, 'r', encoding='utf-8') as file:
                chapter_text = file.read()
                # Use the specified language model for sentence tokenization
                sentences = sent_tokenize(chapter_text, language='italian' if language == 'it' else 'english')
                for sentence in tqdm(sentences, desc=f"Chapter {chapter_num}"):
                    fragments = []
                    if language == "en":
                        fragments = split_long_sentence(sentence, max_length=249, max_pauses=10)
                    if language == "it":
                        fragments = split_long_sentence(sentence, max_length=213, max_pauses=10)
                    for fragment in fragments:
                        if fragment != "":  # a hotfix to avoid blank fragments
                            print(f"Generating fragment: {fragment}...")
                            fragment_file_path = os.path.join(temp_audio_directory, f"{temp_count}.wav")
                            speaker_wav_path = target_voice_path if target_voice_path else default_target_voice_path
                            language_code = language if language else default_language_code
                            tts.tts_to_file(text=fragment, file_path=fragment_file_path, speaker_wav=speaker_wav_path, language=language_code)
                            temp_count += 1

            combine_wav_files(temp_audio_directory, output_audio_dir, output_file_name)
            wipe_folder(temp_audio_directory)
            print(f"Converted chapter {chapter_num} to audio.")



# Main execution flow
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python p1.py <ebook_file_path> [target_voice_file_path] [language_code]")
        sys.exit(1)

    ebook_file_path = sys.argv[1]
    target_voice = sys.argv[2] if len(sys.argv) > 2 else None
    language = sys.argv[3] if len(sys.argv) > 3 else None

    if not calibre_installed():
        sys.exit(1)

    working_files = os.path.join(".", "Working_files", "temp_ebook")
    full_folder_working_files = os.path.join(".", "Working_files")
    chapters_directory = os.path.join(".", "Working_files", "temp_ebook")
    output_audio_directory = os.path.join(".", 'Chapter_wav_files')

    print("Wiping and removing Working_files folder...")
    remove_folder_with_contents(full_folder_working_files)

    print("Wiping and removing chapter_wav_files folder...")
    remove_folder_with_contents(output_audio_directory)

    create_chapter_labeled_book(ebook_file_path)
    # audiobook_output_path = os.path.join(".", "Audiobooks")
    # print(f"{chapters_directory}||||{output_audio_directory}|||||{target_voice}")
    # convert_chapters_to_audio(chapters_directory, output_audio_directory, target_voice, language)
    # create_m4b_from_chapters(output_audio_directory, ebook_file_path, audiobook_output_path)
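
For reference, the chapter metadata file written by `generate_ffmpeg_metadata` (in both p1.py and ebook2audiobook.py) looks like the following for a two-chapter book; the durations are made up for illustration:

```python
# Illustrative FFMETADATA content produced by generate_ffmpeg_metadata.
# START/END are in milliseconds because TIMEBASE=1/1000; durations are hypothetical.
example_metadata = """;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=653000
title=Chapter 1
[CHAPTER]
TIMEBASE=1/1000
START=653000
END=1298000
title=Chapter 2
"""
```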
Notebooks/Kaggel Archive Code/p2a_worker_gpu1.py ADDED
@@ -0,0 +1,465 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ print("starting...")
2
+
3
+ #import os
4
+ #import shutil
5
+ #import subprocess
6
+ import re
7
+ #from pydub import AudioSegment
8
+ #import tempfile
9
+ #from pydub import AudioSegment
10
+ #import os
11
+ import nltk
12
+ #from nltk.tokenize import sent_tokenize
13
+ nltk.download('punkt') # Make sure to download the necessary models
14
+ def is_folder_empty(folder_path):
15
+ if os.path.exists(folder_path) and os.path.isdir(folder_path):
16
+ # List directory contents
17
+ if not os.listdir(folder_path):
18
+ return True # The folder is empty
19
+ else:
20
+ return False # The folder is not empty
21
+ else:
22
+ print(f"The path {folder_path} is not a valid folder.")
23
+ return None # The path is not a valid folder
24
+
25
+ def remove_folder_with_contents(folder_path):
26
+ try:
27
+ shutil.rmtree(folder_path)
28
+ print(f"Successfully removed {folder_path} and all of its contents.")
29
+ except Exception as e:
30
+ print(f"Error removing {folder_path}: {e}")
31
+
32
+
33
+
34
+
35
+ def wipe_folder(folder_path):
36
+ # Check if the folder exists
37
+ if not os.path.exists(folder_path):
38
+ print(f"The folder {folder_path} does not exist.")
39
+ return
40
+
41
+ # Iterate over all the items in the given folder
42
+ for item in os.listdir(folder_path):
43
+ item_path = os.path.join(folder_path, item)
44
+ # If it's a file, remove it and print a message
45
+ if os.path.isfile(item_path):
46
+ os.remove(item_path)
47
+ print(f"Removed file: {item_path}")
48
+ # If it's a directory, remove it recursively and print a message
49
+ elif os.path.isdir(item_path):
50
+ shutil.rmtree(item_path)
51
+ print(f"Removed directory and its contents: {item_path}")
52
+
53
+ print(f"All contents wiped from {folder_path}.")
54
+
55
+
56
+ # Example usage
57
+ # folder_to_wipe = 'path_to_your_folder'
58
+ # wipe_folder(folder_to_wipe)
59
+
60
+
61
+ def create_m4b_from_chapters(input_dir, ebook_file, output_dir):
62
+ # Function to sort chapters based on their numeric order
63
+ def sort_key(chapter_file):
64
+ numbers = re.findall(r'\d+', chapter_file)
65
+ return int(numbers[0]) if numbers else 0
66
+
67
+ # Extract metadata and cover image from the eBook file
68
+ def extract_metadata_and_cover(ebook_path):
69
+ try:
70
+ cover_path = ebook_path.rsplit('.', 1)[0] + '.jpg'
71
+ subprocess.run(['ebook-meta', ebook_path, '--get-cover', cover_path], check=True)
72
+ if os.path.exists(cover_path):
73
+ return cover_path
74
+ except Exception as e:
75
+ print(f"Error extracting eBook metadata or cover: {e}")
76
+ return None
77
+ # Combine WAV files into a single file
78
+ def combine_wav_files(chapter_files, output_path):
79
+ # Initialize an empty audio segment
80
+ combined_audio = AudioSegment.empty()
81
+
82
+ # Sequentially append each file to the combined_audio
83
+ for chapter_file in chapter_files:
84
+ audio_segment = AudioSegment.from_wav(chapter_file)
85
+ combined_audio += audio_segment
86
+ # Export the combined audio to the output file path
87
+ combined_audio.export(output_path, format='wav')
88
+ print(f"Combined audio saved to {output_path}")
89
+
90
+ # Function to generate metadata for M4B chapters
91
+ def generate_ffmpeg_metadata(chapter_files, metadata_file):
92
+ with open(metadata_file, 'w') as file:
93
+ file.write(';FFMETADATA1\n')
94
+ start_time = 0
95
+ for index, chapter_file in enumerate(chapter_files):
96
+ duration_ms = len(AudioSegment.from_wav(chapter_file))
97
+ file.write(f'[CHAPTER]\nTIMEBASE=1/1000\nSTART={start_time}\n')
98
+ file.write(f'END={start_time + duration_ms}\ntitle=Chapter {index + 1}\n')
99
+ start_time += duration_ms
100
+
101
+ # Generate the final M4B file using ffmpeg
102
+ def create_m4b(combined_wav, metadata_file, cover_image, output_m4b):
103
+ # Ensure the output directory exists
104
+ os.makedirs(os.path.dirname(output_m4b), exist_ok=True)
105
+
106
+ ffmpeg_cmd = ['ffmpeg', '-i', combined_wav, '-i', metadata_file]
107
+ if cover_image:
108
+ ffmpeg_cmd += ['-i', cover_image, '-map', '0:a', '-map', '2:v']
109
+ else:
110
+ ffmpeg_cmd += ['-map', '0:a']
111
+
112
+ ffmpeg_cmd += ['-map_metadata', '1', '-c:a', 'aac', '-b:a', '192k']
113
+ if cover_image:
114
+ ffmpeg_cmd += ['-c:v', 'png', '-disposition:v', 'attached_pic']
115
+ ffmpeg_cmd += [output_m4b]
116
+
117
+ subprocess.run(ffmpeg_cmd, check=True)
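+
+ # With a cover image, the assembled command is equivalent to (paths illustrative):
+ # ffmpeg -i combined.wav -i metadata.txt -i cover.jpg -map 0:a -map 2:v \
+ #   -map_metadata 1 -c:a aac -b:a 192k -c:v png -disposition:v attached_pic out.m4b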
118
+
119
+
120
+
121
+ # Main logic
122
+ chapter_files = sorted([os.path.join(input_dir, f) for f in os.listdir(input_dir) if f.endswith('.wav')], key=sort_key)
123
+ temp_dir = tempfile.gettempdir()
124
+ temp_combined_wav = os.path.join(temp_dir, 'combined.wav')
125
+ metadata_file = os.path.join(temp_dir, 'metadata.txt')
126
+ cover_image = extract_metadata_and_cover(ebook_file)
127
+ output_m4b = os.path.join(output_dir, os.path.splitext(os.path.basename(ebook_file))[0] + '.m4b')
128
+
129
+ combine_wav_files(chapter_files, temp_combined_wav)
130
+ generate_ffmpeg_metadata(chapter_files, metadata_file)
131
+ create_m4b(temp_combined_wav, metadata_file, cover_image, output_m4b)
132
+
133
+ # Cleanup
134
+ if os.path.exists(temp_combined_wav):
135
+ os.remove(temp_combined_wav)
136
+ if os.path.exists(metadata_file):
137
+ os.remove(metadata_file)
138
+ if cover_image and os.path.exists(cover_image):
139
+ os.remove(cover_image)
140
+
141
+ # Example usage
142
+ # create_m4b_from_chapters('path_to_chapter_wavs', 'path_to_ebook_file', 'path_to_output_dir')
143
+
144
+
145
+
146
+
147
+
148
+
149
+ # This is not the chapter-extraction code itself; it is kept here as a reference for building the chapter-labeled book with Calibre, since some systems have trouble with that step. The chapter-extraction code that uses booknlp comes after this section.
150
+ #import os
151
+ #import subprocess
152
+ #import ebooklib
153
+ #from ebooklib import epub
154
+ #from bs4 import BeautifulSoup
155
+ #import re
156
+ #import csv
157
+ #import nltk
158
+
159
+ # Builds a chapter-labeled copy of the book (not invoked in this script's main flow)
160
+ def create_chapter_labeled_book(ebook_file_path):
161
+ # Function to ensure the existence of a directory
162
+ def ensure_directory(directory_path):
163
+ if not os.path.exists(directory_path):
164
+ os.makedirs(directory_path)
165
+ print(f"Created directory: {directory_path}")
166
+
167
+ ensure_directory(os.path.join(".", 'Working_files', 'Book'))
168
+
169
+ def convert_to_epub(input_path, output_path):
170
+ # Convert the ebook to EPUB format using Calibre's ebook-convert
171
+ try:
172
+ subprocess.run(['ebook-convert', input_path, output_path], check=True)
173
+ except subprocess.CalledProcessError as e:
174
+ print(f"An error occurred while converting the eBook: {e}")
175
+ return False
176
+ return True
177
+
178
+ def save_chapters_as_text(epub_path):
179
+ # Create the directory if it doesn't exist
180
+ directory = os.path.join(".", "Working_files", "temp_ebook")
181
+ ensure_directory(directory)
182
+
183
+ # Open the EPUB file
184
+ book = epub.read_epub(epub_path)
185
+
186
+ previous_chapter_text = ''
187
+ previous_filename = ''
188
+ chapter_counter = 0
189
+
190
+ # Iterate through the items in the EPUB file
191
+ for item in book.get_items():
192
+ if item.get_type() == ebooklib.ITEM_DOCUMENT:
193
+ # Use BeautifulSoup to parse HTML content
194
+ soup = BeautifulSoup(item.get_content(), 'html.parser')
195
+ text = soup.get_text()
196
+
197
+ # Check if the text is not empty
198
+ if text.strip():
199
+ if len(text) < 2300 and previous_filename:
200
+ # Append text to the previous chapter if it's short
201
+ with open(previous_filename, 'a', encoding='utf-8') as file:
202
+ file.write('\n' + text)
203
+ else:
204
+ # Create a new chapter file and increment the counter
205
+ previous_filename = os.path.join(directory, f"chapter_{chapter_counter}.txt")
206
+ chapter_counter += 1
207
+ with open(previous_filename, 'w', encoding='utf-8') as file:
208
+ file.write(text)
209
+ print(f"Saved chapter: {previous_filename}")
210
+
211
+ # Example usage
212
+ input_ebook = ebook_file_path # Replace with your eBook file path
213
+ output_epub = os.path.join(".", "Working_files", "temp.epub")
214
+
215
+
216
+ if os.path.exists(output_epub):
217
+ os.remove(output_epub)
218
+ print(f"File {output_epub} has been removed.")
219
+ else:
220
+ print(f"The file {output_epub} does not exist.")
221
+
222
+ if convert_to_epub(input_ebook, output_epub):
223
+ save_chapters_as_text(output_epub)
224
+
225
+ # Download the necessary NLTK data (if not already present)
226
+ nltk.download('punkt')
227
+
228
+ def process_chapter_files(folder_path, output_csv):
229
+ with open(output_csv, 'w', newline='', encoding='utf-8') as csvfile:
230
+ writer = csv.writer(csvfile)
231
+ # Write the header row
232
+ writer.writerow(['Text', 'Start Location', 'End Location', 'Is Quote', 'Speaker', 'Chapter'])
233
+
234
+ # Process each chapter file
235
+ chapter_files = sorted(os.listdir(folder_path), key=lambda x: int(x.split('_')[1].split('.')[0]))
236
+ for filename in chapter_files:
237
+ if filename.startswith('chapter_') and filename.endswith('.txt'):
238
+ chapter_number = int(filename.split('_')[1].split('.')[0])
239
+ file_path = os.path.join(folder_path, filename)
240
+
241
+ try:
242
+ with open(file_path, 'r', encoding='utf-8') as file:
243
+ text = file.read()
244
+ # Insert "NEWCHAPTERABC" at the beginning of each chapter's text
245
+ if text:
246
+ text = "NEWCHAPTERABC" + text
247
+ sentences = nltk.tokenize.sent_tokenize(text)
248
+ for sentence in sentences:
249
+ start_location = text.find(sentence)
250
+ end_location = start_location + len(sentence)
251
+ writer.writerow([sentence, start_location, end_location, 'True', 'Narrator', chapter_number])
252
+ except Exception as e:
253
+ print(f"Error processing file {filename}: {e}")
254
+
255
+ # Example usage
256
+ folder_path = os.path.join(".", "Working_files", "temp_ebook")
257
+ output_csv = os.path.join(".", "Working_files", "Book", "Other_book.csv")
258
+
259
+ process_chapter_files(folder_path, output_csv)
260
+
261
+ def sort_key(filename):
262
+ """Extract chapter number for sorting."""
263
+ match = re.search(r'chapter_(\d+)\.txt', filename)
264
+ return int(match.group(1)) if match else 0
265
+
266
+ def combine_chapters(input_folder, output_file):
267
+ # Create the output folder if it doesn't exist
268
+ os.makedirs(os.path.dirname(output_file), exist_ok=True)
269
+
270
+ # List all txt files and sort them by chapter number
271
+ files = [f for f in os.listdir(input_folder) if f.endswith('.txt')]
272
+ sorted_files = sorted(files, key=sort_key)
273
+
274
+ with open(output_file, 'w', encoding='utf-8') as outfile: # Specify UTF-8 encoding here
275
+ for i, filename in enumerate(sorted_files):
276
+ with open(os.path.join(input_folder, filename), 'r', encoding='utf-8') as infile: # And here
277
+ outfile.write(infile.read())
278
+ # Add the marker unless it's the last file
279
+ if i < len(sorted_files) - 1:
280
+ outfile.write("\nNEWCHAPTERABC\n")
281
+
282
+ # Paths
283
+ input_folder = os.path.join(".", 'Working_files', 'temp_ebook')
284
+ output_file = os.path.join(".", 'Working_files', 'Book', 'Chapter_Book.txt')
285
+
286
+
287
+ # Combine the chapters
288
+ combine_chapters(input_folder, output_file)
289
+
290
+ ensure_directory(os.path.join(".", "Working_files", "Book"))
291
+
292
+
293
+ #create_chapter_labeled_book()
294
+
295
+
296
+
297
+
298
+ #import os
299
+ import subprocess
300
+ import sys
301
+ import torchaudio # not sure if this is needed
302
+
303
+ # Check if Calibre's ebook-convert tool is installed
304
+ def calibre_installed():
305
+ try:
306
+ subprocess.run(['ebook-convert', '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
307
+ return True
308
+ except FileNotFoundError:
309
+ print("Calibre is not installed. Please install Calibre for this functionality.")
310
+ return False
311
+
312
+
313
+ import os
314
+ import torch
315
+ from TTS.api import TTS
316
+ from nltk.tokenize import sent_tokenize
317
+ from pydub import AudioSegment
318
+ # split_long_sentence and wipe_folder are defined earlier in this file
319
+
320
+ default_target_voice_path = "default_voice.wav" # Ensure this is a valid path
321
+ default_language_code = "en"
322
+ device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
323
+
324
+ def combine_wav_files(input_directory, output_directory, file_name):
325
+ # Ensure that the output directory exists, create it if necessary
326
+ os.makedirs(output_directory, exist_ok=True)
327
+
328
+ # Specify the output file path
329
+ output_file_path = os.path.join(output_directory, file_name)
330
+
331
+ # Initialize an empty audio segment
332
+ combined_audio = AudioSegment.empty()
333
+
334
+ # Get a list of all .wav files in the specified input directory and sort them
335
+ input_file_paths = sorted(
336
+ [os.path.join(input_directory, f) for f in os.listdir(input_directory) if f.endswith(".wav")],
337
+ key=lambda f: int(''.join(filter(str.isdigit, f)))
338
+ )
339
+
340
+ # Sequentially append each file to the combined_audio
341
+ for input_file_path in input_file_paths:
342
+ audio_segment = AudioSegment.from_wav(input_file_path)
343
+ combined_audio += audio_segment
344
+
345
+ # Export the combined audio to the output file path
346
+ combined_audio.export(output_file_path, format='wav')
347
+
348
+ print(f"Combined audio saved to {output_file_path}")
349
+
350
+ # Function to split long strings into parts
351
+ def split_long_sentence(sentence, max_length=249, max_pauses=10):
352
+ """
353
+ Splits a sentence into parts based on length or number of pauses without recursion.
354
+
355
+ :param sentence: The sentence to split.
356
+ :param max_length: Maximum allowed length of a sentence.
357
+ :param max_pauses: Maximum allowed number of pauses in a sentence.
358
+ :return: A list of sentence parts that meet the criteria.
359
+ """
360
+ parts = []
361
+ while len(sentence) > max_length or sentence.count(',') + sentence.count(';') + sentence.count('.') > max_pauses:
362
+ possible_splits = [i for i, char in enumerate(sentence) if char in ',;.' and i < max_length]
363
+ if possible_splits:
364
+ # Find the best place to split the sentence, preferring the last possible split to keep parts longer
365
+ split_at = possible_splits[-1] + 1
366
+ else:
367
+ # If no punctuation to split on within max_length, split at max_length
368
+ split_at = max_length
369
+
370
+ # Split the sentence and add the first part to the list
371
+ parts.append(sentence[:split_at].strip())
372
+ sentence = sentence[split_at:].strip()
373
+
374
+ # Add the remaining part of the sentence
375
+ parts.append(sentence)
376
+ return parts
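+
+ # Example usage (hypothetical input): split_long_sentence("x" * 300) returns
+ # ["x" * 249, "x" * 51], since no punctuation occurs before max_length and the
+ # split therefore falls at max_length itself.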
377
+
378
+ """
379
+ if 'tts' not in locals():
380
+ tts = TTS(selected_tts_model, progress_bar=True).to(device)
381
+ """
382
+ from tqdm import tqdm
383
+
384
+ # Convert chapters to audio using XTTS
385
+ def convert_chapters_to_audio(chapters_dir, output_audio_dir, target_voice_path=None, language=None):
386
+ selected_tts_model = "tts_models/multilingual/multi-dataset/xtts_v2"
387
+ tts = TTS(selected_tts_model, progress_bar=False).to(device) # Set progress_bar to False to avoid nested progress bars
388
+
389
+ if not os.path.exists(output_audio_dir):
390
+ os.makedirs(output_audio_dir)
391
+
392
+ for chapter_file in sorted(os.listdir(chapters_dir)):
393
+ if chapter_file.endswith('.txt'):
394
+ # Extract chapter number from the filename
395
+ match = re.search(r"chapter_(\d+).txt", chapter_file)
396
+ if match:
397
+ chapter_num = int(match.group(1))
398
+ else:
399
+ print(f"Skipping file {chapter_file} as it does not match the expected format.")
400
+ continue
401
+
402
+ chapter_path = os.path.join(chapters_dir, chapter_file)
403
+ output_file_name = f"audio_chapter_{chapter_num}.wav"
404
+ output_file_path = os.path.join(output_audio_dir, output_file_name)
405
+ # temp_audio_directory = os.path.join(".", "Working_files", "temp")
406
+ temp_audio_directory = os.path.join(".", "Operator",worker_num, "temp")
407
+ os.makedirs(temp_audio_directory, exist_ok=True)
408
+ temp_count = 0
409
+
410
+ with open(chapter_path, 'r', encoding='utf-8') as file:
411
+ chapter_text = file.read()
412
+ # Use the specified language model for sentence tokenization
413
+ sentences = sent_tokenize(chapter_text, language='italian' if language == 'it' else 'english')
414
+ for sentence in tqdm(sentences, desc=f"Chapter {chapter_num}"):
415
+ fragments = []
416
+ if language == "en":
417
+ fragments = split_long_sentence(sentence, max_length=249, max_pauses=10)
418
+ if language == "it":
419
+ fragments = split_long_sentence(sentence, max_length=213, max_pauses=10)
420
+ for fragment in fragments:
421
+ if fragment != "": #a hot fix to avoid blank fragments
422
+ print(f"Generating fragment: {fragment}...")
423
+ fragment_file_path = os.path.join(temp_audio_directory, f"{temp_count}.wav")
424
+ speaker_wav_path = target_voice_path if target_voice_path else default_target_voice_path
425
+ language_code = language if language else default_language_code
426
+ tts.tts_to_file(text=fragment, file_path=fragment_file_path, speaker_wav=speaker_wav_path, language=language_code)
427
+ temp_count += 1
428
+
429
+ combine_wav_files(temp_audio_directory, output_audio_dir, output_file_name)
430
+ wipe_folder(temp_audio_directory)
431
+ print(f"Converted chapter {chapter_num} to audio.")
432
+
433
+
434
+
435
+ # Main execution flow
436
+ if __name__ == "__main__":
437
+ # if len(sys.argv) < 2:
438
+ # print("Usage: python script.py <ebook_file_path> [target_voice_file_path]")
439
+ # sys.exit(1)
440
+
441
+ worker_num = sys.argv[1]  # tells the script which temp dir it is using under ./Operator
442
+ # ebook_file_path = sys.argv[1]
443
+ target_voice = "./4.wav" # sys.argv[2] if len(sys.argv) > 2 else None
444
+ language = "en" # sys.argv[3] if len(sys.argv) > 3 else None
445
+
446
+ # if not calibre_installed():
447
+ # sys.exit(1)
448
+
449
+ working_files = os.path.join(".","Working_files", "temp_ebook")
450
+ full_folder_working_files = os.path.join(".", "Working_files")
451
+ # chapters_directory = os.path.join(".","Working_files", "temp_ebook")
452
+ chapters_directory = os.path.join(".","Operator",worker_num, "temp_ebook")
453
+ output_audio_directory = os.path.join(".", 'Chapter_wav_files')
454
+
455
+ # print("Wiping and removeing Working_files folder...")
456
+ # remove_folder_with_contents(full_folder_working_files)
457
+ #
458
+ # print("Wiping and and removeing chapter_wav_files folder...")
459
+ # remove_folder_with_contents(output_audio_directory)
460
+
461
+ # create_chapter_labeled_book(ebook_file_path)
462
+ audiobook_output_path = os.path.join(".", "Audiobooks")
463
+ print(f"{chapters_directory}||||{output_audio_directory}|||||{target_voice}")
464
+ convert_chapters_to_audio(chapters_directory, output_audio_directory, target_voice, language)
465
+ # create_m4b_from_chapters(output_audio_directory, ebook_file_path, audiobook_output_path)
Notebooks/Kaggel Archive Code/p2a_worker_gpu2.py ADDED
@@ -0,0 +1,465 @@
1
+ print("starting...")
2
+
3
+ import os
4
+ import shutil
5
+ import subprocess
6
+ import re
7
+ from pydub import AudioSegment
8
+ import tempfile
9
+ import nltk
10
+ from nltk.tokenize import sent_tokenize
11
+
12
+ # os and shutil are required by the folder helpers below; tempfile and AudioSegment by the m4b helpers
13
+ nltk.download('punkt')  # make sure the necessary tokenizer models are downloaded
14
+ def is_folder_empty(folder_path):
15
+ if os.path.exists(folder_path) and os.path.isdir(folder_path):
16
+ # List directory contents
17
+ if not os.listdir(folder_path):
18
+ return True # The folder is empty
19
+ else:
20
+ return False # The folder is not empty
21
+ else:
22
+ print(f"The path {folder_path} is not a valid folder.")
23
+ return None # The path is not a valid folder
24
+
25
+ def remove_folder_with_contents(folder_path):
26
+ try:
27
+ shutil.rmtree(folder_path)
28
+ print(f"Successfully removed {folder_path} and all of its contents.")
29
+ except Exception as e:
30
+ print(f"Error removing {folder_path}: {e}")
31
+
32
+
33
+
34
+
35
+ def wipe_folder(folder_path):
36
+ # Check if the folder exists
37
+ if not os.path.exists(folder_path):
38
+ print(f"The folder {folder_path} does not exist.")
39
+ return
40
+
41
+ # Iterate over all the items in the given folder
42
+ for item in os.listdir(folder_path):
43
+ item_path = os.path.join(folder_path, item)
44
+ # If it's a file, remove it and print a message
45
+ if os.path.isfile(item_path):
46
+ os.remove(item_path)
47
+ print(f"Removed file: {item_path}")
48
+ # If it's a directory, remove it recursively and print a message
49
+ elif os.path.isdir(item_path):
50
+ shutil.rmtree(item_path)
51
+ print(f"Removed directory and its contents: {item_path}")
52
+
53
+ print(f"All contents wiped from {folder_path}.")
54
+
55
+
56
+ # Example usage
57
+ # folder_to_wipe = 'path_to_your_folder'
58
+ # wipe_folder(folder_to_wipe)
59
+
60
+
61
+ def create_m4b_from_chapters(input_dir, ebook_file, output_dir):
62
+ # Function to sort chapters based on their numeric order
63
+ def sort_key(chapter_file):
64
+ numbers = re.findall(r'\d+', chapter_file)
65
+ return int(numbers[0]) if numbers else 0
66
+
67
+ # Extract metadata and cover image from the eBook file
68
+ def extract_metadata_and_cover(ebook_path):
69
+ try:
70
+ cover_path = ebook_path.rsplit('.', 1)[0] + '.jpg'
71
+ subprocess.run(['ebook-meta', ebook_path, '--get-cover', cover_path], check=True)
72
+ if os.path.exists(cover_path):
73
+ return cover_path
74
+ except Exception as e:
75
+ print(f"Error extracting eBook metadata or cover: {e}")
76
+ return None
77
+ # Combine WAV files into a single file
78
+ def combine_wav_files(chapter_files, output_path):
79
+ # Initialize an empty audio segment
80
+ combined_audio = AudioSegment.empty()
81
+
82
+ # Sequentially append each file to the combined_audio
83
+ for chapter_file in chapter_files:
84
+ audio_segment = AudioSegment.from_wav(chapter_file)
85
+ combined_audio += audio_segment
86
+ # Export the combined audio to the output file path
87
+ combined_audio.export(output_path, format='wav')
88
+ print(f"Combined audio saved to {output_path}")
89
+
90
+ # Function to generate metadata for M4B chapters
91
+ def generate_ffmpeg_metadata(chapter_files, metadata_file):
92
+ with open(metadata_file, 'w') as file:
93
+ file.write(';FFMETADATA1\n')
94
+ start_time = 0
95
+ for index, chapter_file in enumerate(chapter_files):
96
+ duration_ms = len(AudioSegment.from_wav(chapter_file))
97
+ file.write(f'[CHAPTER]\nTIMEBASE=1/1000\nSTART={start_time}\n')
98
+ file.write(f'END={start_time + duration_ms}\ntitle=Chapter {index + 1}\n')
99
+ start_time += duration_ms
100
+
101
+ # Generate the final M4B file using ffmpeg
102
+ def create_m4b(combined_wav, metadata_file, cover_image, output_m4b):
103
+ # Ensure the output directory exists
104
+ os.makedirs(os.path.dirname(output_m4b), exist_ok=True)
105
+
106
+ ffmpeg_cmd = ['ffmpeg', '-i', combined_wav, '-i', metadata_file]
107
+ if cover_image:
108
+ ffmpeg_cmd += ['-i', cover_image, '-map', '0:a', '-map', '2:v']
109
+ else:
110
+ ffmpeg_cmd += ['-map', '0:a']
111
+
112
+ ffmpeg_cmd += ['-map_metadata', '1', '-c:a', 'aac', '-b:a', '192k']
113
+ if cover_image:
114
+ ffmpeg_cmd += ['-c:v', 'png', '-disposition:v', 'attached_pic']
115
+ ffmpeg_cmd += [output_m4b]
116
+
117
+ subprocess.run(ffmpeg_cmd, check=True)
118
+
119
+
120
+
121
+ # Main logic
122
+ chapter_files = sorted([os.path.join(input_dir, f) for f in os.listdir(input_dir) if f.endswith('.wav')], key=sort_key)
123
+ temp_dir = tempfile.gettempdir()
124
+ temp_combined_wav = os.path.join(temp_dir, 'combined.wav')
125
+ metadata_file = os.path.join(temp_dir, 'metadata.txt')
126
+ cover_image = extract_metadata_and_cover(ebook_file)
127
+ output_m4b = os.path.join(output_dir, os.path.splitext(os.path.basename(ebook_file))[0] + '.m4b')
128
+
129
+ combine_wav_files(chapter_files, temp_combined_wav)
130
+ generate_ffmpeg_metadata(chapter_files, metadata_file)
131
+ create_m4b(temp_combined_wav, metadata_file, cover_image, output_m4b)
132
+
133
+ # Cleanup
134
+ if os.path.exists(temp_combined_wav):
135
+ os.remove(temp_combined_wav)
136
+ if os.path.exists(metadata_file):
137
+ os.remove(metadata_file)
138
+ if cover_image and os.path.exists(cover_image):
139
+ os.remove(cover_image)
140
+
141
+ # Example usage
142
+ # create_m4b_from_chapters('path_to_chapter_wavs', 'path_to_ebook_file', 'path_to_output_dir')
143
+
144
+
145
+
146
+
147
+
148
+
149
+ # This is not the chapter-extraction code itself; it is kept here as a reference for building the chapter-labeled book with Calibre, since some systems have trouble with that step. The chapter-extraction code that uses booknlp comes after this section.
150
+ #import os
151
+ #import subprocess
152
+ #import ebooklib
153
+ #from ebooklib import epub
154
+ #from bs4 import BeautifulSoup
155
+ #import re
156
+ #import csv
157
+ #import nltk
158
+
159
+ # Builds a chapter-labeled copy of the book (not invoked in this script's main flow)
160
+ def create_chapter_labeled_book(ebook_file_path):
161
+ # Function to ensure the existence of a directory
162
+ def ensure_directory(directory_path):
163
+ if not os.path.exists(directory_path):
164
+ os.makedirs(directory_path)
165
+ print(f"Created directory: {directory_path}")
166
+
167
+ ensure_directory(os.path.join(".", 'Working_files', 'Book'))
168
+
169
+ def convert_to_epub(input_path, output_path):
170
+ # Convert the ebook to EPUB format using Calibre's ebook-convert
171
+ try:
172
+ subprocess.run(['ebook-convert', input_path, output_path], check=True)
173
+ except subprocess.CalledProcessError as e:
174
+ print(f"An error occurred while converting the eBook: {e}")
175
+ return False
176
+ return True
177
+
178
+ def save_chapters_as_text(epub_path):
179
+ # Create the directory if it doesn't exist
180
+ directory = os.path.join(".", "Working_files", "temp_ebook")
181
+ ensure_directory(directory)
182
+
183
+ # Open the EPUB file
184
+ book = epub.read_epub(epub_path)
185
+
186
+ previous_chapter_text = ''
187
+ previous_filename = ''
188
+ chapter_counter = 0
189
+
190
+ # Iterate through the items in the EPUB file
191
+ for item in book.get_items():
192
+ if item.get_type() == ebooklib.ITEM_DOCUMENT:
193
+ # Use BeautifulSoup to parse HTML content
194
+ soup = BeautifulSoup(item.get_content(), 'html.parser')
195
+ text = soup.get_text()
196
+
197
+ # Check if the text is not empty
198
+ if text.strip():
199
+ if len(text) < 2300 and previous_filename:
200
+ # Append text to the previous chapter if it's short
201
+ with open(previous_filename, 'a', encoding='utf-8') as file:
202
+ file.write('\n' + text)
203
+ else:
204
+ # Create a new chapter file and increment the counter
205
+ previous_filename = os.path.join(directory, f"chapter_{chapter_counter}.txt")
206
+ chapter_counter += 1
207
+ with open(previous_filename, 'w', encoding='utf-8') as file:
208
+ file.write(text)
209
+ print(f"Saved chapter: {previous_filename}")
210
+
211
+ # Example usage
212
+ input_ebook = ebook_file_path # Replace with your eBook file path
213
+ output_epub = os.path.join(".", "Working_files", "temp.epub")
214
+
215
+
216
+ if os.path.exists(output_epub):
217
+ os.remove(output_epub)
218
+ print(f"File {output_epub} has been removed.")
219
+ else:
220
+ print(f"The file {output_epub} does not exist.")
221
+
222
+ if convert_to_epub(input_ebook, output_epub):
223
+ save_chapters_as_text(output_epub)
224
+
225
+ # Download the necessary NLTK data (if not already present)
226
+ nltk.download('punkt')
227
+
228
+ def process_chapter_files(folder_path, output_csv):
229
+ with open(output_csv, 'w', newline='', encoding='utf-8') as csvfile:
230
+ writer = csv.writer(csvfile)
231
+ # Write the header row
232
+ writer.writerow(['Text', 'Start Location', 'End Location', 'Is Quote', 'Speaker', 'Chapter'])
233
+
234
+ # Process each chapter file
235
+ chapter_files = sorted(os.listdir(folder_path), key=lambda x: int(x.split('_')[1].split('.')[0]))
236
+ for filename in chapter_files:
237
+ if filename.startswith('chapter_') and filename.endswith('.txt'):
238
+ chapter_number = int(filename.split('_')[1].split('.')[0])
239
+ file_path = os.path.join(folder_path, filename)
240
+
241
+ try:
242
+ with open(file_path, 'r', encoding='utf-8') as file:
243
+ text = file.read()
244
+ # Insert "NEWCHAPTERABC" at the beginning of each chapter's text
245
+ if text:
246
+ text = "NEWCHAPTERABC" + text
247
+ sentences = nltk.tokenize.sent_tokenize(text)
248
+ for sentence in sentences:
249
+ start_location = text.find(sentence)
250
+ end_location = start_location + len(sentence)
251
+ writer.writerow([sentence, start_location, end_location, 'True', 'Narrator', chapter_number])
252
+ except Exception as e:
253
+ print(f"Error processing file {filename}: {e}")
254
+
255
+ # Example usage
256
+ folder_path = os.path.join(".", "Working_files", "temp_ebook")
257
+ output_csv = os.path.join(".", "Working_files", "Book", "Other_book.csv")
258
+
259
+ process_chapter_files(folder_path, output_csv)
260
+
261
+ def sort_key(filename):
262
+ """Extract chapter number for sorting."""
263
+ match = re.search(r'chapter_(\d+)\.txt', filename)
264
+ return int(match.group(1)) if match else 0
265
+
266
+ def combine_chapters(input_folder, output_file):
267
+ # Create the output folder if it doesn't exist
268
+ os.makedirs(os.path.dirname(output_file), exist_ok=True)
269
+
270
+ # List all txt files and sort them by chapter number
271
+ files = [f for f in os.listdir(input_folder) if f.endswith('.txt')]
272
+ sorted_files = sorted(files, key=sort_key)
273
+
274
+ with open(output_file, 'w', encoding='utf-8') as outfile: # Specify UTF-8 encoding here
275
+ for i, filename in enumerate(sorted_files):
276
+ with open(os.path.join(input_folder, filename), 'r', encoding='utf-8') as infile: # And here
277
+ outfile.write(infile.read())
278
+ # Add the marker unless it's the last file
279
+ if i < len(sorted_files) - 1:
280
+ outfile.write("\nNEWCHAPTERABC\n")
281
+
282
+ # Paths
283
+ input_folder = os.path.join(".", 'Working_files', 'temp_ebook')
284
+ output_file = os.path.join(".", 'Working_files', 'Book', 'Chapter_Book.txt')
285
+
286
+
287
+ # Combine the chapters
288
+ combine_chapters(input_folder, output_file)
289
+
290
+ ensure_directory(os.path.join(".", "Working_files", "Book"))
291
+
292
+
293
+ #create_chapter_labeled_book()
294
+
295
+
296
+
297
+
298
+ #import os
299
+ import subprocess
300
+ import sys
301
+ import torchaudio # not sure if this is needed
302
+
303
+ # Check if Calibre's ebook-convert tool is installed
304
+ def calibre_installed():
305
+ try:
306
+ subprocess.run(['ebook-convert', '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
307
+ return True
308
+ except FileNotFoundError:
309
+ print("Calibre is not installed. Please install Calibre for this functionality.")
310
+ return False
311
+
312
+
313
+ import os
314
+ import torch
315
+ from TTS.api import TTS
316
+ from nltk.tokenize import sent_tokenize
317
+ from pydub import AudioSegment
318
+ # split_long_sentence and wipe_folder are defined earlier in this file
319
+
320
+ default_target_voice_path = "default_voice.wav" # Ensure this is a valid path
321
+ default_language_code = "en"
322
+ device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
323
+
324
+ def combine_wav_files(input_directory, output_directory, file_name):
325
+ # Ensure that the output directory exists, create it if necessary
326
+ os.makedirs(output_directory, exist_ok=True)
327
+
328
+ # Specify the output file path
329
+ output_file_path = os.path.join(output_directory, file_name)
330
+
331
+ # Initialize an empty audio segment
332
+ combined_audio = AudioSegment.empty()
333
+
334
+ # Get a list of all .wav files in the specified input directory and sort them
335
+ input_file_paths = sorted(
336
+ [os.path.join(input_directory, f) for f in os.listdir(input_directory) if f.endswith(".wav")],
337
+ key=lambda f: int(''.join(filter(str.isdigit, f)))
338
+ )
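+ # Note: this key concatenates every digit in the full path, so digits in the
+ # directory name (e.g. the Operator worker number) are prefixed to every key;
+ # since that prefix is identical for all files in the directory, the numbered
+ # fragments (0.wav, 1.wav, ...) still sort in the intended order.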
339
+
340
+ # Sequentially append each file to the combined_audio
341
+ for input_file_path in input_file_paths:
342
+ audio_segment = AudioSegment.from_wav(input_file_path)
343
+ combined_audio += audio_segment
344
+
345
+ # Export the combined audio to the output file path
346
+ combined_audio.export(output_file_path, format='wav')
347
+
348
+ print(f"Combined audio saved to {output_file_path}")
349
+
350
+ # Function to split long strings into parts
351
+ def split_long_sentence(sentence, max_length=249, max_pauses=10):
352
+ """
353
+ Splits a sentence into parts based on length or number of pauses without recursion.
354
+
355
+ :param sentence: The sentence to split.
356
+ :param max_length: Maximum allowed length of a sentence.
357
+ :param max_pauses: Maximum allowed number of pauses in a sentence.
358
+ :return: A list of sentence parts that meet the criteria.
359
+ """
360
+ parts = []
361
+ while len(sentence) > max_length or sentence.count(',') + sentence.count(';') + sentence.count('.') > max_pauses:
362
+ possible_splits = [i for i, char in enumerate(sentence) if char in ',;.' and i < max_length]
363
+ if possible_splits:
364
+ # Find the best place to split the sentence, preferring the last possible split to keep parts longer
365
+ split_at = possible_splits[-1] + 1
366
+ else:
367
+ # If no punctuation to split on within max_length, split at max_length
368
+ split_at = max_length
369
+
370
+ # Split the sentence and add the first part to the list
371
+ parts.append(sentence[:split_at].strip())
372
+ sentence = sentence[split_at:].strip()
373
+
374
+ # Add the remaining part of the sentence
375
+ parts.append(sentence)
376
+ return parts
377
+
378
+ """
379
+ if 'tts' not in locals():
380
+ tts = TTS(selected_tts_model, progress_bar=True).to(device)
381
+ """
382
+ from tqdm import tqdm
383
+
384
+ # Convert chapters to audio using XTTS
385
+ def convert_chapters_to_audio(chapters_dir, output_audio_dir, target_voice_path=None, language=None):
386
+ selected_tts_model = "tts_models/multilingual/multi-dataset/xtts_v2"
387
+ tts = TTS(selected_tts_model, progress_bar=False).to(device) # Set progress_bar to False to avoid nested progress bars
388
+
389
+ if not os.path.exists(output_audio_dir):
390
+ os.makedirs(output_audio_dir)
391
+
392
+ for chapter_file in sorted(os.listdir(chapters_dir)):
393
+ if chapter_file.endswith('.txt'):
394
+ # Extract chapter number from the filename
395
+ match = re.search(r"chapter_(\d+).txt", chapter_file)
396
+ if match:
397
+ chapter_num = int(match.group(1))
398
+ else:
399
+ print(f"Skipping file {chapter_file} as it does not match the expected format.")
400
+ continue
401
+
402
+ chapter_path = os.path.join(chapters_dir, chapter_file)
403
+ output_file_name = f"audio_chapter_{chapter_num}.wav"
404
+ output_file_path = os.path.join(output_audio_dir, output_file_name)
405
+ # temp_audio_directory = os.path.join(".", "Working_files", "temp")
406
+ temp_audio_directory = os.path.join(".", "Operator",worker_num, "temp")
407
+ os.makedirs(temp_audio_directory, exist_ok=True)
408
+ temp_count = 0
409
+
410
+ with open(chapter_path, 'r', encoding='utf-8') as file:
411
+ chapter_text = file.read()
412
+ # Use the specified language model for sentence tokenization
413
+ sentences = sent_tokenize(chapter_text, language='italian' if language == 'it' else 'english')
414
+ for sentence in tqdm(sentences, desc=f"Chapter {chapter_num}"):
415
+ fragments = []
416
+ if language == "en":
417
+ fragments = split_long_sentence(sentence, max_length=249, max_pauses=10)
418
+ if language == "it":
419
+ fragments = split_long_sentence(sentence, max_length=213, max_pauses=10)
420
+ for fragment in fragments:
421
+ if fragment != "": #a hot fix to avoid blank fragments
422
+ print(f"Generating fragment: {fragment}...")
423
+ fragment_file_path = os.path.join(temp_audio_directory, f"{temp_count}.wav")
424
+ speaker_wav_path = target_voice_path if target_voice_path else default_target_voice_path
425
+ language_code = language if language else default_language_code
426
+ tts.tts_to_file(text=fragment, file_path=fragment_file_path, speaker_wav=speaker_wav_path, language=language_code)
427
+ temp_count += 1
428
+
429
+ combine_wav_files(temp_audio_directory, output_audio_dir, output_file_name)
430
+ wipe_folder(temp_audio_directory)
431
+ print(f"Converted chapter {chapter_num} to audio.")
432
+
433
+
434
+
435
+ # Main execution flow
436
+ if __name__ == "__main__":
437
+ # if len(sys.argv) < 2:
438
+ # print("Usage: python script.py <ebook_file_path> [target_voice_file_path]")
439
+ # sys.exit(1)
440
+
441
+ worker_num = sys.argv[1]  # tells the script which temp dir it is using under ./Operator
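+ # Example invocation (hypothetical worker id, matching the ./Operator/<worker_num>/ layout):
+ # python p2a_worker_gpu2.py 2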
442
+ # ebook_file_path = sys.argv[1]
443
+ target_voice = "./4.wav" # sys.argv[2] if len(sys.argv) > 2 else None
444
+ language = "en" # sys.argv[3] if len(sys.argv) > 3 else None
445
+
446
+ # if not calibre_installed():
447
+ # sys.exit(1)
448
+
449
+ working_files = os.path.join(".","Working_files", "temp_ebook")
450
+ full_folder_working_files = os.path.join(".", "Working_files")
451
+ # chapters_directory = os.path.join(".","Working_files", "temp_ebook")
452
+ chapters_directory = os.path.join(".","Operator",worker_num, "temp_ebook")
453
+ output_audio_directory = os.path.join(".", 'Chapter_wav_files')
454
+
455
+ # print("Wiping and removeing Working_files folder...")
456
+ # remove_folder_with_contents(full_folder_working_files)
457
+ #
458
+ # print("Wiping and and removeing chapter_wav_files folder...")
459
+ # remove_folder_with_contents(output_audio_directory)
460
+
461
+ # create_chapter_labeled_book(ebook_file_path)
462
+ audiobook_output_path = os.path.join(".", "Audiobooks")
463
+ print(f"{chapters_directory}||||{output_audio_directory}|||||{target_voice}")
464
+ convert_chapters_to_audio(chapters_directory, output_audio_directory, target_voice, language)
465
+ # create_m4b_from_chapters(output_audio_directory, ebook_file_path, audiobook_output_path)
Notebooks/Kaggel Archive Code/p3.py ADDED
@@ -0,0 +1,462 @@
1
+ print("starting...")
2
+
3
+ import os
4
+ import shutil
5
+ import subprocess
6
+ import re
7
+ from pydub import AudioSegment
8
+ import tempfile
9
+
10
+
11
+ import nltk
12
+ from nltk.tokenize import sent_tokenize
13
+ nltk.download('punkt') # Make sure to download the necessary models
14
+ def is_folder_empty(folder_path):
15
+ if os.path.exists(folder_path) and os.path.isdir(folder_path):
16
+ # List directory contents
17
+ if not os.listdir(folder_path):
18
+ return True # The folder is empty
19
+ else:
20
+ return False # The folder is not empty
21
+ else:
22
+ print(f"The path {folder_path} is not a valid folder.")
23
+ return None # The path is not a valid folder
24
+
25
+ def remove_folder_with_contents(folder_path):
26
+ try:
27
+ shutil.rmtree(folder_path)
28
+ print(f"Successfully removed {folder_path} and all of its contents.")
29
+ except Exception as e:
30
+ print(f"Error removing {folder_path}: {e}")
31
+
32
+
33
+
34
+
35
+ def wipe_folder(folder_path):
36
+ # Check if the folder exists
37
+ if not os.path.exists(folder_path):
38
+ print(f"The folder {folder_path} does not exist.")
39
+ return
40
+
41
+ # Iterate over all the items in the given folder
42
+ for item in os.listdir(folder_path):
43
+ item_path = os.path.join(folder_path, item)
44
+ # If it's a file, remove it and print a message
45
+ if os.path.isfile(item_path):
46
+ os.remove(item_path)
47
+ print(f"Removed file: {item_path}")
48
+ # If it's a directory, remove it recursively and print a message
49
+ elif os.path.isdir(item_path):
50
+ shutil.rmtree(item_path)
51
+ print(f"Removed directory and its contents: {item_path}")
52
+
53
+ print(f"All contents wiped from {folder_path}.")
54
+
55
+
56
+ # Example usage
57
+ # folder_to_wipe = 'path_to_your_folder'
58
+ # wipe_folder(folder_to_wipe)
59
+
60
+
61
+ def create_m4b_from_chapters(input_dir, ebook_file, output_dir):
62
+ # Function to sort chapters based on their numeric order
63
+ def sort_key(chapter_file):
64
+ numbers = re.findall(r'\d+', chapter_file)
65
+ return int(numbers[0]) if numbers else 0
66
+
67
+ # Extract metadata and cover image from the eBook file
68
+ def extract_metadata_and_cover(ebook_path):
69
+ try:
70
+ cover_path = ebook_path.rsplit('.', 1)[0] + '.jpg'
71
+ subprocess.run(['ebook-meta', ebook_path, '--get-cover', cover_path], check=True)
72
+ if os.path.exists(cover_path):
73
+ return cover_path
74
+ except Exception as e:
75
+ print(f"Error extracting eBook metadata or cover: {e}")
76
+ return None
77
+ # Combine WAV files into a single file
78
+ def combine_wav_files(chapter_files, output_path):
79
+ # Initialize an empty audio segment
80
+ combined_audio = AudioSegment.empty()
81
+
82
+ # Sequentially append each file to the combined_audio
83
+ for chapter_file in chapter_files:
84
+ audio_segment = AudioSegment.from_wav(chapter_file)
85
+ combined_audio += audio_segment
86
+ # Export the combined audio to the output file path
87
+ combined_audio.export(output_path, format='wav')
88
+ print(f"Combined audio saved to {output_path}")
89
+
90
+ # Function to generate metadata for M4B chapters
91
+ def generate_ffmpeg_metadata(chapter_files, metadata_file):
92
+ with open(metadata_file, 'w') as file:
93
+ file.write(';FFMETADATA1\n')
94
+ start_time = 0
95
+ for index, chapter_file in enumerate(chapter_files):
96
+ duration_ms = len(AudioSegment.from_wav(chapter_file))
97
+ file.write(f'[CHAPTER]\nTIMEBASE=1/1000\nSTART={start_time}\n')
98
+ file.write(f'END={start_time + duration_ms}\ntitle=Chapter {index + 1}\n')
99
+ start_time += duration_ms
100
+
101
+ # Generate the final M4B file using ffmpeg
102
+ def create_m4b(combined_wav, metadata_file, cover_image, output_m4b):
103
+ # Ensure the output directory exists
104
+ os.makedirs(os.path.dirname(output_m4b), exist_ok=True)
105
+
106
+ ffmpeg_cmd = ['ffmpeg', '-i', combined_wav, '-i', metadata_file]
107
+ if cover_image:
108
+ ffmpeg_cmd += ['-i', cover_image, '-map', '0:a', '-map', '2:v']
109
+ else:
110
+ ffmpeg_cmd += ['-map', '0:a']
111
+
112
+ ffmpeg_cmd += ['-map_metadata', '1', '-c:a', 'aac', '-b:a', '192k']
113
+ if cover_image:
114
+ ffmpeg_cmd += ['-c:v', 'png', '-disposition:v', 'attached_pic']
115
+ ffmpeg_cmd += [output_m4b]
116
+
117
+ subprocess.run(ffmpeg_cmd, check=True)
118
+
119
+
120
+
121
+ # Main logic
122
+ chapter_files = sorted([os.path.join(input_dir, f) for f in os.listdir(input_dir) if f.endswith('.wav')], key=sort_key)
123
+ temp_dir = tempfile.gettempdir()
124
+ temp_combined_wav = os.path.join(temp_dir, 'combined.wav')
125
+ metadata_file = os.path.join(temp_dir, 'metadata.txt')
126
+ cover_image = extract_metadata_and_cover(ebook_file)
127
+ output_m4b = os.path.join(output_dir, os.path.splitext(os.path.basename(ebook_file))[0] + '.m4b')
128
+
129
+ combine_wav_files(chapter_files, temp_combined_wav)
130
+ generate_ffmpeg_metadata(chapter_files, metadata_file)
131
+ create_m4b(temp_combined_wav, metadata_file, cover_image, output_m4b)
132
+
133
+ # Cleanup
134
+ if os.path.exists(temp_combined_wav):
135
+ os.remove(temp_combined_wav)
136
+ if os.path.exists(metadata_file):
137
+ os.remove(metadata_file)
138
+ if cover_image and os.path.exists(cover_image):
139
+ os.remove(cover_image)
140
+
141
+ # Example usage
142
+ # create_m4b_from_chapters('path_to_chapter_wavs', 'path_to_ebook_file', 'path_to_output_dir')
143
+
144
+
145
+
146
+
147
+
148
+
149
+ # This is not the chapter-extraction code itself; it is kept here as a reference for building the chapter-labeled book with Calibre, since some systems have trouble with that step. The chapter-extraction code that uses booknlp comes after this section.
150
+ import os
151
+ import subprocess
152
+ import ebooklib
153
+ from ebooklib import epub
154
+ from bs4 import BeautifulSoup
155
+ import re
156
+ import csv
157
+ import nltk
158
+
159
+ # Only run the main script if Value is True
160
+ def create_chapter_labeled_book(ebook_file_path):
161
+ # Function to ensure the existence of a directory
162
+ def ensure_directory(directory_path):
163
+ if not os.path.exists(directory_path):
164
+ os.makedirs(directory_path)
165
+ print(f"Created directory: {directory_path}")
166
+
167
+ ensure_directory(os.path.join(".", 'Working_files', 'Book'))
168
+
169
+ def convert_to_epub(input_path, output_path):
170
+ # Convert the ebook to EPUB format using Calibre's ebook-convert
171
+ try:
172
+ subprocess.run(['ebook-convert', input_path, output_path], check=True)
173
+ except subprocess.CalledProcessError as e:
174
+ print(f"An error occurred while converting the eBook: {e}")
175
+ return False
176
+ return True
177
+
178
+ def save_chapters_as_text(epub_path):
179
+ # Create the directory if it doesn't exist
180
+ directory = os.path.join(".", "Working_files", "temp_ebook")
181
+ ensure_directory(directory)
182
+
183
+ # Open the EPUB file
184
+ book = epub.read_epub(epub_path)
185
+
186
+ previous_chapter_text = ''
187
+ previous_filename = ''
188
+ chapter_counter = 0
189
+
190
+ # Iterate through the items in the EPUB file
191
+ for item in book.get_items():
192
+ if item.get_type() == ebooklib.ITEM_DOCUMENT:
193
+ # Use BeautifulSoup to parse HTML content
194
+ soup = BeautifulSoup(item.get_content(), 'html.parser')
195
+ text = soup.get_text()
196
+
197
+ # Check if the text is not empty
198
+ if text.strip():
199
+ if len(text) < 2300 and previous_filename:
200
+ # Append text to the previous chapter if it's short
201
+ with open(previous_filename, 'a', encoding='utf-8') as file:
202
+ file.write('\n' + text)
203
+ else:
204
+ # Create a new chapter file and increment the counter
205
+ previous_filename = os.path.join(directory, f"chapter_{chapter_counter}.txt")
206
+ chapter_counter += 1
207
+ with open(previous_filename, 'w', encoding='utf-8') as file:
208
+ file.write(text)
209
+ print(f"Saved chapter: {previous_filename}")
210
+
211
+ # Main flow
212
+ input_ebook = ebook_file_path  # the path passed in by the caller
213
+ output_epub = os.path.join(".", "Working_files", "temp.epub")
214
+
215
+
216
+ if os.path.exists(output_epub):
217
+ os.remove(output_epub)
218
+ print(f"File {output_epub} has been removed.")
219
+ else:
220
+ print(f"The file {output_epub} does not exist.")
221
+
222
+ if convert_to_epub(input_ebook, output_epub):
223
+ save_chapters_as_text(output_epub)
224
+
225
+ # Download the necessary NLTK data (if not already present)
226
+ nltk.download('punkt')
227
+
228
+ def process_chapter_files(folder_path, output_csv):
229
+ with open(output_csv, 'w', newline='', encoding='utf-8') as csvfile:
230
+ writer = csv.writer(csvfile)
231
+ # Write the header row
232
+ writer.writerow(['Text', 'Start Location', 'End Location', 'Is Quote', 'Speaker', 'Chapter'])
233
+
234
+ # Process each chapter file
235
+ chapter_files = sorted(os.listdir(folder_path), key=lambda x: int(x.split('_')[1].split('.')[0]))
236
+ for filename in chapter_files:
237
+ if filename.startswith('chapter_') and filename.endswith('.txt'):
238
+ chapter_number = int(filename.split('_')[1].split('.')[0])
239
+ file_path = os.path.join(folder_path, filename)
240
+
241
+ try:
242
+ with open(file_path, 'r', encoding='utf-8') as file:
243
+ text = file.read()
244
+ # Insert "NEWCHAPTERABC" at the beginning of each chapter's text
245
+ if text:
246
+ text = "NEWCHAPTERABC" + text
247
+ sentences = nltk.tokenize.sent_tokenize(text)
248
+ for sentence in sentences:
249
+ start_location = text.find(sentence)
250
+ end_location = start_location + len(sentence)
251
+ writer.writerow([sentence, start_location, end_location, 'True', 'Narrator', chapter_number])
252
+ except Exception as e:
253
+ print(f"Error processing file {filename}: {e}")
254
+
255
+ # Example usage
256
+ folder_path = os.path.join(".", "Working_files", "temp_ebook")
257
+ output_csv = os.path.join(".", "Working_files", "Book", "Other_book.csv")
258
+
259
+ process_chapter_files(folder_path, output_csv)
260
+
261
+ def sort_key(filename):
262
+ """Extract chapter number for sorting."""
263
+ match = re.search(r'chapter_(\d+)\.txt', filename)
264
+ return int(match.group(1)) if match else 0
265
+
266
+ def combine_chapters(input_folder, output_file):
267
+ # Create the output folder if it doesn't exist
268
+ os.makedirs(os.path.dirname(output_file), exist_ok=True)
269
+
270
+ # List all txt files and sort them by chapter number
271
+ files = [f for f in os.listdir(input_folder) if f.endswith('.txt')]
272
+ sorted_files = sorted(files, key=sort_key)
273
+
274
+ with open(output_file, 'w', encoding='utf-8') as outfile: # Specify UTF-8 encoding here
275
+ for i, filename in enumerate(sorted_files):
276
+ with open(os.path.join(input_folder, filename), 'r', encoding='utf-8') as infile: # And here
277
+ outfile.write(infile.read())
278
+ # Add the marker unless it's the last file
279
+ if i < len(sorted_files) - 1:
280
+ outfile.write("\nNEWCHAPTERABC\n")
281
+
282
+ # Paths
283
+ input_folder = os.path.join(".", 'Working_files', 'temp_ebook')
284
+ output_file = os.path.join(".", 'Working_files', 'Book', 'Chapter_Book.txt')
285
+
286
+
287
+ # Combine the chapters
288
+ combine_chapters(input_folder, output_file)
289
+
290
+ ensure_directory(os.path.join(".", "Working_files", "Book"))
291
+
292
+
293
+ #create_chapter_labeled_book()
294
+
295
+
296
+
297
+
298
+ import os
299
+ import subprocess
300
+ import sys
301
+ import torchaudio
302
+
303
+ # Check if Calibre's ebook-convert tool is installed
304
+ def calibre_installed():
305
+ try:
306
+ subprocess.run(['ebook-convert', '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
307
+ return True
308
+ except FileNotFoundError:
309
+ print("Calibre is not installed. Please install Calibre for this functionality.")
310
+ return False
311
+
312
+
313
+ import os
314
+ import torch
315
+ from TTS.api import TTS
316
+ from nltk.tokenize import sent_tokenize
317
+ from pydub import AudioSegment
318
+ # Assuming split_long_sentence and wipe_folder are defined elsewhere in your code
319
+
320
+ default_target_voice_path = "default_voice.wav" # Ensure this is a valid path
321
+ default_language_code = "en"
322
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
323
+
324
+ def combine_wav_files(input_directory, output_directory, file_name):
325
+ # Ensure that the output directory exists, create it if necessary
326
+ os.makedirs(output_directory, exist_ok=True)
327
+
328
+ # Specify the output file path
329
+ output_file_path = os.path.join(output_directory, file_name)
330
+
331
+ # Initialize an empty audio segment
332
+ combined_audio = AudioSegment.empty()
333
+
334
+ # Get a list of all .wav files in the specified input directory and sort them
335
+ input_file_paths = sorted(
336
+ [os.path.join(input_directory, f) for f in os.listdir(input_directory) if f.endswith(".wav")],
337
+ key=lambda f: int(''.join(filter(str.isdigit, f)))
338
+ )
339
+
340
+ # Sequentially append each file to the combined_audio
341
+ for input_file_path in input_file_paths:
342
+ audio_segment = AudioSegment.from_wav(input_file_path)
343
+ combined_audio += audio_segment
344
+
345
+ # Export the combined audio to the output file path
346
+ combined_audio.export(output_file_path, format='wav')
347
+
348
+ print(f"Combined audio saved to {output_file_path}")
349
+
350
+ # Function to split long strings into parts
351
+ def split_long_sentence(sentence, max_length=249, max_pauses=10):
352
+ """
353
+ Splits a sentence into parts based on length or number of pauses without recursion.
354
+
355
+ :param sentence: The sentence to split.
356
+ :param max_length: Maximum allowed length of a sentence.
357
+ :param max_pauses: Maximum allowed number of pauses in a sentence.
358
+ :return: A list of sentence parts that meet the criteria.
359
+ """
360
+ parts = []
361
+ while len(sentence) > max_length or sentence.count(',') + sentence.count(';') + sentence.count('.') > max_pauses:
362
+ possible_splits = [i for i, char in enumerate(sentence) if char in ',;.' and i < max_length]
363
+ if possible_splits:
364
+ # Find the best place to split the sentence, preferring the last possible split to keep parts longer
365
+ split_at = possible_splits[-1] + 1
366
+ else:
367
+ # If no punctuation to split on within max_length, split at max_length
368
+ split_at = max_length
369
+
370
+ # Split the sentence and add the first part to the list
371
+ parts.append(sentence[:split_at].strip())
372
+ sentence = sentence[split_at:].strip()
373
+
374
+ # Add the remaining part of the sentence
375
+ parts.append(sentence)
376
+ return parts
377
+
378
+ """
379
+ if 'tts' not in locals():
380
+ tts = TTS(selected_tts_model, progress_bar=True).to(device)
381
+ """
382
+ from tqdm import tqdm
383
+
384
+ # Convert chapters to audio using XTTS
385
+ def convert_chapters_to_audio(chapters_dir, output_audio_dir, target_voice_path=None, language=None):
386
+ selected_tts_model = "tts_models/multilingual/multi-dataset/xtts_v2"
387
+ tts = TTS(selected_tts_model, progress_bar=False).to(device) # Set progress_bar to False to avoid nested progress bars
388
+
389
+ if not os.path.exists(output_audio_dir):
390
+ os.makedirs(output_audio_dir)
391
+
392
+ for chapter_file in sorted(os.listdir(chapters_dir)):
393
+ if chapter_file.endswith('.txt'):
394
+ # Extract chapter number from the filename
395
+ match = re.search(r"chapter_(\d+).txt", chapter_file)
396
+ if match:
397
+ chapter_num = int(match.group(1))
398
+ else:
399
+ print(f"Skipping file {chapter_file} as it does not match the expected format.")
400
+ continue
401
+
402
+ chapter_path = os.path.join(chapters_dir, chapter_file)
403
+ output_file_name = f"audio_chapter_{chapter_num}.wav"
404
+ output_file_path = os.path.join(output_audio_dir, output_file_name)
405
+ temp_audio_directory = os.path.join(".", "Working_files", "temp")
406
+ os.makedirs(temp_audio_directory, exist_ok=True)
407
+ temp_count = 0
408
+
409
+ with open(chapter_path, 'r', encoding='utf-8') as file:
410
+ chapter_text = file.read()
411
+ # Use the specified language model for sentence tokenization
412
+ sentences = sent_tokenize(chapter_text, language='italian' if language == 'it' else 'english')
413
+ for sentence in tqdm(sentences, desc=f"Chapter {chapter_num}"):
414
+ fragments = []
415
+ if language == "en":
416
+ fragments = split_long_sentence(sentence, max_length=249, max_pauses=10)
417
+ if language == "it":
418
+ fragments = split_long_sentence(sentence, max_length=213, max_pauses=10)
419
+ for fragment in fragments:
420
+ if fragment != "": #a hot fix to avoid blank fragments
421
+ print(f"Generating fragment: {fragment}...")
422
+ fragment_file_path = os.path.join(temp_audio_directory, f"{temp_count}.wav")
423
+ speaker_wav_path = target_voice_path if target_voice_path else default_target_voice_path
424
+ language_code = language if language else default_language_code
425
+ tts.tts_to_file(text=fragment, file_path=fragment_file_path, speaker_wav=speaker_wav_path, language=language_code)
426
+ temp_count += 1
427
+
428
+ combine_wav_files(temp_audio_directory, output_audio_dir, output_file_name)
429
+ wipe_folder(temp_audio_directory)
430
+ print(f"Converted chapter {chapter_num} to audio.")
431
+
432
+
433
+
434
+ # Main execution flow
435
+ if __name__ == "__main__":
436
+ if len(sys.argv) < 2:
437
+ print("Usage: python script.py <ebook_file_path> [target_voice_file_path]")
438
+ sys.exit(1)
439
+
440
+ ebook_file_path = sys.argv[1]
441
+ target_voice = sys.argv[2] if len(sys.argv) > 2 else None
442
+ language = sys.argv[3] if len(sys.argv) > 3 else None
443
+
444
+ if not calibre_installed():
445
+ sys.exit(1)
446
+
447
+ working_files = os.path.join(".","Working_files", "temp_ebook")
448
+ full_folder_working_files = os.path.join(".", "Working_files")
449
+ chapters_directory = os.path.join(".","Working_files", "temp_ebook")
450
+ output_audio_directory = os.path.join(".", 'Chapter_wav_files')
451
+
452
+ # print("Wiping and removeing Working_files folder...")
453
+ # remove_folder_with_contents(full_folder_working_files)
454
+ #
455
+ # print("Wiping and and removeing chapter_wav_files folder...")
456
+ # remove_folder_with_contents(output_audio_directory)
457
+
458
+ # create_chapter_labeled_book(ebook_file_path)
459
+ audiobook_output_path = os.path.join(".", "Audiobooks")
460
+ # print(f"{chapters_directory}||||{output_audio_directory}|||||{target_voice}")
461
+ # convert_chapters_to_audio(chapters_directory, output_audio_directory, target_voice, language)
462
+ create_m4b_from_chapters(output_audio_directory, ebook_file_path, audiobook_output_path)
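+
+ # Example usage (illustrative paths; voice and language arguments are optional):
+ # python p3.py my_book.epub ./4.wav en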
Notebooks/colab_ebook2audiobookxtts.ipynb ADDED
@@ -0,0 +1,105 @@
1
+ {
2
+ "nbformat": 4,
3
+ "nbformat_minor": 0,
4
+ "metadata": {
5
+ "colab": {
6
+ "provenance": [],
7
+ "gpuType": "T4",
8
+ "include_colab_link": true
9
+ },
10
+ "kernelspec": {
11
+ "name": "python3",
12
+ "display_name": "Python 3"
13
+ },
14
+ "language_info": {
15
+ "name": "python"
16
+ },
17
+ "accelerator": "GPU"
18
+ },
19
+ "cells": [
20
+ {
21
+ "cell_type": "markdown",
22
+ "metadata": {
23
+ "id": "view-in-github",
24
+ "colab_type": "text"
25
+ },
26
+ "source": [
27
+ "<a href=\"https://colab.research.google.com/github/DrewThomasson/ebook2audiobookXTTS/blob/main/Notebooks/colab_ebook2audiobookxtts.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
28
+ ]
29
+ },
30
+ {
31
+ "cell_type": "markdown",
32
+ "source": [
33
+ "## Welcome to the ebook2audiobookxtts free google colab!\n",
34
+ "## 🌟 Features\n",
35
+ "\n",
36
+ "- 📖 Converts eBooks to text format with Calibre.\n",
37
+ "- 📚 Splits eBook into chapters for organized audio.\n",
38
+ "- 🎙️ High-quality text-to-speech with Coqui XTTS.\n",
39
+ "- 🗣️ Optional voice cloning with your own voice file.\n",
40
+ "- 🌍 Supports multiple languages! (English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko)).\n",
41
+ "## Want to run locally for free? ⬇\n",
42
+ "## [Check out the ebook2audiobookxtts github!](https://github.com/DrewThomasson/ebook2audiobookXTTS)"
43
+ ],
44
+ "metadata": {
45
+ "id": "DKNNnwD-HJwQ"
46
+ }
47
+ },
48
+ {
49
+ "cell_type": "code",
50
+ "source": [
51
+ "# @title 🛠️ Install requirments\n",
52
+ "#!DEBIAN_FRONTEND=noninteractive\n",
53
+ "!sudo apt-get update # && sudo apt-get -y upgrade\n",
54
+ "!sudo apt-get -y install libegl1\n",
55
+ "!sudo apt-get -y install libopengl0\n",
56
+ "!sudo apt-get -y install libxcb-cursor0\n",
57
+ "!sudo -v && wget -nv -O- https://download.calibre-ebook.com/linux-installer.sh | sudo sh /dev/stdin\n",
58
+ "!sudo apt-get install -y ffmpeg\n",
59
+ "#!sudo apt-get install -y calibre\n",
60
+ "!pip install ebook2audiobook-install-counter\n",
61
+ "!pip install ebooklib\n",
62
+ "!pip install pydub\n",
63
+ "!pip install nltk\n",
64
+ "!pip install beautifulsoup4\n",
65
+ "!pip install tqdm\n",
66
+ "!pip install gradio\n",
67
+ "!pip install coqui-tts"
68
+ ],
69
+ "metadata": {
70
+ "id": "Edxj355K0rUz",
71
+ "collapsed": true,
72
+ "cellView": "form"
73
+ },
74
+ "execution_count": null,
75
+ "outputs": []
76
+ },
77
+ {
78
+ "cell_type": "code",
79
+ "source": [
80
+ "# @title 🚀 Run ebook2audiobookxtts! (Make sure to set the runtime to have gpu to have faster generation speeds! :)\n",
81
+ "#ntlk error fix\n",
82
+ "#https://github.com/delip/PyTorchNLPBook/issues/14\n",
83
+ "import nltk\n",
84
+ "nltk.download('punkt')\n",
85
+ "\n",
86
+ "#Auto agree to xtts\n",
87
+ "import os\n",
88
+ "os.environ[\"COQUI_TOS_AGREED\"] = \"1\"\n",
89
+ "\n",
90
+ "# To download the app.py and the Default_voice wav if not seen locally\n",
91
+ "!wget -O /content/app.py https://raw.githubusercontent.com/DrewThomasson/ebook2audiobookXTTS/main/app.py\n",
92
+ "!wget -O /content/default_voice.wav https://raw.githubusercontent.com/DrewThomasson/ebook2audiobookXTTS/main/default_voice.wav\n",
93
+ "\n",
94
+ "# Start the app with Share=True for the gradio interface\n",
95
+ "!python /content/app.py --share True"
96
+ ],
97
+ "metadata": {
98
+ "id": "658BTHueyLMo",
99
+ "cellView": "form"
100
+ },
101
+ "execution_count": null,
102
+ "outputs": []
103
+ }
104
+ ]
105
+ }
Notebooks/kaggle-beta-of-ebook2audiobookxtts-ipynb.ipynb ADDED
@@ -0,0 +1 @@
1
+ {"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.10.14","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"},"colab":{"provenance":[],"gpuType":"T4"},"accelerator":"GPU","kaggle":{"accelerator":"gpu","dataSources":[],"isInternetEnabled":true,"language":"python","sourceType":"notebook","isGpuEnabled":true}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"code","source":"# IGNORE THESE, IT'S OLD LOL\n\n# install needed packages\n\n##!apt-get update\n\n##!apt-get install wget unzip git ffmpeg calibre\n\n\n\n# pip install requirements\n\n##!pip install tts==0.21.3 pydub nltk beautifulsoup4 ebooklib tqdm gradio\n\n\n\n##!pip install numpy==1.23\n\n##!pip install --no-binary lxml lxml\n\n##import os\n\n##os.kill(os.getpid(), 9)\n","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"gh3HEhmzuqVA","outputId":"81217d71-7576-43db-d56c-07ce11ea6517","jupyter":{"source_hidden":true},"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"#!DEBIAN_FRONTEND=noninteractive\n\n!sudo apt-get update # && sudo apt-get -y upgrade\n\n!sudo apt-get -y install libegl1\n\n!sudo apt-get -y install libopengl0\n\n!sudo apt-get -y install libxcb-cursor0\n\n!sudo -v && wget -nv -O- https://download.calibre-ebook.com/linux-installer.sh | sudo sh /dev/stdin\n\n!sudo apt-get install -y ffmpeg\n\n!pip install tts pydub nltk beautifulsoup4 ebooklib tqdm\n\n!pip install numpy==1.26.4\n\n!pip install gradio","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":1000},"id":"Edxj355K0rUz","outputId":"9fc5f4e1-1ba2-4814-a477-496f626c2772","trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"# Start the app with Share=True for the gradio interface\n\n\n\n#nltk error fix\n\n#https://github.com/delip/PyTorchNLPBook/issues/14\n\nimport nltk\n\nnltk.download('punkt')\n\n\n\n#Auto agree to xtts\n\nimport os\n\nos.environ[\"COQUI_TOS_AGREED\"] = \"1\"\n\n\n\n!python /kaggle/working/app.py --share True","metadata":{"id":"EZIZva9Tvdbb","trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"code","source":"#nltk error fix\n\n#https://github.com/delip/PyTorchNLPBook/issues/14\n\nimport nltk\n\nnltk.download('punkt')\n\n\n\n#Auto agree to xtts\n\nimport os\n\nos.environ[\"COQUI_TOS_AGREED\"] = \"1\"\n\n\n\n# To download the app.py and the Default_voice wav if not seen locally\n\n!wget -O /kaggle/working/app.py https://raw.githubusercontent.com/DrewThomasson/ebook2audiobookXTTS/main/app.py\n\n!wget -O /kaggle/working/default_voice.wav https://raw.githubusercontent.com/DrewThomasson/ebook2audiobookXTTS/main/default_voice.wav\n\n\n\n# Start the app with Share=True for the gradio interface\n\n!python /kaggle/working/app.py --share True","metadata":{"id":"658BTHueyLMo","colab":{"base_uri":"https://localhost:8080/"},"outputId":"e293e70d-b25a-41bc-dbac-7ca1ddf1d3d2","trusted":true},"execution_count":null,"outputs":[]}]}
lib/__pycache__/conf.cpython-312.pyc ADDED
Binary file (6.63 kB). View file
 
lib/__pycache__/functions.cpython-312.pyc ADDED
Binary file (84.3 kB). View file
 
lib/__pycache__/lang.cpython-312.pyc ADDED
Binary file (120 kB). View file
 
lib/__pycache__/tokenizer.cpython-312.pyc ADDED
Binary file (37.1 kB). View file
 
lib/conf.py ADDED
@@ -0,0 +1,149 @@
1
+ import os
2
+ from lib.lang import default_voice_file
3
+
4
+ NATIVE = 'native'
5
+ DOCKER_UTILS = 'docker_utils'
6
+ FULL_DOCKER = 'full_docker'
7
+
8
+ version = '2.0.0'
9
+ min_python_version = (3,10)
10
+ max_python_version = (3,12)
11
+
12
+ requirements_file = os.path.abspath(os.path.join('.','requirements.txt'))
13
+
14
+ docker_utils_image = 'utils'
15
+
16
+ interface_host = '0.0.0.0'
17
+ interface_port = 7860
18
+ interface_shared_expire = 72 # hours
19
+ interface_concurrency_limit = 8 # or None for unlimited
20
+ interface_component_options = {
21
+ "gr_tab_preferences": True,
22
+ "gr_voice_file": True,
23
+ "gr_group_custom_model": True
24
+ }
25
+
26
+ python_env_dir = os.path.abspath(os.path.join('.','python_env'))
27
+
28
+ models_dir = os.path.abspath(os.path.join('.','models'))
29
+ ebooks_dir = os.path.abspath(os.path.join('.','ebooks'))
30
+ processes_dir = os.path.abspath(os.path.join('.','tmp'))
31
+
32
+ audiobooks_gradio_dir = os.path.abspath(os.path.join('.','audiobooks','gui','gradio'))
33
+ audiobooks_host_dir = os.path.abspath(os.path.join('.','audiobooks','gui','host'))
34
+ audiobooks_cli_dir = os.path.abspath(os.path.join('.','audiobooks','cli'))
35
+
37
+ # Automatically accept the non-commercial license
38
+ os.environ['COQUI_TOS_AGREED'] = '1'
39
+ os.environ['CALIBRE_TEMP_DIR'] = processes_dir
40
+ os.environ['CALIBRE_CACHE_DIRECTORY'] = processes_dir
41
+ os.environ['CALIBRE_NO_NATIVE_FILEDIALOGS'] = '1'
42
+ os.environ['DO_NOT_TRACK'] = 'true'
43
+ os.environ['HUGGINGFACE_HUB_CACHE'] = models_dir
44
+ os.environ['TTS_HOME'] = models_dir
45
+ os.environ['HF_HOME'] = models_dir
46
+ os.environ['HF_DATASETS_CACHE'] = models_dir
47
+ os.environ['HF_TOKEN_PATH'] = os.path.join(os.path.expanduser('~'), '.huggingface_token')
48
+ os.environ['TTS_CACHE'] = models_dir
49
+ os.environ['TORCH_HOME'] = models_dir
50
+ os.environ['XDG_CACHE_HOME'] = models_dir
51
+
52
+ ebook_formats = ['.epub', '.mobi', '.azw3', '.fb2', '.lrf', '.rb', '.snb', '.tcr', '.pdf', '.txt', '.rtf', '.doc', '.docx', '.html', '.odt', '.azw']
53
+ audiobook_format = 'm4b' # or 'mp3'
54
+ audioproc_format = 'wav' # only 'wav' is valid for now
55
+
56
+ default_tts_engine = 'xtts'
57
+ default_fine_tuned = 'std'
58
+ default_model_files = ['config.json', 'vocab.json', 'model.pth', 'ref.wav']
59
+
60
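+ # Registry of TTS engines: each fine-tuned entry maps to a Hugging Face repo, a subfolder, and the reference voice sample used for cloning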
+ models = {
61
+ "xtts": {
62
+ "std": {
63
+ "lang": "multi",
64
+ "repo": "tts_models/multilingual/multi-dataset/xtts_v2",
65
+ "sub": "",
66
+ "voice": default_voice_file
67
+ },
68
+ "AiExplained": {
69
+ "lang": "eng",
70
+ "repo": "drewThomasson/fineTunedTTSModels",
71
+ "sub": "xtts-v2/eng/AiExplained",
72
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "AiExplained_24khz.wav"))
73
+ },
74
+ "BobOdenkirk": {
75
+ "lang": "eng",
76
+ "repo": "drewThomasson/fineTunedTTSModels",
77
+ "sub": "xtts-v2/eng/BobOdenkirk",
78
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "BobOdenkirk_24khz.wav"))
79
+ },
80
+ "BobRoss": {
81
+ "lang": "eng",
82
+ "repo": "drewThomasson/fineTunedTTSModels",
83
+ "sub": "xtts-v2/eng/BobRoss",
84
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "BobRoss_24khz.wav"))
85
+ },
86
+ "BryanCranston": {
87
+ "lang": "eng",
88
+ "repo": "drewThomasson/fineTunedTTSModels",
89
+ "sub": "xtts-v2/eng/BryanCranston",
90
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "BryanCranston_24khz.wav"))
91
+ },
92
+ "DavidAttenborough": {
93
+ "lang": "eng",
94
+ "repo": "drewThomasson/fineTunedTTSModels",
95
+ "sub": "xtts-v2/eng/DavidAttenborough",
96
+ "voice": os.path.abspath(os.path.join("voices", "eng", "elder", "male", "DavidAttenborough_24khz.wav"))
97
+ },
98
+ "DeathPuss&Boots": {
99
+ "lang": "eng",
100
+ "repo": "drewThomasson/fineTunedTTSModels",
101
+ "sub": "xtts-v2/eng/DeathPuss&Boots",
102
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "DeathPuss&Boots_24khz.wav"))
103
+ },
104
+ "GhostMW2": {
105
+ "lang": "eng",
106
+ "repo": "drewThomasson/fineTunedTTSModels",
107
+ "sub": "xtts-v2/eng/GhostMW2",
108
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "GhostMW2_24khz.wav"))
109
+ },
110
+ "JhonButlerASMR": {
111
+ "lang": "eng",
112
+ "repo": "drewThomasson/fineTunedTTSModels",
113
+ "sub": "xtts-v2/eng/JhonButlerASMR",
114
+ "voice": os.path.abspath(os.path.join("voices", "eng", "elder", "male", "JhonButlerASMR_24khz.wav"))
115
+ },
116
+ "JhonMulaney": {
117
+ "lang": "eng",
118
+ "repo": "drewThomasson/fineTunedTTSModels",
119
+ "sub": "xtts-v2/eng/JhonMulaney",
120
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "JhonMulaney_24khz.wav"))
121
+ },
122
+ "MorganFreeman": {
123
+ "lang": "eng",
124
+ "repo": "drewThomasson/fineTunedTTSModels",
125
+ "sub": "xtts-v2/eng/MorganFreeman",
126
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "MorganFreeman_24khz.wav"))
127
+ },
128
+ "RainyDayHeadSpace": {
129
+ "lang": "eng",
130
+ "repo": "drewThomasson/fineTunedTTSModels",
131
+ "sub": "xtts-v2/eng/RainyDayHeadSpace",
132
+ "voice": os.path.abspath(os.path.join("voices", "eng", "elder", "male", "RainyDayHeadSpace_24khz.wav"))
133
+ },
134
+ "WhisperSalemASMR": {
135
+ "lang": "eng",
136
+ "repo": "drewThomasson/fineTunedTTSModels",
137
+ "sub": "xtts-v2/eng/WhisperSalemASMR",
138
+ "voice": os.path.abspath(os.path.join("voices", "eng", "adult", "male", "WhisperSalemASMR_24khz.wav"))
139
+ }
140
+ },
141
+ "fairseq": {
142
+ "std": {
143
+ "lang": "multi",
144
+ "repo": "tts_models/[lang]/fairseq/vits",
145
+ "sub": "",
146
+ "voice": default_voice_file
147
+ }
148
+ }
149
+ }
lib/functions.py ADDED
@@ -0,0 +1,1594 @@
1
+ import argparse
2
+ import csv
3
+ import docker
4
+ import ebooklib
5
+ import fnmatch
6
+ import gradio as gr
7
+ import hashlib
8
+ import json
9
+ import numpy as np
10
+ import os
11
+ import regex as re
12
+ import requests
13
+ import shutil
14
+ import socket
15
+ import subprocess
16
+ import sys
17
+ import threading
18
+ import time
19
+ import torch
20
+ import torchaudio
21
+ import urllib.request
22
+ import uuid
23
+ import zipfile
24
+ import traceback
25
+
26
+ from bs4 import BeautifulSoup
27
+ from collections import Counter
28
+ from collections.abc import MutableMapping
29
+ from datetime import datetime
30
+ from ebooklib import epub
31
+ from glob import glob
32
+ from huggingface_hub import hf_hub_download
33
+ from iso639 import languages
34
+ from multiprocessing import Manager, Event
35
+ from pydub import AudioSegment
36
+ from tqdm import tqdm
37
+ from translate import Translator
38
+ from TTS.api import TTS as XTTS
39
+ from TTS.tts.configs.xtts_config import XttsConfig
40
+ from TTS.tts.models.xtts import Xtts
41
+ from urllib.parse import urlparse
42
+
43
+ import lib.conf as conf
44
+ import lib.lang as lang
45
+
46
+ def inject_configs(target_namespace):
47
+ # Extract variables from both modules and inject them into the target namespace
48
+ for module in (conf, lang):
49
+ target_namespace.update({k: v for k, v in vars(module).items() if not k.startswith('__')})
50
+
51
+ # Inject configurations into the global namespace of this module
52
+ inject_configs(globals())
53
+
54
+ def recursive_proxy(data, manager=None):
55
+ """Recursively convert a nested dictionary into Manager.dict proxies."""
56
+ if manager is None:
57
+ manager = Manager()
58
+ if isinstance(data, dict):
59
+ proxy_dict = manager.dict()
60
+ for key, value in data.items():
61
+ proxy_dict[key] = recursive_proxy(value, manager)
62
+ return proxy_dict
63
+ elif isinstance(data, list):
64
+ proxy_list = manager.list()
65
+ for item in data:
66
+ proxy_list.append(recursive_proxy(item, manager))
67
+ return proxy_list
68
+ elif isinstance(data, (str, int, float, bool, type(None))): # Scalars
69
+ return data
70
+ else:
71
+ raise TypeError(f"Unsupported data type: {type(data)}")
72
+
73
+ class ConversionContext:
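+ # Multiprocessing-safe registry of per-session state (progress, metadata, cancellation flags) backed by Manager proxies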
74
+ def __init__(self):
75
+ self.manager = Manager()
76
+ self.sessions = self.manager.dict() # Store all session-specific contexts
77
+ self.cancellation_events = {} # Store multiprocessing.Event for each session
78
+
79
+ def get_session(self, session_id):
80
+ """Retrieve or initialize session-specific context"""
81
+ if session_id not in self.sessions:
82
+ self.sessions[session_id] = recursive_proxy({
83
+ "script_mode": NATIVE,
84
+ "client": None,
85
+ "language": default_language_code,
86
+ "audiobooks_dir": None,
87
+ "tmp_dir": None,
88
+ "src": None,
89
+ "id": session_id,
90
+ "chapters_dir": None,
91
+ "chapters_dir_sentences": None,
92
+ "epub": None,
93
+ "epub_path": None,
94
+ "filename_noext": None,
95
+ "fine_tuned": None,
96
+ "voice_file": None,
97
+ "custom_model": None,
98
+ "custom_model_dir": None,
99
+ "chapters": None,
100
+ "cover": None,
101
+ "metadata": {
102
+ "title": None,
103
+ "creator": None,
104
+ "contributor": None,
105
+ "language": None,
106
+ "language_iso1": None,
107
+ "identifier": None,
108
+ "publisher": None,
109
+ "date": None,
110
+ "description": None,
111
+ "subject": None,
112
+ "rights": None,
113
+ "format": None,
114
+ "type": None,
115
+ "coverage": None,
116
+ "relation": None,
117
+ "Source": None,
118
+ "Modified": None,
119
+ },
120
+ "status": "Idle",
121
+ "progress": 0,
122
+ "cancellation_requested": False
123
+ }, manager=self.manager)
124
+ return self.sessions[session_id]
125
+
126
+ context = ConversionContext()
127
+ is_gui_process = False
128
+
129
+ class DependencyError(Exception):
130
+ def __init__(self, message=None):
131
+ super().__init__(message)
132
+ # Automatically handle the exception when it's raised
133
+ self.handle_exception()
134
+
135
+ def handle_exception(self):
136
+ # Print the full traceback of the exception
137
+ traceback.print_exc()
138
+
139
+ # Print the exception message
140
+ print(f'Caught DependencyError: {self}')
141
+
142
+ # Exit the script if it's not a web process
143
+ if not is_gui_process:
144
+ sys.exit(1)
145
+
146
+ def prepare_dirs(src, session):
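+ # Create the session working directories, copy the source ebook into tmp, and flag a resume when the same file is already there (hash match)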
147
+ try:
148
+ resume = False
149
+ os.makedirs(os.path.join(models_dir,'tts'), exist_ok=True)
150
+ os.makedirs(session['tmp_dir'], exist_ok=True)
151
+ os.makedirs(session['custom_model_dir'], exist_ok=True)
152
+ os.makedirs(session['audiobooks_dir'], exist_ok=True)
153
+ session['src'] = os.path.join(session['tmp_dir'], os.path.basename(src))
154
+ if os.path.exists(session['src']):
155
+ if compare_files_by_hash(session['src'], src):
156
+ resume = True
157
+ if not resume:
158
+ shutil.rmtree(session['chapters_dir'], ignore_errors=True)
159
+ os.makedirs(session['chapters_dir'], exist_ok=True)
160
+ os.makedirs(session['chapters_dir_sentences'], exist_ok=True)
161
+ shutil.copy(src, session['src'])
162
+ return True
163
+ except Exception as e:
164
+ raise DependencyError(e)
165
+
166
+ def check_programs(prog_name, command, options):
167
+ try:
168
+ subprocess.run([command, options], check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
169
+ return True, None
170
+ except FileNotFoundError:
171
+ e = f'''********** Error: {prog_name} is not installed! If your OS calibre package version
172
+ is not compatible, you can still run ebook2audiobook.sh (linux/mac) or ebook2audiobook.cmd (windows) **********'''
173
+ raise DependencyError(e)
174
+ except subprocess.CalledProcessError:
175
+ e = f'Error: There was an issue running {prog_name}.'
176
+ raise DependencyError(e)
177
+
178
+ def check_fine_tuned(fine_tuned, language):
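+ # Resolve which engine owns a fine-tuned model: 'xtts' when the language is XTTS-supported, otherwise 'fairseq'; returns False when nothing matches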
179
+ try:
180
+ for parent, children in models.items():
181
+ if fine_tuned in children:
182
+ if language_xtts.get(language):
183
+ tts = 'xtts'
184
+ else:
185
+ tts = 'fairseq'
186
+ if parent == tts:
187
+ return parent
188
+ return False
189
+ except Exception as e:
190
+ raise RuntimeError(e)
191
+
192
+ def analyze_uploaded_file(zip_path, required_files=None):
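+ # Validate a custom-model zip: reject archives containing executables and require that all mandatory model files are present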
193
+ if required_files is None:
194
+ required_files = default_model_files
195
+ executable_extensions = {'.exe', '.bat', '.cmd', '.bash', '.bin', '.sh', '.msi', '.dll', '.com'}
196
+ try:
197
+ with zipfile.ZipFile(zip_path, 'r') as zf:
198
+ files_in_zip = set()
199
+ executables_found = False
200
+ for file_info in zf.infolist():
201
+ file_name = file_info.filename
202
+ if file_info.is_dir():
203
+ continue # Skip directories
204
+ base_name = os.path.basename(file_name)
205
+ files_in_zip.add(base_name)
206
+ _, ext = os.path.splitext(base_name.lower())
207
+ if ext in executable_extensions:
208
+ executables_found = True
209
+ break
210
+ missing_files = [f for f in required_files if f not in files_in_zip]
211
+ is_valid = not executables_found and not missing_files
212
+ return is_valid
213
+ except zipfile.BadZipFile:
214
+ raise ValueError("error: The file is not a valid ZIP archive.")
215
+ except Exception as e:
216
+ raise RuntimeError(f'analyze_uploaded_file(): {e}')
217
+
218
+ async def extract_custom_model(file_src, dest=None, session=None, required_files=None):
219
+ try:
220
+ progress_bar = None
221
+ if is_gui_process:
222
+ progress_bar = gr.Progress(track_tqdm=True)
223
+ if dest is None:
224
+ dest = session['custom_model_dir'] = os.path.join(models_dir, '__sessions', f"model-{session['id']}")
225
+ os.makedirs(dest, exist_ok=True)
226
+ if required_files is None:
227
+ required_files = default_model_files
228
+
229
+ dir_src = os.path.dirname(file_src)
230
+ dir_name = os.path.basename(file_src).replace('.zip', '')
231
+
232
+ with zipfile.ZipFile(file_src, 'r') as zip_ref:
233
+ files = zip_ref.namelist()
234
+ files_length = len(files)
235
+ dir_tts = 'fairseq'
236
+ xtts_config = 'config.json'
237
+
238
+ # Check the model type
239
+ config_data = {}
240
+ if xtts_config in zip_ref.namelist():
241
+ with zip_ref.open(xtts_config) as file:
242
+ config_data = json.load(file)
243
+ if config_data.get('model') == 'xtts':
244
+ dir_tts = 'xtts'
245
+
246
+ dir_dest = os.path.join(dest, dir_tts, dir_name)
247
+ os.makedirs(dir_dest, exist_ok=True)
248
+
249
+ # Initialize progress bar
250
+ with tqdm(total=100, unit='%') as t: # Track progress as a percentage
251
+ for i, file in enumerate(files):
252
+ if file in required_files:
253
+ zip_ref.extract(file, dir_dest)
254
+ progress_percentage = ((i + 1) / files_length) * 100
255
+ t.n = int(progress_percentage)
256
+ t.refresh()
257
+ if progress_bar is not None:
258
+ progress_bar(progress_percentage / 100)
259
+ yield dir_name, progress_bar
260
+
261
+ os.remove(file_src)
262
+ print(f'Extracted files to {dir_dest}')
263
+ yield dir_name, progress_bar
264
+ return
265
+ except Exception as e:
266
+ raise DependencyError(e)
267
+
268
+ def calculate_hash(filepath, hash_algorithm='sha256'):
269
+ hash_func = hashlib.new(hash_algorithm)
270
+ with open(filepath, 'rb') as file:
271
+ while chunk := file.read(8192): # Read in chunks to handle large files
272
+ hash_func.update(chunk)
273
+ return hash_func.hexdigest()
274
+
275
+ def compare_files_by_hash(file1, file2, hash_algorithm='sha256'):
276
+ return calculate_hash(file1, hash_algorithm) == calculate_hash(file2, hash_algorithm)
277
+
278
+ def has_metadata(f):
279
+ try:
280
+ b = epub.read_epub(f)
281
+ metadata = b.get_metadata('DC', '')
282
+ if metadata:
283
+ return True
284
+ else:
285
+ return False
286
+ except Exception as e:
287
+ return False
288
+
289
+ def convert_to_epub(session):
290
+ if session['cancellation_requested']:
291
+ #stop_and_detach_tts()
292
+ print('Cancel requested')
293
+ return False
294
+ if session['script_mode'] == DOCKER_UTILS:
295
+ try:
296
+ docker_dir = os.path.basename(session['tmp_dir'])
297
+ docker_file_in = os.path.basename(session['src'])
298
+ docker_file_out = os.path.basename(session['epub_path'])
299
+
300
+ # Check if the input file is already an EPUB
301
+ if docker_file_in.lower().endswith('.epub'):
302
+ shutil.copy(session['src'], session['epub_path'])
303
+ return True
304
+
305
+ # Convert the ebook to EPUB format using utils Docker image
306
+ container = session['client'].containers.run(
307
+ docker_utils_image,
308
+ command=f'ebook-convert /files/{docker_dir}/{docker_file_in} /files/{docker_dir}/{docker_file_out}',
309
+ volumes={session['tmp_dir']: {'bind': f'/files/{docker_dir}', 'mode': 'rw'}},
310
+ remove=True,
311
+ detach=False,
312
+ stdout=True,
313
+ stderr=True
314
+ )
315
+ print(container.decode('utf-8'))
316
+ return True
317
+ except docker.errors.ContainerError as e:
318
+ raise DependencyError(e)
319
+ except docker.errors.ImageNotFound as e:
320
+ raise DependencyError(e)
321
+ except docker.errors.APIError as e:
322
+ raise DependencyError(e)
323
+ else:
324
+ try:
325
+ util_app = shutil.which('ebook-convert')
326
+ subprocess.run([util_app, session['src'], session['epub_path']], check=True)
327
+ return True
328
+ except subprocess.CalledProcessError as e:
329
+ raise DependencyError(e)
330
+
331
+ def get_cover(session):
332
+ try:
333
+ if session['cancellation_requested']:
334
+ #stop_and_detach_tts()
335
+ print('Cancel requested')
336
+ return False
337
+ cover_image = False
338
+ cover_path = os.path.join(session['tmp_dir'], session['filename_noext'] + '.jpg')
339
+ for item in session['epub'].get_items_of_type(ebooklib.ITEM_COVER):
340
+ cover_image = item.get_content()
341
+ break
342
+ if not cover_image:
343
+ for item in session['epub'].get_items_of_type(ebooklib.ITEM_IMAGE):
344
+ if 'cover' in item.file_name.lower() or 'cover' in item.get_id().lower():
345
+ cover_image = item.get_content()
346
+ break
347
+ if cover_image:
348
+ with open(cover_path, 'wb') as cover_file:
349
+ cover_file.write(cover_image)
350
+ return cover_path
351
+ return True
352
+ except Exception as e:
353
+ raise DependencyError(e)
354
+
355
+ def get_chapters(language, session):
356
+ try:
357
+ if session['cancellation_requested']:
358
+ #stop_and_detach_tts()
359
+ print('Cancel requested')
360
+ return False
361
+ all_docs = list(session['epub'].get_items_of_type(ebooklib.ITEM_DOCUMENT))
362
+ if all_docs:
363
+ all_docs = all_docs[1:]
364
+ doc_patterns = [filter_pattern(str(doc)) for doc in all_docs if filter_pattern(str(doc))]
365
+ most_common_pattern = filter_doc(doc_patterns)
366
+ selected_docs = [doc for doc in all_docs if filter_pattern(str(doc)) == most_common_pattern]
367
+ chapters = [filter_chapter(doc, language) for doc in selected_docs]
368
+ if session['metadata'].get('creator'):
369
+ intro = f"{session['metadata']['creator']}, {session['metadata']['title']};\n "
370
+ chapters[0].insert(0, intro)
371
+ return chapters
372
+ return False
373
+ except Exception as e:
374
+ raise DependencyError(f'Error extracting main content pages: {e}')
375
+
376
+ def filter_doc(doc_patterns):
377
+ pattern_counter = Counter(doc_patterns)
378
+ # Returns a list with one tuple: [(pattern, count)]
379
+ most_common = pattern_counter.most_common(1)
380
+ return most_common[0][0] if most_common else None
381
+
382
+ def filter_pattern(doc_identifier):
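+ # Reduce an epub document identifier to a coarse naming pattern (letters only, pure alpha, or 'numbers') so filter_doc() can pick the dominant chapter scheme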
383
+ parts = doc_identifier.split(':')
384
+ if len(parts) > 2:
385
+ segment = parts[1]
386
+ if re.search(r'[a-zA-Z]', segment) and re.search(r'\d', segment):
387
+ return ''.join([char for char in segment if char.isalpha()])
388
+ elif re.match(r'^[a-zA-Z]+$', segment):
389
+ return segment
390
+ elif re.match(r'^\d+$', segment):
391
+ return 'numbers'
392
+ return None
393
+
394
+ def filter_chapter(doc, language):
395
+ soup = BeautifulSoup(doc.get_body_content(), 'html.parser')
396
+ # Remove scripts and styles
397
+ for script in soup(["script", "style"]):
398
+ script.decompose()
399
+ # Normalize lines and remove unnecessary spaces
400
+ text = re.sub(r'(\r\n|\r|\n){3,}', '\r\n', soup.get_text().strip())
401
+ text = replace_roman_numbers(text)
402
+ lines = (line.strip() for line in text.splitlines())
403
+ chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
404
+ text = '\n'.join(chunk for chunk in chunks if chunk)
405
+ text = text.replace('»', '"').replace('«', '"')
406
+ # Pattern 1: Add a space between UTF-8 characters and numbers
407
+ text = re.sub(r'(?<=[\p{L}])(?=\d)|(?<=\d)(?=[\p{L}])', ' ', text)
408
+ # Pattern 2: Split numbers into groups of 4
409
+ text = re.sub(r'(\d{4})(?=\d)', r'\1 ', text)
410
+ chapter_sentences = get_sentences(text, language)
411
+ return chapter_sentences
412
+
413
+ def get_sentences(sentence, language, max_pauses=9):
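+ # Split text into TTS-sized chunks that respect the language's character limit and pause budget, cutting preferably at periods, then commas, then any punctuation, then spaces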
414
+ max_length = language_mapping[language]['char_limit']
415
+ punctuation = language_mapping[language]['punctuation']
416
+ sentence = sentence.replace(".", ";\n")
417
+ parts = []
418
+ while len(sentence) > max_length or sum(sentence.count(p) for p in punctuation) > max_pauses:
419
+ # Step 1: Look for the last period (.) within max_length
420
+ possible_splits = [i for i, char in enumerate(sentence[:max_length]) if char == '.']
421
+ # Step 2: If no periods, look for the last comma (,)
422
+ if not possible_splits:
423
+ possible_splits = [i for i, char in enumerate(sentence[:max_length]) if char == ',']
424
+ # Step 3: If still no splits, look for any other punctuation
425
+ if not possible_splits:
426
+ possible_splits = [i for i, char in enumerate(sentence[:max_length]) if char in punctuation]
427
+ # Step 4: Determine where to split the sentence
428
+ if possible_splits:
429
+ split_at = possible_splits[-1] + 1 # Split at the last occurrence of punctuation
430
+ else:
431
+ # If no punctuation is found, split at the last space
432
+ last_space = sentence.rfind(' ', 0, max_length)
433
+ if last_space != -1:
434
+ split_at = last_space + 1
435
+ else:
436
+ # If no space is found, force split at max_length
437
+ split_at = max_length
438
+ # Add the split sentence to parts
439
+ parts.append(sentence[:split_at].strip() + ' ')
440
+ sentence = sentence[split_at:].strip()
441
+ # Add the remaining sentence if any
442
+ if sentence:
443
+ parts.append(sentence.strip() + ' ')
444
+ return parts
445
+
446
+ def convert_chapters_to_audio(session):
447
+ try:
448
+ if session['cancellation_requested']:
449
+ #stop_and_detach_tts()
450
+ print('Cancel requested')
451
+ return False
452
+ progress_bar = None
453
+ params = {}
454
+ if is_gui_process:
455
+ progress_bar = gr.Progress(track_tqdm=True)
456
+ params['tts_model'] = None
457
+ '''
458
+ # List available TTS base models
459
+ print("Available Models:")
460
+ print("=================")
461
+ for index, model in enumerate(XTTS().list_models(), 1):
462
+ print(f"{index}. {model}")
463
+ '''
464
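+ # Model selection: XTTS-supported languages load a user custom model, a fine-tuned checkpoint from Hugging Face, or the stock XTTS v2; other languages fall back to fairseq VITS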
+ if session['metadata']['language'] in language_xtts:
465
+ params['tts_model'] = 'xtts'
466
+ if session['custom_model'] is not None:
467
+ print(f"Loading TTS {params['tts_model']} model from {session['custom_model']}...")
468
+ model_path = os.path.join(session['custom_model'], 'model.pth')
469
+ config_path = os.path.join(session['custom_model'],'config.json')
470
+ vocab_path = os.path.join(session['custom_model'],'vocab.json')
471
+ voice_path = os.path.join(session['custom_model'],'ref.wav')
472
+ config = XttsConfig()
473
+ config.models_dir = os.path.join(models_dir,'tts')
474
+ config.load_json(config_path)
475
+ params['tts'] = Xtts.init_from_config(config)
476
+ params['tts'].load_checkpoint(config, checkpoint_path=model_path, vocab_path=vocab_path, eval=True)
477
+ print('Computing speaker latents...')
478
+ params['voice_file'] = session['voice_file'] if session['voice_file'] is not None else voice_path
479
+ params['gpt_cond_latent'], params['speaker_embedding'] = params['tts'].get_conditioning_latents(audio_path=[params['voice_file']])
480
+ elif session['fine_tuned'] != 'std':
481
+ print(f"Loading TTS {params['tts_model']} model from {session['fine_tuned']}...")
482
+ hf_repo = models[params['tts_model']][session['fine_tuned']]['repo']
483
+ hf_sub = models[params['tts_model']][session['fine_tuned']]['sub']
484
+ cache_dir = os.path.join(models_dir,'tts')
485
+ model_path = hf_hub_download(repo_id=hf_repo, filename=f"{hf_sub}/model.pth", cache_dir=cache_dir)
486
+ config_path = hf_hub_download(repo_id=hf_repo, filename=f"{hf_sub}/config.json", cache_dir=cache_dir)
487
+ vocab_path = hf_hub_download(repo_id=hf_repo, filename=f"{hf_sub}/vocab.json", cache_dir=cache_dir)
488
+ config = XttsConfig()
489
+ config.models_dir = cache_dir
490
+ config.load_json(config_path)
491
+ params['tts'] = Xtts.init_from_config(config)
492
+ params['tts'].load_checkpoint(config, checkpoint_path=model_path, vocab_path=vocab_path, eval=True)
493
+ print('Computing speaker latents...')
494
+ params['voice_file'] = session['voice_file'] if session['voice_file'] is not None else models[params['tts_model']][session['fine_tuned']]['voice']
495
+ params['gpt_cond_latent'], params['speaker_embedding'] = params['tts'].get_conditioning_latents(audio_path=[params['voice_file']])
496
+ else:
497
+ print(f"Loading TTS {params['tts_model']} model from {models[params['tts_model']][session['fine_tuned']]['repo']}...")
498
+ params['tts'] = XTTS(model_name=models[params['tts_model']][session['fine_tuned']]['repo'])
499
+ params['voice_file'] = session['voice_file'] if session['voice_file'] is not None else models[params['tts_model']][session['fine_tuned']]['voice']
500
+ params['tts'].to(session['device'])
501
+ else:
502
+ params['tts_model'] = 'fairseq'
503
+ model_repo = models[params['tts_model']][session['fine_tuned']]['repo'].replace("[lang]", session['metadata']['language'])
504
+ print(f"Loading TTS {model_repo} model from {model_repo}...")
505
+ params['tts'] = XTTS(model_repo)
506
+ params['voice_file'] = session['voice_file'] if session['voice_file'] is not None else models[params['tts_model']][session['fine_tuned']]['voice']
507
+ params['tts'].to(session['device'])
508
+
509
+ resume_chapter = 0
510
+ resume_sentence = 0
511
+
512
+ # Check existing files to resume the process if it was interrupted
513
+ existing_chapters = sorted([f for f in os.listdir(session['chapters_dir']) if f.endswith(f'.{audioproc_format}')])
514
+ existing_sentences = sorted([f for f in os.listdir(session['chapters_dir_sentences']) if f.endswith(f'.{audioproc_format}')])
515
+
516
+ if existing_chapters:
517
+ count_chapter_files = len(existing_chapters)
518
+ resume_chapter = count_chapter_files - 1 if count_chapter_files > 0 else 0
519
+ print(f'Resuming from chapter {count_chapter_files}')
520
+ if existing_sentences:
521
+ resume_sentence = len(existing_sentences)
522
+ print(f'Resuming from sentence {resume_sentence}')
523
+
524
+ total_chapters = len(session['chapters'])
525
+ total_sentences = sum(len(array) for array in session['chapters'])
526
+ current_sentence = 0
527
+
528
+ with tqdm(total=total_sentences, desc='convert_chapters_to_audio 0.00%', bar_format='{desc}: {n_fmt}/{total_fmt} ', unit='step', initial=resume_sentence) as t:
529
+ t.n = resume_sentence
530
+ t.refresh()
531
+ for x in range(resume_chapter, total_chapters):
532
+ chapter_num = x + 1
533
+ chapter_audio_file = f'chapter_{chapter_num}.{audioproc_format}'
534
+ sentences = session['chapters'][x]
535
+ sentences_count = len(sentences)
536
+ start = current_sentence # Mark the starting sentence of the chapter
537
+ print(f"\nChapter {chapter_num} containing {sentences_count} sentences...")
538
+ for i, sentence in enumerate(sentences):
539
+ if current_sentence >= resume_sentence:
540
+ params['sentence_audio_file'] = os.path.join(session['chapters_dir_sentences'], f'{current_sentence}.{audioproc_format}')
541
+ params['sentence'] = sentence
542
+ if convert_sentence_to_audio(params, session):
543
+ t.update(1)
544
+ percentage = (current_sentence / total_sentences) * 100
545
+ t.set_description(f'Processing {percentage:.2f}%')
546
+ print(f'Sentence: {sentence}')
547
+ t.refresh()
548
+ if progress_bar is not None:
549
+ progress_bar(current_sentence / total_sentences)
550
+ else:
551
+ return False
552
+ current_sentence += 1
553
+ end = current_sentence - 1
554
+ print(f"\nEnd of Chapter {chapter_num}")
555
+ if start >= resume_sentence:
556
+ if combine_audio_sentences(chapter_audio_file, start, end, session):
557
+ print(f'Combining chapter {chapter_num} to audio, sentence {start} to {end}')
558
+ else:
559
+ print('combine_audio_sentences() failed!')
560
+ return False
561
+ return True
562
+ except Exception as e:
563
+ raise DependencyError(e)
564
+
565
+ def convert_sentence_to_audio(params, session):
566
+ try:
567
+ if session['cancellation_requested']:
568
+ #stop_and_detach_tts(params['tts'])
569
+ print('Cancel requested')
570
+ return False
571
+ generation_params = {
572
+ "temperature": session['temperature'],
573
+ "length_penalty": session["length_penalty"],
574
+ "repetition_penalty": session['repetition_penalty'],
575
+ "num_beams": int(session['length_penalty']) + 1 if session["length_penalty"] > 1 else 1,
576
+ "top_k": session['top_k'],
577
+ "top_p": session['top_p'],
578
+ "speed": session['speed'],
579
+ "enable_text_splitting": session['enable_text_splitting']
580
+ }
581
+ if params['tts_model'] == 'xtts':
582
+ if session['custom_model'] is not None or session['fine_tuned'] != 'std':
583
+ output = params['tts'].inference(
584
+ text=params['sentence'],
585
+ language=session['metadata']['language_iso1'],
586
+ gpt_cond_latent=params['gpt_cond_latent'],
587
+ speaker_embedding=params['speaker_embedding'],
588
+ **generation_params
589
+ )
590
+ torchaudio.save(
591
+ params['sentence_audio_file'],
592
+ torch.tensor(output[audioproc_format]).unsqueeze(0),
593
+ sample_rate=24000
594
+ )
595
+ else:
596
+ params['tts'].tts_to_file(
597
+ text=params['sentence'],
598
+ language=session['metadata']['language_iso1'],
599
+ file_path=params['sentence_audio_file'],
600
+ speaker_wav=params['voice_file'],
601
+ **generation_params
602
+ )
603
+ elif params['tts_model'] == 'fairseq':
604
+ params['tts'].tts_with_vc_to_file(
605
+ text=params['sentence'],
606
+ file_path=params['sentence_audio_file'],
607
+ speaker_wav=params['voice_file'].replace('_24khz','_16khz'),
608
+ split_sentences=session['enable_text_splitting']
609
+ )
610
+ if os.path.exists(params['sentence_audio_file']):
611
+ return True
612
+ print(f"Cannot create {params['sentence_audio_file']}")
613
+ return False
614
+ except Exception as e:
615
+ raise DependencyError(e)
616
+
617
+ def combine_audio_sentences(chapter_audio_file, start, end, session):
618
+ try:
619
+ chapter_audio_file = os.path.join(session['chapters_dir'], chapter_audio_file)
620
+ combined_audio = AudioSegment.empty()
621
+ # Get all audio sentence files sorted by their numeric indices
622
+ sentence_files = [f for f in os.listdir(session['chapters_dir_sentences']) if f.endswith(".wav")]
623
+ sentences_dir_ordered = sorted(sentence_files, key=lambda x: int(re.search(r'\d+', x).group()))
624
+ # Filter the files in the range [start, end]
625
+ selected_files = [
626
+ file for file in sentences_dir_ordered
627
+ if start <= int(''.join(filter(str.isdigit, os.path.basename(file)))) <= end
628
+ ]
629
+ for file in selected_files:
630
+ if session['cancellation_requested']:
631
+ #stop_and_detach_tts(params['tts'])
632
+ print('Cancel requested')
633
+ return False
637
+ audio_segment = AudioSegment.from_file(os.path.join(session['chapters_dir_sentences'],file), format=audioproc_format)
638
+ combined_audio += audio_segment
639
+ combined_audio.export(chapter_audio_file, format=audioproc_format)
640
+ print(f'Combined audio saved to {chapter_audio_file}')
641
+ return True
642
+ except Exception as e:
643
+ raise DependencyError(e)
644
+
645
+
646
+ def combine_audio_chapters(session):
647
+ def sort_key(chapter_file):
648
+ numbers = re.findall(r'\d+', chapter_file)
649
+ return int(numbers[0]) if numbers else 0
650
+
651
+ def assemble_audio():
652
+ try:
653
+ combined_audio = AudioSegment.empty()
654
+ batch_size = 256
655
+ # Process the chapter files in batches
656
+ for i in range(0, len(chapter_files), batch_size):
657
+ batch_files = chapter_files[i:i + batch_size]
658
+ batch_audio = AudioSegment.empty() # Initialize an empty AudioSegment for the batch
659
+ # Sequentially append each file in the current batch to the batch_audio
660
+ for chapter_file in batch_files:
661
+ if session['cancellation_requested']:
662
+ print('Cancel requested')
663
+ return False
664
+ audio_segment = AudioSegment.from_wav(os.path.join(session['chapters_dir'],chapter_file))
665
+ batch_audio += audio_segment
666
+ combined_audio += batch_audio
667
+ combined_audio.export(assembled_audio, format=audioproc_format)
668
+ print(f'Combined audio saved to {assembled_audio}')
669
+ return True
670
+ except Exception as e:
671
+ raise DependencyError(e)
672
+
673
+ def generate_ffmpeg_metadata():
674
+ try:
675
+ if session['cancellation_requested']:
676
+ print('Cancel requested')
677
+ return False
678
+ ffmpeg_metadata = ';FFMETADATA1\n'
679
+ if session['metadata'].get('title'):
680
+ ffmpeg_metadata += f"title={session['metadata']['title']}\n"
681
+ if session['metadata'].get('creator'):
682
+ ffmpeg_metadata += f"artist={session['metadata']['creator']}\n"
683
+ if session['metadata'].get('language'):
684
+ ffmpeg_metadata += f"language={session['metadata']['language']}\n\n"
685
+ if session['metadata'].get('publisher'):
686
+ ffmpeg_metadata += f"publisher={session['metadata']['publisher']}\n"
687
+ if session['metadata'].get('description'):
688
+ ffmpeg_metadata += f"description={session['metadata']['description']}\n"
689
+ if session['metadata'].get('published'):
690
+ # Check if the timestamp contains fractional seconds
691
+ if '.' in session['metadata']['published']:
692
+ # Parse with fractional seconds
693
+ year = datetime.strptime(session['metadata']['published'], '%Y-%m-%dT%H:%M:%S.%f%z').year
694
+ else:
695
+ # Parse without fractional seconds
696
+ year = datetime.strptime(session['metadata']['published'], '%Y-%m-%dT%H:%M:%S%z').year
697
+ else:
698
+ # If published is not provided, use the current year
699
+ year = datetime.now().year
700
+ ffmpeg_metadata += f'year={year}\n'
701
+ if session['metadata'].get('identifiers') and isinstance(session['metadata'].get('identifiers'), dict):
702
+ isbn = session['metadata']['identifiers'].get('isbn', None)
703
+ if isbn:
704
+ ffmpeg_metadata += f'isbn={isbn}\n' # ISBN
705
+ mobi_asin = session['metadata']['identifiers'].get('mobi-asin', None)
706
+ if mobi_asin:
707
+ ffmpeg_metadata += f'asin={mobi_asin}\n' # ASIN
708
+ start_time = 0
709
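+ # Emit one [CHAPTER] block per chapter file, accumulating millisecond start/end offsets (TIMEBASE=1/1000)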
+ for index, chapter_file in enumerate(chapter_files):
710
+ if session['cancellation_requested']:
711
+ msg = 'Cancel requested'
712
+ raise ValueError(msg)
713
+
714
+ duration_ms = len(AudioSegment.from_wav(os.path.join(session['chapters_dir'],chapter_file)))
715
+ ffmpeg_metadata += f'[CHAPTER]\nTIMEBASE=1/1000\nSTART={start_time}\n'
716
+ ffmpeg_metadata += f'END={start_time + duration_ms}\ntitle=Chapter {index + 1}\n'
717
+ start_time += duration_ms
718
+ # Write the metadata to the file
719
+ with open(metadata_file, 'w', encoding='utf-8') as file:
720
+ file.write(ffmpeg_metadata)
721
+ return True
722
+ except Exception as e:
723
+ raise DependencyError(e)
724
+
725
+ def export_audio():
726
+ try:
727
+ if session['cancellation_requested']:
728
+ print('Cancel requested')
729
+ return False
730
+ ffmpeg_cover = None
731
+ if session['script_mode'] == DOCKER_UTILS:
732
+ docker_dir = os.path.basename(session['tmp_dir'])
733
+ ffmpeg_combined_audio = f'/files/{docker_dir}/' + os.path.basename(assembled_audio)
734
+ ffmpeg_metadata_file = f'/files/{docker_dir}/' + os.path.basename(metadata_file)
735
+ ffmpeg_final_file = f'/files/{docker_dir}/' + os.path.basename(docker_final_file)
736
+ if session['cover'] is not None:
737
+ ffmpeg_cover = f'/files/{docker_dir}/' + os.path.basename(session['cover'])
738
+ ffmpeg_cmd = ['ffmpeg', '-i', ffmpeg_combined_audio, '-i', ffmpeg_metadata_file]
739
+ else:
740
+ ffmpeg_combined_audio = assembled_audio
741
+ ffmpeg_metadata_file = metadata_file
742
+ ffmpeg_final_file = final_file
743
+ if session['cover'] is not None:
744
+ ffmpeg_cover = session['cover']
745
+ ffmpeg_cmd = [shutil.which('ffmpeg'), '-i', ffmpeg_combined_audio, '-i', ffmpeg_metadata_file]
746
+ if ffmpeg_cover is not None:
747
+ ffmpeg_cmd += ['-i', ffmpeg_cover, '-map', '0:a', '-map', '2:v']
748
+ else:
749
+ ffmpeg_cmd += ['-map', '0:a']
750
+ ffmpeg_cmd += ['-map_metadata', '1', '-c:a', 'aac', '-b:a', '128k', '-ar', '44100']
751
+ if ffmpeg_cover is not None:
752
+ if ffmpeg_cover.endswith('.png'):
753
+ ffmpeg_cmd += ['-c:v', 'png', '-disposition:v', 'attached_pic'] # PNG cover
754
+ else:
755
+ ffmpeg_cmd += ['-c:v', 'copy', '-disposition:v', 'attached_pic'] # JPEG cover (no re-encoding needed)
756
+ if ffmpeg_cover is not None and ffmpeg_cover.endswith('.png'):
757
+ ffmpeg_cmd += ['-pix_fmt', 'yuv420p']
758
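+ # Mastering chain: noise gate, light compression, loudness normalization, FFT denoise, and a gentle three-point EQ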
+ ffmpeg_cmd += [
759
+ '-af',
760
+ 'agate=threshold=-35dB:ratio=1.5:attack=10:release=200,acompressor=threshold=-20dB:ratio=2:attack=80:release=200:makeup=1dB,loudnorm=I=-19:TP=-3:LRA=7:linear=true,afftdn=nf=-50,equalizer=f=150:t=q:w=2:g=2,equalizer=f=250:t=q:w=2:g=-2,equalizer=f=12000:t=q:w=2:g=2',
761
+ '-movflags', '+faststart', '-y', ffmpeg_final_file
762
+ ]
763
+ if session['script_mode'] == DOCKER_UTILS:
764
+ try:
765
+ container = session['client'].containers.run(
766
+ docker_utils_image,
767
+ command=ffmpeg_cmd,
768
+ volumes={session['tmp_dir']: {'bind': f'/files/{docker_dir}', 'mode': 'rw'}},
769
+ remove=True,
770
+ detach=False,
771
+ stdout=True,
772
+ stderr=True
773
+ )
774
+ print(container.decode('utf-8'))
775
+ if shutil.copy(docker_final_file, final_file):
776
+ return True
777
+ return False
778
+ except docker.errors.ContainerError as e:
779
+ raise DependencyError(e)
780
+ except docker.errors.ImageNotFound as e:
781
+ raise DependencyError(e)
782
+ except docker.errors.APIError as e:
783
+ raise DependencyError(e)
784
+ else:
785
+ try:
786
+ subprocess.run(ffmpeg_cmd, env={}, check=True)
787
+ return True
788
+ except subprocess.CalledProcessError as e:
789
+ raise DependencyError(e)
790
+
791
+ except Exception as e:
792
+ raise DependencyError(e)
793
+
794
+ try:
795
+ chapter_files = [f for f in os.listdir(session['chapters_dir']) if f.endswith(".wav")]
796
+ chapter_files = sorted(chapter_files, key=lambda x: int(re.search(r'\d+', x).group()))
797
+ assembled_audio = os.path.join(session['tmp_dir'], session['metadata']['title'] + '.' + audioproc_format)
798
+ metadata_file = os.path.join(session['tmp_dir'], 'metadata.txt')
799
+ if assemble_audio():
800
+ if generate_ffmpeg_metadata():
801
+ final_name = session['metadata']['title'] + '.' + audiobook_format
802
+ docker_final_file = os.path.join(session['tmp_dir'], final_name)
803
+ final_file = os.path.join(session['audiobooks_dir'], final_name)
804
+ if export_audio():
805
+ return final_file
806
+ return None
807
+ except Exception as e:
808
+ raise DependencyError(e)
809
+
810
+ def replace_roman_numbers(text):
811
+ def roman_to_int(s):
812
+ try:
813
+ roman = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000,
814
+ 'IV': 4, 'IX': 9, 'XL': 40, 'XC': 90, 'CD': 400, 'CM': 900}
815
+ i = 0
816
+ num = 0
817
+ # Iterate over the string to calculate the integer value
818
+ while i < len(s):
819
+ # Check for two-character numerals (subtractive combinations)
820
+ if i + 1 < len(s) and s[i:i+2] in roman:
821
+ num += roman[s[i:i+2]]
822
+ i += 2
823
+ else:
824
+ # Add the value of the single character
825
+ num += roman[s[i]]
826
+ i += 1
827
+ return num
828
+ except Exception as e:
829
+ return s
830
+
831
+ roman_chapter_pattern = re.compile(
832
+ r'\b(chapter|volume|chapitre|tome|capitolo|capítulo|volumen|Kapitel|глава|том|κεφάλαιο|τόμος|capitul|poglavlje)\s'
833
+ r'(M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})|[IVXLCDM]+)\b',
834
+ re.IGNORECASE
835
+ )
836
+
837
+ roman_numerals_with_period = re.compile(
838
+ r'^(M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})|[IVXLCDM])\.+'
839
+ )
840
+
841
+ def replace_chapter_match(match):
842
+ chapter_word = match.group(1)
843
+ roman_numeral = match.group(2)
844
+ integer_value = roman_to_int(roman_numeral.upper())
845
+ return f'{chapter_word.capitalize()} {integer_value}'
846
+
847
+ def replace_numeral_with_period(match):
848
+ roman_numeral = match.group(1)
849
+ integer_value = roman_to_int(roman_numeral)
850
+ return f'{integer_value}.'
851
+
852
+ text = roman_chapter_pattern.sub(replace_chapter_match, text)
853
+ text = roman_numerals_with_period.sub(replace_numeral_with_period, text)
854
+ return text
855
+ '''
856
+ def stop_and_detach_tts(tts=None):
857
+ if tts is not None:
858
+ if next(tts.parameters()).is_cuda:
859
+ tts.to('cpu')
860
+ del tts
861
+ if torch.cuda.is_available():
862
+ torch.cuda.empty_cache()
863
+ '''
864
+ def delete_old_web_folders(root_dir):
865
+ try:
866
+ if not os.path.exists(root_dir):
867
+ os.makedirs(root_dir)
868
+ print(f'Created missing directory: {root_dir}')
869
+ current_time = time.time()
870
+ age_limit = current_time - interface_shared_expire * 60 * 60 # convert interface_shared_expire (hours) to seconds
871
+ for folder_name in os.listdir(root_dir):
872
+ dir_path = os.path.join(root_dir, folder_name)
873
+ if os.path.isdir(dir_path) and folder_name.startswith('web-'):
874
+ folder_creation_time = os.path.getctime(dir_path)
875
+ if folder_creation_time < age_limit:
876
+ shutil.rmtree(dir_path)
877
+ except Exception as e:
878
+ raise DependencyError(e)
879
+
880
+ def compare_file_metadata(f1, f2):
881
+ if os.path.getsize(f1) != os.path.getsize(f2):
882
+ return False
883
+ if os.path.getmtime(f1) != os.path.getmtime(f2):
884
+ return False
885
+ return True
886
+
887
+ def convert_ebook(args):
888
+ try:
889
+ global is_gui_process
890
+ global context
891
+ error = None
892
+ try:
893
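+ # Normalize the language code: map ISO 639-1 (two-letter) codes to ISO 639-3, or validate a three-letter code directly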
+ if len(args['language']) == 2:
894
+ lang_array = languages.get(alpha2=args['language'])
895
+ if lang_array and lang_array.part3:
896
+ args['language'] = lang_array.part3
897
+ else:
898
+ args['language'] = None
899
+ else:
900
+ lang_array = languages.get(part3=args['language'])
901
+ if not lang_array:
902
+ args['language'] = None
903
+ except Exception as e:
904
+ args['language'] = None
905
+ pass
906
+
907
+ if args['language'] is not None and args['language'] in language_mapping.keys():
908
+ session_id = args['session'] if args['session'] is not None else str(uuid.uuid4())
909
+ session = context.get_session(session_id)
910
+ session['id'] = session_id
911
+ session['src'] = args['ebook']
912
+ session['script_mode'] = args['script_mode'] if args['script_mode'] is not None else NATIVE
913
+ session['audiobooks_dir'] = args['audiobooks_dir']
914
+ is_gui_process = args['is_gui_process']
915
+ device = args['device'].lower()
916
+ voice_file = args['voice']
917
+ language = args['language']
918
+ temperature = args['temperature']
919
+ length_penalty = args['length_penalty']
920
+ repetition_penalty = args['repetition_penalty']
921
+ top_k = args['top_k']
922
+ top_p = args['top_p']
923
+ speed = args['speed']
924
+ enable_text_splitting = args['enable_text_splitting'] if args['enable_text_splitting'] is not None else True
925
+ custom_model_file = args['custom_model'] if args['custom_model'] != 'none' and args['custom_model'] is not None else None
926
+ fine_tuned = args['fine_tuned'] if check_fine_tuned(args['fine_tuned'], args['language']) else None
927
+
928
+ if not fine_tuned:
929
+ raise ValueError('The fine tuned model does not exist.')
930
+
931
+ if not os.path.splitext(args['ebook'])[1]:
932
+ raise ValueError('The selected ebook file has no extension. Please select a valid file.')
933
+
934
+ if session['script_mode'] == NATIVE:
935
+ ok, e = check_programs('Calibre', 'calibre', '--version')
936
+ if not ok:
937
+ raise DependencyError(e)
938
+ ok, e = check_programs('FFmpeg', 'ffmpeg', '-version')
939
+ if not ok:
940
+ raise DependencyError(e)
941
+ elif session['script_mode'] == DOCKER_UTILS:
942
+ session['client'] = docker.from_env()
943
+
944
+ session['tmp_dir'] = os.path.join(processes_dir, f"ebook-{session['id']}")
945
+ session['chapters_dir'] = os.path.join(session['tmp_dir'], f"chapters_{hashlib.md5(args['ebook'].encode()).hexdigest()}")
946
+ session['chapters_dir_sentences'] = os.path.join(session['chapters_dir'], 'sentences')
947
+
948
+ if not is_gui_process:
949
+ print(f'*********** Session: {session_id}', '************* Store it; in case of interruption or crash you can resume the conversion')
950
+ session['custom_model_dir'] = os.path.join(models_dir,'__sessions',f"model-{session['id']}")
951
+ if custom_model_file:
952
+ session['custom_model'], progression_status = extract_custom_model(custom_model_file, session['custom_model_dir'])
953
+ if not session['custom_model']:
954
+ raise ValueError(f'{custom_model_file} could not be extracted or mandatory files are missing')
955
+
956
+ if prepare_dirs(args['ebook'], session):
957
+ session['filename_noext'] = os.path.splitext(os.path.basename(session['src']))[0]
958
+ if not torch.cuda.is_available() or device == 'cpu':
959
+ if device == 'gpu':
960
+ print('GPU is not available on your device!')
961
+ device = 'cpu'
962
+ else:
963
+ device = 'cuda'
964
+ torch.device(device)
965
+ print(f'Available Processor Unit: {device}')
966
+ session['epub_path'] = os.path.join(session['tmp_dir'], '__' + session['filename_noext'] + '.epub')
967
+ has_src_metadata = has_metadata(session['src'])
968
+ if convert_to_epub(session):
969
+ session['epub'] = epub.read_epub(session['epub_path'], {'ignore_ncx': True})
970
+ metadata = dict(session['metadata'])
971
+ for key, value in metadata.items():
972
+ data = session['epub'].get_metadata('DC', key)
973
+ if data:
974
+ for value, attributes in data:
975
+ if key == 'language' and not has_src_metadata:
976
+ session['metadata'][key] = language
977
+ else:
978
+ session['metadata'][key] = value
979
+ language_array = languages.get(part3=language)
980
+ if language_array and language_array.part1:
981
+ session['metadata']['language_iso1'] = language_array.part1
982
+ if session['metadata']['language'] == language or session['metadata']['language_iso1'] and session['metadata']['language'] == session['metadata']['language_iso1']:
983
+ session['metadata']['title'] = os.path.splitext(os.path.basename(session['src']))[0] if not session['metadata']['title'] else session['metadata']['title']
984
+ session['metadata']['creator'] = False if not session['metadata']['creator'] else session['metadata']['creator']
985
+ session['cover'] = get_cover(session)
986
+ if session['cover']:
987
+ session['chapters'] = get_chapters(language, session)
988
+ if session['chapters']:
989
+ session['device'] = device
990
+ session['temperature'] = temperature
991
+ session['length_penalty'] = length_penalty
992
+ session['repetition_penalty'] = repetition_penalty
993
+ session['top_k'] = top_k
994
+ session['top_p'] = top_p
995
+ session['speed'] = speed
996
+ session['enable_text_splitting'] = enable_text_splitting
997
+ session['fine_tuned'] = fine_tuned
998
+ session['voice_file'] = voice_file
999
+ session['language'] = language
1000
+ if convert_chapters_to_audio(session):
1001
+ final_file = combine_audio_chapters(session)
1002
+ if final_file is not None:
1003
+ chapters_dirs = [
1004
+ dir_name for dir_name in os.listdir(session['tmp_dir'])
1005
+ if fnmatch.fnmatch(dir_name, "chapters_*") and os.path.isdir(os.path.join(session['tmp_dir'], dir_name))
1006
+ ]
1007
+ if len(chapters_dirs) > 1:
1008
+ if os.path.exists(session['chapters_dir']):
1009
+ shutil.rmtree(session['chapters_dir'])
1010
+ if os.path.exists(session['epub_path']):
1011
+ os.remove(session['epub_path'])
1012
+ if os.path.exists(session['cover']):
1013
+ os.remove(session['cover'])
1014
+ else:
1015
+ if os.path.exists(session['tmp_dir']):
1016
+ shutil.rmtree(session['tmp_dir'])
1017
+ progress_status = f'Audiobook {os.path.basename(final_file)} created!'
1018
+ return progress_status, final_file
1019
+ else:
1020
+ error = 'combine_audio_chapters() error: final_file not created!'
1021
+ else:
1022
+ error = 'convert_chapters_to_audio() failed!'
1023
+ else:
1024
+ error = 'get_chapters() failed!'
1025
+ else:
1026
+ error = 'get_cover() failed!'
1027
+ else:
1028
+ error = f"WARNING: Ebook language: {session['metadata']['language']}, language selected: {language}"
1029
+ else:
1030
+ error = 'convert_to_epub() failed!'
1031
+ else:
1032
+ error = f"Temporary directory {session['tmp_dir']} not removed due to failure."
1033
+ else:
1034
+ error = f"Language {args['language']} is not supported."
1035
+ if session['cancellation_requested']:
1036
+ error = 'Cancelled'
1037
+ print(error)
1038
+ return error, None
1039
+ except Exception as e:
1040
+ print(f'convert_ebook() Exception: {e}')
1041
+ return e, None
1042
+
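
For orientation while reading the diff: `convert_ebook()` is the entry point the CLI path uses as well, so it can be driven without the GUI. A minimal sketch, assuming only the `args` keys visible in this function and in `submit_convert_btn()` further down; every value is a placeholder:

```python
# Hypothetical headless call to convert_ebook(); all values are placeholders
# and only keys that appear in this diff are assumed.
args = {
    'is_gui_process': False,
    'session': None,               # None lets the function allocate a new session
    'script_mode': 'native',       # assumed mode name
    'device': 'cpu',
    'ebook': '/path/to/book.epub', # placeholder path
    'audiobooks_dir': '/path/to/audiobooks',
    'voice': None,                 # optional cloning voice (.wav)
    'language': 'eng',
    'custom_model': None,
    'temperature': 0.65,
    'length_penalty': 1.0,
    'repetition_penalty': 2.5,
    'top_k': 50,
    'top_p': 0.8,
    'speed': 1.0,
    'enable_text_splitting': True,
    'fine_tuned': 'std',
}
progress_status, final_file = convert_ebook(args)
print(progress_status, final_file)
```
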
+ def web_interface(args):
+     script_mode = args['script_mode']
+     is_gui_process = args['is_gui_process']
+     is_gui_shared = args['share']
+     is_converting = False
+     audiobooks_dir = None
+     ebook_src = None
+     audiobook_file = None
+     language_options = [
+         (
+             f"{details['name']} - {details['native_name']}" if details['name'] != details['native_name'] else details['name'],
+             lang
+         )
+         for lang, details in language_mapping.items()
+     ]
+     custom_model_options = None
+     fine_tuned_options = list(models['xtts'].keys())
+     default_language_name = next((name for name, key in language_options if key == default_language_code), None)
+
+     theme = gr.themes.Origin(
+         primary_hue='amber',
+         secondary_hue='green',
+         neutral_hue='gray',
+         radius_size='lg',
+         font_mono=['JetBrains Mono', 'monospace', 'Consolas', 'Menlo', 'Liberation Mono']
+     )
+
+     with gr.Blocks(theme=theme) as interface:
+         gr.HTML(
+             '''
+             <style>
+                 .svelte-1xyfx7i.center.boundedheight.flex{
+                     height: 120px !important;
+                 }
+                 .block.svelte-5y6bt2 {
+                     padding: 10px !important;
+                     margin: 0 !important;
+                     height: auto !important;
+                     font-size: 16px !important;
+                 }
+                 .wrap.svelte-12ioyct {
+                     padding: 0 !important;
+                     margin: 0 !important;
+                     font-size: 12px !important;
+                 }
+                 .block.svelte-5y6bt2.padded {
+                     height: auto !important;
+                     padding: 10px !important;
+                 }
+                 .block.svelte-5y6bt2.padded.hide-container {
+                     height: auto !important;
+                     padding: 0 !important;
+                 }
+                 .waveform-container.svelte-19usgod {
+                     height: 58px !important;
+                     overflow: hidden !important;
+                     padding: 0 !important;
+                     margin: 0 !important;
+                 }
+                 .component-wrapper.svelte-19usgod {
+                     height: 110px !important;
+                 }
+                 .timestamps.svelte-19usgod {
+                     display: none !important;
+                 }
+                 .controls.svelte-ije4bl {
+                     padding: 0 !important;
+                     margin: 0 !important;
+                 }
+                 #component-7, #component-10, #component-20 {
+                     height: 140px !important;
+                 }
+                 #component-47, #component-51 {
+                     height: 100px !important;
+                 }
+             </style>
+             '''
+         )
+         gr.Markdown(
+             f'''
+             # Ebook2Audiobook v{version}<br/>
+             https://github.com/DrewThomasson/ebook2audiobook<br/>
+             Convert eBooks into immersive audiobooks with realistic TTS voice models.<br/>
+             Multiuser, multiprocessing, and multithreaded; can run on a geo cluster to spread the conversion load across the grid.
+             '''
+         )
+         with gr.Tabs():
+             gr_tab_main = gr.TabItem('Input Options')
+             with gr_tab_main:
+                 with gr.Row():
+                     with gr.Column(scale=3):
+                         with gr.Group():
+                             gr_ebook_file = gr.File(label='eBook File (.epub, .mobi, .azw3, .fb2, .lrf, .rb, .snb, .tcr, .pdf, .txt, .rtf, .doc, .docx, .html, .odt, .azw)', file_types=['.epub', '.mobi', '.azw3', '.fb2', '.lrf', '.rb', '.snb', '.tcr', '.pdf', '.txt', '.rtf', '.doc', '.docx', '.html', '.odt', '.azw'])
+                         with gr.Group():
+                             gr_voice_file = gr.File(label='*Cloning Voice (a 24kHz .wav for the XTTS base model, 16kHz for the FAIRSEQ base model, no longer than 6 sec)', file_types=['.wav'], visible=interface_component_options['gr_voice_file'])
+                             gr.Markdown('<p>&nbsp;&nbsp;* Optional</p>')
+                         with gr.Group():
+                             gr_device = gr.Radio(label='Processor Unit', choices=['CPU', 'GPU'], value='CPU')
+                         with gr.Group():
+                             gr_language = gr.Dropdown(label='Language', choices=[name for name, _ in language_options], value=default_language_name)
+                     with gr.Column(scale=3):
+                         gr_group_custom_model = gr.Group(visible=interface_component_options['gr_group_custom_model'])
+                         with gr_group_custom_model:
+                             gr_custom_model_file = gr.File(label='*Custom XTTS Model (a .zip containing config.json, vocab.json, model.pth, ref.wav)', file_types=['.zip'])
+                             gr_custom_model_list = gr.Dropdown(label='', choices=['none'], interactive=True)
+                             gr.Markdown('<p>&nbsp;&nbsp;* Optional</p>')
+                         with gr.Group():
+                             gr_session_status = gr.Textbox(label='Session')
+                         with gr.Group():
+                             gr_tts_engine = gr.Dropdown(label='TTS Base', choices=[default_tts_engine], value=default_tts_engine, interactive=True)
+                             gr_fine_tuned = gr.Dropdown(label='Fine Tuned Models', choices=fine_tuned_options, value=default_fine_tuned, interactive=True)
+             gr_tab_preferences = gr.TabItem('Audio Generation Preferences', visible=interface_component_options['gr_tab_preferences'])
+             with gr_tab_preferences:
+                 gr.Markdown(
+                     '''
+                     ### Customize Audio Generation Parameters
+                     Adjust the settings below to influence how the audio is generated. You can control the creativity, speed, repetition, and more.
+                     '''
+                 )
+                 gr_temperature = gr.Slider(
+                     label='Temperature',
+                     minimum=0.1,
+                     maximum=10.0,
+                     step=0.1,
+                     value=0.65,
+                     info='Higher values lead to more creative, unpredictable outputs; lower values sound more monotone.'
+                 )
+                 gr_length_penalty = gr.Slider(
+                     label='Length Penalty',
+                     minimum=0.5,
+                     maximum=10.0,
+                     step=0.1,
+                     value=1.0,
+                     info='Penalizes longer sequences; higher values produce shorter outputs. Not applied to custom models.'
+                 )
+                 gr_repetition_penalty = gr.Slider(
+                     label='Repetition Penalty',
+                     minimum=1.0,
+                     maximum=10.0,
+                     step=0.1,
+                     value=2.5,
+                     info='Penalizes repeated phrases; higher values reduce repetition.'
+                 )
+                 gr_top_k = gr.Slider(
+                     label='Top-k Sampling',
+                     minimum=10,
+                     maximum=100,
+                     step=1,
+                     value=50,
+                     info='Lower values restrict outputs to the most likely words and speed up audio generation.'
+                 )
+                 gr_top_p = gr.Slider(
+                     label='Top-p Sampling',
+                     minimum=0.1,
+                     maximum=1.0,
+                     step=0.01,
+                     value=0.8,
+                     info='Controls the cumulative probability for word selection. Lower values make the output more predictable and speed up audio generation.'
+                 )
+                 gr_speed = gr.Slider(
+                     label='Speed',
+                     minimum=0.5,
+                     maximum=3.0,
+                     step=0.1,
+                     value=1.0,
+                     info='Adjusts how fast the narrator will speak.'
+                 )
+                 gr_enable_text_splitting = gr.Checkbox(
+                     label='Enable Text Splitting',
+                     value=True,
+                     info='Splits long texts into sentences to generate audio in chunks. Useful for very long inputs.'
+                 )
+
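
These sliders map onto XTTS sampling arguments, which the conversion pipeline ultimately forwards to the model. A minimal sketch of that hand-off, assuming a loaded XTTS model and the keyword names used by recent coqui-tts releases (verify against your installed version):

```python
# Sketch only; xtts_model, gpt_cond_latent and speaker_embedding are assumed
# to have been prepared elsewhere (model load + voice conditioning).
out = xtts_model.inference(
    text='Hello world.',
    language='en',
    gpt_cond_latent=gpt_cond_latent,
    speaker_embedding=speaker_embedding,
    temperature=0.65,
    length_penalty=1.0,
    repetition_penalty=2.5,
    top_k=50,
    top_p=0.8,
    speed=1.0,
    enable_text_splitting=True,
)
```
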
+         gr_state = gr.State(value="")  # Initialize state for each user session
+         gr_session = gr.Textbox(label='Session', visible=False)
+         gr_conversion_progress = gr.Textbox(label='Progress')
+         gr_convert_btn = gr.Button('Convert', variant='primary', interactive=False)
+         gr_audio_player = gr.Audio(label='Listen', type='filepath', show_download_button=False, container=True, visible=False)
+         gr_audiobooks_ddn = gr.Dropdown(choices=[], label='Audiobooks')
+         gr_audiobook_link = gr.File(label='Download')
+         gr_write_data = gr.JSON(visible=False)
+         gr_read_data = gr.JSON(visible=False)
+         gr_data = gr.State({})
+         gr_modal_html = gr.HTML()
+
+         def show_modal(message):
+             return f'''
+             <style>
+                 .modal {{
+                     display: none; /* Hidden by default */
+                     position: fixed;
+                     top: 0;
+                     left: 0;
+                     width: 100%;
+                     height: 100%;
+                     background-color: rgba(0, 0, 0, 0.5);
+                     z-index: 9999;
+                     display: flex;
+                     justify-content: center;
+                     align-items: center;
+                 }}
+                 .modal-content {{
+                     background-color: #333;
+                     padding: 20px;
+                     border-radius: 8px;
+                     text-align: center;
+                     max-width: 300px;
+                     box-shadow: 0 4px 8px rgba(0, 0, 0, 0.5);
+                     border: 2px solid #FFA500;
+                     color: white;
+                     font-family: Arial, sans-serif;
+                     position: relative;
+                 }}
+                 .modal-content p {{
+                     margin: 10px 0;
+                 }}
+                 /* Spinner */
+                 .spinner {{
+                     margin: 15px auto;
+                     border: 4px solid rgba(255, 255, 255, 0.2);
+                     border-top: 4px solid #FFA500;
+                     border-radius: 50%;
+                     width: 30px;
+                     height: 30px;
+                     animation: spin 1s linear infinite;
+                 }}
+                 @keyframes spin {{
+                     0% {{ transform: rotate(0deg); }}
+                     100% {{ transform: rotate(360deg); }}
+                 }}
+             </style>
+             <div id="custom-modal" class="modal">
+                 <div class="modal-content">
+                     <p>{message}</p>
+                     <div class="spinner"></div> <!-- Spinner added here -->
+                 </div>
+             </div>
+             '''
+
+         def hide_modal():
+             return ''
+
+         def update_interface():
+             nonlocal is_converting
+             is_converting = False
+             return gr.update(value='Convert', variant='primary', interactive=False), gr.update(value=None), gr.update(value=None), gr.update(value=audiobook_file), update_audiobooks_ddn(), hide_modal()
+
+         def refresh_audiobook_list():
+             files = []
+             if audiobooks_dir is not None:
+                 if os.path.exists(audiobooks_dir):
+                     files = [f for f in os.listdir(audiobooks_dir)]
+                     files.sort(key=lambda x: os.path.getmtime(os.path.join(audiobooks_dir, x)), reverse=True)
+             return files
+
+         def change_gr_audiobooks_ddn(audiobook):
+             if audiobooks_dir is not None:
+                 if audiobook:
+                     link = os.path.join(audiobooks_dir, audiobook)
+                     return link, link, gr.update(visible=True)
+             return None, None, gr.update(visible=False)
+
+         def update_convert_btn(upload_file=None, custom_model_file=None, session_id=None):
+             if session_id is None:
+                 yield gr.update(variant='primary', interactive=False)
+                 return
+             else:
+                 session = context.get_session(session_id)
+                 if hasattr(upload_file, 'name') and not hasattr(custom_model_file, 'name'):
+                     yield gr.update(variant='primary', interactive=True)
+                 else:
+                     yield gr.update(variant='primary', interactive=False)
+                 return
+
+         def update_audiobooks_ddn():
+             files = refresh_audiobook_list()
+             return gr.update(choices=files, label='Audiobooks', value=files[0] if files else None)
+
+         async def change_gr_ebook_file(f, session_id):
+             nonlocal is_converting
+             if context and session_id:
+                 session = context.get_session(session_id)
+                 if f is None:
+                     if is_converting:
+                         session['cancellation_requested'] = True
+                         yield show_modal('Cancellation requested, please wait...')
+                         return
+                 session['cancellation_requested'] = False
+             yield hide_modal()
+             return
+
+         def change_gr_language(selected: str, session_id: str):
+             nonlocal custom_model_options
+             if selected == 'zzzz':
+                 new_language_name = default_language_name
+                 new_language_key = default_language_code
+             else:
+                 new_language_name, new_language_key = next(((name, key) for name, key in language_options if key == selected), (None, None))
+             tts_engine_options = ['xtts'] if language_xtts.get(new_language_key, False) else ['fairseq']
+             fine_tuned_options = [
+                 model_name
+                 for model_name, model_details in models.get(tts_engine_options[0], {}).items()
+                 if model_details.get('lang') == 'multi' or model_details.get('lang') == new_language_key
+             ]
+             custom_model_options = ['none']
+             if context and session_id:
+                 session = context.get_session(session_id)
+                 session['language'] = new_language_key
+                 custom_model_tts = check_custom_model_tts(session)
+                 custom_model_tts_dir = os.path.join(session['custom_model_dir'], custom_model_tts)
+                 if os.path.exists(custom_model_tts_dir):
+                     custom_model_options += os.listdir(custom_model_tts_dir)
+             return (
+                 gr.update(value=new_language_name),
+                 gr.update(choices=tts_engine_options, value=tts_engine_options[0]),
+                 gr.update(choices=fine_tuned_options, value=fine_tuned_options[0] if fine_tuned_options else 'none'),
+                 gr.update(choices=custom_model_options, value=custom_model_options[0])
+             )
+
+         def check_custom_model_tts(session):
+             custom_model_tts = 'xtts'
+             if not language_xtts.get(session['language']):
+                 custom_model_tts = 'fairseq'
+             custom_model_tts_dir = os.path.join(session['custom_model_dir'], custom_model_tts)
+             if not os.path.isdir(custom_model_tts_dir):
+                 os.makedirs(custom_model_tts_dir, exist_ok=True)
+             return custom_model_tts
+
+         def change_gr_custom_model_list(custom_model_list):
+             if custom_model_list == 'none':
+                 return gr.update(visible=True)
+             return gr.update(visible=False)
+
+         async def change_gr_custom_model_file(custom_model_file, session_id):
+             try:
+                 nonlocal custom_model_options, gr_custom_model_file, gr_conversion_progress
+                 if context and session_id:
+                     session = context.get_session(session_id)
+                     if custom_model_file is not None:
+                         if analyze_uploaded_file(custom_model_file):
+                             session['custom_model'], progress_status = extract_custom_model(custom_model_file, None, session)
+                             if session['custom_model']:
+                                 custom_model_tts_dir = check_custom_model_tts(session)
+                                 custom_model_options = ['none'] + os.listdir(os.path.join(session['custom_model_dir'], custom_model_tts_dir))
+                                 yield (
+                                     gr.update(visible=False),
+                                     gr.update(choices=custom_model_options, value=session['custom_model']),
+                                     gr.update(value=f"{session['custom_model']} added to the custom list")
+                                 )
+                                 gr_custom_model_file = gr.File(label='*XTTS Model (a .zip containing config.json, vocab.json, model.pth, ref.wav)', value=None, file_types=['.zip'])
+                                 return
+                 yield gr.update(), gr.update(), gr.update(value='Invalid file! Please upload a valid ZIP.')
+                 return
+             except Exception as e:
+                 yield gr.update(), gr.update(), gr.update(value=f'Error: {str(e)}')
+                 return
+
+         def change_gr_tts_engine(engine):
+             if engine == 'xtts':
+                 return gr.update(visible=True)
+             else:
+                 return gr.update(visible=False)
+
+         def change_gr_fine_tuned(fine_tuned):
+             visible = False
+             if fine_tuned == 'std':
+                 visible = True
+             return gr.update(visible=visible)
+
+         def change_gr_data(data):
+             data['event'] = 'change_data'
+             return data
+
+         def change_gr_read_data(data):
+             nonlocal audiobooks_dir
+             nonlocal custom_model_options
+             warning_text_extra = ''
+             if not data:
+                 data = {'session_id': str(uuid.uuid4())}
+                 warning_text = f"Session: {data['session_id']}"
+             else:
+                 if 'session_id' not in data:
+                     data['session_id'] = str(uuid.uuid4())
+                 warning_text = data['session_id']
+                 event = data.get('event', '')
+                 if event != 'load':
+                     return [gr.update(), gr.update(), gr.update(), gr.update(), gr.update()]
+             session = context.get_session(data['session_id'])
+             session['custom_model_dir'] = os.path.join(models_dir, '__sessions', f"model-{session['id']}")
+             os.makedirs(session['custom_model_dir'], exist_ok=True)
+             custom_model_tts_dir = check_custom_model_tts(session)
+             custom_model_options = ['none'] + os.listdir(os.path.join(session['custom_model_dir'], custom_model_tts_dir))
+             if is_gui_shared:
+                 warning_text_extra = f' Note: access limit time: {interface_shared_expire} hours'
+                 audiobooks_dir = os.path.join(audiobooks_gradio_dir, f"web-{data['session_id']}")
+                 delete_old_web_folders(audiobooks_gradio_dir)
+             else:
+                 audiobooks_dir = os.path.join(audiobooks_host_dir, f"web-{data['session_id']}")
+             return [data, f'{warning_text}{warning_text_extra}', data['session_id'], update_audiobooks_ddn(), gr.update(choices=custom_model_options, value='none')]
+
+         def submit_convert_btn(
+                 session, device, ebook_file, voice_file, language,
+                 custom_model_file, temperature, length_penalty,
+                 repetition_penalty, top_k, top_p, speed, enable_text_splitting, fine_tuned
+             ):
+             nonlocal is_converting
+
+             args = {
+                 "is_gui_process": is_gui_process,
+                 "session": session,
+                 "script_mode": script_mode,
+                 "device": device.lower(),
+                 "ebook": ebook_file.name if ebook_file else None,
+                 "audiobooks_dir": audiobooks_dir,
+                 "voice": voice_file.name if voice_file else None,
+                 "language": next((key for name, key in language_options if name == language), None),
+                 "custom_model": custom_model_file if custom_model_file != 'none' else None,  # selected entry from the custom model dropdown
+                 "temperature": float(temperature),
+                 "length_penalty": float(length_penalty),
+                 "repetition_penalty": float(repetition_penalty),
+                 "top_k": int(top_k),
+                 "top_p": float(top_p),
+                 "speed": float(speed),
+                 "enable_text_splitting": enable_text_splitting,
+                 "fine_tuned": fine_tuned
+             }
+
+             if args["ebook"] is None:
+                 yield gr.update(value='Error: a file is required.')
+                 return
+
+             try:
+                 is_converting = True
+                 progress_status, audiobook_file = convert_ebook(args)
+                 if audiobook_file is None:
+                     if is_converting:
+                         yield gr.update(value='Conversion cancelled.')
+                         return
+                     else:
+                         yield gr.update(value='Conversion failed.')
+                         return
+                 else:
+                     yield progress_status
+                     return
+             except Exception as e:
+                 yield DependencyError(e)
+                 return
+
+         gr_ebook_file.change(
+             fn=update_convert_btn,
+             inputs=[gr_ebook_file, gr_custom_model_file, gr_session],
+             outputs=gr_convert_btn
+         ).then(
+             fn=change_gr_ebook_file,
+             inputs=[gr_ebook_file, gr_session],
+             outputs=[gr_modal_html]
+         )
+         gr_language.change(
+             fn=lambda selected, session_id: change_gr_language(dict(language_options).get(selected, 'Unknown'), session_id),
+             inputs=[gr_language, gr_session],
+             outputs=[gr_language, gr_tts_engine, gr_fine_tuned, gr_custom_model_list]
+         )
+         gr_audiobooks_ddn.change(
+             fn=change_gr_audiobooks_ddn,
+             inputs=gr_audiobooks_ddn,
+             outputs=[gr_audiobook_link, gr_audio_player, gr_audio_player]
+         )
+         gr_custom_model_file.change(
+             fn=change_gr_custom_model_file,
+             inputs=[gr_custom_model_file, gr_session],
+             outputs=[gr_fine_tuned, gr_custom_model_list, gr_conversion_progress]
+         )
+         gr_custom_model_list.change(
+             fn=change_gr_custom_model_list,
+             inputs=gr_custom_model_list,
+             outputs=gr_fine_tuned
+         )
+         gr_tts_engine.change(
+             fn=change_gr_tts_engine,
+             inputs=gr_tts_engine,
+             outputs=gr_tab_preferences
+         )
+         gr_fine_tuned.change(
+             fn=change_gr_fine_tuned,
+             inputs=gr_fine_tuned,
+             outputs=gr_group_custom_model
+         )
+         gr_session.change(
+             fn=change_gr_data,
+             inputs=gr_data,
+             outputs=gr_write_data
+         )
+         gr_write_data.change(
+             fn=None,
+             inputs=gr_write_data,
+             js='''
+                 (data) => {
+                     localStorage.clear();
+                     console.log(data);
+                     window.localStorage.setItem('data', JSON.stringify(data));
+                 }
+             '''
+         )
+         gr_read_data.change(
+             fn=change_gr_read_data,
+             inputs=gr_read_data,
+             outputs=[gr_data, gr_session_status, gr_session, gr_audiobooks_ddn, gr_custom_model_list]
+         )
+         gr_convert_btn.click(
+             fn=update_convert_btn,
+             inputs=None,
+             outputs=gr_convert_btn
+         ).then(
+             fn=submit_convert_btn,
+             inputs=[
+                 gr_session, gr_device, gr_ebook_file, gr_voice_file, gr_language,
+                 gr_custom_model_list, gr_temperature, gr_length_penalty,
+                 gr_repetition_penalty, gr_top_k, gr_top_p, gr_speed, gr_enable_text_splitting, gr_fine_tuned
+             ],
+             outputs=gr_conversion_progress
+         ).then(
+             fn=update_interface,
+             inputs=None,
+             outputs=[gr_convert_btn, gr_ebook_file, gr_voice_file, gr_audio_player, gr_audiobooks_ddn, gr_modal_html]
+         )
+         interface.load(
+             fn=None,
+             js='''
+                 () => {
+                     const dataStr = window.localStorage.getItem('data');
+                     if (dataStr) {
+                         const obj = JSON.parse(dataStr);
+                         obj.event = 'load';
+                         console.log(obj);
+                         return obj;
+                     }
+                     return null;
+                 }
+             ''',
+             outputs=gr_read_data
+         )
+
+     try:
+         interface.queue(default_concurrency_limit=interface_concurrency_limit).launch(server_name=interface_host, server_port=interface_port, share=is_gui_shared)
+     except OSError as e:
+         print(f'Connection error: {e}')
+     except socket.error as e:
+         print(f'Socket error: {e}')
+     except KeyboardInterrupt:
+         print('Server interrupted by user. Shutting down...')
+     except Exception as e:
+         print(f'An unexpected error occurred: {e}')
lib/lang.py ADDED
The diff for this file is too large to render. See raw diff
 
lib/tokenizer.py ADDED
@@ -0,0 +1,906 @@
+ import logging
+ import os
+ import re
+ import textwrap
+ from functools import cached_property
+
+ import torch
+ from num2words import num2words
+ from spacy.lang.ar import Arabic
+ from spacy.lang.en import English
+ from spacy.lang.es import Spanish
+ from spacy.lang.hi import Hindi
+ from spacy.lang.ja import Japanese
+ from spacy.lang.zh import Chinese
+ from tokenizers import Tokenizer
+
+ from TTS.tts.layers.xtts.zh_num2words import TextNorm as zh_num2words
+
+ logger = logging.getLogger(__name__)
+
+
+ def get_spacy_lang(lang):
+     """Return the spaCy language pipeline used for sentence splitting."""
+     if lang == "zh":
+         return Chinese()
+     elif lang == "ja":
+         return Japanese()
+     elif lang == "ar":
+         return Arabic()
+     elif lang == "es":
+         return Spanish()
+     elif lang == "hi":
+         return Hindi()
+     else:
+         # For most languages, English does the job
+         return English()
+
+
+ def split_sentence(text, lang, text_split_length=250):
+     """Split `text` into chunks of at most `text_split_length` characters, on sentence boundaries where possible."""
+     text_splits = []
+     if text_split_length is not None and len(text) >= text_split_length:
+         text_splits.append("")
+         nlp = get_spacy_lang(lang)
+         nlp.add_pipe("sentencizer")
+         doc = nlp(text)
+         for sentence in doc.sents:
+             if len(text_splits[-1]) + len(str(sentence)) <= text_split_length:
+                 # if the last chunk plus the current sentence fits within text_split_length,
+                 # append the current sentence to the last chunk
+                 text_splits[-1] += " " + str(sentence)
+                 text_splits[-1] = text_splits[-1].lstrip()
+             elif len(str(sentence)) > text_split_length:
+                 # if the current sentence alone exceeds text_split_length, hard-wrap it
+                 for line in textwrap.wrap(
+                     str(sentence),
+                     width=text_split_length,
+                     drop_whitespace=True,
+                     break_on_hyphens=False,
+                     tabsize=1,
+                 ):
+                     text_splits.append(str(line))
+             else:
+                 text_splits.append(str(sentence))
+
+         if len(text_splits) > 1:
+             if text_splits[0] == "":
+                 del text_splits[0]
+     else:
+         text_splits = [text.lstrip()]
+
+     return text_splits
+
+
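
The splitting rule above is easiest to see in action: chunks are merged up to the limit, and a single oversized sentence is hard-wrapped. A quick illustrative run (exact boundaries depend on spaCy's sentencizer):

```python
# Illustrative only: chunk a paragraph into pieces of at most 60 characters.
text = "This is the first sentence. Here is a second one. " * 3
for chunk in split_sentence(text, lang="en", text_split_length=60):
    print(len(chunk), repr(chunk))  # ordinary prose stays at or under the limit
```
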
+ _whitespace_re = re.compile(r"\s+")
+
+ # List of (regular expression, replacement) pairs for abbreviations:
+ _abbreviations = {
+     "en": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("mrs", "misess"),
+             ("mr", "mister"),
+             ("dr", "doctor"),
+             ("st", "saint"),
+             ("co", "company"),
+             ("jr", "junior"),
+             ("maj", "major"),
+             ("gen", "general"),
+             ("drs", "doctors"),
+             ("rev", "reverend"),
+             ("lt", "lieutenant"),
+             ("hon", "honorable"),
+             ("sgt", "sergeant"),
+             ("capt", "captain"),
+             ("esq", "esquire"),
+             ("ltd", "limited"),
+             ("col", "colonel"),
+             ("ft", "fort"),
+         ]
+     ],
+     "es": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("sra", "señora"),
+             ("sr", "señor"),
+             ("dr", "doctor"),
+             ("dra", "doctora"),
+             ("st", "santo"),
+             ("co", "compañía"),
+             ("jr", "junior"),
+             ("ltd", "limitada"),
+         ]
+     ],
+     "fr": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("mme", "madame"),
+             ("mr", "monsieur"),
+             ("dr", "docteur"),
+             ("st", "saint"),
+             ("co", "compagnie"),
+             ("jr", "junior"),
+             ("ltd", "limitée"),
+         ]
+     ],
+     "de": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("fr", "frau"),
+             ("dr", "doktor"),
+             ("st", "sankt"),
+             ("co", "firma"),
+             ("jr", "junior"),
+         ]
+     ],
+     "pt": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("sra", "senhora"),
+             ("sr", "senhor"),
+             ("dr", "doutor"),
+             ("dra", "doutora"),
+             ("st", "santo"),
+             ("co", "companhia"),
+             ("jr", "júnior"),
+             ("ltd", "limitada"),
+         ]
+     ],
+     "it": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             # ("sig.ra", "signora"),
+             ("sig", "signore"),
+             ("dr", "dottore"),
+             ("st", "santo"),
+             ("co", "compagnia"),
+             ("jr", "junior"),
+             ("ltd", "limitata"),
+         ]
+     ],
+     "pl": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("p", "pani"),
+             ("m", "pan"),
+             ("dr", "doktor"),
+             ("sw", "święty"),
+             ("jr", "junior"),
+         ]
+     ],
+     "ar": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             # There are not many common abbreviations in Arabic as in English.
+         ]
+     ],
+     "zh": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             # Chinese doesn't typically use abbreviations in the same way as Latin-based scripts.
+         ]
+     ],
+     "cs": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("dr", "doktor"),  # doctor
+             ("ing", "inženýr"),  # engineer
+             ("p", "pan"),  # Could also map to pani for a woman, but there is no easy way to do it
+             # Other abbreviations would be specialized and not as common.
+         ]
+     ],
+     "ru": [
+         (re.compile("\\b%s\\b" % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("г-жа", "госпожа"),  # Mrs.
+             ("г-н", "господин"),  # Mr.
+             ("д-р", "доктор"),  # doctor
+             # Other abbreviations are less common or specialized.
+         ]
+     ],
+     "nl": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("dhr", "de heer"),  # Mr.
+             ("mevr", "mevrouw"),  # Mrs.
+             ("dr", "dokter"),  # doctor
+             ("jhr", "jonkheer"),  # young lord or nobleman
+             # Dutch uses more abbreviations, but these are the most common ones.
+         ]
+     ],
+     "tr": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("b", "bay"),  # Mr.
+             ("byk", "büyük"),  # büyük
+             ("dr", "doktor"),  # doctor
+             # Add other Turkish abbreviations here if needed.
+         ]
+     ],
+     "hu": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("dr", "doktor"),  # doctor
+             ("b", "bácsi"),  # Mr.
+             ("nőv", "nővér"),  # nurse
+             # Add other Hungarian abbreviations here if needed.
+         ]
+     ],
+     "ko": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             # Korean doesn't typically use abbreviations in the same way as Latin-based scripts.
+         ]
+     ],
+     "hi": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             # Hindi doesn't typically use abbreviations in the same way as Latin-based scripts.
+         ]
+     ],
+     "vi": [
+         (re.compile("\\b%s\\." % x[0], re.IGNORECASE), x[1])
+         for x in [
+             ("ông", "ông"),  # Mr.
+             ("bà", "bà"),  # Mrs.
+             ("dr", "bác sĩ"),  # doctor
+             ("ts", "tiến sĩ"),  # PhD
+             ("st", "số thứ tự"),  # ordinal
+         ]
+     ],
+ }
+
+
+ def expand_abbreviations_multilingual(text, lang="en"):
+     for regex, replacement in _abbreviations[lang]:
+         text = re.sub(regex, replacement, text)
+     return text
+
+
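
Mirroring the cases in `test_abbreviations_multilingual()` at the bottom of this file:

```python
print(expand_abbreviations_multilingual("Hello Mr. Smith.", lang="en"))
# -> "Hello mister Smith."
print(expand_abbreviations_multilingual("Bonjour Mr. Dupond.", lang="fr"))
# -> "Bonjour monsieur Dupond."
```
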
+ _symbols_multilingual = {
+     "en": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " and "),
+             ("@", " at "),
+             ("%", " percent "),
+             ("#", " hash "),
+             ("$", " dollar "),
+             ("£", " pound "),
+             ("°", " degree "),
+         ]
+     ],
+     "es": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " y "),
+             ("@", " arroba "),
+             ("%", " por ciento "),
+             ("#", " numeral "),
+             ("$", " dolar "),
+             ("£", " libra "),
+             ("°", " grados "),
+         ]
+     ],
+     "fr": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " et "),
+             ("@", " arobase "),
+             ("%", " pour cent "),
+             ("#", " dièse "),
+             ("$", " dollar "),
+             ("£", " livre "),
+             ("°", " degrés "),
+         ]
+     ],
+     "de": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " und "),
+             ("@", " at "),
+             ("%", " prozent "),
+             ("#", " raute "),
+             ("$", " dollar "),
+             ("£", " pfund "),
+             ("°", " grad "),
+         ]
+     ],
+     "pt": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " e "),
+             ("@", " arroba "),
+             ("%", " por cento "),
+             ("#", " cardinal "),
+             ("$", " dólar "),
+             ("£", " libra "),
+             ("°", " graus "),
+         ]
+     ],
+     "it": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " e "),
+             ("@", " chiocciola "),
+             ("%", " per cento "),
+             ("#", " cancelletto "),
+             ("$", " dollaro "),
+             ("£", " sterlina "),
+             ("°", " gradi "),
+         ]
+     ],
+     "pl": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " i "),
+             ("@", " małpa "),
+             ("%", " procent "),
+             ("#", " krzyżyk "),
+             ("$", " dolar "),
+             ("£", " funt "),
+             ("°", " stopnie "),
+         ]
+     ],
+     "ar": [
+         # Arabic
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " و "),
+             ("@", " على "),
+             ("%", " في المئة "),
+             ("#", " رقم "),
+             ("$", " دولار "),
+             ("£", " جنيه "),
+             ("°", " درجة "),
+         ]
+     ],
+     "zh": [
+         # Chinese
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " 和 "),
+             ("@", " 在 "),
+             ("%", " 百分之 "),
+             ("#", " 号 "),
+             ("$", " 美元 "),
+             ("£", " 英镑 "),
+             ("°", " 度 "),
+         ]
+     ],
+     "cs": [
+         # Czech
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " a "),
+             ("@", " na "),
+             ("%", " procento "),
+             ("#", " křížek "),
+             ("$", " dolar "),
+             ("£", " libra "),
+             ("°", " stupně "),
+         ]
+     ],
+     "ru": [
+         # Russian
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " и "),
+             ("@", " собака "),
+             ("%", " процентов "),
+             ("#", " номер "),
+             ("$", " доллар "),
+             ("£", " фунт "),
+             ("°", " градус "),
+         ]
+     ],
+     "nl": [
+         # Dutch
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " en "),
+             ("@", " bij "),
+             ("%", " procent "),
+             ("#", " hekje "),
+             ("$", " dollar "),
+             ("£", " pond "),
+             ("°", " graden "),
+         ]
+     ],
+     "tr": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " ve "),
+             ("@", " at "),
+             ("%", " yüzde "),
+             ("#", " diyez "),
+             ("$", " dolar "),
+             ("£", " sterlin "),
+             ("°", " derece "),
+         ]
+     ],
+     "hu": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " és "),
+             ("@", " kukac "),
+             ("%", " százalék "),
+             ("#", " kettőskereszt "),
+             ("$", " dollár "),
+             ("£", " font "),
+             ("°", " fok "),
+         ]
+     ],
+     "ko": [
+         # Korean
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " 그리고 "),
+             ("@", " 에 "),
+             ("%", " 퍼센트 "),
+             ("#", " 번호 "),
+             ("$", " 달러 "),
+             ("£", " 파운드 "),
+             ("°", " 도 "),
+         ]
+     ],
+     "hi": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " और "),
+             ("@", " ऐट दी रेट "),
+             ("%", " प्रतिशत "),
+             ("#", " हैश "),
+             ("$", " डॉलर "),
+             ("£", " पाउंड "),
+             ("°", " डिग्री "),
+         ]
+     ],
+     "vi": [
+         (re.compile(r"%s" % re.escape(x[0]), re.IGNORECASE), x[1])
+         for x in [
+             ("&", " và "),  # and
+             ("@", " a còng "),  # at
+             ("%", " phần trăm "),  # percent
+             ("#", " dấu thăng "),  # hash
+             ("$", " đô la "),  # dollar
+             ("£", " bảng Anh "),  # pound
+             ("°", " độ "),  # degree
+         ]
+     ],
+ }
+
+
+ def expand_symbols_multilingual(text, lang="en"):
+     for regex, replacement in _symbols_multilingual[lang]:
+         text = re.sub(regex, replacement, text)
+         text = text.replace("  ", " ")  # Ensure there are no double spaces
+     return text.strip()
+
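
As exercised by `test_symbols_multilingual()` below:

```python
print(expand_symbols_multilingual("I have 14% battery", lang="en"))
# -> "I have 14 percent battery"
```
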
+
+ _ordinal_re = {
+     "en": re.compile(r"([0-9]+)(st|nd|rd|th)"),
+     "es": re.compile(r"([0-9]+)(º|ª|er|o|a|os|as)"),
+     "fr": re.compile(r"([0-9]+)(º|ª|er|re|e|ème)"),
+     "de": re.compile(r"([0-9]+)(st|nd|rd|th|º|ª|\.(?=\s|$))"),
+     "pt": re.compile(r"([0-9]+)(º|ª|o|a|os|as)"),
+     "it": re.compile(r"([0-9]+)(º|°|ª|o|a|i|e)"),
+     "pl": re.compile(r"([0-9]+)(º|ª|st|nd|rd|th)"),
+     "ar": re.compile(r"([0-9]+)(ون|ين|ث|ر|ى)"),
+     "cs": re.compile(r"([0-9]+)\.(?=\s|$)"),  # In Czech, a dot is often used after the number to indicate ordinals.
+     "ru": re.compile(r"([0-9]+)(-й|-я|-е|-ое|-ье|-го)"),
+     "nl": re.compile(r"([0-9]+)(de|ste|e)"),
+     "tr": re.compile(r"([0-9]+)(\.|inci|nci|uncu|üncü)"),
+     "hu": re.compile(r"([0-9]+)(\.|adik|edik|odik|ödik|ödike|ik)"),
+     "ko": re.compile(r"([0-9]+)(번째|번|차|째)"),
+     "hi": re.compile(r"([0-9]+)(st|nd|rd|th)"),  # To check
+     "vi": re.compile(r"([0-9]+)(th|thứ)?"),  # Matches "1", "thứ 1", "2", "thứ 2"
+ }
+ _number_re = re.compile(r"[0-9]+")
+ _currency_re = {
+     "USD": re.compile(r"((\$[0-9\.\,]*[0-9]+)|([0-9\.\,]*[0-9]+\$))"),
+     "GBP": re.compile(r"((£[0-9\.\,]*[0-9]+)|([0-9\.\,]*[0-9]+£))"),
+     "EUR": re.compile(r"(([0-9\.\,]*[0-9]+€)|((€[0-9\.\,]*[0-9]+)))"),
+ }
+
+ _comma_number_re = re.compile(r"\b\d{1,3}(,\d{3})*(\.\d+)?\b")
+ _dot_number_re = re.compile(r"\b\d{1,3}(\.\d{3})*(\,\d+)?\b")  # the dot must be escaped to match a literal "."
+ _decimal_number_re = re.compile(r"([0-9]+[.,][0-9]+)")
+
+
+ def _remove_commas(m):
+     text = m.group(0)
+     if "," in text:
+         text = text.replace(",", "")
+     return text
+
+
+ def _remove_dots(m):
+     text = m.group(0)
+     if "." in text:
+         text = text.replace(".", "")
+     return text
+
+
+ def _expand_decimal_point(m, lang="en"):
+     amount = m.group(1).replace(",", ".")
+     return num2words(float(amount), lang=lang if lang != "cs" else "cz")
+
+
+ def _expand_currency(m, lang="en", currency="USD"):
+     amount = float((re.sub(r"[^\d.]", "", m.group(0).replace(",", "."))))
+     full_amount = num2words(amount, to="currency", currency=currency, lang=lang if lang != "cs" else "cz")
+
+     and_equivalents = {
+         "en": ", ",
+         "es": " con ",
+         "fr": " et ",
+         "de": " und ",
+         "pt": " e ",
+         "it": " e ",
+         "pl": ", ",
+         "cs": ", ",
+         "ru": ", ",
+         "nl": ", ",
+         "ar": ", ",
+         "tr": ", ",
+         "hu": ", ",
+         "ko": ", ",
+         "hi": ", ",
+         "vi": ", ",  # added so integer amounts do not raise KeyError for Vietnamese
+     }
+
+     if amount.is_integer():
+         last_and = full_amount.rfind(and_equivalents[lang])
+         if last_and != -1:
+             full_amount = full_amount[:last_and]
+
+     return full_amount
+
+
+ def _expand_ordinal(m, lang="en"):
+     return num2words(int(m.group(1)), ordinal=True, lang=lang if lang != "cs" else "cz")
+
+
+ def _expand_number(m, lang="en"):
+     return num2words(int(m.group(0)), lang=lang if lang != "cs" else "cz")
+
+
+ def expand_numbers_multilingual(text, lang="en"):
+     if lang == "zh":
+         text = zh_num2words()(text)
+     else:
+         if lang in ["en", "ru"]:
+             text = re.sub(_comma_number_re, _remove_commas, text)
+         else:
+             text = re.sub(_dot_number_re, _remove_dots, text)
+         try:
+             text = re.sub(_currency_re["GBP"], lambda m: _expand_currency(m, lang, "GBP"), text)
+             text = re.sub(_currency_re["USD"], lambda m: _expand_currency(m, lang, "USD"), text)
+             text = re.sub(_currency_re["EUR"], lambda m: _expand_currency(m, lang, "EUR"), text)
+         except Exception:  # num2words does not support currency for every language
+             pass
+         if lang != "tr":
+             text = re.sub(_decimal_number_re, lambda m: _expand_decimal_point(m, lang), text)
+         text = re.sub(_ordinal_re[lang], lambda m: _expand_ordinal(m, lang), text)
+         text = re.sub(_number_re, lambda m: _expand_number(m, lang), text)
+     return text
+
+
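
Mirroring the English cases in `test_expand_numbers_multilingual()` below:

```python
print(expand_numbers_multilingual("That will be $20 sir.", lang="en"))
# -> "That will be twenty dollars sir."
print(expand_numbers_multilingual("In 12.5 seconds.", lang="en"))
# -> "In twelve point five seconds."
```
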
+ def lowercase(text):
+     return text.lower()
+
+
+ def collapse_whitespace(text):
+     return re.sub(_whitespace_re, " ", text)
+
+
+ def multilingual_cleaners(text, lang):
+     text = text.replace('"', "")
+     if lang == "tr":
+         text = text.replace("İ", "i")
+         text = text.replace("Ö", "ö")
+         text = text.replace("Ü", "ü")
+     text = lowercase(text)
+     text = expand_numbers_multilingual(text, lang)
+     text = expand_abbreviations_multilingual(text, lang)
+     text = expand_symbols_multilingual(text, lang=lang)
+     text = collapse_whitespace(text)
+     return text
+
+
+ def basic_cleaners(text):
+     """Basic pipeline that lowercases and collapses whitespace without transliteration."""
+     text = lowercase(text)
+     text = collapse_whitespace(text)
+     return text
+
+
+ def chinese_transliterate(text):
+     try:
+         import pypinyin
+     except ImportError as e:
+         raise ImportError("Chinese requires: pypinyin") from e
+     return "".join(
+         [p[0] for p in pypinyin.pinyin(text, style=pypinyin.Style.TONE3, heteronym=False, neutral_tone_with_five=True)]
+     )
+
+
+ def japanese_cleaners(text, katsu):
+     text = katsu.romaji(text)
+     text = lowercase(text)
+     return text
+
+
+ def korean_transliterate(text):
+     try:
+         from hangul_romanize import Transliter
+         from hangul_romanize.rule import academic
+     except ImportError as e:
+         raise ImportError("Korean requires: hangul_romanize") from e
+     r = Transliter(academic)
+     return r.translit(text)
+
+
+ DEFAULT_VOCAB_FILE = os.path.join(os.path.dirname(os.path.realpath(__file__)), "../data/tokenizer.json")
+
+
+ class VoiceBpeTokenizer:
+     def __init__(self, vocab_file=None):
+         self.tokenizer = None
+         if vocab_file is not None:
+             self.tokenizer = Tokenizer.from_file(vocab_file)
+         self.char_limits = {
+             "en": 250,
+             "de": 253,
+             "fr": 273,
+             "es": 239,
+             "it": 213,
+             "pt": 203,
+             "pl": 224,
+             "zh": 82,
+             "ar": 166,
+             "cs": 186,
+             "ru": 182,
+             "nl": 251,
+             "tr": 226,
+             "ja": 71,
+             "hu": 224,
+             "ko": 95,
+             "hi": 150,
+             "vi": 250,
+         }
+
+     @cached_property
+     def katsu(self):
+         import cutlet
+
+         return cutlet.Cutlet()
+
+     def check_input_length(self, txt, lang):
+         lang = lang.split("-")[0]  # remove the region
+         limit = self.char_limits.get(lang, 250)
+         if len(txt) > limit:
+             logger.warning(
+                 "The text length exceeds the character limit of %d for language '%s', this might cause truncated audio.",
+                 limit,
+                 lang,
+             )
+
+     def preprocess_text(self, txt, lang):
+         if lang in {"ar", "cs", "de", "en", "es", "fr", "hi", "hu", "it", "nl", "pl", "pt", "ru", "tr", "zh", "ko", "vi"}:
+             txt = multilingual_cleaners(txt, lang)
+             if lang == "zh":
+                 txt = chinese_transliterate(txt)
+             if lang == "ko":
+                 txt = korean_transliterate(txt)
+         elif lang == "ja":
+             txt = japanese_cleaners(txt, self.katsu)
+         else:
+             raise NotImplementedError(f"Language '{lang}' is not supported.")
+         return txt
+
+     def encode(self, txt, lang):
+         lang = lang.split("-")[0]  # remove the region
+         self.check_input_length(txt, lang)
+         txt = self.preprocess_text(txt, lang)
+         lang = "zh-cn" if lang == "zh" else lang
+         txt = f"[{lang}]{txt}"
+         txt = txt.replace(" ", "[SPACE]")
+         return self.tokenizer.encode(txt).ids
+
+     def decode(self, seq):
+         if isinstance(seq, torch.Tensor):
+             seq = seq.cpu().numpy()
+         txt = self.tokenizer.decode(seq, skip_special_tokens=False).replace(" ", "")
+         txt = txt.replace("[SPACE]", " ")
+         txt = txt.replace("[STOP]", "")
+         txt = txt.replace("[UNK]", "")
+         return txt
+
+     def __len__(self):
+         return self.tokenizer.get_vocab_size()
+
+     def get_number_tokens(self):
+         return max(self.tokenizer.get_vocab().values()) + 1
+
+
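
A minimal encode/decode round trip for `VoiceBpeTokenizer`; whether `DEFAULT_VOCAB_FILE` resolves to a valid file depends on your checkout, so treat the path as an assumption:

```python
# Sketch, assuming a valid XTTS tokenizer.json exists at DEFAULT_VOCAB_FILE.
tok = VoiceBpeTokenizer(vocab_file=DEFAULT_VOCAB_FILE)
ids = tok.encode("Hello world", lang="en")  # cleaned, tagged "[en]", then BPE-encoded
print(ids)
print(tok.decode(ids))  # note the "[en]" language tag survives the round trip
```
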
+ def test_expand_numbers_multilingual():
+     test_cases = [
+         # English
+         ("In 12.5 seconds.", "In twelve point five seconds.", "en"),
+         ("There were 50 soldiers.", "There were fifty soldiers.", "en"),
+         ("This is a 1st test", "This is a first test", "en"),
+         ("That will be $20 sir.", "That will be twenty dollars sir.", "en"),
+         ("That will be 20€ sir.", "That will be twenty euro sir.", "en"),
+         ("That will be 20.15€ sir.", "That will be twenty euro, fifteen cents sir.", "en"),
+         ("That's 100,000.5.", "That's one hundred thousand point five.", "en"),
+         # French
+         ("En 12,5 secondes.", "En douze virgule cinq secondes.", "fr"),
+         ("Il y avait 50 soldats.", "Il y avait cinquante soldats.", "fr"),
+         ("Ceci est un 1er test", "Ceci est un premier test", "fr"),
+         ("Cela vous fera $20 monsieur.", "Cela vous fera vingt dollars monsieur.", "fr"),
+         ("Cela vous fera 20€ monsieur.", "Cela vous fera vingt euros monsieur.", "fr"),
+         ("Cela vous fera 20,15€ monsieur.", "Cela vous fera vingt euros et quinze centimes monsieur.", "fr"),
+         ("Ce sera 100.000,5.", "Ce sera cent mille virgule cinq.", "fr"),
+         # German
+         ("In 12,5 Sekunden.", "In zwölf Komma fünf Sekunden.", "de"),
+         ("Es gab 50 Soldaten.", "Es gab fünfzig Soldaten.", "de"),
+         ("Dies ist ein 1. Test", "Dies ist ein erste Test", "de"),  # Issue with gender
+         ("Das macht $20 Herr.", "Das macht zwanzig Dollar Herr.", "de"),
+         ("Das macht 20€ Herr.", "Das macht zwanzig Euro Herr.", "de"),
+         ("Das macht 20,15€ Herr.", "Das macht zwanzig Euro und fünfzehn Cent Herr.", "de"),
+         # Spanish
+         ("En 12,5 segundos.", "En doce punto cinco segundos.", "es"),
+         ("Había 50 soldados.", "Había cincuenta soldados.", "es"),
+         ("Este es un 1er test", "Este es un primero test", "es"),
+         ("Eso le costará $20 señor.", "Eso le costará veinte dólares señor.", "es"),
+         ("Eso le costará 20€ señor.", "Eso le costará veinte euros señor.", "es"),
+         ("Eso le costará 20,15€ señor.", "Eso le costará veinte euros con quince céntimos señor.", "es"),
+         # Italian
+         ("In 12,5 secondi.", "In dodici virgola cinque secondi.", "it"),
+         ("C'erano 50 soldati.", "C'erano cinquanta soldati.", "it"),
+         ("Questo è un 1° test", "Questo è un primo test", "it"),
+         ("Ti costerà $20 signore.", "Ti costerà venti dollari signore.", "it"),
+         ("Ti costerà 20€ signore.", "Ti costerà venti euro signore.", "it"),
+         ("Ti costerà 20,15€ signore.", "Ti costerà venti euro e quindici centesimi signore.", "it"),
+         # Portuguese
+         ("Em 12,5 segundos.", "Em doze vírgula cinco segundos.", "pt"),
+         ("Havia 50 soldados.", "Havia cinquenta soldados.", "pt"),
+         ("Este é um 1º teste", "Este é um primeiro teste", "pt"),
+         ("Isso custará $20 senhor.", "Isso custará vinte dólares senhor.", "pt"),
+         ("Isso custará 20€ senhor.", "Isso custará vinte euros senhor.", "pt"),
+         (
+             "Isso custará 20,15€ senhor.",
+             "Isso custará vinte euros e quinze cêntimos senhor.",
+             "pt",
+         ),  # "cêntimos" should be "centavos" num2words issue
+         # Polish
+         ("W 12,5 sekundy.", "W dwanaście przecinek pięć sekundy.", "pl"),
+         ("Było 50 żołnierzy.", "Było pięćdziesiąt żołnierzy.", "pl"),
+         ("To będzie kosztować 20€ panie.", "To będzie kosztować dwadzieścia euro panie.", "pl"),
+         ("To będzie kosztować 20,15€ panie.", "To będzie kosztować dwadzieścia euro, piętnaście centów panie.", "pl"),
+         # Arabic
+         ("في الـ 12,5 ثانية.", "في الـ اثنا عشر , خمسون ثانية.", "ar"),
+         ("كان هناك 50 جنديًا.", "كان هناك خمسون جنديًا.", "ar"),
+         # ("ستكون النتيجة $20 يا سيد.", 'ستكون النتيجة عشرون دولار يا سيد.', 'ar'), # $ and € are missing from num2words
+         # ("ستكون النتيجة 20€ يا سيد.", 'ستكون النتيجة عشرون يورو يا سيد.', 'ar'),
+         # Czech
+         ("Za 12,5 vteřiny.", "Za dvanáct celá pět vteřiny.", "cs"),
+         ("Bylo tam 50 vojáků.", "Bylo tam padesát vojáků.", "cs"),
+         ("To bude stát 20€ pane.", "To bude stát dvacet euro pane.", "cs"),
+         ("To bude 20.15€ pane.", "To bude dvacet euro, patnáct centů pane.", "cs"),
+         # Russian
+         ("Через 12.5 секунды.", "Через двенадцать запятая пять секунды.", "ru"),
+         ("Там было 50 солдат.", "Там было пятьдесят солдат.", "ru"),
+         ("Это будет 20.15€ сэр.", "Это будет двадцать евро, пятнадцать центов сэр.", "ru"),
+         ("Это будет стоить 20€ господин.", "Это будет стоить двадцать евро господин.", "ru"),
+         # Dutch
+         ("In 12,5 seconden.", "In twaalf komma vijf seconden.", "nl"),
+         ("Er waren 50 soldaten.", "Er waren vijftig soldaten.", "nl"),
+         ("Dat wordt dan $20 meneer.", "Dat wordt dan twintig dollar meneer.", "nl"),
+         ("Dat wordt dan 20€ meneer.", "Dat wordt dan twintig euro meneer.", "nl"),
+         # Chinese (Simplified)
+         ("在12.5秒内", "在十二点五秒内", "zh"),
+         ("有50名士兵", "有五十名士兵", "zh"),
+         # ("那将是$20先生", '那将是二十美元先生', 'zh'), currency doesn't work
+         # ("那将是20€先生", '那将是二十欧元先生', 'zh'),
+         # Turkish
+         # ("12,5 saniye içinde.", 'On iki virgül beş saniye içinde.', 'tr'), # decimal doesn't work for TR
+         ("50 asker vardı.", "elli asker vardı.", "tr"),
+         ("Bu 1. test", "Bu birinci test", "tr"),
+         # ("Bu 100.000,5.", 'Bu yüz bin virgül beş.', 'tr'),
+         # Hungarian
+         ("12,5 másodperc alatt.", "tizenkettő egész öt tized másodperc alatt.", "hu"),
+         ("50 katona volt.", "ötven katona volt.", "hu"),
+         ("Ez az 1. teszt", "Ez az első teszt", "hu"),
+         # Korean
+         ("12.5 초 안에.", "십이 점 다섯 초 안에.", "ko"),
+         ("50 명의 병사가 있었다.", "오십 명의 병사가 있었다.", "ko"),
+         ("이것은 1 번째 테스트입니다", "이것은 첫 번째 테스트입니다", "ko"),
+         # Hindi
+         ("12.5 सेकंड में।", "साढ़े बारह सेकंड में।", "hi"),
+         ("50 सैनिक थे।", "पचास सैनिक थे।", "hi"),
+     ]
+     for a, b, lang in test_cases:
+         out = expand_numbers_multilingual(a, lang=lang)
+         assert out == b, f"'{out}' vs '{b}'"
+
+
+ def test_abbreviations_multilingual():
+     test_cases = [
+         # English
+         ("Hello Mr. Smith.", "Hello mister Smith.", "en"),
+         ("Dr. Jones is here.", "doctor Jones is here.", "en"),
+         # Spanish
+         ("Hola Sr. Garcia.", "Hola señor Garcia.", "es"),
+         ("La Dra. Martinez es muy buena.", "La doctora Martinez es muy buena.", "es"),
+         # French
+         ("Bonjour Mr. Dupond.", "Bonjour monsieur Dupond.", "fr"),
+         ("Mme. Moreau est absente aujourd'hui.", "madame Moreau est absente aujourd'hui.", "fr"),
+         # German
+         ("Frau Dr. Müller ist sehr klug.", "Frau doktor Müller ist sehr klug.", "de"),
+         # Portuguese
+         ("Olá Sr. Silva.", "Olá senhor Silva.", "pt"),
+         ("Dra. Costa, você está disponível?", "doutora Costa, você está disponível?", "pt"),
+         # Italian
+         ("Buongiorno, Sig. Rossi.", "Buongiorno, signore Rossi.", "it"),
+         # ("Sig.ra Bianchi, posso aiutarti?", 'signora Bianchi, posso aiutarti?', 'it'), # Issue with matching that pattern
+         # Polish
+         ("Dzień dobry, P. Kowalski.", "Dzień dobry, pani Kowalski.", "pl"),
+         ("M. Nowak, czy mogę zadać pytanie?", "pan Nowak, czy mogę zadać pytanie?", "pl"),
+         # Czech
+         ("P. Novák", "pan Novák", "cs"),
+         ("Dr. Vojtěch", "doktor Vojtěch", "cs"),
+         # Dutch
+         ("Dhr. Jansen", "de heer Jansen", "nl"),
+         ("Mevr. de Vries", "mevrouw de Vries", "nl"),
+         # Russian
+         ("Здравствуйте Г-н Иванов.", "Здравствуйте господин Иванов.", "ru"),
+         ("Д-р Смирнов здесь, чтобы увидеть вас.", "доктор Смирнов здесь, чтобы увидеть вас.", "ru"),
+         # Turkish
+         ("Merhaba B. Yılmaz.", "Merhaba bay Yılmaz.", "tr"),
+         ("Dr. Ayşe burada.", "doktor Ayşe burada.", "tr"),
+         # Hungarian
+         ("Dr. Szabó itt van.", "doktor Szabó itt van.", "hu"),
+     ]
+
+     for a, b, lang in test_cases:
+         out = expand_abbreviations_multilingual(a, lang=lang)
+         assert out == b, f"'{out}' vs '{b}'"
+
+
+ def test_symbols_multilingual():
+     test_cases = [
+         ("I have 14% battery", "I have 14 percent battery", "en"),
+         ("Te veo @ la fiesta", "Te veo arroba la fiesta", "es"),
+         ("J'ai 14° de fièvre", "J'ai 14 degrés de fièvre", "fr"),
+         ("Die Rechnung beträgt £ 20", "Die Rechnung beträgt pfund 20", "de"),
+         ("O meu email é ana&joao@gmail.com", "O meu email é ana e joao arroba gmail.com", "pt"),
+         ("linguaggio di programmazione C#", "linguaggio di programmazione C cancelletto", "it"),
+         ("Moja temperatura to 36.6°", "Moja temperatura to 36.6 stopnie", "pl"),
+         ("Mám 14% baterie", "Mám 14 procento baterie", "cs"),
+         ("Těším se na tebe @ party", "Těším se na tebe na party", "cs"),
+         ("У меня 14% заряда", "У меня 14 процентов заряда", "ru"),
+         ("Я буду @ дома", "Я буду собака дома", "ru"),
+         ("Ik heb 14% batterij", "Ik heb 14 procent batterij", "nl"),
+         ("Ik zie je @ het feest", "Ik zie je bij het feest", "nl"),
+         ("لدي 14% في البطارية", "لدي 14 في المئة في البطارية", "ar"),
+         ("我的电量为 14%", "我的电量为 14 百分之", "zh"),
+         ("Pilim %14 dolu.", "Pilim yüzde 14 dolu.", "tr"),
+         ("Az akkumulátorom töltöttsége 14%", "Az akkumulátorom töltöttsége 14 százalék", "hu"),
+         ("배터리 잔량이 14%입니다.", "배터리 잔량이 14 퍼센트입니다.", "ko"),
+         ("मेरे पास 14% बैटरी है।", "मेरे पास चौदह प्रतिशत बैटरी है।", "hi"),
+     ]
+
+     for a, b, lang in test_cases:
+         out = expand_symbols_multilingual(a, lang=lang)
+         assert out == b, f"'{out}' vs '{b}'"
+
+
+ if __name__ == "__main__":
+     test_expand_numbers_multilingual()
+     test_abbreviations_multilingual()
+     test_symbols_multilingual()
+