theoisnthere doctord98 committed
Commit b83c388
0 Parent(s):

Duplicate from doctord98/zeeka5456
Co-authored-by: doctord98 <doctord98@users.noreply.huggingface.co>

This view is limited to 50 files because it contains too many changes.
.gitattributes ADDED
@@ -0,0 +1,34 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
Dockerfile ADDED
@@ -0,0 +1,17 @@
+ FROM python:3.10
+
+ WORKDIR /app
+
+ COPY requirements.txt .
+ RUN pip install -r requirements.txt
+
+ RUN mkdir /.cache && chmod -R 777 /.cache
+ RUN mkdir .chroma && chmod -R 777 .chroma
+
+ COPY . .
+
+ RUN chmod -R 777 /app
+
+ EXPOSE 7860
+
+ CMD ["python", "server.py", "--cpu", "--share", "--secure", "--enable-modules=summarize,classify,silero-tts,edge-tts,chromadb", "--classification-model=joeddav/distilbert-base-uncased-go-emotions-student", "--summarization-model=slauw87/bart_summarisation"]
README.md ADDED
@@ -0,0 +1,11 @@
+ ---
+ title: zeeka5456
+ emoji: 🐠
+ colorFrom: pink
+ colorTo: yellow
+ sdk: docker
+ pinned: false
+ license: mit
+ duplicated_from: doctord98/zeeka5456
+ ---
+ Doctord98 is the boss
constants.py ADDED
@@ -0,0 +1,49 @@
+ # Constants
+ # Also try: 'Qiliang/bart-large-cnn-samsum-ElectrifAi_v10'
+ DEFAULT_SUMMARIZATION_MODEL = "Qiliang/bart-large-cnn-samsum-ChatGPT_v3"
+ # Also try: 'joeddav/distilbert-base-uncased-go-emotions-student'
+ DEFAULT_CLASSIFICATION_MODEL = "nateraw/bert-base-uncased-emotion"
+ # Also try: 'Salesforce/blip-image-captioning-base'
+ DEFAULT_CAPTIONING_MODEL = "Salesforce/blip-image-captioning-large"
+ DEFAULT_SD_MODEL = "ckpt/anything-v4.5-vae-swapped"
+ DEFAULT_EMBEDDING_MODEL = "sentence-transformers/all-mpnet-base-v2"
+ DEFAULT_REMOTE_SD_HOST = "127.0.0.1"
+ DEFAULT_REMOTE_SD_PORT = 7860
+ DEFAULT_CHROMA_PORT = 8000
+ SILERO_SAMPLES_PATH = "tts_samples"
+ SILERO_SAMPLE_TEXT = "The quick brown fox jumps over the lazy dog"
+ # ALL_MODULES = ['caption', 'summarize', 'classify', 'keywords', 'prompt', 'sd']
+ DEFAULT_SUMMARIZE_PARAMS = {
+     "temperature": 1.0,
+     "repetition_penalty": 1.0,
+     "max_length": 500,
+     "min_length": 200,
+     "length_penalty": 1.5,
+     "bad_words": [
+         "\n",
+         '"',
+         "*",
+         "[",
+         "]",
+         "{",
+         "}",
+         ":",
+         "(",
+         ")",
+         "<",
+         ">",
+         "Â",
+         "The text ends",
+         "The story ends",
+         "The text is",
+         "The story is",
+     ],
+ }
+
+ PROMPT_PREFIX = "best quality, absurdres, "
+ NEGATIVE_PROMPT = """lowres, bad anatomy, error body, error hair, error arm,
+ error hands, bad hands, error fingers, bad fingers, missing fingers,
+ error legs, bad legs, multiple legs, missing legs, error lighting,
+ error shadow, error reflection, text, error, extra digit, fewer digits,
+ cropped, worst quality, low quality, normal quality, jpeg artifacts,
+ signature, watermark, username, blurry"""
requirements.txt ADDED
@@ -0,0 +1,20 @@
+ flask
+ flask-cloudflared
+ flask-cors
+ flask-compress
+ markdown
+ Pillow
+ colorama
+ webuiapi
+ --extra-index-url https://download.pytorch.org/whl/cu117
+ torch==2.0.0+cu117
+ torchvision==0.15.1
+ torchaudio==2.0.1+cu117
+ accelerate
+ transformers==4.28.1
+ diffusers==0.16.1
+ silero-api-server
+ chromadb
+ sentence_transformers
+ edge-tts
+ Werkzeug
server.py ADDED
@@ -0,0 +1,844 @@
+ from functools import wraps
+ from flask import (
+     Flask,
+     jsonify,
+     request,
+     Response,
+     render_template_string,
+     abort,
+     send_from_directory,
+     send_file,
+ )
+ from flask_cors import CORS
+ from flask_compress import Compress
+ import markdown
+ import argparse
+ from transformers import AutoTokenizer, AutoProcessor, pipeline
+ from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM
+ from transformers import BlipForConditionalGeneration
+ import unicodedata
+ import torch
+ import time
+ import os
+ import gc
+ import secrets
+ from PIL import Image
+ import base64
+ from io import BytesIO
+ from random import randint
+ import webuiapi
+ import hashlib
+ from constants import *
+ from colorama import Fore, Style, init as colorama_init
+
+ colorama_init()
+
+
+ class SplitArgs(argparse.Action):
+     def __call__(self, parser, namespace, values, option_string=None):
+         setattr(
+             namespace, self.dest, values.replace('"', "").replace("'", "").split(",")
+         )
+
+
+ # Script arguments
+ parser = argparse.ArgumentParser(
+     prog="SillyTavern Extras", description="Web API for transformers models"
+ )
+ parser.add_argument(
+     "--port", type=int, help="Specify the port on which the application is hosted"
+ )
+ parser.add_argument(
+     "--listen", action="store_true", help="Host the app on the local network"
+ )
+ parser.add_argument(
+     "--share", action="store_true", help="Share the app on CloudFlare tunnel"
+ )
+ parser.add_argument("--cpu", action="store_true", help="Run the models on the CPU")
+ parser.add_argument("--summarization-model", help="Load a custom summarization model")
+ parser.add_argument(
+     "--classification-model", help="Load a custom text classification model"
+ )
+ parser.add_argument("--captioning-model", help="Load a custom captioning model")
+ parser.add_argument("--embedding-model", help="Load a custom text embedding model")
+ parser.add_argument("--chroma-host", help="Host IP for a remote ChromaDB instance")
+ parser.add_argument("--chroma-port", help="HTTP port for a remote ChromaDB instance (defaults to 8000)")
+ parser.add_argument("--chroma-folder", help="Path for chromadb persistence folder", default='.chroma_db')
+ parser.add_argument(
+     "--secure", action="store_true", help="Enforces the use of an API key"
+ )
+
+ sd_group = parser.add_mutually_exclusive_group()
+
+ local_sd = sd_group.add_argument_group("sd-local")
+ local_sd.add_argument("--sd-model", help="Load a custom SD image generation model")
+ local_sd.add_argument("--sd-cpu", help="Force the SD pipeline to run on the CPU")
+
+ remote_sd = sd_group.add_argument_group("sd-remote")
+ remote_sd.add_argument(
+     "--sd-remote", action="store_true", help="Use a remote backend for SD"
+ )
+ remote_sd.add_argument(
+     "--sd-remote-host", type=str, help="Specify the host of the remote SD backend"
+ )
+ remote_sd.add_argument(
+     "--sd-remote-port", type=int, help="Specify the port of the remote SD backend"
+ )
+ remote_sd.add_argument(
+     "--sd-remote-ssl", action="store_true", help="Use SSL for the remote SD backend"
+ )
+ remote_sd.add_argument(
+     "--sd-remote-auth",
+     type=str,
+     help="Specify the username:password for the remote SD backend (if required)",
+ )
+
+ parser.add_argument(
+     "--enable-modules",
+     action=SplitArgs,
+     default=[],
+     help="Override a list of enabled modules",
+ )
+
+ args = parser.parse_args()
+
+ port = 7860
+ host = "0.0.0.0"
+ summarization_model = (
+     args.summarization_model
+     if args.summarization_model
+     else DEFAULT_SUMMARIZATION_MODEL
+ )
+ classification_model = (
+     args.classification_model
+     if args.classification_model
+     else DEFAULT_CLASSIFICATION_MODEL
+ )
+ captioning_model = (
+     args.captioning_model if args.captioning_model else DEFAULT_CAPTIONING_MODEL
+ )
+ embedding_model = (
+     args.embedding_model if args.embedding_model else DEFAULT_EMBEDDING_MODEL
+ )
+
+ sd_use_remote = False if args.sd_model else True
+ sd_model = args.sd_model if args.sd_model else DEFAULT_SD_MODEL
+ sd_remote_host = args.sd_remote_host if args.sd_remote_host else DEFAULT_REMOTE_SD_HOST
+ sd_remote_port = args.sd_remote_port if args.sd_remote_port else DEFAULT_REMOTE_SD_PORT
+ sd_remote_ssl = args.sd_remote_ssl
+ sd_remote_auth = args.sd_remote_auth
+
+ modules = (
+     args.enable_modules if args.enable_modules and len(args.enable_modules) > 0 else []
+ )
+
+ if len(modules) == 0:
+     print(
+         f"{Fore.RED}{Style.BRIGHT}You did not select any modules to run! Choose them by adding an --enable-modules option"
+     )
+     print(f"Example: --enable-modules=caption,summarize{Style.RESET_ALL}")
+
+ # Models init
+ device_string = "cuda:0" if torch.cuda.is_available() and not args.cpu else "cpu"
+ device = torch.device(device_string)
+ torch_dtype = torch.float32 if device_string == "cpu" else torch.float16
+
+ if "caption" in modules:
+     print("Initializing an image captioning model...")
+     captioning_processor = AutoProcessor.from_pretrained(captioning_model)
+     if "blip" in captioning_model:
+         captioning_transformer = BlipForConditionalGeneration.from_pretrained(
+             captioning_model, torch_dtype=torch_dtype
+         ).to(device)
+     else:
+         captioning_transformer = AutoModelForCausalLM.from_pretrained(
+             captioning_model, torch_dtype=torch_dtype
+         ).to(device)
+
+ if "summarize" in modules:
+     print("Initializing a text summarization model...")
+     summarization_tokenizer = AutoTokenizer.from_pretrained(summarization_model)
+     summarization_transformer = AutoModelForSeq2SeqLM.from_pretrained(
+         summarization_model, torch_dtype=torch_dtype
+     ).to(device)
+
+ if "classify" in modules:
+     print("Initializing a sentiment classification pipeline...")
+     classification_pipe = pipeline(
+         "text-classification",
+         model=classification_model,
+         top_k=None,
+         device=device,
+         torch_dtype=torch_dtype,
+     )
+
+ if "sd" in modules and not sd_use_remote:
+     from diffusers import StableDiffusionPipeline
+     from diffusers import EulerAncestralDiscreteScheduler
+
+     print("Initializing Stable Diffusion pipeline")
+     sd_device_string = (
+         "cuda" if torch.cuda.is_available() and not args.sd_cpu else "cpu"
+     )
+     sd_device = torch.device(sd_device_string)
+     sd_torch_dtype = torch.float32 if sd_device_string == "cpu" else torch.float16
+     sd_pipe = StableDiffusionPipeline.from_pretrained(
+         sd_model, custom_pipeline="lpw_stable_diffusion", torch_dtype=sd_torch_dtype
+     ).to(sd_device)
+     sd_pipe.safety_checker = lambda images, clip_input: (images, False)
+     sd_pipe.enable_attention_slicing()
+     # pipe.scheduler = KarrasVeScheduler.from_config(pipe.scheduler.config)
+     sd_pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
+         sd_pipe.scheduler.config
+     )
+ elif "sd" in modules and sd_use_remote:
+     print("Initializing Stable Diffusion connection")
+     try:
+         sd_remote = webuiapi.WebUIApi(
+             host=sd_remote_host, port=sd_remote_port, use_https=sd_remote_ssl
+         )
+         if sd_remote_auth:
+             username, password = sd_remote_auth.split(":")
+             sd_remote.set_auth(username, password)
+         sd_remote.util_wait_for_ready()
+     except Exception as e:
+         # remove sd from modules
+         print(
+             f"{Fore.RED}{Style.BRIGHT}Could not connect to remote SD backend at http{'s' if sd_remote_ssl else ''}://{sd_remote_host}:{sd_remote_port}! Disabling SD module...{Style.RESET_ALL}"
+         )
+         modules.remove("sd")
+
+ if "tts" in modules:
+     print("tts module is deprecated. Please use silero-tts instead.")
+     modules.remove("tts")
+     modules.append("silero-tts")
+
+
+ if "silero-tts" in modules:
+     if not os.path.exists(SILERO_SAMPLES_PATH):
+         os.makedirs(SILERO_SAMPLES_PATH)
+     print("Initializing Silero TTS server")
+     from silero_api_server import tts
+
+     tts_service = tts.SileroTtsService(SILERO_SAMPLES_PATH)
+     if len(os.listdir(SILERO_SAMPLES_PATH)) == 0:
+         print("Generating Silero TTS samples...")
+         tts_service.update_sample_text(SILERO_SAMPLE_TEXT)
+         tts_service.generate_samples()
+
+
+ if "edge-tts" in modules:
+     print("Initializing Edge TTS client")
+     import tts_edge as edge
+
+
+ if "chromadb" in modules:
+     print("Initializing ChromaDB")
+     import chromadb
+     import posthog
+     from chromadb.config import Settings
+     from sentence_transformers import SentenceTransformer
+
+     # Assume that the user wants in-memory unless a host is specified
+     # Also disable chromadb telemetry
+     posthog.capture = lambda *args, **kwargs: None
+     if args.chroma_host is None:
+         chromadb_client = chromadb.Client(Settings(anonymized_telemetry=False, persist_directory=args.chroma_folder, chroma_db_impl='duckdb+parquet'))
+         print(f"ChromaDB is running in-memory with persistence. Persistence is stored in {args.chroma_folder}. Can be cleared by deleting the folder or purging db.")
+     else:
+         chroma_port = (
+             args.chroma_port if args.chroma_port else DEFAULT_CHROMA_PORT
+         )
+         chromadb_client = chromadb.Client(
+             Settings(
+                 anonymized_telemetry=False,
+                 chroma_api_impl="rest",
+                 chroma_server_host=args.chroma_host,
+                 chroma_server_http_port=chroma_port,
+             )
+         )
+         print(f"ChromaDB is remotely configured at {args.chroma_host}:{chroma_port}")
+
+     chromadb_embedder = SentenceTransformer(embedding_model)
+     chromadb_embed_fn = lambda *args, **kwargs: chromadb_embedder.encode(*args, **kwargs).tolist()
+
+     # Check if the db is connected and running, otherwise tell the user
+     try:
+         chromadb_client.heartbeat()
+         print("Successfully pinged ChromaDB! Your client is successfully connected.")
+     except Exception:
+         print("Could not ping ChromaDB! If you are running remotely, please check your host and port!")
+
+ # Flask init
+ app = Flask(__name__)
+ CORS(app)  # allow cross-domain requests
+ Compress(app)  # compress responses
+ app.config["MAX_CONTENT_LENGTH"] = 100 * 1024 * 1024
+
+
+ def require_module(name):
+     def wrapper(fn):
+         @wraps(fn)
+         def decorated_view(*args, **kwargs):
+             if name not in modules:
+                 abort(403, "Module is disabled by config")
+             return fn(*args, **kwargs)
+
+         return decorated_view
+
+     return wrapper
+
+
+ # AI stuff
+ def classify_text(text: str) -> list:
+     output = classification_pipe(
+         text,
+         truncation=True,
+         max_length=classification_pipe.model.config.max_position_embeddings,
+     )[0]
+     return sorted(output, key=lambda x: x["score"], reverse=True)
+
+
+ def caption_image(raw_image: Image, max_new_tokens: int = 20) -> str:
+     inputs = captioning_processor(raw_image.convert("RGB"), return_tensors="pt").to(
+         device, torch_dtype
+     )
+     outputs = captioning_transformer.generate(**inputs, max_new_tokens=max_new_tokens)
+     caption = captioning_processor.decode(outputs[0], skip_special_tokens=True)
+     return caption
+
+
+ def summarize_chunks(text: str, params: dict) -> str:
+     try:
+         return summarize(text, params)
+     except IndexError:
+         print(
+             "Sequence length too large for model, cutting text in half and calling again"
+         )
+         new_params = params.copy()
+         new_params["max_length"] = new_params["max_length"] // 2
+         new_params["min_length"] = new_params["min_length"] // 2
+         return summarize_chunks(
+             text[: (len(text) // 2)], new_params
+         ) + summarize_chunks(text[(len(text) // 2) :], new_params)
+
+
+ def summarize(text: str, params: dict) -> str:
+     # Tokenize input
+     inputs = summarization_tokenizer(text, return_tensors="pt").to(device)
+     token_count = len(inputs[0])
+
+     bad_words_ids = [
+         summarization_tokenizer(bad_word, add_special_tokens=False).input_ids
+         for bad_word in params["bad_words"]
+     ]
+     summary_ids = summarization_transformer.generate(
+         inputs["input_ids"],
+         num_beams=2,
+         max_new_tokens=max(token_count, int(params["max_length"])),
+         min_new_tokens=min(token_count, int(params["min_length"])),
+         repetition_penalty=float(params["repetition_penalty"]),
+         temperature=float(params["temperature"]),
+         length_penalty=float(params["length_penalty"]),
+         bad_words_ids=bad_words_ids,
+     )
+     summary = summarization_tokenizer.batch_decode(
+         summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
+     )[0]
+     summary = normalize_string(summary)
+     return summary
+
+
+ def normalize_string(input: str) -> str:
+     output = " ".join(unicodedata.normalize("NFKC", input).strip().split())
+     return output
+
+
+ def generate_image(data: dict) -> Image:
+     prompt = normalize_string(f'{data["prompt_prefix"]} {data["prompt"]}')
+
+     if sd_use_remote:
+         image = sd_remote.txt2img(
+             prompt=prompt,
+             negative_prompt=data["negative_prompt"],
+             sampler_name=data["sampler"],
+             steps=data["steps"],
+             cfg_scale=data["scale"],
+             width=data["width"],
+             height=data["height"],
+             restore_faces=data["restore_faces"],
+             enable_hr=data["enable_hr"],
+             save_images=True,
+             send_images=True,
+             do_not_save_grid=False,
+             do_not_save_samples=False,
+         ).image
+     else:
+         image = sd_pipe(
+             prompt=prompt,
+             negative_prompt=data["negative_prompt"],
+             num_inference_steps=data["steps"],
+             guidance_scale=data["scale"],
+             width=data["width"],
+             height=data["height"],
+         ).images[0]
+
+     image.save("./debug.png")
+     return image
+
+
+ def image_to_base64(image: Image, quality: int = 75) -> str:
+     buffer = BytesIO()
+     image = image.convert("RGB")
+     image.save(buffer, format="JPEG", quality=quality)
+     img_str = base64.b64encode(buffer.getvalue()).decode("utf-8")
+     return img_str
+
+ # Reads an API key from an already existing file. If that file doesn't exist, create it.
+ if args.secure:
+     try:
+         with open("api_key.txt", "r") as txt:
+             api_key = txt.read().replace('\n', '')
+     except Exception:
+         api_key = secrets.token_hex(5)
+         with open("api_key.txt", "w") as txt:
+             txt.write(api_key)
+
+     print(f"Your API key is {api_key}")
+ elif args.share:
+     print("WARNING: This instance is publicly exposed without an API key! It is highly recommended to restart with the \"--secure\" argument!")
+ else:
+     print("No API key given because you are running locally.")
+
+ @app.before_request
+ def before_request():
+     # Request time measuring
+     request.start_time = time.time()
+
+     # Checks if an API key is present and valid, otherwise return unauthorized
+     # The options check is required so CORS doesn't get angry
+     try:
+         if request.method != 'OPTIONS' and args.secure and request.authorization.token != api_key:
+             print(f"WARNING: Unauthorized API key access from {request.remote_addr}")
+             response = jsonify({ 'error': '401: Invalid API key' })
+             response.status_code = 401
+             return response
+     except Exception as e:
+         print(f"API key check error: {e}")
+         return "401 Unauthorized\n{}\n\n".format(e), 401
+
+
+ @app.after_request
+ def after_request(response):
+     duration = time.time() - request.start_time
+     response.headers["X-Request-Duration"] = str(duration)
+     return response
+
+
+ @app.route("/", methods=["GET"])
+ def index():
+     with open("./README.md", "r", encoding="utf8") as f:
+         content = f.read()
+     return render_template_string(markdown.markdown(content, extensions=["tables"]))
+
+
+ @app.route("/api/extensions", methods=["GET"])
+ def get_extensions():
+     extensions = dict(
+         {
+             "extensions": [
+                 {
+                     "name": "not-supported",
+                     "metadata": {
+                         "display_name": """<span style="white-space:break-spaces;">Extensions serving using Extensions API is no longer supported. Please update the mod from: <a href="https://github.com/Cohee1207/SillyTavern">https://github.com/Cohee1207/SillyTavern</a></span>""",
+                         "requires": [],
+                         "assets": [],
+                     },
+                 }
+             ]
+         }
+     )
+     return jsonify(extensions)
+
+
+ @app.route("/api/caption", methods=["POST"])
+ @require_module("caption")
+ def api_caption():
+     data = request.get_json()
+
+     if "image" not in data or not isinstance(data["image"], str):
+         abort(400, '"image" is required')
+
+     image = Image.open(BytesIO(base64.b64decode(data["image"])))
+     image = image.convert("RGB")
+     image.thumbnail((512, 512))
+     caption = caption_image(image)
+     thumbnail = image_to_base64(image)
+     print("Caption:", caption, sep="\n")
+     gc.collect()
+     return jsonify({"caption": caption, "thumbnail": thumbnail})
+
+
+ @app.route("/api/summarize", methods=["POST"])
+ @require_module("summarize")
+ def api_summarize():
+     data = request.get_json()
+
+     if "text" not in data or not isinstance(data["text"], str):
+         abort(400, '"text" is required')
+
+     params = DEFAULT_SUMMARIZE_PARAMS.copy()
+
+     if "params" in data and isinstance(data["params"], dict):
+         params.update(data["params"])
+
+     print("Summary input:", data["text"], sep="\n")
+     summary = summarize_chunks(data["text"], params)
+     print("Summary output:", summary, sep="\n")
+     gc.collect()
+     return jsonify({"summary": summary})
+
+
+ @app.route("/api/classify", methods=["POST"])
+ @require_module("classify")
+ def api_classify():
+     data = request.get_json()
+
+     if "text" not in data or not isinstance(data["text"], str):
+         abort(400, '"text" is required')
+
+     print("Classification input:", data["text"], sep="\n")
+     classification = classify_text(data["text"])
+     print("Classification output:", classification, sep="\n")
+     gc.collect()
+     return jsonify({"classification": classification})
+
+
+ @app.route("/api/classify/labels", methods=["GET"])
+ @require_module("classify")
+ def api_classify_labels():
+     classification = classify_text("")
+     labels = [x["label"] for x in classification]
+     return jsonify({"labels": labels})
+
+
+ @app.route("/api/image", methods=["POST"])
+ @require_module("sd")
+ def api_image():
+     required_fields = {
+         "prompt": str,
+     }
+
+     optional_fields = {
+         "steps": 30,
+         "scale": 6,
+         "sampler": "DDIM",
+         "width": 512,
+         "height": 512,
+         "restore_faces": False,
+         "enable_hr": False,
+         "prompt_prefix": PROMPT_PREFIX,
+         "negative_prompt": NEGATIVE_PROMPT,
+     }
+
+     data = request.get_json()
+
+     # Check required fields
+     for field, field_type in required_fields.items():
+         if field not in data or not isinstance(data[field], field_type):
+             abort(400, f'"{field}" is required')
+
+     # Set optional fields to default values if not provided
+     for field, default_value in optional_fields.items():
+         type_match = (
+             (int, float)
+             if isinstance(default_value, (int, float))
+             else type(default_value)
+         )
+         if field not in data or not isinstance(data[field], type_match):
+             data[field] = default_value
+
+     try:
+         print("SD inputs:", data, sep="\n")
+         image = generate_image(data)
+         base64image = image_to_base64(image, quality=90)
+         return jsonify({"image": base64image})
+     except RuntimeError as e:
+         abort(400, str(e))
+
+
+ @app.route("/api/image/model", methods=["POST"])
+ @require_module("sd")
+ def api_image_model_set():
+     data = request.get_json()
+
+     if not sd_use_remote:
+         abort(400, "Changing model for local sd is not supported.")
+     if "model" not in data or not isinstance(data["model"], str):
+         abort(400, '"model" is required')
+
+     old_model = sd_remote.util_get_current_model()
+     sd_remote.util_set_model(data["model"], find_closest=False)
+     # sd_remote.util_set_model(data['model'])
+     sd_remote.util_wait_for_ready()
+     new_model = sd_remote.util_get_current_model()
+
+     return jsonify({"previous_model": old_model, "current_model": new_model})
+
+
+ @app.route("/api/image/model", methods=["GET"])
+ @require_module("sd")
+ def api_image_model_get():
+     model = sd_model
+
+     if sd_use_remote:
+         model = sd_remote.util_get_current_model()
+
+     return jsonify({"model": model})
+
+
+ @app.route("/api/image/models", methods=["GET"])
+ @require_module("sd")
+ def api_image_models():
+     models = [sd_model]
+
+     if sd_use_remote:
+         models = sd_remote.util_get_model_names()
+
+     return jsonify({"models": models})
+
+
+ @app.route("/api/image/samplers", methods=["GET"])
+ @require_module("sd")
+ def api_image_samplers():
+     samplers = ["Euler a"]
+
+     if sd_use_remote:
+         samplers = [sampler["name"] for sampler in sd_remote.get_samplers()]
+
+     return jsonify({"samplers": samplers})
+
+
+ @app.route("/api/modules", methods=["GET"])
+ def get_modules():
+     return jsonify({"modules": modules})
+
+
+ @app.route("/api/tts/speakers", methods=["GET"])
+ @require_module("silero-tts")
+ def tts_speakers():
+     voices = [
+         {
+             "name": speaker,
+             "voice_id": speaker,
+             "preview_url": f"{str(request.url_root)}api/tts/sample/{speaker}",
+         }
+         for speaker in tts_service.get_speakers()
+     ]
+     return jsonify(voices)
+
+
+ @app.route("/api/tts/generate", methods=["POST"])
+ @require_module("silero-tts")
+ def tts_generate():
+     voice = request.get_json()
+     if "text" not in voice or not isinstance(voice["text"], str):
+         abort(400, '"text" is required')
+     if "speaker" not in voice or not isinstance(voice["speaker"], str):
+         abort(400, '"speaker" is required')
+     # Remove asterisks
+     voice["text"] = voice["text"].replace("*", "")
+     try:
+         audio = tts_service.generate(voice["speaker"], voice["text"])
+         return send_file(audio, mimetype="audio/x-wav")
+     except Exception as e:
+         print(e)
+         abort(500, voice["speaker"])
+
+
+ @app.route("/api/tts/sample/<speaker>", methods=["GET"])
+ @require_module("silero-tts")
+ def tts_play_sample(speaker: str):
+     return send_from_directory(SILERO_SAMPLES_PATH, f"{speaker}.wav")
+
+
+ @app.route("/api/edge-tts/list", methods=["GET"])
+ @require_module("edge-tts")
+ def edge_tts_list():
+     voices = edge.get_voices()
+     return jsonify(voices)
+
+
+ @app.route("/api/edge-tts/generate", methods=["POST"])
+ @require_module("edge-tts")
+ def edge_tts_generate():
+     data = request.get_json()
+     if "text" not in data or not isinstance(data["text"], str):
+         abort(400, '"text" is required')
+     if "voice" not in data or not isinstance(data["voice"], str):
+         abort(400, '"voice" is required')
+     if "rate" in data and isinstance(data['rate'], int):
+         rate = data['rate']
+     else:
+         rate = 0
+     # Remove asterisks
+     data["text"] = data["text"].replace("*", "")
+     try:
+         audio = edge.generate_audio(text=data["text"], voice=data["voice"], rate=rate)
+         return Response(audio, mimetype="audio/mpeg")
+     except Exception as e:
+         print(e)
+         abort(500, data["voice"])
+
+
+ @app.route("/api/chromadb", methods=["POST"])
+ @require_module("chromadb")
+ def chromadb_add_messages():
+     data = request.get_json()
+     if "chat_id" not in data or not isinstance(data["chat_id"], str):
+         abort(400, '"chat_id" is required')
+     if "messages" not in data or not isinstance(data["messages"], list):
+         abort(400, '"messages" is required')
+
+     chat_id_md5 = hashlib.md5(data["chat_id"].encode()).hexdigest()
+     collection = chromadb_client.get_or_create_collection(
+         name=f"chat-{chat_id_md5}", embedding_function=chromadb_embed_fn
+     )
+
+     documents = [m["content"] for m in data["messages"]]
+     ids = [m["id"] for m in data["messages"]]
+     metadatas = [
+         {"role": m["role"], "date": m["date"], "meta": m.get("meta", "")}
+         for m in data["messages"]
+     ]
+
+     collection.upsert(
+         ids=ids,
+         documents=documents,
+         metadatas=metadatas,
+     )
+
+     return jsonify({"count": len(ids)})
+
+
+ @app.route("/api/chromadb/purge", methods=["POST"])
+ @require_module("chromadb")
+ def chromadb_purge():
+     data = request.get_json()
+     if "chat_id" not in data or not isinstance(data["chat_id"], str):
+         abort(400, '"chat_id" is required')
+
+     chat_id_md5 = hashlib.md5(data["chat_id"].encode()).hexdigest()
+     collection = chromadb_client.get_or_create_collection(
+         name=f"chat-{chat_id_md5}", embedding_function=chromadb_embed_fn
+     )
+
+     count = collection.count()
+     collection.delete()
+     # Write deletion to persistent folder
+     chromadb_client.persist()
+     print("ChromaDB embeddings deleted", count)
+     return 'Ok', 200
+
+
+ @app.route("/api/chromadb/query", methods=["POST"])
+ @require_module("chromadb")
+ def chromadb_query():
+     data = request.get_json()
+     if "chat_id" not in data or not isinstance(data["chat_id"], str):
+         abort(400, '"chat_id" is required')
+     if "query" not in data or not isinstance(data["query"], str):
+         abort(400, '"query" is required')
+
+     if "n_results" not in data or not isinstance(data["n_results"], int):
+         n_results = 1
+     else:
+         n_results = data["n_results"]
+
+     chat_id_md5 = hashlib.md5(data["chat_id"].encode()).hexdigest()
+     collection = chromadb_client.get_or_create_collection(
+         name=f"chat-{chat_id_md5}", embedding_function=chromadb_embed_fn
+     )
+
+     n_results = min(collection.count(), n_results)
+     query_result = collection.query(
+         query_texts=[data["query"]],
+         n_results=n_results,
+     )
+
+     documents = query_result["documents"][0]
+     ids = query_result["ids"][0]
+     metadatas = query_result["metadatas"][0]
+     distances = query_result["distances"][0]
+
+     messages = [
+         {
+             "id": ids[i],
+             "date": metadatas[i]["date"],
+             "role": metadatas[i]["role"],
+             "meta": metadatas[i]["meta"],
+             "content": documents[i],
+             "distance": distances[i],
+         }
+         for i in range(len(ids))
+     ]
+
+     return jsonify(messages)
+
+
+ @app.route("/api/chromadb/export", methods=["POST"])
+ @require_module("chromadb")
+ def chromadb_export():
+     data = request.get_json()
+     if "chat_id" not in data or not isinstance(data["chat_id"], str):
+         abort(400, '"chat_id" is required')
+
+     chat_id_md5 = hashlib.md5(data["chat_id"].encode()).hexdigest()
+     collection = chromadb_client.get_or_create_collection(
+         name=f"chat-{chat_id_md5}", embedding_function=chromadb_embed_fn
+     )
+     collection_content = collection.get()
+     documents = collection_content.get('documents', [])
+     ids = collection_content.get('ids', [])
+     metadatas = collection_content.get('metadatas', [])
+
+     content = [
+         {
+             "id": ids[i],
+             "metadata": metadatas[i],
+             "document": documents[i],
+         }
+         for i in range(len(ids))
+     ]
+
+     export = {
+         "chat_id": data["chat_id"],
+         "content": content,
+     }
+
+
+     return jsonify(export)
+
+ @app.route("/api/chromadb/import", methods=["POST"])
+ @require_module("chromadb")
+ def chromadb_import():
+     data = request.get_json()
+     if "chat_id" not in data or not isinstance(data["chat_id"], str):
+         abort(400, '"chat_id" is required')
+     content = data['content']
+
+     chat_id_md5 = hashlib.md5(data["chat_id"].encode()).hexdigest()
+     collection = chromadb_client.get_or_create_collection(
+         name=f"chat-{chat_id_md5}", embedding_function=chromadb_embed_fn
+     )
+
+     documents = [item['document'] for item in content]
+     metadatas = [item['metadata'] for item in content]
+     ids = [item['id'] for item in content]
+
+
+     collection.upsert(documents=documents, metadatas=metadatas, ids=ids)
+
+     return jsonify({"count": len(ids)})
+
+ app.run(host=host, port=port)
tts_edge.py ADDED
@@ -0,0 +1,34 @@
+ import io
+ import edge_tts
+ import asyncio
+
+
+ def get_voices():
+     voices = asyncio.run(edge_tts.list_voices())
+     return voices
+
+
+ async def _iterate_chunks(audio):
+     async for chunk in audio.stream():
+         if chunk["type"] == "audio":
+             yield chunk["data"]
+
+
+ async def _async_generator_to_list(async_gen):
+     result = []
+     async for item in async_gen:
+         result.append(item)
+     return result
+
+
+ def generate_audio(text: str, voice: str, rate: int) -> bytes:
+     sign = '+' if rate >= 0 else '-'
+     rate = f'{sign}{abs(rate)}%'
+     audio = edge_tts.Communicate(text=text, voice=voice, rate=rate)
+     chunks = asyncio.run(_async_generator_to_list(_iterate_chunks(audio)))
+     buffer = io.BytesIO()
+
+     for chunk in chunks:
+         buffer.write(chunk)
+
+     return buffer.getvalue()
tts_samples/en_0.wav ADDED
Binary file (232 kB)
tts_samples/en_1.wav ADDED
Binary file (254 kB)
tts_samples/en_10.wav ADDED
Binary file (188 kB)
tts_samples/en_11.wav ADDED
Binary file (210 kB)
tts_samples/en_114.wav ADDED
Binary file (246 kB)
tts_samples/en_115.wav ADDED
Binary file (164 kB)
tts_samples/en_116.wav ADDED
Binary file (216 kB)
tts_samples/en_117.wav ADDED
Binary file (227 kB)
tts_samples/en_12.wav ADDED
Binary file (221 kB)
tts_samples/en_13.wav ADDED
Binary file (224 kB)
tts_samples/en_14.wav ADDED
Binary file (188 kB)
tts_samples/en_15.wav ADDED
Binary file (204 kB)
tts_samples/en_16.wav ADDED
Binary file (224 kB)
tts_samples/en_17.wav ADDED
Binary file (215 kB)
tts_samples/en_18.wav ADDED
Binary file (214 kB)
tts_samples/en_19.wav ADDED
Binary file (235 kB)
tts_samples/en_2.wav ADDED
Binary file (258 kB)
tts_samples/en_20.wav ADDED
Binary file (247 kB)
tts_samples/en_21.wav ADDED
Binary file (232 kB)
tts_samples/en_22.wav ADDED
Binary file (236 kB)
tts_samples/en_23.wav ADDED
Binary file (223 kB)
tts_samples/en_24.wav ADDED
Binary file (216 kB)
tts_samples/en_25.wav ADDED
Binary file (185 kB)
tts_samples/en_26.wav ADDED
Binary file (262 kB)
tts_samples/en_27.wav ADDED
Binary file (248 kB)
tts_samples/en_28.wav ADDED
Binary file (180 kB)
tts_samples/en_29.wav ADDED
Binary file (210 kB)
tts_samples/en_3.wav ADDED
Binary file (234 kB)
tts_samples/en_30.wav ADDED
Binary file (202 kB)
tts_samples/en_31.wav ADDED
Binary file (215 kB)
tts_samples/en_32.wav ADDED
Binary file (212 kB)
tts_samples/en_33.wav ADDED
Binary file (226 kB)
tts_samples/en_34.wav ADDED
Binary file (218 kB)
tts_samples/en_35.wav ADDED
Binary file (197 kB)
tts_samples/en_36.wav ADDED
Binary file (224 kB)
tts_samples/en_37.wav ADDED
Binary file (214 kB)
tts_samples/en_38.wav ADDED
Binary file (211 kB)
tts_samples/en_39.wav ADDED
Binary file (199 kB)
tts_samples/en_4.wav ADDED
Binary file (252 kB)
tts_samples/en_40.wav ADDED
Binary file (211 kB)
tts_samples/en_41.wav ADDED
Binary file (215 kB)
tts_samples/en_42.wav ADDED
Binary file (246 kB)
tts_samples/en_43.wav ADDED
Binary file (238 kB)