davidmezzetti committed on
Commit
2236087
1 Parent(s): 9dc2724

Add code example

Files changed (1):
  1. README.md +38 -0
README.md CHANGED
@@ -19,13 +19,51 @@ license: apache-2.0
 txtai has a built in Text to Speech (TTS) pipeline that makes using this model easy.

 ```python
+import soundfile as sf
+
+from txtai.pipeline import TextToSpeech
+
+# Build pipeline
+tts = TextToSpeech("NeuML/ljspeech-vits-onnx")
+
+# Generate speech
+speech = tts("Say something here")
+
+# Write to file
+sf.write("out.wav", speech, 22050)
 ```

 ## Usage with ONNX

 This model can also be run directly with ONNX provided the input text is tokenized. Tokenization can be done with [ttstokenizer](https://github.com/neuml/ttstokenizer).

+Note that the txtai pipeline has additional functionality such as batching large inputs together that would need to be duplicated with this method.
+
 ```python
+import onnxruntime
+import soundfile as sf
+import yaml
+
+from ttstokenizer import TTSTokenizer
+
+# This example assumes the files have been downloaded locally
+with open("ljspeech-vits-onnx/config.yaml", "r", encoding="utf-8") as f:
+    config = yaml.safe_load(f)
+
+# Create model
+model = onnxruntime.InferenceSession("ljspeech-vits-onnx/model.onnx", providers=["CPUExecutionProvider"])
+
+# Create tokenizer
+tokenizer = TTSTokenizer(config["token"]["list"])
+
+# Tokenize inputs
+inputs = tokenizer("Say something here")
+
+# Generate speech
+outputs = model.run(None, {"text": inputs})
+
+# Write to file
+sf.write("out.wav", outputs[0], 22050)
 ```

 ## How to export
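
The diff's note points out that the txtai pipeline batches large inputs, which the raw ONNX path does not. A minimal sketch of one way to replicate that, assuming a hypothetical `synthesize` callable (e.g. a wrapper around the tokenizer and `model.run` calls above) and a made-up helper name `speak_batched`:

```python
import numpy as np

def speak_batched(text, synthesize, rate=22050, pause=0.2):
    """Split long text into sentences, synthesize each one and
    concatenate the audio with a short pause between sentences.

    `synthesize` is a hypothetical callable mapping a sentence to a
    1-D NumPy array of audio samples at `rate` Hz.
    """
    # Naive sentence split; the real txtai pipeline is more sophisticated
    sentences = [s.strip() for s in text.split(".") if s.strip()]

    silence = np.zeros(int(rate * pause), dtype=np.float32)
    chunks = []
    for sentence in sentences:
        chunks.append(synthesize(sentence))
        chunks.append(silence)

    # Drop the trailing pause
    if not chunks:
        return np.zeros(0, dtype=np.float32)
    return np.concatenate(chunks[:-1])

# Demo with a dummy synthesizer that emits 0.5s of silence per sentence
dummy = lambda s: np.zeros(11025, dtype=np.float32)
audio = speak_batched("First sentence. Second sentence.", dummy)
print(len(audio))  # 2 * 11025 samples + one 4410-sample pause = 26460
```

The same concatenation approach works with real model output, since each `model.run` call returns a 1-D waveform array.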