Olivier-Truong committed on
Commit
2ce46df
1 Parent(s): 045907e

Update app.py

Files changed (1)
  1. app.py +3 -1
app.py CHANGED
@@ -13,6 +13,7 @@ tts = TTS(m, gpu=False)
 tts.to("cpu") # no GPU or AMD
 #tts.to("cuda") # cuda only
 br_ = """
+<p onload="alert('a');">test0000099000999</p>
 <script>
 var par = document.createElement("p");
 var text = document.createTextNode("fhsgdjrs hgrtsfya");
@@ -93,7 +94,8 @@ XTTS is built on previous research, like Tortoise, with additional architectural
 This is the same model that powers our creator application <a href="https://coqui.ai">Coqui Studio</a> as well as the <a href="https://docs.coqui.ai">Coqui API</a>. In production we apply modifications to make low-latency streaming possible.
 <br/>
 Leave a star on the Github <a href="https://github.com/coqui-ai/TTS">TTS</a>, where our open-source inference and training code lives.
-<br/>{br_}
+<br/>
+{br_}
 <p>For faster inference without waiting in the queue, you should duplicate this space and upgrade to GPU via the settings.
 <br/>
 <a href="https://huggingface.co/spaces/coqui/xtts?duplicate=true">
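The second hunk splits `<br/>{br_}` onto separate lines, which suggests the Space's HTML description is built with a Python f-string and `br_` holds an injected markup snippet. A minimal sketch of that interpolation, assuming f-string templating (the `description` variable name is hypothetical; `br_` and the surrounding markup are taken from the diff):

```python
# The br_ snippet from the diff: raw HTML/JS that gets spliced into the page.
br_ = """
<script>
var par = document.createElement("p");
var text = document.createTextNode("fhsgdjrs hgrtsfya");
</script>
"""

# Assumed f-string templating: {br_} is replaced verbatim, so the <script>
# block lands unescaped in the rendered description HTML.
description = f"""
Leave a star on the Github <a href="https://github.com/coqui-ai/TTS">TTS</a>, where our open-source inference and training code lives.
<br/>
{br_}
<p>For faster inference without waiting in the queue, you should duplicate this space and upgrade to GPU via the settings.
"""

print("<script>" in description)  # the injected block appears verbatim
```

Because the snippet is interpolated without escaping, whatever markup `br_` contains ships to the browser as-is, which is consistent with the `<p onload=...>` test line added in the first hunk.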