<!DOCTYPE html>
<html lang="en">

<head>
    <link rel="icon" href="./static/favicon.ico" type="image/x-icon">
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>AllTalk TTS for Text generation webUI</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            max-width: 1200px;
            /* Adjusted max-width for better readability */
            margin: 40px auto;
            padding: 20px;
            background-color: #f4f4f4;
            border: 1px solid #ddd;
            border-radius: 8px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
        }

        h1,
        h2 {
            color: black;
            text-decoration: underline;
        }


        p,
        span {
            color: #555;
            font-size: 16px;
            /* Increased font size for better readability */
            margin-top: 0;
            /* Remove top margin for paragraphs */
        }

        code {
            font-family: Arial, sans-serif;
            background-color: #e9e9e9;
            border: none;
            box-shadow: none;
            outline: none;
            color: #3366ff;
            font-size: 16px;
        }

        pre {
            background-color: #f8f8f8;
            border: 1px solid #ddd;
            border-radius: 4px;
            padding: 10px;
            overflow-x: auto;
            font-size: 14px;
            /* Adjusted font size for better visibility */
        }

        ul {
            color: #555;
            list-style-type: square;
            /* Set the bullet style */
            margin-left: 2px;
            /* Adjust the left margin to create an indent */
        }

        li {
            font-size: 16px;
            /* Set the font size for list items */
            margin-bottom: 8px;
            /* Add some space between list items */
        }

        .key {
            color: black;
            /* Color for keys */
            font-size: 14px;
            /* Increased font size for better readability */
        }

        .value {
            font-size: 14px;
            /* Increased font size for better readability */
            color: blue;
            /* Color for values */
        }


        a {
            color: #0077cc;
            text-decoration: none;
        }

        a:hover {
            text-decoration: underline;
        }

        strong {
            font-weight: bold;
        }

        .option-a {
            color: #33cc33;
            font-weight: bold;
        }

        .option-b {
            color: red;
            font-weight: bold;
        }

        /* New styles for TTS Request Page */
        #container {
            max-width: 1000px;
            margin: 50px auto;
            padding: 20px;
            background-color: #fff;
            border-radius: 8px;
            box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
        }

        form {
            display: flex;
            flex-direction: column;
            gap: 4px;
        }

        #outputFileContainer {
            display: flex;
            flex-direction: column;
            gap: 4px;
        }

        label {
            font-weight: bold;
            font-size: 14px;
            padding: 2px;
        }

        textarea,
        input,
        select,
        button {
            padding: 4px;
            font-size: 14px;
            border: 1px solid #ccc;
            border-radius: 4px;
        }

        textarea {
            resize: vertical;
        }

        button {
            background-color: #4caf50;
            color: #fff;
            cursor: pointer;
            transition: background-color 0.3s;
        }

        button:hover {
            background-color: #45a049;
        }

        p {
            margin-top: 20px;
        }

        #outputFilePath {
            font-weight: bold;
            display: none;
        }

        #audioSource {
            display: block;
            margin: auto;
        }


        table {
            border-collapse: collapse;
            width: 100%;
            margin-bottom: 20px;
        }

        th,
        td {
            border: 1px solid #ddd;
            padding: 8px;
            text-align: left;
        }

        /* Style for the first table */
        #configuration-details table {
            font-size: 16px;
        }

        /* Style for the nested table */
        #modeldownload-table table {
            font-size: 16px;
        }

        #modeldownload-table table td {
            padding: 4px;
            /* Adjust padding as needed for the nested table */
        }
    </style>
</head>

<body>
    <h1 id="toc">AllTalk TTS</h1>
    <iframe src="http://{{ params.ip_address }}:{{ params.port_number }}/settings" width="100%" height="535"
        style="border: 0; margin: 0; padding: 0;"></iframe>
    <hr>
    <div style="display: flex; justify-content: flex-start; align-items: flex-start;">
        <div style="flex: 1; padding-right: 20px;">
            <h3 id="index">Page Index</h3>
            <ul>
                <li>🔊 <a href="#using-voice-samples">Using Voice Samples</a></li>
                <li>🧮 <a href="#low-vram">Low VRAM</a></li>
                <li>🏰 <a href="#sillytavern-support">SillyTavern Support</a></li>
                <li>🟩 <a href="#-good-to-know">Good to know</a></li>
                <li>🟪 <a href="#🟪-updating">Updating &amp; problems with updating</a></li>
                <li>🔵🟢 <a href="#🔵🟢-deepspeed-installation-options">DeepSpeed
                        Installation (Windows &amp; Linux)</a></li>
                <li>🆘 <a href="#🆘-support-requests-troubleshooting--feature-requests">Support
                        Requests, Troubleshooting &amp; Feature requests</a></li>
                <li>🟨 <a href="#🟨-help-with-problems">Help with problems</a></li>
                <li>⚫ <a href="#⚫-finetuning-a-model">Finetuning a model</a></li>
                <li>⬜ <a href="#⬜-alltalk-tts-generator">AllTalk TTS Generator</a></li>
                <li>🟠 <a href="#🟠-api-suite-and-json-curl">API Suite and
                        JSON-CURL</a>
                </li>
            </ul>
        </div>
        <div style="flex: 1; padding-left: 20px;">
            <h3>Links</h3>

            <ul>
                <li><a href="http://{{ params.ip_address }}:{{ params.port_number }}/static/tts_generator/tts_generator.html"
                        target="_blank" rel="noopener">AllTalk TTS Generator</a></li>
                <li><a href="https://github.com/erew123/alltalk_tts" target="_blank" rel="noopener">AllTalk Github</a>
                </li>
                <li><a href="https://github.com/erew123/alltalk_tts/issues/25" target="_blank" rel="noopener">AllTalk
                        Changelog</a></li>
                <li><a href="https://github.com/erew123/alltalk_tts/issues/74" target="_blank" rel="noopener">AllTalk
                            Existing Feature Requests</a></li>
                <li><a href="https://github.com/erew123/alltalk_tts?#-help-with-problems" target="_blank"
                        rel="noopener">AllTalk
                        Help</a></li>
            </ul>

            <ul>
                <li><a href="http://{{ params.ip_address }}:7860" target="_blank" rel="noopener">Text-gen-webui - Web
                        interface</a></li>
                <li><a href="https://github.com/oobabooga/text-generation-webui/wiki" target="_blank"
                        rel="noopener">Text-gen-webui - Documentation</a><br /></li>
            </ul>
            <p><strong>AllTalk Server Status</strong></p>
            <ul>
                <li><strong>Base URL:</strong> <code>http://{{ params.ip_address }}:{{ params.port_number }}</code></li>
                <li><strong>Server Status:</strong>
                    <code><a href="http://{{ params.ip_address }}:{{ params.port_number }}/ready">http://{{ params.ip_address }}:{{ params.port_number }}/ready</a></code>
                </li>
            </ul>
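            <p>As a quick check that the server is up before sending requests, you can poll the <code>/ready</code> endpoint listed above. The sketch below is illustrative only; the helper names (<code>buildReadyUrl</code>, <code>waitUntilReady</code>) are examples, not part of the AllTalk API:</p>

```javascript
// Illustrative sketch: poll AllTalk's /ready endpoint until the server
// responds, retrying a few times with a short delay between attempts.
function buildReadyUrl(ipAddress, portNumber) {
    return `http://${ipAddress}:${portNumber}/ready`;
}

async function waitUntilReady(ipAddress, portNumber, retries = 10, delayMs = 1000) {
    const url = buildReadyUrl(ipAddress, portNumber);
    for (let i = 0; i < retries; i++) {
        try {
            const response = await fetch(url);
            if (response.ok) return true; // server answered on /ready
        } catch (e) {
            // server not reachable yet; fall through and retry
        }
        await new Promise(resolve => setTimeout(resolve, delayMs));
    }
    return false; // gave up after the configured number of retries
}
```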
        </div>
    </div>
    <hr>

    <h3 id="🛠️-about-this-project">🛠️ <strong>About this project</strong></h3>
    <p>AllTalk is a labour of love, developed and supported in my personal free time. As such, my ability to respond to
        support requests is limited. I prioritize issues based on their impact and the number of users affected. I
        appreciate your understanding and patience. If your inquiry isn&#39;t covered by the documentation or existing
        discussions, and it&#39;s not related to a bug or feature request, I&#39;ll do my best to assist as time allows.
    </p>
    <hr>
    <h3 id="💖-showing-your-support">💖 Showing Your Support</h3>
    <p>If AllTalk has been helpful to you, consider showing your support through a donation on my <a
            href="https://ko-fi.com/erew123">Ko-fi page</a>. Your support is greatly appreciated and helps ensure the
        continued development and improvement of AllTalk.</p>
    <hr>


    <h3 id="demotesttts">🗣️ Demo/Test TTS</h3>
    <div id="container">
        <form method="post" action="/tts-demo-request" id="ttsForm">
            <label for="text">Text:</label>
            <textarea id="text" name="text" rows="4" required></textarea>

            <label for="voice">Voice:</label>
            <select id="voice" name="voice" required>
                <!-- Options will be populated here by JavaScript -->
            </select>

            <label for="language">Language:</label>
            <select id="language" name="language" required>
                <option value="en" selected>English</option>
                <option value="ar">Arabic</option>
                <option value="zh-cn">Chinese</option>
                <option value="cs">Czech</option>
                <option value="nl">Dutch</option>
                <option value="fr">French</option>
                <option value="de">German</option>
                <option value="hi">Hindi</option>
                <option value="hu">Hungarian</option>
                <option value="it">Italian</option>
                <option value="ja">Japanese</option>
                <option value="ko">Korean</option>
                <option value="pl">Polish</option>
                <option value="pt">Portuguese</option>
                <option value="ru">Russian</option>
                <option value="es">Spanish</option>
                <option value="tr">Turkish</option>
            </select>

            <div id="outputFileContainer">
                <label for="outputFile">Output File:</label>
                <input type="text" id="outputFile" name="output_file" value="demo_output.wav" required>
            </div>

            <label for="streaming">Streaming:</label>
            <select id="streaming" name="streaming" required>
                <option value="false" selected>No</option>
                <option value="true">Yes</option>
            </select>

            <!-- Audio player with autoplay -->
            <audio controls autoplay id="audioSource">
                <source type="audio/wav" />
                Your browser does not support the audio element.
            </audio>

            <span id="outputFilePath" style="height: 0px;">{{ output_file_path }}</span>
            <button id="submit" type="submit">
                Generate TTS
            </button>
        </form>
    </div>

    <script>
        document.addEventListener('DOMContentLoaded', function () {
            const ipAddress = "{{ params.ip_address }}";
            const portNumber = "{{ params.port_number }}";
            const url = `http://${ipAddress}:${portNumber}/api/voices`;

            fetch(url)
                .then(response => response.json())
                .then(data => {
                    const selectElement = document.getElementById('voice');
                    data.voices.forEach(voice => {
                        const option = document.createElement('option');
                        option.value = voice;
                        option.textContent = voice;
                        selectElement.appendChild(option);
                    });
                })
                .catch(error => console.error('Error fetching voices:', error));
        });
    </script>

    <script>
        // Audio player
        const audioPlayer = document.getElementById('audioSource');
        function enableLoader(enable) {
            // Change the submit button text and disable it while loading
            const submit = document.getElementById('submit');
            submit.disabled = enable;
            submit.style.opacity = enable ? 0.7 : 1.0;
            submit.innerText = enable ? 'Generating...' : 'Generate TTS';
            // Grey out the audio player and block interaction while loading
            audioPlayer.style.opacity = enable ? 0.7 : 1.0;
            audioPlayer.style.pointerEvents = enable ? 'none' : 'auto';
        }
        audioPlayer.addEventListener('canplay', () => {
            enableLoader(false);
            audioPlayer.play();
        });
        audioPlayer.addEventListener('abort', () => {
            enableLoader(false);
        });
        audioPlayer.addEventListener('error', (e) => {
            enableLoader(false);
        });

        // Streaming selector
        const streaming = document.getElementById('streaming');
        streaming.addEventListener('change', (event) => {
            const outputFileContainer = document.getElementById('outputFileContainer');
            if (event.target.value === 'true') {
                outputFileContainer.style.display = 'none';
            } else {
                outputFileContainer.style.display = 'flex';
            }
        });
        // Form submit
        const form = document.getElementById('ttsForm');
        const ipAddress = "{{ params.ip_address }}";
        const portNumber = "{{ params.port_number }}";
        const baseurl = `http://${ipAddress}:${portNumber}`;
        form.addEventListener('submit', async (event) => {
            event.preventDefault();

            enableLoader(true);
            audioPlayer.pause();

            const formData = new FormData(form);

            // Get and clean the text input from the textarea
            let textInput = formData.get('text');
            let cleanedText = textInput.replace(/ \- | \– /g, ' '); // Replace spaced hyphens/dashes with a space
            cleanedText = cleanedText.replace(/%/g, " percent");
            // Strip characters outside the allowed set (letters, digits, whitespace,
            // common punctuation, Latin-1 accented characters and Cyrillic)
            cleanedText = cleanedText.replace(/[^a-zA-Z0-9\s\.,;:!?\-\'"À-ÿ\u0400-\u04FF]/g, '');
            cleanedText = cleanedText.replace(/\n+/g, ' '); // Collapse newlines into spaces
            cleanedText = cleanedText.replace(/\.\s'/g, ".'"); // Remove space between period and single quote

            // Set the cleaned text back to the form data
            formData.set('text', cleanedText);

            // Check if streaming or not and handle accordingly
            if (formData.get('streaming') === 'true') {
                // For streaming, update the audio player src
                const searchParams = new URLSearchParams(formData).toString();
                audioPlayer.src = baseurl + "/api/tts-generate-streaming?" + searchParams;
                audioPlayer.load(); // Reload the audio player
            } else {
                // For non-streaming, send a POST request
                const response = await fetch('/api/tts-generate-streaming', {
                    method: 'POST',
                    body: formData,
                    cache: "no-store"
                });

                // Process the response
                const result = await response.json();
                // Update output file path and audio player source
                const outputFilePath = document.getElementById('outputFilePath');
                outputFilePath.textContent = result.output_file_path;
                audioPlayer.src = baseurl + `/audio/${result.output_file_path}`;
                audioPlayer.load(); // Reload the audio player
            }
        });

    </script>
    <hr>

    <h3 id="using-voice-samples">🔊 Using Voice Samples</h3>
    <h4 id="where-are-the-sample-voices-stored">Where are the sample voices stored?</h4>
    <p>Voice samples are stored in <span style="color: #3366ff;">/alltalk_tts/voices/</span>
        and should be named using the following format <span style="color: #3366ff;">name.wav</span></p>

    <h4 id="where-are-the-outputs-stored">🔊 Where are the outputs stored &amp; automatic output WAV file deletion</h4>
    <p>Voice outputs are stored in&nbsp;<span style="color: #3366ff;">/alltalk_tts/outputs/</span></p>
    <p>You can configure automatic maintenance deletion of old WAV
        files by setting <span style="color: #3366ff;">Del WAV's older than</span> in the settings above.</p>
    <p>When set to <span style="color: #3366ff;">Disabled</span>, your output
        WAV files will be left untouched. When set to <span style="color: #3366ff;">1 Day</span> or greater,
        output WAV files older than that period will be deleted automatically when AllTalk starts up.</p>

    <h4>🔊 Where are the models stored?</h4>
    <p>This extension will download the 2.0.2 model to <span style="color: #3366ff;">/alltalk_tts/models/</span></p>
    <p>This TTS engine will also download the latest available model
        and store it wherever your OS normally stores it (Windows/Linux/Mac).</p>

    <h4>🔊 How do I create a new voice sample?</h4>
    <p>To create a new voice sample, you need to make a WAV file that is
        <span style="color: #3366ff;">22050Hz</span>, <span style="color: #3366ff;">Mono</span>, <span
            style="color: #3366ff;">16 bit</span> and between 6 and 30 seconds long, though 8 to 10 seconds is usually
        good enough. The model can handle samples of up to 30 seconds, however I've not noticed any improvement in voice
        output from much longer clips.
    </p>
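    <p>If you want to sanity-check a clip against these requirements, they can be expressed as a small check. This is an illustrative sketch; the object shape and function name are assumptions for the example, not part of AllTalk:</p>

```javascript
// Illustrative sketch: validate a clip against the sample requirements
// described above: 22050Hz, mono, 16 bit, and 6 to 30 seconds long.
function isValidVoiceSample({ sampleRate, channels, bitDepth, durationSeconds }) {
    return sampleRate === 22050 &&
        channels === 1 &&            // mono
        bitDepth === 16 &&
        durationSeconds >= 6 &&
        durationSeconds <= 30;
}
```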
    <p>You want to find a nice clear selection of audio, so let's say
        you wanted to clone your favourite celebrity. You might go looking for an interview where they are talking. Pay
        close attention to the audio you are trying to sample. Are there noises in the background, hiss
        on the soundtrack, a low hum, or some quiet music playing? The better the quality of the audio, the better
        the final TTS result. Don't forget, the AI that processes the sound can hear everything in your sample and it
        will use it in the voice it's trying to recreate.</p>
    <p>Try to make your clip one of nice flowing speech, like the included
        example files: no big pauses, gaps or other sounds. Ideally, pick a sample in which the person you are trying to
        copy shows a little vocal range and emotion in their voice. Also, try to avoid a clip starting or ending with
        breathy sounds (breathing in/out etc.).</p>

    <h4>🔊 Editing your sample!</h4>
    <p>So, you’ve downloaded your favourite celebrity interview off
        YouTube. From here you need to chop it down to 6 to 30 seconds in length and resample it.</p>
    <p style="text-align: justify; padding-left: 30px;">If you need to clean it up, do audio processing, volume level
        changes etc., do this before down-sampling. Using the latest version of Audacity, <span
            style="color: #3366ff;">select/highlight</span> your 6 to 30 second clip and:</p>
    <ul style="margin-left: 30px;">
        <li><span style="color: #3366ff;">Tracks</span> &gt; <span style="color: #3366ff;">Resample to
                22050Hz</span>, then</li>
        <li><span style="color: #3366ff;">Tracks</span> &gt; <span style="color: #3366ff;">Mix</span> &gt; <span
                style="color: #3366ff;">Stereo to Mono</span>, then</li>
        <li><span style="color: #3366ff;">File</span> &gt; <span style="color: #3366ff;">Export Audio</span>, saving it
            as a <span style="color: #3366ff;">WAV</span> of <span style="color: #3366ff;">22050Hz</span>.</li>
    </ul>
    <p>Save your generated WAV file in the&nbsp;<span style="color: #3366ff;">/alltalk_tts/voices/</span> <span
            style="color: #808080;">folder.</span></p>
    <p>It's worth mentioning that using AI-generated audio clips and bad quality samples may
        introduce unwanted sounds into TTS generation.</p>

    <h4>🔊 Why doesn't it sound like XXX Person?</h4>
    <p>You might be interested in trying <a href="#⚫-finetuning-a-model">finetuning the model</a>. Otherwise,
        possible reasons are that you:</p>
    <ul style="text-align: justify;">
        <li>Didn't down-sample it as above.</li>
        <li>Have a bad quality voice sample.</li>
        <li>Haven't tried the 3 different generation methods: <span style="color: #3366ff;">API TTS</span>, <span
                style="color: #3366ff;">API Local</span>, and <span style="color: #3366ff;">XTTSv2 Local</span> within
            the web interface. They generate output in different ways and sound different.</li>
    </ul>
    <p>Some samples just never seem to work correctly, so maybe try a
        different sample. Always remember though, this is an AI model attempting to re-create a voice, so you will never
        get a 100% match.</p>
    <p><a href="#toc">Back to top of page</a></p>

    <hr>
    <h3 id="low-vram"><strong>🧮 Low VRAM</strong></h3>
    <p>The Low VRAM option is a crucial feature designed to enhance
        performance under constrained VRAM conditions, as the TTS models require 2GB-3GB of VRAM to run effectively.
        This feature manages the relocation of the Text-to-Speech (TTS) model between your system's Random
        Access Memory (RAM) and VRAM, moving it between the two on the fly. This is very useful for people
        who have smaller graphics cards or whose LLM has filled their VRAM.</p>
    <p>When you don't have enough VRAM free after loading your LLM
        model into VRAM (Normal Mode example below), your GPU has so little working space that it has to
        swap bits of the TTS model in and out, which causes a horrible slowdown.</p>
    <p><span style="color: #ff0000;">Note:</span> An Nvidia Graphics
        card is required for the LowVRAM option to work, as you will just be using system RAM otherwise.&nbsp;</p>

    <h4>🧮 How It Works</h4>
    <p>In Low VRAM mode, the entire TTS model is kept in your system RAM. When the TTS engine requires VRAM for
        processing, the whole model seamlessly moves into VRAM, causing your LLM to unload/displace some layers,
        ensuring optimal performance of the TTS engine.</p>
    <p>After TTS processing, the model moves back to system RAM, freeing
        up VRAM space for your Language Model (LLM) to load the missing layers back in. This adds about 1-2 seconds to
        both text generation by the LLM and the TTS engine.</p>
    <p>By transferring the entire model between RAM and VRAM, the Low
        VRAM option avoids fragmentation, ensuring the TTS model remains cohesive and has all the working space it needs
        in your GPU, rather than working on small pieces of the TTS model at a time (which causes a terrible
        slowdown).</p>
    <p>This gives a TTS generation performance boost for low VRAM
        users and is particularly beneficial for those with less than 2GB of free VRAM after loading their LLM,
        delivering a substantial 5-10x improvement in TTS generation speed.</p>
    <h4>🧮 Visual explanation</h4>
    <div style="text-align: center;"><img src="/static/at_admin/lowvrammode.png" alt="How Low VRAM Works" /></div>

    <p><a href="#toc">Back to top of page</a></p>
    <hr>


    <h3 id="sillytavern-support"><strong>🏰 SillyTavern Support</strong></h3>
    <h4>Important note for Text-generation-webui users</h4>
    <p>You <strong><span style="color: #3366ff;">HAVE</span></strong> to disable
        <strong>Enable TTS</strong> within the Text-generation-webui AllTalk interface,
        otherwise&nbsp;Text-generation-webui will also generate TTS due to the way it sends out text. You can do this
        each time you start up&nbsp;Text-generation-webui or set it in the start-up settings at the top of this page.
    </p>
    <h4>🏰 Quick Tips</h4>
    <ul>
        <li>Only change DeepSpeed, Low VRAM or Model one at a time. Wait for it to say Ready before changing something
            else.</li>
        <li>You can permanently change the AllTalk startup settings or DeepSpeed, Low VRAM and Model at the top of this
            page.</li>
        <li>Different AI models use quotes and asterisks differently, so you may need to change "Text not inside"
            depending on model.</li>
        <li>Add new voice samples to the voices folder. You can Finetune a model to make it sound even closer to the
            original sample.</li>
        <li>DeepSpeed will make TTS processing 2-3x faster.</li>
        <li>Low VRAM can be very beneficial if you don't have much memory left after loading your LLM.</li>
    </ul>
    <h4>🏰 TTS Generation Methods in SillyTavern</h4>
    <p>You have two audio generation options: Streaming and Standard.</p>
    <p style="padding-left: 30px;">The Streaming Audio Generation method is designed for speed and
        is best suited for situations where you just want quick audio playback. This method, however, is limited to
        using just one voice per TTS generation request, which means it cannot use the AllTalk narrator function,
        making it a straightforward but less nuanced option.</p>
    <p>On the other hand, the Standard Audio Generation method provides
        a richer auditory experience. It's slightly slower than the Streaming method but compensates for this with its
        ability to split text into multiple voices. This functionality is particularly useful in scenarios where
        differentiating between character dialogues and narration can enhance the storytelling and delivery. The
        inclusion of the AllTalk narrator functionality in the Standard method allows for a more layered and immersive
        experience, making it ideal for content where depth and variety in voice narration add significant value.</p>
    <p>In summary, the choice between Streaming and Standard methods in
        AllTalk TTS depends on what you want. Streaming is great for quick and simple audio generation, while Standard
        is preferable for a more dynamic and engaging audio experience.</p>
    <ul>
        <li><strong>AllTalk TTS Generation Method:</strong>
            <ul>
                <li>Select between Standard and Streaming Audio Generation methods.</li>
                <li>This setting impacts the AllTalk narrator functionality.</li>
            </ul>
        </li>
        <li><strong>Language Selection:</strong><br />
            <ul>
                <li>Select your preferred TTS generation language from the "Language" dropdown.</li>
            </ul>
        </li>
        <li><strong>Model Switching:</strong>
            <ul>
                <li>Switch between different TTS models like API TTS, API Local, XTTSv2 Local, and optionally XTTSv2 FT
                    if you have a finetuned model available.</li>
                <li>Fine-tuned model availability (XTTSv2 FT) will only show when a finetuned model is detected by
                    AllTalk.</li>
                <li>See TTS Models/Methods for more information (though most people will want to stick with XTTSv2
                    Local).</li>
            </ul>
        </li>


        <li><strong>DeepSpeed and Low VRAM Options:</strong>
            <ul>
                <li>Optimize performance with DeepSpeed and Low VRAM settings.</li>
                <li>DeepSpeed can offer a 2-3x performance boost on TTS generation. (Requires installation)</li>
                <li>See the relevant sections in this documentation for details.</li>
            </ul>
        </li>
    </ul>
    <p>Changing the model, DeepSpeed or Low VRAM <span style="color: #0000ff;"><strong>each</strong></span> takes about
        15 seconds, so you should only change one at
        a time and wait for <span style="color: #99cc00;">Ready</span> before changing the next setting. To set these
        options long term, you can apply the settings at the top of this page.</p>
    <h4>🏰 AllTalk Narrator</h4>
    <p>Only available with the Standard Audio Generation method.</p>
    <ul>
        <li><strong>Narrator Voice Selection:</strong>
            <ul>
                <li>Allows users to choose different narrator voices.</li>
                <li>Access via the "Narrator Voice" dropdown.</li>
            </ul>
        </li>
        <li><strong>AllTalk Narrator:</strong>
            <ul>
                <li>Toggle the AllTalk narrator feature.</li>
                <li>Access via "AT Narrator" dropdown with Enabled/Disabled options.</li>
            </ul>
        </li>
        <li><strong>Text Outside Asterisks Handling:</strong>
            <ul>
                <li>Choose how text outside asterisks is interpreted (as Narrator or Character voice).</li>
                <li>Managed via the <code>Text Not Inside * or "</code> dropdown.</li>
                <li>Note: only available when the AllTalk Narrator is enabled.</li>
            </ul>
        </li>
    </ul>
    <h4>🏰 Usage Notes:</h4>
    <ul>
        <li>On startup of SillyTavern, it will pull your current settings from AllTalk (Current model, DeepSpeed status,
            Low VRAM status and Finetuned model availability).</li>
        <li>Enabling the narrator automatically unchecks certain checkboxes related to text handling.</li>
        <li>Changes in model or settings might trigger multiple requests to the server; patience is advised.</li>
    </ul>
    <h4>🏰 Troubleshooting:</h4>
    <ul>
        <li>If experiencing issues, use the Reload button in SillyTavern's TTS extension to reinitialize the connection
            to AllTalk and check if AllTalk is started correctly.</li>
    </ul>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="-good-to-know">🟩 Good to know</h3>

    <h4 id="🟩-a-note-on-character-cards--greeting-messages">🟩 A note on Character Cards, Greeting Messages &amp;
        Text-not-inside</h4>
    <details>
        <summary>Click to expand</summary>

        <p>Messages intended for the Narrator should be enclosed in asterisks <code>*</code> and those for the character
            inside quotation marks <code>&quot;</code>. However, AI systems often deviate from these rules, resulting in
            text that is neither in quotes nor asterisks. Sometimes, text may appear with only a single asterisk, and AI
            models may vary their formatting mid-conversation. For example, they might use asterisks initially and then
            switch to unmarked text. A properly formatted line should look like this:</p>
        <p><code>&quot;</code>Hey! I&#39;m so excited to finally meet you. I&#39;ve heard so many great things about you
            and I&#39;m eager to pick your brain about computers.<code>&quot;</code> <code>*</code>She walked across the
            room and picked up her cup of coffee<code>*</code></p>
        <p>Most narrator/character systems switch voices upon encountering an asterisk or quotation marks, which is
            somewhat effective. AllTalk has undergone several revisions in its sentence splitting and identification
            methods. While some irregularities and AI deviations in message formatting are inevitable, any line
            beginning or ending with an asterisk should now be recognized as Narrator dialogue. Lines enclosed in double
            quotes are identified as Character dialogue. For any other text, you can choose how AllTalk handles it:
            whether it should be interpreted as Character or Narrator dialogue (most AI systems tend to lean more
            towards one format when generating text not enclosed in quotes or asterisks).</p>
        <p>With improvements to the splitter/processor, I&#39;m confident it&#39;s functioning well. You can monitor
            what AllTalk identifies as Narrator lines on the command line and adjust its behavior if needed (Text Not
            Inside - Function).</p>
        <div style="text-align: center;"><img src="/static/at_admin/textnotinside.jpg"
                alt="When the AI doesn't use an asterisk or a quote" /></div>
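The splitting rules described above can be sketched in a few lines of Python. This is a simplified illustration only, not AllTalk's actual splitter; the function name and `fallback` parameter are invented for this example:

```python
def classify_line(text, fallback="character"):
    """Simplified sketch of the narrator/character rules described above.

    Not AllTalk's actual implementation; names here are illustrative only.
    """
    stripped = text.strip()
    # Any line beginning or ending with an asterisk -> Narrator
    if stripped.startswith("*") or stripped.endswith("*"):
        return "narrator"
    # Lines enclosed in double quotes -> Character
    if stripped.startswith('"') and stripped.endswith('"'):
        return "character"
    # Anything else is decided by the "Text Not Inside" setting
    return fallback

print(classify_line("*She walked across the room*"))    # narrator
print(classify_line('"Hey! I\'m so excited to meet you."'))  # character
print(classify_line("She smiled and sat down"))         # character (fallback)
```

Changing the "Text Not Inside" setting corresponds to changing the fallback branch: everything not caught by the asterisk or quote rules goes to whichever voice you selected.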
    </details>


    <h4 id="🟩-changing-alltalks-ip-address--accessing-alltalk-over-your-network">🟩 Changing AllTalk's IP address &amp;
        Accessing AllTalk over your Network</h4>
    <details>
        <summary>Click to expand</summary>

        <p>AllTalk is coded to start on 127.0.0.1, meaning it will ONLY be accessible from the local computer it is
            running on. If you want to make AllTalk available to other systems on your network, you will need to change
            its IP address to match your computer&#39;s current network IP address. There are two ways to change the IP
            address:</p>
        <ol>
            <li>Start AllTalk and edit the IP address within its web interface under the &quot;AllTalk Startup
                Settings&quot;.</li>
            <li>You can edit the <code>confignew.json</code>file in a text editor and change
                <code>&quot;ip_address&quot;: &quot;127.0.0.1&quot;,</code> to the IP address of your choosing.
            </li>
        </ol>
        <p>So, for example, if your computer&#39;s network card was on IP address 192.168.0.20, you would change
            AllTalk&#39;s setting to 192.168.0.20 and then <strong>restart</strong> AllTalk. You will need to ensure
            your machine stays on this IP address each time it is restarted, by setting your machine to have a static IP
            address.</p>
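For example, changing the address to 192.168.0.20 would mean editing the line so it reads as below (only the <code>"ip_address"</code> key is shown; the rest of <code>confignew.json</code> is omitted here):

```json
"ip_address": "192.168.0.20",
```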
    </details>

    <h4 id="🟩-text-geneneration-webui--stable-diffusion-plugin---load-order--stripped-text">🟩 Text-generation-webui
        &amp; Stable-Diffusion Plugin - Load Order &amp; stripped text</h4>
    <details>
        <summary>Click to expand</summary>

        <p>The Stable Diffusion plugin for Text-generation-webui <strong>strips out</strong> some of the text, which is
            passed to Stable Diffusion for image/scene generation. Because this text is stripped, it&#39;s important to
            consider the load order of the plugins to get the result you want. Let&#39;s assume the AI has just
            generated the following message:
            <code>*He walks into the room with a smile on his face and says* Hello how are you?</code>. The load order
            determines what text reaches AllTalk for generation, e.g.
        </p>
        <p><strong>SD Plugin loaded before AllTalk</strong> - Only <code>Hello how are you?</code> is sent to AllTalk,
            with the <code>*He walks into the room with a smile on his face and says*</code> being sent over to SD for
            image generation. Narration of the scene is not possible.<br><br>
            <strong>AllTalk loaded before SD Plugin</strong> -
            <code>*He walks into the room with a smile on his face and says* Hello how are you?</code> is sent to
            AllTalk with the <code>*He walks into the room with a smile on his face and says*</code> being sent over to
            SD for image generation.<br><br>
            The load order can be changed within Text-generation-webui&#39;s <code>settings.yaml</code> file or
            <code>cmd_flags.txt</code> (depending on how you are managing your extensions).<br><br>
            <img src="/static/at_admin/atandsdplugin.jpg" alt="image">
        </p>
    </details>

    <h4 id="🟩-i-want-to-know-more-about-the-xtts-ai-model-used">🟩 I want to know more about the XTTS AI model used
    </h4>
    <details>
        <summary>Click to expand</summary>

        <p>Currently the XTTS model is the main model used by AllTalk for TTS generation. If you want to know more
            about the XTTS model, its capabilities or its technical features, you can look at resources such as:
        </p>
        <ul>
            <li><a
                    href="https://docs.coqui.ai/en/latest/models/xtts.html">https://docs.coqui.ai/en/latest/models/xtts.html</a>
            </li>
            <li><a href="https://github.com/coqui-ai/TTS">https://github.com/coqui-ai/TTS</a></li>
            <li><a href="https://github.com/coqui-ai/TTS/discussions">https://github.com/coqui-ai/TTS/discussions</a>
            </li>
        </ul>
    </details>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="🟪-updating">🟪 Updating</h3>
    <p>Maintaining the latest version of your setup ensures access to new features and improvements. Below are the steps
        to update your installation, whether you&#39;re using Text-Generation-webui or running as a Standalone
        Application.</p>
    <details>
        <summary>UPDATING - Text-Generation-webui</summary>

        <p>The update process closely mirrors the installation steps. Follow these to ensure your setup remains current:
        </p>
        <ol>
            <li>
                <p><strong>Open a Command Prompt/Terminal</strong>:</p>
                <ul>
                    <li>Navigate to your Text-Generation-webui folder with:<ul>
                            <li><code>cd text-generation-webui</code></li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Start the Python Environment</strong>:</p>
                <ul>
                    <li>Activate the Python environment tailored for your operating system. Use the appropriate command
                        from below based on your OS:<ul>
                            <li>Windows: <code>cmd_windows.bat</code></li>
                            <li>Linux: <code>./cmd_linux.sh</code></li>
                            <li>macOS: <code>cmd_macos.sh</code></li>
                            <li>WSL (Windows Subsystem for Linux): <code>cmd_wsl.bat</code><br><br></li>
                        </ul>
                    </li>
                </ul>
                <blockquote>
                    <p>If you&#39;re unfamiliar with Python environments and wish to learn more, consider reviewing
                        <strong>Understanding Python Environments Simplified</strong> in the Help section.
                    </p>
                </blockquote>
            </li>
            <li>
                <p><strong>Navigate to the AllTalk TTS Folder</strong>:</p>
                <ul>
                    <li>Move into your extensions and then the alltalk_tts directory:<ul>
                            <li><code>cd extensions/alltalk_tts</code></li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Update the Repository</strong>:</p>
                <ul>
                    <li>Fetch the latest updates from the repository with:<ul>
                            <li><code>git pull</code></li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Install Updated Requirements</strong>:</p>
                <ul>
                    <li>Depending on your machine&#39;s OS, install the required dependencies using pip:<ul>
                            <li><strong>For Windows Machines</strong>:<ul>
                                    <li><code>pip install -r system\requirements\requirements_textgen.txt</code></li>
                                </ul>
                            </li>
                            <li><strong>For Linux/Mac</strong>:<ul>
                                    <li><code>pip install -r system/requirements/requirements_textgen.txt</code></li>
                                </ul>
                            </li>
                        </ul>
                    </li>
                </ul>
            </li>
        </ol>
    </details>

    <details>
        <summary>UPDATING - Standalone Application</summary>

        <p>For Standalone Application users, here&#39;s how to update your setup:</p>
        <ol>
            <li>
                <p><strong>Open a Command Prompt/Terminal</strong>:</p>
                <ul>
                    <li>Navigate to your AllTalk folder with:<ul>
                            <li><code>cd alltalk_tts</code></li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Access the Python Environment</strong>:</p>
                <ul>
                    <li>In a command prompt or terminal window, navigate to your <code>alltalk_tts</code> directory and
                        start the Python environment:<ul>
                            <li>Windows:<ul>
                                    <li><code>start_environment.bat</code></li>
                                </ul>
                            </li>
                            <li>Linux/macOS:<ul>
                                    <li><code>./start_environment.sh</code></li>
                                </ul>
                            </li>
                        </ul>
                    </li>
                </ul>
            </li>
        </ol>
        <blockquote>
            <p>If you&#39;re unfamiliar with Python environments and wish to learn more, consider reviewing
                <strong>Understanding Python Environments Simplified</strong> in the Help section.
            </p>
        </blockquote>
        <ol start="3">
            <li>
                <p><strong>Pull the Latest Updates</strong>:</p>
                <ul>
                    <li>Retrieve the latest changes from the repository with:<ul>
                            <li><code>git pull</code></li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Install Updated Requirements</strong>:</p>
                <ul>
                    <li>Depending on your machine&#39;s OS, install the required dependencies using pip:<ul>
                            <li><strong>For Windows Machines</strong>:<ul>
                                    <li><code>pip install -r system\requirements\requirements_standalone.txt</code></li>
                                </ul>
                            </li>
                            <li><strong>For Linux/Mac</strong>:<ul>
                                    <li><code>pip install -r system/requirements/requirements_standalone.txt</code></li>
                                </ul>
                            </li>
                        </ul>
                    </li>
                </ul>
            </li>
        </ol>
    </details>

    <h4 id="🟪-resolving-update-issues">🟪 Resolving Update Issues</h4>
    <p>If you encounter problems during or after an update, following these steps can help resolve the issue by
        refreshing your installation while preserving your data:</p>
    <details>
        <summary>RESOLVING - Text-Generation-webui</summary>

        <p>The process involves renaming your existing <code>alltalk_tts</code> directory, setting up a fresh instance,
            and then migrating your data:</p>
        <ol>
            <li>
                <p><strong>Rename Existing Directory</strong>:</p>
                <ul>
                    <li>First, rename your current <code>alltalk_tts</code> folder to back it up, e.g.,
                        <code>alltalk_tts.old</code>. This preserves any existing data.
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Open a Console/Terminal</strong>:</p>
                <ul>
                    <li>
                        <p>Navigate to the Text-generation-webui directory and start the Python environment appropriate
                            for your operating system:</p>
                        <ul>
                            <li><code>cd text-generation-webui</code></li>
                        </ul>
                    </li>
                    <li>
                        <p>Then use one of the following commands based on your OS:</p>
                        <ul>
                            <li>Windows: <code>cmd_windows.bat</code></li>
                            <li>Linux: <code>./cmd_linux.sh</code></li>
                            <li>macOS: <code>cmd_macos.sh</code></li>
                            <li>WSL (Windows Subsystem for Linux): <code>cmd_wsl.bat</code><br><br></li>
                        </ul>
                        <blockquote>
                            <p>If you&#39;re not familiar with Python environments, see <strong>Understanding Python
                                    Environments Simplified</strong> in the Help section for more info.</p>
                        </blockquote>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Clone the AllTalk TTS Repository</strong>:</p>
                <ul>
                    <li>Move into the <code>extensions</code> directory and clone a fresh copy of
                        <code>alltalk_tts</code>:<ul>
                            <li><code>cd extensions</code></li>
                            <li><code>git clone https://github.com/erew123/alltalk_tts</code></li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Install Requirements</strong>:</p>
                <ul>
                    <li>Navigate to the newly cloned <code>alltalk_tts</code> directory and install the necessary
                        dependencies for your system:<ul>
                            <li><code>cd alltalk_tts</code></li>
                        </ul>
                    </li>
                    <li>Depending on your machine&#39;s OS, install the required dependencies using pip:<ul>
                            <li><strong>For Windows Machines</strong>:<ul>
                                    <li><code>pip install -r system\requirements\requirements_textgen.txt</code></li>
                                </ul>
                            </li>
                            <li><strong>For Linux/Mac</strong>:<ul>
                                    <li><code>pip install -r system/requirements/requirements_textgen.txt</code></li>
                                </ul>
                            </li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Migrate Your Data</strong>:</p>
                <ul>
                    <li>Before starting the application, transfer the <code>models</code>, <code>voices</code>, and
                        <code>outputs</code> folders from <code>alltalk_tts.old</code> to the new
                        <code>alltalk_tts</code> directory. This action preserves your voice history and prevents the
                        need to re-download the model.
                    </li>
                </ul>
            </li>
        </ol>
        <p>You&#39;re now ready to launch Text-generation-webui. Note that you may need to reapply any previously saved
            configuration changes through the configuration page.</p>
        <ol start="6">
            <li><strong>Final Step</strong>:</li>
        </ol>
        <ul>
            <li>Once you&#39;ve verified that everything is working as expected and you&#39;re satisfied with the setup,
                feel free to delete the <code>alltalk_tts.old</code> directory to free up space.</li>
        </ul>
    </details>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="🔵🟢-deepspeed-installation-options">🔵🟢 DeepSpeed Installation Options</h3>
    <p><strong>DeepSpeed requires an Nvidia Graphics card</strong></p>
    <p>DeepSpeed provides a 2x-3x speed boost for Text-to-Speech and AI
        tasks. It's all about making AI and TTS happen faster and more efficiently.</p>
    <h4 id="🔵-linux-installation">🔵 Linux Installation</h4>
    <p>DeepSpeed requires access to the <strong>Nvidia CUDA Development Toolkit</strong> to compile on a Linux system.
        It&#39;s important to note that this toolkit is distinct and unrelated to your graphics card driver or the CUDA
        version the Python environment uses.</p>
    <details>
        <summary>Linux DeepSpeed - Text-generation-webui</summary>

        <h3 id="deepspeed-installation-for-text-generation-webui">DeepSpeed Installation for Text generation webUI</h3>
        <ol>
            <li>
                <p><strong>Nvidia CUDA Development Toolkit Installation</strong>:</p>
                <ul>
                    <li>The toolkit is crucial for DeepSpeed to compile/build for your version of Linux and requires
                        around 3GB of disk space.</li>
                    <li>Install using your package manager <strong>(Recommended)</strong> e.g. <strong>CUDA Toolkit
                            11.8</strong> or download directly from <a
                            href="https://developer.nvidia.com/cuda-toolkit-archive">Nvidia CUDA Toolkit Archive</a>
                        (choose 11.8 or 12.1 for Linux).</li>
                </ul>
            </li>
            <li>
                <p><strong>Open a Terminal Console</strong>:</p>
                <ul>
                    <li>After Nvidia CUDA Development Toolkit installation, access your terminal console.</li>
                </ul>
            </li>
            <li>
                <p><strong>Install libaio-dev</strong>:</p>
                <ul>
                    <li>
                        <p>Use your Linux distribution&#39;s package manager:</p>
                        <ul>
                            <li><code>sudo apt install libaio-dev</code> for Debian-based systems</li>
                            <li><code>sudo yum install libaio-devel</code> for RPM-based systems.</li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Navigate to Text generation webUI Folder</strong>:</p>
                <ul>
                    <li>Change directory to your Text generation webUI folder with
                        <code>cd text-generation-webui</code>.
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Activate Text generation webUI Custom Conda Environment</strong>:</p>
                <ul>
                    <li>Run <code>./cmd_linux.sh</code> to start the environment.<br><br></li>
                </ul>
                <blockquote>
                    <p>If you&#39;re unfamiliar with Python environments and wish to learn more, consider reviewing
                        <strong>Understanding Python Environments Simplified</strong> in the Help section.
                    </p>
                </blockquote>
            </li>
            <li>
                <p><strong>Set <code>CUDA_HOME</code> Environment Variable</strong>:</p>
                <ul>
                    <li>DeepSpeed locates the Nvidia toolkit using the <code>CUDA_HOME</code> environment variable.</li>
                    <li>You only need to set this temporarily, as Text generation webUI sets up its own CUDA_HOME
                        environment each time you use <code>./cmd_linux.sh</code> or <code>./start_linux.sh</code>.</li>
                </ul>
            </li>
            <li>
                <p><strong>Temporarily Configuring <code>CUDA_HOME</code></strong>:</p>
                <ul>
                    <li>
                        <p>When the Text generation webUI Python environment is active <strong>(step 5)</strong>, set
                            <code>CUDA_HOME</code>.
                        </p>
                    </li>
                    <li>
                        <ul>
                            <li><code>export CUDA_HOME=/usr/local/cuda</code></li>
                            <li><code>export PATH=${CUDA_HOME}/bin:${PATH}</code></li>
                            <li><code>export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH</code></li>
                        </ul>
                    </li>
                    <li>
                        <p>You can confirm the path is set correctly by running
                            <code>nvcc --version</code>, which should report
                            <code>Cuda compilation tools, release 11.8.</code>
                        </p>
                    </li>
                    <li>
                        <p>Incorrect path settings may lead to errors. If you encounter path issues or receive errors
                            like <code>[Errno 2] No such file or directory</code> when you run the next step, confirm
                            the path correctness or adjust as necessary.</p>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>DeepSpeed Installation</strong>:</p>
                <ul>
                    <li>Install DeepSpeed using <code>pip install deepspeed</code>.</li>
                </ul>
            </li>
            <li>
                <p><strong>Troubleshooting</strong>:</p>
                <ul>
                    <li>Troubleshooting steps for DeepSpeed installation can be found below.</li>
                    <li><strong>NOTE</strong>: You <strong>DO NOT</strong> need to set Text-generation-webui&#39;s
                        <strong>--deepspeed</strong> setting for AllTalk to be able to use DeepSpeed. These are two
                        completely separate things, and incorrectly setting that on Text-generation-webui may cause
                        other complications.
                    </li>
                </ul>
            </li>
        </ol>
    </details>
    <details>
        <summary>Linux DeepSpeed - Standalone Installation</summary>
        <h3 id="deepspeed-installation-for-standalone-alltalk">DeepSpeed Installation for Standalone AllTalk</h3>
        <ol>
            <li>
                <p><strong>Nvidia CUDA Development Toolkit Installation</strong>:</p>
                <ul>
                    <li>The toolkit is crucial for DeepSpeed to compile/build for your version of Linux and requires
                        around 3GB of disk space.</li>
                    <li>Install using your package manager <strong>(Recommended)</strong> e.g. <strong>CUDA Toolkit
                            11.8</strong> or download directly from <a
                            href="https://developer.nvidia.com/cuda-toolkit-archive">Nvidia CUDA Toolkit Archive</a>
                        (choose 11.8 or 12.1 for Linux).</li>
                </ul>
            </li>
            <li>
                <p><strong>Open a Terminal Console</strong>:</p>
                <ul>
                    <li>After Nvidia CUDA Development Toolkit installation, access your terminal console.</li>
                </ul>
            </li>
            <li>
                <p><strong>Install libaio-dev</strong>:</p>
                <ul>
                    <li>
                        <p>Use your Linux distribution&#39;s package manager:</p>
                        <ul>
                            <li><code>sudo apt install libaio-dev</code> for Debian-based systems</li>
                            <li><code>sudo yum install libaio-devel</code> for RPM-based systems.</li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Navigate to AllTalk TTS Folder</strong>:</p>
                <ul>
                    <li>Change directory to your AllTalk TTS folder with <code>cd alltalk_tts</code>.</li>
                </ul>
            </li>
            <li>
                <p><strong>Activate AllTalk Custom Conda Environment</strong>:</p>
                <ul>
                    <li>Run <code>./start_environment.sh</code> to start the AllTalk Python environment.</li>
                    <li>This command will start the custom Python environment that was installed with
                        <code>./atsetup.sh</code>.<br><br>
                    </li>
                </ul>
                <blockquote>
                    <p>If you&#39;re unfamiliar with Python environments and wish to learn more, consider reviewing
                        <strong>Understanding Python Environments Simplified</strong> in the Help section.
                    </p>
                </blockquote>
            </li>
            <li>
                <p><strong>Set <code>CUDA_HOME</code> Environment Variable</strong>:</p>
                <ul>
                    <li>The DeepSpeed installation routine locates the Nvidia toolkit using the <code>CUDA_HOME</code>
                        environment variable. This can be set temporarily for a session or permanently, depending on
                        other requirements you may have for other Python/System environments.</li>
                    <li>For temporary use, proceed to <strong>step 8</strong>. For a permanent solution, see <a
                            href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#set-env-vars">Conda&#39;s
                            manual on setting environment variables</a>.</li>
                </ul>
            </li>
            <li>
                <p><strong>(Optional) Permanent <code>CUDA_HOME</code> Setup</strong>:</p>
                <ul>
                    <li>If you choose to set <code>CUDA_HOME</code> permanently, follow the instructions in the provided
                        Conda manual link above.</li>
                </ul>
            </li>
            <li>
                <p><strong>Configuring <code>CUDA_HOME</code></strong>:</p>
                <ul>
                    <li>
                        <p>When your Python environment is active <strong>(step 5)</strong>, set <code>CUDA_HOME</code>.
                        </p>
                    </li>
                    <li>
                        <ul>
                            <li><code>export CUDA_HOME=/usr/local/cuda</code></li>
                            <li><code>export PATH=${CUDA_HOME}/bin:${PATH}</code></li>
                            <li><code>export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH</code></li>
                        </ul>
                    </li>
                    <li>
                        <p>You can confirm the path is set correctly by running
                            <code>nvcc --version</code>, which should report
                            <code>Cuda compilation tools, release 11.8.</code>
                        </p>
                    </li>
                    <li>
                        <p>Incorrect path settings may lead to errors. If you encounter path issues or receive errors
                            like <code>[Errno 2] No such file or directory</code> when you run the next step, confirm
                            the path correctness or adjust as necessary.</p>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>DeepSpeed Installation</strong>:</p>
                <ul>
                    <li>Install DeepSpeed using <code>pip install deepspeed</code>.</li>
                </ul>
            </li>
            <li>
                <p><strong>Starting AllTalk TTS WebUI</strong>:</p>
                <ul>
                    <li>Launch the AllTalk TTS interface with <code>./start_alltalk.sh</code> and enable DeepSpeed.</li>
                </ul>
            </li>
        </ol>
        <h3 id="troubleshooting">Troubleshooting</h3>
        <ul>
            <li>If setting <code>CUDA_HOME</code> results in path duplication errors (e.g.,
                <code>.../bin/bin/nvcc</code>), you can correct this by unsetting <code>CUDA_HOME</code> with
                <code>unset CUDA_HOME</code> and then adding the correct path to your system&#39;s PATH variable.
            </li>
            <li>Always verify paths and compatibility with other CUDA-dependent applications to avoid conflicts.</li>
            <li>If you have multiple versions of the Nvidia CUDA Development Toolkit installed, you will have to specify
                the version number in step 8 for the CUDA_HOME path.</li>
            <li>If it becomes necessary to uninstall DeepSpeed, you can do so by starting the Python environment and
                then running <code>pip uninstall deepspeed</code><br><br></li>
        </ul>
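As a worked example of the multiple-toolkit case above, the exports might look like this. The version-suffixed path <code>/usr/local/cuda-11.8</code> is an assumption for illustration; substitute whatever path your toolkit actually installed to:

```shell
# Select a specific toolkit when more than one is installed.
# /usr/local/cuda-11.8 is an example path - adjust to your install.
export CUDA_HOME=/usr/local/cuda-11.8
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH:-}
echo "$CUDA_HOME"
```

Running <code>nvcc --version</code> afterwards should report the toolkit version you pointed <code>CUDA_HOME</code> at.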
    </details>

    <h4 id="🟢-windows-installation">🟢 Windows Installation</h4>
    <p>You have two options for setting up DeepSpeed on Windows: pre-compiled wheel files for specific Python, CUDA and
        PyTorch builds, or manually compiling DeepSpeed.</p>
    <details>
        <summary>Windows DeepSpeed - Pre-Compiled Wheels (Quick and Easy)</summary>

        <h3 id="deepspeed-installation-with-pre-compiled-wheels">DeepSpeed Installation with Pre-compiled Wheels</h3>
        <ol>
            <li>
                <p><strong>Introduction to Pre-compiled Wheels</strong>:</p>
                <ul>
                    <li>The <code>atsetup.bat</code> utility simplifies the installation of DeepSpeed by automatically
                        downloading and installing pre-compiled wheel files. These files are tailored for
                        <strong>specific</strong> versions of Python, CUDA, and PyTorch, ensuring compatibility with
                        both the <strong>Standalone Installation</strong> and a standard build of
                        <strong>Text-generation-webui</strong>.
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Manual Installation of Pre-compiled Wheels</strong>:</p>
                <ul>
                    <li>If needed, pre-compiled DeepSpeed wheel files that I have built are available on the <a
                            href="https://github.com/erew123/alltalk_tts/releases">Releases Page</a>. You can manually
                        install or uninstall these wheels using the following commands:<ul>
                            <li>Installation: <code>pip install {deep-speed-wheel-file-name-here}</code></li>
                            <li>Uninstallation: <code>pip uninstall deepspeed</code></li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Using <code>atsetup.bat</code> for Simplified Management</strong>:</p>
                <ul>
                    <li>For those running the Standalone Installation or a standard build of Text-generation-webui, the
                        <code>atsetup.bat</code> utility offers the simplest and most efficient way to manage DeepSpeed
                        installations on Windows.
                    </li>
                </ul>
            </li>
        </ol>
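After installing or uninstalling a wheel, a quick way to confirm whether DeepSpeed is present in the active Python environment (a generic standard-library check, not part of atsetup):

```python
import importlib.util

# find_spec returns a module spec if the package is installed
# in this environment, or None if it is not.
spec = importlib.util.find_spec("deepspeed")
print("DeepSpeed installed:", spec is not None)
```

Run this from the same Python environment you installed the wheel into, otherwise it will report on the wrong environment.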
    </details>

    <details>
        <summary>Windows DeepSpeed - Manual Compilation</summary>

        <h3 id="manual-deepspeed-wheel-compilation">Manual DeepSpeed Wheel Compilation</h3>
        <ol>
            <li>
                <p><strong>Preparation for Manual Compilation</strong>:</p>
                <ul>
                    <li>Manual compilation of DeepSpeed wheels is an advanced process that requires:<ul>
                            <li><strong>1-2 hours</strong> of your time for initial setup and compilation.</li>
                            <li><strong>6-10GB</strong> of disk space on your computer.</li>
                            <li>A solid technical understanding of Windows environments and Python.</li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Understanding Wheel Compatibility</strong>:</p>
                <ul>
                    <li>A compiled DeepSpeed wheel is uniquely tied to the specific versions of Python, PyTorch, and
                        CUDA used during its compilation. If any of these versions are changed, you will need to compile
                        a new DeepSpeed wheel to ensure compatibility.</li>
                </ul>
            </li>
            <li>
                <p><strong>Compiling DeepSpeed Resources</strong>:</p>
                <ul>
                    <li>Myself and <a href="https://github.com/S95Sedan">@S95Sedan</a> have worked to simplify the
                        compilation process. <a href="https://github.com/S95Sedan">@S95Sedan</a> has notably improved
                        the process for later versions of DeepSpeed, ensuring ease of build on Windows.</li>
                    <li>Because <a href="https://github.com/S95Sedan">@S95Sedan</a> is now maintaining the instructions
                        for compiling DeepSpeed on Windows, please visit <a
                            href="https://github.com/S95Sedan">@S95Sedan</a>&#39;s<br><a
                            href="https://github.com/S95Sedan/Deepspeed-Windows">DeepSpeed GitHub page</a>.</li>
                </ul>
            </li>
        </ol>
    </details>
    <h4> 🔵🟢 DeepSpeed Performance Example</h4>
    <div style="text-align: center;"><img src="/static/at_admin/deepspeedexample.jpg" alt="DeepSpeed on vs off" /></div>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="🆘-support-requests-troubleshooting--feature-requests">🆘 Support Requests, Troubleshooting &amp; Feature
        requests</h3>
    <p>I&#39;m thrilled to see the enthusiasm and engagement with AllTalk! Your feedback and questions are invaluable,
        helping to make this project even better. To ensure everyone gets the help they need efficiently, please
        consider the following before submitting a support request:</p>
    <p><strong>Consult the Documentation:</strong> A comprehensive guide and FAQ sections (below) are available to help
        you navigate AllTalk. Many common questions and troubleshooting steps are covered here.</p>
    <p><strong>Search Past Discussions:</strong> Your issue or question might already have been addressed in the
        discussions area or <a href="https://github.com/erew123/alltalk_tts/issues?q=is%3Aissue+is%3Aclosed">closed
            issues</a>. Please use the search function to see if there&#39;s an existing solution or advice that applies
        to your situation.</p>
    <p><strong>Bug Reports:</strong> If you&#39;ve encountered what you believe is a bug, please first check the <a
            href="https://github.com/erew123/alltalk_tts/issues/25">Updates &amp; Bug Fixes List</a> to see if it&#39;s
        a known issue or one that&#39;s already been resolved. If not, I encourage you to report it by raising a bug
        report in the <a href="https://github.com/erew123/alltalk_tts/issues">Issues section</a>, providing as much
        detail as possible to help identify and fix the issue.</p>
    <p><strong>Feature Requests:</strong> The current Feature request list can be <a
            href="https://github.com/erew123/alltalk_tts/discussions/74">found here</a>. I love hearing your ideas for
        new features! While I can&#39;t promise to implement every suggestion, I do consider all feedback carefully.
        Please share your thoughts in the <a href="https://github.com/erew123/alltalk_tts/discussions">Discussions
            area</a> or via a Feature Request in the <a href="https://github.com/erew123/alltalk_tts/issues">Issues
            section</a>. </p>

    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="🟨-help-with-problems">🟨 Help with problems</h3>
    <h4 id="🔄-minor-updatesbug-fixes-list-can-be-found-here">&nbsp;&nbsp;&nbsp;&nbsp; 🔄 <strong>Minor updates/bug
            fixes list</strong> can be found <a href="https://github.com/erew123/alltalk_tts/issues/25">here</a></h4>
    <h4>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Please check GitHub for the most up-to-date help list <a href="https://github.com/erew123/alltalk_tts#-help-with-problems">here</a></h4>
    <details>
        <summary>🟨 How to make a diagnostics report file</summary>

        <p>If you are on a Windows or Linux machine, you should be able to use the <code>atsetup.bat</code> or
            <code>./atsetup.sh</code> utility to create a diagnostics file. If you are unable to use the
            <code>atsetup</code> utility, please follow the instructions below.
        </p>
        <details>
            <summary><strong>Manually making a diagnostics report file</strong></summary>

            <ol>
                <li>Open a command prompt window and start the Python environment. Depending on your setup
                    (Text-generation-webui or Standalone AllTalk), the steps to start the Python environment vary:<br>
                </li>
            </ol>
            <ul>
                <li>
                    <p><strong>For Text-generation-webui Users</strong>:</p>
                    <ul>
                        <li>Navigate to the Text-generation-webui directory:<ul>
                                <li><code>cd text-generation-webui</code></li>
                            </ul>
                        </li>
                        <li>Start the Python environment suitable for your OS:<ul>
                                <li>Windows: <code>cmd_windows.bat</code></li>
                                <li>Linux: <code>./cmd_linux.sh</code></li>
                                <li>macOS: <code>cmd_macos.sh</code></li>
                                <li>WSL (Windows Subsystem for Linux): <code>cmd_wsl.bat</code></li>
                            </ul>
                        </li>
                        <li>Move into the AllTalk directory:<ul>
                                <li><code>cd extensions/alltalk_tts</code></li>
                            </ul>
                        </li>
                    </ul>
                </li>
                <li>
                    <p><strong>For Standalone AllTalk Users</strong>:</p>
                    <ul>
                        <li>Navigate to the <code>alltalk_tts</code> folder:<ul>
                                <li><code>cd alltalk_tts</code></li>
                            </ul>
                        </li>
                        <li>Start the Python environment:<ul>
                                <li>Windows: <code>start_environment.bat</code></li>
                                <li>Linux: <code>./start_environment.sh</code><br><br></li>
                            </ul>
                        </li>
                    </ul>
                    <blockquote>
                        <p>If you&#39;re unfamiliar with Python environments and wish to learn more, consider reviewing
                            <strong>Understanding Python Environments Simplified</strong> in the Help section.
                        </p>
                    </blockquote>
                </li>
            </ul>
            <ol start="2">
                <li>
                    <p>Run the diagnostics and select the requirements file name you installed AllTalk with:<br></p>
                    <ul>
                        <li><code>python diagnostics.py</code></li>
                    </ul>
                </li>
                <li>
                    <p>You will see an on-screen output showing your environment settings, the file versions requested
                        vs. what is installed, and details of your graphics card (if Nvidia). This will also create a
                        file called <code>diagnostics.log</code> in the <code>alltalk_tts</code> folder, which you can
                        upload if you need to create a support ticket.<br><br></p>
                </li>
            </ol>
            <div style="text-align: center;"><img src="/static/at_admin/3c6fde566d6a.jpg" alt="Diagnostics" /></div><br>
        </details>
    </details>
    <h4 id="installation-and-setup-issues">Installation and Setup Issues</h4>
    <details>
        <summary>🟨 Understanding Python Environments Simplified</summary>

        <p>Think of Python environments like different rooms in your house, each designed for a specific purpose. Just
            as you wouldn&#39;t cook in the bathroom or sleep in the kitchen, different Python applications need their
            own &quot;spaces&quot; or environments because they have unique requirements. Sometimes, these requirements
            can clash with those of other applications (imagine trying to cook a meal in a bathroom!). To avoid this,
            you can create separate Python environments.</p>
        <h4 id="why-separate-environments">Why Separate Environments?</h4>
        <p>Separate environments, like separate rooms, keep everything organized and prevent conflicts. For instance,
            one Python application might need a specific version of a library or dependency, while another requires a
            different version. Just as you wouldn&#39;t store kitchen utensils in the bathroom, you wouldn&#39;t want
            these conflicting requirements to interfere with each other. Each environment is tailored and customized for
            its application, ensuring it has everything it needs without disrupting others.</p>
        <h4 id="how-it-works-in-practice">How It Works in Practice:</h4>
        <p><strong>Standalone AllTalk Installation:</strong> When you install AllTalk standalone, it&#39;s akin to
            adding a new room to your house specifically designed for your AllTalk activities. The setup process, using
            the atsetup utility, constructs this custom &quot;room&quot; (Python environment
            <code>alltalk_environment</code>) with all the necessary tools and furnishings (libraries and dependencies)
            that AllTalk needs to function smoothly, without meddling with the rest of your &quot;house&quot; (computer
            system). The AllTalk environment is started each time you run <code>start_alltalk</code> or
            <code>start_environment</code> within the AllTalk folder.
        </p>
        <p><strong>Text-generation-webui Installation:</strong> Similarly, installing Text-generation-webui is like
            setting up another specialized room. Upon installation, it automatically creates its own tailored
            environment, equipped with everything required for text generation, ensuring a seamless and conflict-free
            operation. The Text-generation-webui environment is started each time you run
            <code>start_*your-os-version*</code> or <code>cmd_*your-os-version*</code> within the Text-generation-webui
            folder.
        </p>
        <h4 id="managing-environments">Managing Environments:</h4>
        <p>Just as you might renovate a room or bring in new furniture, you can also update or modify Python
            environments as needed. Tools like Conda or venv make it easy to manage these environments, allowing you to
            create, duplicate, activate, or delete them much like how you might manage different rooms in your house for
            comfort and functionality.</p>
        <p>Once you&#39;re in the right environment, by activating it, installing or updating dependencies (the tools
            and furniture of your Python application) is straightforward. Using pip, a package installer for Python, you
            can easily add what you need. For example, to install all required dependencies listed in a requirements.txt
            file, you&#39;d use:</p>
        <p><code>pip install -r requirements.txt</code></p>
        <p>This command tells pip to read the list of required packages and versions from the requirements.txt file and
            install them in the current environment, ensuring your application has everything it needs to operate.
            It&#39;s like having a shopping list for outfitting a room and ensuring you have all the right items
            delivered and set up.</p>
        <p>Remember, just as it&#39;s important to use the right tools for tasks in different rooms of your house,
            it&#39;s crucial to manage your Python environments and dependencies properly to ensure your applications
            run as intended.</p>
        <h4 id="how-do-i-know-if-i-am-in-a-python-environment">How do I know if I am in a Python environment?:</h4>
        <p>When a Python environment starts up, it changes the command prompt to show the Python environment that is
            currently running within that terminal/console.</p>
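You can also confirm from inside Python which environment is active (generic Python, not AllTalk-specific); in a virtual environment, sys.prefix points at the environment folder rather than the base Python install:

```python
import sys

# The active environment's root folder and the exact interpreter in use.
print(sys.prefix)
print(sys.executable)
```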
        <div style="text-align: center;"><img src="/static/at_admin/pythonenvironment.jpg" alt="Python Environment" /></div><br>
    </details>

    <details>
        <summary>🟨 Windows &amp; Python requirements for compiling packages <strong>(ERROR: Could not build wheels for
                TTS)</strong></summary>

        <p><code>ERROR: Microsoft Visual C++ 14.0 or greater is required</code> or
            <code>ERROR: Could not build wheels for TTS.</code> or
            <code>ModuleNotFoundError: No module named &#39;TTS&#39;</code>
        </p>
        <p> Python requires that you install C++ development tools on Windows. This is detailed on the <a
                href="https://wiki.python.org/moin/WindowsCompilers">Python site here</a>. You would need to install
            <code>MSVCv142 - VS 2019 C++ x64/x86 build tools</code> and <code>Windows 10/11 SDK</code> from the C++
            Build tools section.
        </p>
        <p> You can get hold of the <strong>Community</strong> edition <a
                href="https://visualstudio.microsoft.com/downloads/">here</a>. During installation, select
            <code>C++ Build tools</code> and then <code>MSVCv142 - VS 2019 C++ x64/x86 build tools</code> and
            <code>Windows 10/11 SDK</code>.
        </p>
        <p><img src="/static/at_admin/pythonrequirementswindows.jpg" alt="image"></p>
    </details>
    <details>
        <summary>🟨 Standalone Install - start_{youros}.xx opens and closes instantly and AllTalk doesn't start</summary>

        <p>This is more than likely caused by having a <code>-</code> in your folder path e.g.
            <code>c:\myfiles\alltalk_tts-main</code>. In this circumstance you would be best renaming the folder to
            remove the <code>-</code> from its name e.g. <code>c:\myfiles\alltalk_tts</code>, delete the
            <code>alltalk_environment</code> folder and <code>start_alltalk.bat</code> or <code>start_alltalk.sh</code>
            and then re-run <code>atsetup</code> to re-create the environment and startup files.
        </p>
    </details>
    <details>
        <summary>🟨 I think AllTalk's requirements file has installed something another extension doesn't like</summary>

        <p>I've paid very close attention to <strong>not</strong> impacting what Text-generation-webui requests on a
            factory install. This is one of the requirements of submitting an extension to Text-generation-webui. If you
            want to look at a comparison of a factory-fresh Text-generation-webui's installed packages (with CUDA 12.1,
            though AllTalk&#39;s requirements were set on CUDA 11.8) you can find that comparison <a
                href="https://github.com/erew123/alltalk_tts/issues/23">here</a>. This comparison shows that AllTalk
            requests the same package version numbers as Text-generation-webui, or even lower version numbers (meaning
            AllTalk will not update them to a later version). What other extensions do, I can't really account for.
        </p>
        <p>I will note that the TTS engine downgrades Pandas to 1.5.3, though it&#39;s unlikely to cause any
            issues. You can upgrade it back to the Text-generation-webui default (December 2023) with
            <code>pip install pandas==2.1.4</code> from inside the Python environment. I have noticed no ill effects
            from it being a lower or higher version, as far as AllTalk goes. This is also the same behaviour as the
            Coqui_tts extension that comes with Text-generation-webui.
        </p>
        <p>Other people are reporting issues with extensions not starting with errors about Pydantic e.g.
            <code>pydantic.errors.PydanticImportError: BaseSettings` has been moved to the pydantic-settings package. See https://docs.pydantic.dev/2.5/migration/#basesettings-has-moved-to-pydantic-settings for more details.</code>
        </p>
        <p>I'm not sure if the Pydantic version has recently been updated by the Text-generation-webui installer, but
            this is nothing to do with AllTalk. The other extension you are having an issue with needs to be updated to
            work with Pydantic 2.5.x. AllTalk was updated in mid-December to work with 2.5.x. I am not specifically
            condoning doing this, as it may have other knock-on effects, but within the text-gen Python environment you
            can use <code>pip install pydantic==2.5.0</code> or <code>pip install pydantic==1.10.13</code> to change the
            version of Pydantic installed.</p>
    </details>
    <details>
        <summary>🟨 I am having problems getting AllTalk to start after changing settings or making a custom setup/model
            setup.</summary>

        <p>I would suggest following <a href="https://github.com/erew123/alltalk_tts#-problems-updating">Problems
                Updating</a> and if you still have issues after that, you can raise an issue <a
                href="https://github.com/erew123/alltalk_tts/issues">here</a></p>
    </details>

    <h4 id="networking-and-access-issues">Networking and Access Issues</h4>
    <details>
        <summary>🟨 I cannot access AllTalk from another machine on my Network</summary>

        <p>You will need to change the IP address within AllTalk&#39;s settings from 127.0.0.1, which only allows
            access from the local machine it is installed on. To do this, please see <a
                href="https://github.com/erew123/alltalk_tts/tree/main?tab=readme-ov-file#-changing-alltalks-ip-address--accessing-alltalk-over-your-network">Changing
                AllTalk's IP address &amp; Accessing AllTalk over your Network</a> at the top of this page.</p>
        <p>You may also need to allow access through your firewall or Antivirus package to AllTalk.</p>
    </details>

    <details>
        <summary>🟨 I am running a Headless system and need to change the IP Address manually as I cannot reach the
            config page</summary>

        <p>To do this you can edit the <code>confignew.json</code> file within the <code>alltalk_tts</code> folder. You
            would look for <code>&quot;ip_address&quot;: &quot;127.0.0.1&quot;,</code> and change the
            <code>127.0.0.1</code> to your chosen IP address, then save the file and start AllTalk.<br><br>
        </p>
        <p>When doing this, be careful not to impact the formatting of the JSON file. Worst case, you can re-download a
            fresh copy of <code>confignew.json</code> from this website and that will put you back to a factory setting.
        </p>
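If you would rather not hand-edit the JSON, a small helper like this sidesteps formatting mistakes entirely (the helper name and the example IP are illustrative; confignew.json and ip_address are the real file and key):

```python
import json
from pathlib import Path

def set_config_value(path, key, value):
    """Change one key in a JSON config and write it back as valid JSON."""
    cfg = Path(path)
    data = json.loads(cfg.read_text())
    data[key] = value
    cfg.write_text(json.dumps(data, indent=4))

# Example (run from inside the alltalk_tts folder):
# set_config_value("confignew.json", "ip_address", "192.168.1.20")
```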
    </details>

    <h4 id="configuration-and-usage-issues">Configuration and Usage Issues</h4>
    <details>
        <summary>🟨 I activated DeepSpeed in the settings page, but I didn't install DeepSpeed yet and now I have issues
            starting up</summary>

        <p>You can either follow <a href="https://github.com/erew123/alltalk_tts#-problems-updating">Problems
                Updating</a> and fresh install your config, or you can edit the <code>confignew.json</code> file within
            the <code>alltalk_tts</code> folder. Look for <code>&quot;deepspeed_activate&quot;: true,</code> and change
            <code>true</code> to <code>false</code>, i.e. <code>&quot;deepspeed_activate&quot;: false,</code>, then
            save the file and try starting again.</p>
        <p>If you want to use DeepSpeed, you need an Nvidia Graphics card and to install DeepSpeed on your system.
            Instructions are <a href="https://github.com/erew123/alltalk_tts#-deepspeed-installation-options">here</a>
        </p>
    </details>

    <details>
        <summary>🟨 I am having problems updating/some other issue where it won't start up/I'm sure this is a bug
        </summary>

        <p>Please see <a href="https://github.com/erew123/alltalk_tts#-problems-updating">Problems Updating</a>. If that
            doesn't help, you can raise a ticket <a href="https://github.com/erew123/alltalk_tts/issues">here</a>. It
            would be handy to have any log files from the console where your error is being shown. I can only loosely
            support custom-built Python environments and give general pointers. Please create a
            <code>diagnostics.log</code> report file to submit with a support request.
        </p>
        <p>Also, is your text-generation-webui up to date? <a
                href="https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install">instructions
                here</a></p>
    </details>

    <details>
        <summary>🟨 I see some red "asyncio" messages</summary>

        <p>As far as I am aware, these relate to the Chrome browser and the Gradio interface that text-generation-webui
            uses. I raised an issue about this on the text-generation-webui <a
                href="https://github.com/oobabooga/text-generation-webui/issues/4788">here</a>, where you can see that
            the messages persist even when AllTalk is not loaded. Either way, this is more a warning than an actual
            issue, so it shouldn't affect any functionality of either AllTalk or text-generation-webui; the messages
            are more of an annoyance.</p>
    </details>

    <h4 id="performance-and-compatibility-issues">Performance and Compatibility Issues</h4>
    <details>
        <summary>🟨 Warning TTS Subprocess has NOT started up yet, Will keep trying for 120 seconds maximum. Please
            wait. It times out after 120 seconds.</summary><br>
        When the subprocess is starting, two things are occurring:<br>

        <p><strong>A)</strong> It's trying to load the voice model into your graphics card's VRAM (assuming you have an
            Nvidia graphics card, otherwise into your system RAM).<br>
            <strong>B)</strong> It's trying to start up the mini-webserver and send the &quot;ready&quot; signal back to
            the main process.
        </p>
        <p>Before giving other possibilities a go, some people with <strong>old machines</strong> are finding their
            startup times are <strong>very</strong> slow, 2-3 minutes. I've extended the allowed time within the script
            from 1 minute to 2 minutes. <strong>If you have an older machine</strong> and wish to try extending this
            further, you can do so by editing <code>script.py</code> and changing <code>startup_wait_time = 120</code>
            (120 seconds, aka 2 minutes) at the top of the script.py file, to a larger value e.g.
            <code>startup_wait_time = 240</code> (240 seconds, aka 4 minutes).
        </p>
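For example, to allow four minutes, the edited line near the top of script.py would read:

```python
# Value is in seconds; the shipped default is 120 (2 minutes).
startup_wait_time = 240  # allow up to 4 minutes for slow machines
```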
        <p><strong>Note:</strong> If you need to create a support ticket, please create a <code>diagnostics.log</code>
            report file to submit with a support request. Details on doing this are above.</p>
        <p>Other possibilities for this issue are:</p>
        <ol>
            <li>
                <p>You are starting AllTalk in both your <code>CMD FLAG.txt</code> and <code>settings.yaml</code> file.
                    The <code>CMD FLAG.txt</code> you would have manually edited and the <code>settings.yaml</code> is
                    the one you change and save in the <code>session</code> tab of text-generation-webui and you can
                    <code>Save UI defaults to settings.yaml</code>. Please only have one of those two starting up
                    AllTalk.
                </p>
            </li>
            <li>
                <p>You are not starting text-generation-webui with its normal Python environment. Please start it with
                    start_{your OS version} as detailed <a
                        href="https://github.com/oobabooga/text-generation-webui#how-to-install">here</a>
                    (<code>start_windows.bat</code>,<code>./start_linux.sh</code>, <code>start_macos.sh</code> or
                    <code>start_wsl.bat</code>) OR (<code>cmd_windows.bat</code>, <code>./cmd_linux.sh</code>,
                    <code>cmd_macos.sh</code> or <code>cmd_wsl.bat</code> and then <code>python server.py</code>).
                </p>
            </li>
            <li>
                <p>You have installed the wrong version of DeepSpeed on your system, for the wrong version of
                    Python/Text-generation-webui. You can go to your text-generation-webui folder in a terminal/command
                    prompt and run the correct cmd version for your OS e.g. (<code>cmd_windows.bat</code>,
                    <code>./cmd_linux.sh</code>, <code>cmd_macos.sh</code> or <code>cmd_wsl.bat</code>) and then you can
                    type <code>pip uninstall deepspeed</code> and then try loading it again. If that works, please see
                    the correct instructions for installing DeepSpeed <a
                        href="https://github.com/erew123/alltalk_tts#-deepspeed-installation-options">here</a>.
                </p>
            </li>
            <li>
                <p>You have an old version of text-generation-webui (pre-Dec 2023). I have not tested on older versions
                    of text-generation-webui, so cannot confirm viability there. For instructions on
                    updating the text-generation-webui, please look <a
                        href="https://github.com/oobabooga/text-generation-webui#how-to-install">here</a>
                    (<code>update_linux.sh</code>, <code>update_windows.bat</code>, <code>update_macos.sh</code>, or
                    <code>update_wsl.bat</code>).
                </p>
            </li>
            <li>
                <p>You already have something running on port 7851 on your computer, so the mini-webserver can't start
                    on that port. You can change this port number by editing the <code>confignew.json</code> file and
                    changing <code>&quot;port_number&quot;: &quot;7851&quot;</code> to
                    <code>&quot;port_number&quot;: &quot;7602&quot;</code> or any port number you wish that isn't
                    reserved. Only change the number and save the file; do not change the formatting of the document.
                    This will at least rule out something else clashing on the same port number.
                </p>
            </li>
            <li>
                <p>You have antivirus/firewalling that is blocking that port from being accessed. If you had to do
                    something to allow text-generation-webui through your antivirus/firewall, you will have to do that
                    for this too.</p>
            </li>
            <li>
                <p>You have quite old graphics drivers and may need to update them.</p>
            </li>
            <li>
                <p>Something within text-generation-webui is not playing nicely for some reason. You can go to your
                    text-generation-webui folder in a terminal/command prompt and run the correct cmd version for your
                    OS e.g. (<code>cmd_windows.bat</code>, <code>./cmd_linux.sh</code>, <code>cmd_macos.sh</code> or
                    <code>cmd_wsl.bat</code>) and then you can type <code>python extensions\alltalk_tts\script.py</code>
                    and see if AllTalk starts up correctly. If it does then something else is interfering.
                </p>
            </li>
            <li>
                <p>Something else is already loaded into your VRAM, or there is a crashed Python process. Either check
                    your task manager for erroneous Python processes or restart your machine and try again.</p>
            </li>
            <li>
                <p>You are running DeepSpeed on a Linux machine and although you are starting with
                    <code>./start_linux.sh</code> AllTalk is failing there on starting. This is because
                    text-generation-webui will overwrite some environment variables when it loads its Python
                    environment. To see if this is the problem, from a terminal go into your text-generation-webui
                    folder and <code>./cmd_linux.sh</code> then set your environment variable again e.g.
                    <code>export CUDA_HOME=/usr/local/cuda</code> (this may vary depending on your OS, but this is the
                    standard one for Linux, and assuming you have installed the CUDA toolkit), then
                    <code>python server.py</code> and see if it starts up. If you want to edit the environment
                    permanently, you can do so; I have not managed to write full instructions yet, but here is the conda
                    guide <a
                        href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#set-env-vars">here</a>.
                </p>
            </li>
            <li>
<p>You have built yourself a custom Python environment and something is wrong with it. This is very
    hard to diagnose as it's not a standard environment. You may want to try updating
    text-generation-webui and reinstalling its requirements file (whichever one you use that comes
    down with text-generation-webui).
</p>
</li>
</ol>
</details>
    <details>
<summary>🟨 I have multiple GPUs and I have problems running Finetuning</summary>

<p>Finetuning pulls in various other scripts, and some of those scripts can have issues when multiple Nvidia
    GPUs are present. Until the people who created those scripts fix their code, there is a
    workaround: temporarily tell your system to use only one of your Nvidia GPUs. To do this:</p>
        <ul>
            <li>
                <p><strong>Windows</strong> - You will start the script with
                    <code>set CUDA_VISIBLE_DEVICES=0 &amp;&amp; python finetune.py</code><br>
                    After you have completed training, you can reset back with
                    <code>set CUDA_VISIBLE_DEVICES=</code><br>
                </p>
            </li>
            <li>
                <p><strong>Linux</strong> - You will start the script with
                    <code>CUDA_VISIBLE_DEVICES=0 python finetune.py</code><br>
                    After you have completed training, you can reset back with
                    <code>unset CUDA_VISIBLE_DEVICES</code><br>
                </p>
            </li>
        </ul>
<p>Rebooting your system will also unset this; the setting is only applied temporarily.</p>
<p>Depending on which of your Nvidia GPUs is the more powerful, you can change the <code>0</code> to
    <code>1</code>, or whichever index corresponds to your most powerful GPU.
</p>
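<p>The same pin-to-one-GPU workaround can also be applied from inside Python before launching a script. This is a minimal illustrative sketch (not AllTalk's own code): the variable must be set before <code>torch</code> or any other CUDA-using library initialises CUDA.</p>

```python
import os

# Pin this process to the first Nvidia GPU (index 0). This must happen
# BEFORE importing torch or any other CUDA-using library, otherwise the
# setting is ignored for this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Any CUDA library imported after this point will only see GPU 0.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

<p>This only affects the current process; launching <code>finetune.py</code> from a fresh terminal without the variable restores access to all GPUs.</p>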
    </details>

    <details>
<summary>🟨 Firefox - Streaming Audio doesn't work on Firefox</summary>

<p>This is a long-standing issue with Firefox and one I am unable to resolve. The solution is to use another web
    browser if you want to use streaming audio. For details of my prior investigation, please see this <a
        href="https://github.com/erew123/alltalk_tts/issues/143">ticket</a>.</p>
    </details>

    <h4 id="application-specific-issues">Application Specific Issues</h4>
    <details>
<summary>🟨 SillyTavern - I changed my IP address and now SillyTavern won't connect with AllTalk</summary>

<p>SillyTavern checks the IP address when loading extensions, saving the IP to its configuration only if the
    check succeeds. For whatever reason, SillyTavern's checks don't always allow changing its IP address a
    second time.</p>

        <p>To manually change the IP address:</p>
        <ol>
            <li>Navigate to the SillyTavern Public folder located at <code>/sillytavern/public/</code>.</li>
            <li>Open the <code>settings.json</code> file.</li>
            <li>Look for the AllTalk section and find the <code>provider_endpoint</code> entry.</li>
            <li>Replace <code>localhost</code> with your desired IP address, for example, <code>192.168.1.64</code>.
            </li>
        </ol>
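<p>If you prefer to script steps 1 to 4, a rough Python sketch might look like the following. The nested location of <code>provider_endpoint</code> inside <code>settings.json</code> is an assumption for illustration (inspect your own file to confirm it), so the sketch walks the whole structure rather than hard-coding a path. Back the file up first.</p>

```python
import json
from pathlib import Path

def set_alltalk_endpoint(settings_path, new_host):
    """Replace 'localhost' in any 'provider_endpoint' value with a new host.

    Illustrative sketch only: the exact nesting of provider_endpoint in
    SillyTavern's settings.json may differ between versions, so every dict
    and list in the file is walked.
    """
    path = Path(settings_path)
    settings = json.loads(path.read_text(encoding="utf-8"))

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "provider_endpoint" and isinstance(value, str):
                    node[key] = value.replace("localhost", new_host)
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(settings)
    path.write_text(json.dumps(settings, indent=4), encoding="utf-8")
    return settings
```

<p>Restart SillyTavern after editing the file so the new endpoint is picked up.</p>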
        <p><img src="/static/at_admin/4abed1ca.jpg" alt="image"></p>
    </details>

    <h4 id="tts-generation-issues--questions">TTS Generation Issues &amp; Questions</h4>
    <details>
        <summary>🟨 XTTS - Does the XTTS AI Model Support Emotion Control or Singing?</summary>

        <p>No, the XTTS AI model does not currently support direct control over emotions or singing capabilities. While
            XTTS infuses generated speech with a degree of emotional intonation based on the context of the text, users
            cannot explicitly control this aspect. It&#39;s worth noting that regenerating the same line of TTS may
            yield slightly different emotional inflections, but there is no way to directly control it with XTTS.</p>
    </details>
    <details>
        <summary>🟨 XTTS - Skips, repeats or pronunciation Issues</summary>

        <p>Firstly, it&#39;s important to clarify that the development and maintenance of the XTTS AI models and core
            scripts are handled by <a href="https://docs.coqui.ai/en/latest/index.html">Coqui</a>, with additional
            scripts and libraries from entities like <a
                href="https://huggingface.co/docs/transformers/en/index">huggingface</a> among many other Python scripts
            and libraries used by AllTalk. </p>
        <p>AllTalk is designed to be a straightforward interface that simplifies setup and interaction with AI TTS
            models like XTTS. Currently, AllTalk supports the XTTS model, with plans to include more models in the
            future. Please understand that the deep inner workings of XTTS, including reasons why it may skip, repeat,
            or mispronounce, along with 3rd party scripts and libraries utilized, are ultimately outside my control.</p>
<p>Although I ensure the text processed through AllTalk is accurately relayed to the XTTS model speech
    generation process, and I have aimed to mitigate as many issues as possible, skips, repeats and bad
    pronunciation can still occur.</p>
<p>Certain aspects that I have not been able to investigate, due to my own time limitations, are:<br></p>
        <ul>
            <li>The impact of DeepSpeed on TTS quality. Is this more likely to cause skips or repetition?</li>
            <li>Comparative performance between different XTTS model versions (e.g., 2.0.3 vs. 2.0.2) regarding audio
                quality and consistency.</li>
        </ul>
        <p><strong>From my experience and anecdotally gained knowledge:</strong><br></p>
        <ul>
            <li>Lower quality voice samples tend to produce more anomalies in generated speech.</li>
            <li>Model finetuning with high-quality voice samples significantly reduces such issues, enhancing overall
                speech quality.</li>
<li>Unused/excessive punctuation causes issues, e.g. asterisks <code>*</code>, hashes <code>#</code>,
    brackets <code>(</code> <code>)</code> etc. AllTalk will filter out many of these.</li>
        </ul>
<p>So, for example, the <code>female_01.wav</code> file that is provided with AllTalk is a studio-quality voice
    sample, one the XTTS model was trained on. You will typically find it unlikely that anomalies occur in
    TTS generation when using this voice sample. Hence good-quality samples and finetuning generally improve
    results with XTTS.</p>
<p>If you wish to try out the XTTS version 2.0.3 model and see if it works better, you can download it from <a
        href="https://huggingface.co/coqui/XTTS-v2/tree/v2.0.3">here</a>, replacing all the files within your
    <code>/alltalk_tts/models/xttsv2_2.0.2</code> folder. It is on my list both to test version 2.0.3 further
    and to build a more flexible TTS model downloader that will accommodate not only other XTTS models but
    also other TTS engines. If you try the XTTS version 2.0.3 model and glean any insights, please let me know.
</p>
    </details>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="⚫-finetuning-a-model">⚫ Finetuning a model</h3>
<p>If you have a voice that the model doesn't quite reproduce correctly, or indeed you just want to improve the
    reproduced voice, then finetuning is a way to train your &quot;XTTSv2 local&quot; model <strong>(stored in
        <code>/alltalk_tts/models/xxxxx/</code>)</strong> on a specific voice. For this you will need:</p>
    <ul>
<li>An Nvidia graphics card. (Please see the help section <a
        href="https://github.com/erew123/alltalk_tts/edit/main/README.md#performance-and-compatibility-issues">note</a>
    if you have multiple Nvidia GPUs.)</li>
        <li>18GB of disk space free (most of this is used temporarily)</li>
<li>At least 2 minutes of good quality speech from your chosen speaker in mp3, wav or flac format, in one or
    more files (tested with up to 20 minutes' worth of audio).</li>
<li>As a side note, many people seem to find that the Whisper v2 model (used in Step 1) gives better
    results at generating training datasets, so you may prefer to try that, as opposed to the Whisper 3 model.
</li>
    </ul>
    <h4 id="⚫-how-will-this-workhow-complicated-is-it">⚫ How will this work/How complicated is it?</h4>
<p>Everything has been done to make this as simple as possible. At its simplest, you can just download a
    large chunk of audio from an interview and tell the finetuning to strip through it, find the spoken parts and
    build your dataset. You can literally click four buttons, then copy a few files, and you are done. At its more
    complicated end, you will clean up the audio a little beforehand, but it's still only four buttons and copying
    a few files.</p>
    <h4 id="⚫-the-audio-you-will-use">⚫ The audio you will use</h4>
<p>I would suggest that if it's in an interview format, you cut out the interviewer speaking in Audacity or your
    chosen audio editing package. You don't have to worry about being perfect with your cuts; finetuning Step 1
    will find the spoken audio and cut it out for you. If there is music over the spoken parts, for best quality
    you would cut out those parts, though it's not 100% necessary. As always, try to avoid bad quality audio with
    noises in it (humming sounds, hiss etc.). You can try something like <a
        href="https://audioenhancer.ai/">Audioenhancer</a> to try to clean up noisier audio. There is no need to
    down-sample any of the audio; all of that is handled for you. Just give the finetuning some good quality audio
    to work with. </p>
    <h4 id="⚫-can-i-finetune-a-model-more-than-once-on-more-than-one-voice">⚫ Can I Finetune a model more than once on
        more than one voice</h4>
<p>Yes you can. You would do these as multiple finetuning runs, and it's absolutely possible and fine to do.
    Finetuning the XTTS model does not restrict it to only being able to reproduce the one voice you trained it on.
    Finetuning generally nudges the model in a direction, teaching it to sound a bit more like a voice
    it has not heard before. </p>
    <h4 id="⚫-a-note-about-anonymous-training-telemetry-information--disabling-it">⚫ A note about anonymous training
        Telemetry information &amp; disabling it</h4>
    <p>Portions of Coqui&#39;s TTS trainer scripts gather anonymous training information which you can disable. Their
        statement on this is listed <a
            href="https://github.com/coqui-ai/Trainer?tab=readme-ov-file#anonymized-telemetry">here</a>. If you start
        AllTalk Finetuning with <code>start_finetuning.bat</code> or <code>./start_finetuning.sh</code> telemetry will
        be disabled. If you manually want to disable it, please expand the below:</p>
    <details>
        <summary>Manually disable telemetry</summary><br>

        <p>Before starting finetuning, run the following in your terminal/command prompt:</p>
        <ul>
            <li>On Windows by typing <code>set TRAINER_TELEMETRY=0</code></li>
            <li>On Linux &amp; Mac by typing <code>export TRAINER_TELEMETRY=0</code></li>
        </ul>
        <p>Before you start <code>finetune.py</code>. You will now be able to finetune offline and no anonymous training
            data will be sent.</p>
    </details>

    <h4 id="⚫-prerequisites-for-fine-tuning-with-nvidia-cuda-development-toolkit-118">⚫ Prerequisites for Fine-tuning
        with Nvidia CUDA Development Toolkit 11.8</h4>
<p>All the requirements for Finetuning will be installed by using the atsetup utility and installing your correct
    requirements (Standalone or for Text-generation-webui). The legacy manual instructions are kept below, however
    these shouldn't be required.</p>
    <details>
        <summary>Legacy manual instructions for installing Nvidia CUDA Development Toolkit 11.8</summary><br>
- To perform fine-tuning, a specific portion of the <strong>Nvidia CUDA Development Toolkit v11.8</strong> must be
installed. This is crucial for step 1 of fine-tuning. The objective is to minimize the installation footprint by
installing only the essential components.<br>
- The <strong>Nvidia CUDA Development Toolkit v11.8</strong> operates independently from your graphics card drivers
and the CUDA version utilized by your Python environment.<br>
- This installation process aims to keep the download and install size as minimal as possible; however, a full
install of the toolkit requires 3GB of disk space.<br>
- When running Finetuning it will require up to 20GB of temporary disk space, so please ensure you have this
space available, and preferably use an SSD or NVMe drive.

        <ol>
            <li>
                <p><strong>Download the Toolkit</strong>:</p>
                <ul>
                    <li>Obtain the <strong>network install</strong> version of the Nvidia CUDA Development Toolkit 11.8
                        from <a href="https://developer.nvidia.com/cuda-11-8-0-download-archive">Nvidia&#39;s
                            Archive</a>.</li>
                </ul>
            </li>
            <li>
                <p><strong>Run the Installer</strong>:</p>
                <ul>
                    <li>Choose <strong>Custom (Advanced)</strong> installation.</li>
                    <li>Deselect all options initially.</li>
                    <li>Select the following components:<ul>
                            <li><code>CUDA</code> &gt; <code>Development</code> &gt; <code>Compiler</code> &gt;
                                <code>nvcc</code>
                            </li>
                            <li><code>CUDA</code> &gt; <code>Development</code> &gt; <code>Libraries</code> &gt;
                                <code>CUBLAS</code> (<strong>both</strong> development and runtime)
                            </li>
                        </ul>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Configure Environment Search Path</strong>:</p>
                <ul>
                    <li>
                        <p>It&#39;s essential that <code>nvcc</code> and CUDA 11.8 library files are discoverable in
                            your environment&#39;s search path. Adjustments can be reverted post-fine-tuning if desired.
                        </p>
                        <p><strong>For Windows</strong>:</p>
                        <ul>
                            <li>Edit the <code>Path</code> environment variable to include
                                <code>C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin</code>.
                            </li>
                            <li>Add <code>CUDA_HOME</code> and set its path to
                                <code>C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8.</code>
                            </li>
                        </ul>
                        <p><strong>For Linux</strong>:</p>
                        <ul>
                            <li>The path may vary by Linux distribution. Here&#39;s a generic setup:<ul>
                                    <li>
                                        <p><code>export CUDA_HOME=/usr/local/cuda</code></p>
                                    </li>
                                    <li>
                                        <p><code>export PATH=${CUDA_HOME}/bin:${PATH}</code></p>
                                    </li>
                                    <li>
                                        <p><code>export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH</code></p>
                                    </li>
                                    <li>
                                        <p>Consider adding these to your <code>~/.bashrc</code> for permanence, or apply
                                            temporarily for the current session by running the above commands each time
                                            you start your Python environment.</p>
                                    </li>
                                </ul>
                            </li>
                        </ul>
<p><strong>Note</strong>: If using Text-generation-webui, it's best to set these temporarily.</p>
                    </li>
                </ul>
            </li>
            <li>
                <p><strong>Verify Installation</strong>:</p>
                <ul>
                    <li>Open a <strong>new</strong> terminal/command prompt to refresh the search paths.</li>
                    <li>In a terminal or command prompt, execute <code>nvcc --version</code>.</li>
                    <li>Success is indicated by a response of <code>Cuda compilation tools, release 11.8.</code>
                        Specifically, ensure it is version 11.8.</li>
                </ul>
            </li>
            <li>
                <p><strong>Troubleshooting</strong>:</p>
                <ul>
<li>If the correct version isn&#39;t reported, recheck your environment path settings for accuracy
    and potential conflicts with other CUDA versions.</li>
</ul>
</li>
</ol>
</details>
    <h4 id="additional-note-on-torch-and-torchaudio">Additional Note on Torch and Torchaudio:</h4>
    <ul>
        <li>Ensure Torch and Torchaudio are CUDA-enabled (any version), which is separate from the CUDA Toolkit
            installation. CUDA 11.8 corresponds to <code>cu118</code> and CUDA 12.1 to <code>cu121</code> in AllTalk
            diagnostics.</li>
        <li>Failure to install CUDA for Torch and Torchaudio will result in Step 2 of fine-tuning failing. These
            requirements are distinct from the CUDA Toolkit installation, so avoid conflating the two.<br></li>
    </ul>
    <h4 id="⚫-starting-fine-tuning">⚫ Starting Fine-tuning</h4>
    <p><strong>NOTE:</strong> Ensure AllTalk has been launched at least once after any updates to download necessary
        files for fine-tuning.</p>
    <ol>
        <li>
            <p><strong>Close Resource-Intensive Applications</strong>:</p>
            <ul>
                <li>Terminate any applications that are using your GPU/VRAM to ensure enough resources for fine-tuning.
                </li>
            </ul>
        </li>
        <li>
            <p><strong>Organize Voice Samples</strong>:</p>
            <ul>
                <li>Place your audio samples into the following directory:
                    <code>/alltalk_tts/finetune/put-voice-samples-in-here/</code>
                </li>
            </ul>
        </li>
    </ol>
    <p>Depending on your setup (Text-generation-webui or Standalone AllTalk), the steps to start the Python environment
        vary:</p>
    <ul>
        <li>
            <p><strong>For Standalone AllTalk Users</strong>:</p>
            <ul>
                <li>Navigate to the <code>alltalk_tts</code> folder:<ul>
                        <li><code>cd alltalk_tts</code></li>
                    </ul>
                </li>
                <li>Start the Python environment:<ul>
                        <li>Windows: <code>start_finetune.bat</code></li>
                        <li>Linux: <code>./start_finetune.sh</code></li>
                    </ul>
                </li>
            </ul>
        </li>
        <li>
            <p><strong>For Text-generation-webui Users</strong>:</p>
            <ul>
                <li>Navigate to the Text-generation-webui directory:<ul>
                        <li><code>cd text-generation-webui</code></li>
                    </ul>
                </li>
                <li>Start the Python environment suitable for your OS:<ul>
                        <li>Windows: <code>cmd_windows.bat</code></li>
                        <li>Linux: <code>./cmd_linux.sh</code></li>
                        <li>macOS: <code>cmd_macos.sh</code></li>
                        <li>WSL (Windows Subsystem for Linux): <code>cmd_wsl.bat</code></li>
                    </ul>
                </li>
                <li>Move into the AllTalk directory:<ul>
                        <li><code>cd extensions/alltalk_tts</code></li>
                    </ul>
                </li>
<li><strong>Linux</strong> users only: run this additional command:<ul>
                        <li><code> export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`</code></li>
                    </ul>
                </li>
                <li>Start the fine-tuning process with the command:<ul>
                        <li><code>python finetune.py</code><br><br></li>
                    </ul>
                </li>
            </ul>
            <blockquote>
                <p>If you&#39;re unfamiliar with Python environments and wish to learn more, consider reviewing
                    <strong>Understanding Python Environments Simplified</strong> in the Help section.
                </p>
            </blockquote>
        </li>
    </ul>
    <ol start="3">
        <li>
            <p><strong>Pre-Flight Checklist</strong>:</p>
            <ul>
                <li>Go through the pre-flight checklist to ensure readiness. Address any issues flagged as
                    &quot;Fail&quot;.</li>
            </ul>
        </li>
        <li>
            <p><strong>Post Fine-tuning Actions</strong>:</p>
            <ul>
                <li>Upon completing fine-tuning, the final tab will guide you on managing your files and relocating your
                    newly trained model to the appropriate directory.</li>
            </ul>
        </li>
    </ol>
    <p>These steps guide you through the initial preparations, starting the Python environment based on your setup, and
        the fine-tuning process itself. Ensure all prerequisites are met to facilitate a smooth fine-tuning experience.
    </p>
    <h4 id="⚫-how-many-epochs-etc-is-the-right-amount">⚫ How many Epochs etc is the right amount?</h4>
<p>In finetuning, the suggested/recommended number of epochs, batch size, evaluation percent etc. is already set.
    However, there is no absolutely correct answer for what the settings should be; it all depends on what you are
    doing. </p>
    <ul>
        <li>If you just want to train a normal human voice that is in an existing language, for most people’s needs, the
            base settings would work fine. You may choose to increase the epochs up to maybe 20, or run a second round
            of training if needed.</li>
        <li>If you were training an entirely new language, you would need a huge amount of training data and it requires
            around 1000 epochs (based on things I can find around the internet of people who tried this).</li>
        <li>If you are training a cartoon style voice in an existing language, it may need well upwards of 40 epochs
            until it can reproduce that voice with some success.</li>
    </ul>
<p>There are no absolutely correct settings, as there are too many variables, ranging from the amount of sample
    audio you are using (5 minutes' worth? 4 hours' worth?) to how similar the samples are to what the AI model
    already understands, and so on. Coqui, who originally trained the model, usually say something along the lines
    of: once you've trained it X amount, if it sounds good then you are done, and if it doesn't, train it more.</p>
    <h4 id="⚫-evaluation-data-percentage">⚫ Evaluation Data Percentage</h4>
    <p>In the process of finetuning, it&#39;s crucial to balance the data used for training the model against the data
        reserved for evaluating its performance. Typically, a portion of the dataset is set aside as an &#39;evaluation
        set&#39; to assess the model&#39;s capabilities in dealing with unseen data. On Step 1 of finetuning you have
        the option to adjust this evaluation data percentage, offering more control over your model training
        process.<br><br>
        <strong>Why Adjust the Evaluation Percentage?</strong><br><br>
        Adjusting the evaluation percentage <strong>can</strong> be beneficial in scenarios with limited voice samples.
        When dealing with a smaller dataset, allocating a slightly larger portion to training could enhance the
        model&#39;s ability to learn from these scarce samples. Conversely, with abundant data, a higher evaluation
        percentage might be more appropriate to rigorously test the model&#39;s performance. There are currently no
        absolutely optimal split percentages as it varies by dataset.
    </p>
    <ul>
        <li>
            <p><strong>Default Setting:</strong> The default evaluation percentage is set at 15%, which is a balanced
                choice for most datasets.</p>
        </li>
        <li>
<p><strong>Adjustable Range:</strong> Users can now adjust this percentage, but it’s generally recommended
    to keep it between 5% and 30%.</p>
            <ul>
                <li><strong>Lower Bound:</strong> A minimum of 5% ensures that there&#39;s enough data to evaluate model
                    performance.</li>
<li><strong>Upper Bound:</strong> It's suggested not to exceed 30% for evaluation, to avoid limiting the
    amount of data available for training.</li>
            </ul>
        </li>
        <li>
            <p><strong>Understanding the Impact:</strong> Before adjusting this setting, it&#39;s important to
                understand its impact on model training and evaluation. Incorrect adjustments can lead to suboptimal
                model performance.</p>
        </li>
        <li>
            <p><strong>Gradual Adjustments:</strong> For those unfamiliar with the process, we recommend reading up on
                training data and training sets, then making small, incremental changes and observing their effects.</p>
        </li>
        <li>
            <p><strong>Data Quality:</strong> Regardless of the split, the quality of the audio data is paramount.
                Ensure that your datasets are built from good quality audio with enough data within them.</p>
        </li>
    </ul>
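<p>The train/eval split described above is easy to reason about with a small sketch. This is illustrative only (finetuning's Step 1 handles the real split for you); the clamping to the suggested 5-30% range is the only AllTalk-specific detail carried over:</p>

```python
def split_dataset(items, eval_percent=15):
    """Split a list of audio segments into (training, evaluation) sets.

    eval_percent is clamped to the 5-30% range suggested above. This is an
    illustrative sketch, not AllTalk's actual dataset-splitting code.
    """
    eval_percent = max(5, min(30, eval_percent))
    eval_count = max(1, round(len(items) * eval_percent / 100))
    return items[eval_count:], items[:eval_count]

# Example: 100 segments at the default 15% -> 85 for training, 15 for evaluation.
segments = [f"segment_{i:03}.wav" for i in range(100)]
train_set, eval_set = split_dataset(segments)
print(len(train_set), len(eval_set))  # 85 15
```

<p>Note that asking for a 50% evaluation split would be clamped back to 30%, in line with the upper bound above.</p>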
    <h4 id="⚫-using-a-finetuned-model-in-text-generation-webui">⚫ Using a Finetuned model in Text-generation-webui</h4>
<p>At the end of the finetune process, you will have an option to
    <code>Compact and move model to /trainedmodel/</code>. This will compact the raw training file and move it to
    <code>/model/trainedmodel/</code>. When AllTalk starts up within Text-generation-webui, if it finds a model in
    this location, a new loader will appear in the interface for <code>XTTSv2 FT</code>, and you can use this to load
    your finetuned model. <br><br><strong>Be careful</strong> not to train a new model from the base model and then
    overwrite your current <code>/model/trainedmodel/</code> <strong>if</strong> you want a separately trained
    model. This is why there is an <code>OPTION B</code> to move your just-trained model to
    <code>/models/lastfinetuned/</code>.
</p>
    <h4 id="⚫-training-one-model-with-multiple-voices">⚫ Training one model with multiple voices</h4>
<p>At the end of the finetune process, you will have an option to
    <code>Compact and move model to /trainedmodel/</code>. This will compact the raw training file and move it to
    <code>/model/trainedmodel/</code>. This model will become available when you start up finetuning: you will have
    a choice to train the Base Model or the <code>Existing finetuned model</code> (the one in
    <code>/model/trainedmodel/</code>). You can use this to keep training the model with additional
    voices, copying it back to <code>/model/trainedmodel/</code> at the end of each training run.
</p>
    <h4 id="⚫-do-i-need-to-keep-the-raw-training-datamodel">⚫ Do I need to keep the raw training data/model?</h4>
<p>If you&#39;ve compacted and moved your model, it's highly unlikely you would want to keep the raw data, though
    the choice is there if you wish. It will be between 5-10GB in size, so most people will want to delete
    it.</p>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="⬜-alltalk-tts-generator">⬜ AllTalk TTS Generator</h3>
    <ul>
        <li><a href="http://{{ params.ip_address }}:{{ params.port_number }}/static/tts_generator/tts_generator.html"
                target="_blank" rel="noopener">AllTalk TTS Generator Link</a></li>
    </ul>

    <p>AllTalk TTS Generator is the solution for converting large volumes of text into speech using the voice of your
        choice. Whether you&#39;re creating audio content or just want to hear text read aloud, the TTS Generator is
        equipped to handle it all efficiently. Please see here for a quick <a
            href="https://www.youtube.com/watch?v=hunvXn0mLzc">demo</a><br><br>The link to open the TTS generator can be
        found on the built-in Settings and Documentation page.<br><br><strong>DeepSpeed</strong> is
<strong>highly</strong> recommended to speed up generation. <strong>Low VRAM</strong> is best turned off,
with your LLM model unloaded from your GPU VRAM. <strong>No Playback</strong> will reduce
        memory overhead on very large generations (15,000 words or more). Splitting <strong>Export to Wav</strong> into
        smaller groups will also reduce memory overhead at the point of exporting your wav files (so good for low memory
        systems).
    </p>
    <h4 id="⬜-estimated-throughput">⬜ Estimated Throughput</h4>
    <p>This will vary by system for a multitude of reasons, however, while generating a 58,000 word document to TTS,
        with DeepSpeed enabled, LowVram disabled, splitting size 2 and on an Nvidia RTX 4070, throughput was around
        1,000 words per minute. Meaning, this took 1 hour to generate the TTS. Exporting to combined wavs took about 2-3
        minutes total.</p>
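<p>As a rough planning aid, you can estimate generation time from your own measured throughput. A minimal sketch, where the 1,000 words/minute default is just the RTX 4070 figure from the example above (your hardware will differ):</p>

```python
def estimate_minutes(word_count, words_per_minute=1000):
    """Estimate TTS generation time in minutes from a measured throughput."""
    return word_count / words_per_minute

# The 58,000-word document from the example above, at ~1,000 words/minute:
print(round(estimate_minutes(58_000)))  # ~58 minutes, i.e. about an hour
```

<p>Run a short test generation first to measure your own words-per-minute figure, then plug that in.</p>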
    <h4 id="⬜-quick-start">⬜ Quick Start</h4>
    <ul>
        <li><strong>Text Input:</strong> Enter the text you wish to convert into speech in the &#39;Text Input&#39; box.
        </li>
        <li><strong>Generate TTS:</strong> Hit this to start the text-to-speech conversion.</li>
        <li><strong>Pause/Resume:</strong> Used to pause and resume the playback of the initial generation of wavs or
            the stream.</li>
<li><strong>Stop Playback:</strong> This will stop the current audio playing back. It does not, however, stop
    the text from being generated.
    Once you have sent text off to be generated, either as a stream or wav file generation, the TTS server will
    remain busy until this process has completed. As such, think carefully about how much you want to send to the
    server.
    If you are generating wav files and populating the queue, you can generate one batch of text to speech, then
    input your next batch and it will continue adding to the list.</li>
    </ul>
    <h4 id="⬜-customization-and-preferences">⬜ Customization and Preferences</h4>
    <ul>
        <li><strong>Character Voice:</strong> Choose the voice that will read your text.</li>
        <li><strong>Language:</strong> Select the language of your text.</li>
        <li><strong>Chunk Sizes:</strong> Decide the size of text chunks for generation. Smaller sizes are recommended
            for better TTS quality.</li>
    </ul>
    <h4 id="⬜-interface-and-accessibility">⬜ Interface and Accessibility</h4>
    <ul>
        <li><strong>Dark/Light Mode:</strong> Switch between themes for your visual comfort.</li>
        <li><strong>Word Count and Generation Queue:</strong> Keep track of the word count and the generation progress.
        </li>
    </ul>
    <h4 id="⬜-tts-generation-modes">⬜ TTS Generation Modes</h4>
    <ul>
        <li><strong>Wav Chunks:</strong> Perfect for creating audio books, or anything you want to keep long term.
            Breaks down your text into manageable wav files and queues them up. Generation begins automatically, and
            playback will start after a few chunks have been prepared ahead. You can set the volume to 0 if you don’t
            want to hear playback. With Wav chunks, you can edit and/or regenerate portions of the TTS as needed.</li>
        <li><strong>Streaming:</strong> For immediate playback without the ability to save. Ideal for on-the-fly speech
            generation and listening. This will not generate wav files and it will play back through your browser. You
            cannot stop the server generating the TTS once it has been sent.<br><br>
            With wav chunks you can play back “In Browser”, which is the web page you are on; “On Server”, which is
            through the console/terminal where AllTalk is running; or choose &quot;No Playback&quot;. Only generation
            “In Browser” can play back smoothly and populate the Generated TTS List. The Volume setting affects the
            playback level both “In Browser” and “On Server”.<br><br>
            For generating <strong>large amounts of TTS</strong>, it&#39;s recommended to select the <strong>No
                Playback</strong> option. This setting minimizes the memory usage in your web browser by avoiding the
            loading and playing of audio files directly within the browser, which is particularly beneficial for
            handling extensive audio generations. The definition of large will vary depending on your system RAM
            availability (will update when I have more information as to guidelines). Once the audio is generated, you
            can export your list to JSON (for safety) and use the <strong>Play List</strong> option to play back your
            audio.</li>
    </ul>
    <h4 id="⬜-playback-and-list-management">⬜ Playback and List Management</h4>
    <ul>
        <li><strong>Playback Controls:</strong> Utilize &#39;Play List&#39; to start from the beginning or &#39;Stop
            Playback&#39; to halt at any time.</li>
        <li><strong>Custom Start:</strong> Jump into your list at a specific ID to hear a particular section.</li>
        <li><strong>Regeneration and Editing:</strong> If a chunk isn&#39;t quite right, you can opt to regenerate it or
            edit the text directly. Click off the text to save changes and hit regenerate for the specific line.</li>
        <li><strong>Export/Import List:</strong> Save your TTS list as a JSON file or import one. Note: the existing wav
            files are needed for playback. Exporting is handy if you want to take your files into another program
            and have a list of which wav is which, or if you keep your audio files but want to come back at a later
            date, edit one or two lines, regenerate the speech, and re-combine the wavs into one new long wav.</li>
    </ul>
    <h4 id="⬜-exporting-your-audio">⬜ Exporting Your Audio</h4>
    <ul>
        <li><strong>Export to WAV:</strong> Combine all generated TTS from the list into one single WAV file for easy
            download and distribution. It&#39;s always recommended to export your list to JSON before exporting, so that
            you have a backup should something go wrong; you can simply re-import the list and try exporting
            again.<br><br>When exporting, there is a file size limit of 1GB, so you have the option to choose
            how many files to include in each block of audio exported. 600 files is just on the 1GB limit, depending on
            the average file size, so 500 or fewer is a good amount to work with. You can combine the resulting files
            afterwards in Audacity or similar, if you wish.<br><br>Additionally, smaller export batches lower the memory
            requirements, so if your system is low on memory (maybe an 8 or 16GB system), you can use smaller export
            batches to keep the memory requirement down.</li>
    </ul>
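    <p>As a back-of-the-envelope sizing sketch, you can estimate how many files fit in a batch under the 1GB limit.
        This is a hypothetical Python helper (the average chunk size is an assumption, not a measured AllTalk value):</p>

```python
def files_per_batch(avg_file_mb: float, limit_mb: int = 1024) -> int:
    """Estimate how many wav files fit under the 1GB (1024MB) export limit."""
    return int(limit_mb / avg_file_mb)

# With an assumed average chunk size of ~1.7MB, roughly 600 files sit just
# on the 1GB limit, in line with the guidance above; larger average files
# mean smaller batches.
print(files_per_batch(1.7))
print(files_per_batch(2.0))
```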
    <h4 id="⬜-exporting-subtitles-srt-file">⬜ Exporting Subtitles (SRT file)</h4>
    <ul>
        <li><strong>Export SRT:</strong> This will scan through all wav files in your list and generate a subtitles file
            that will match your exported wav file.</li>
    </ul>
    <h4 id="⬜-analyzing-generated-tts-for-errors">⬜ Analyzing generated TTS for errors</h4>
    <ul>
        <li><strong>Analyze TTS:</strong> This will scan through all wav files, comparing each ID&#39;s original text
            with the TTS generated for that ID, and then flag up inconsistencies. It&#39;s important to understand this
            is a <strong>best effort</strong> process and <strong>not 100% perfect</strong>, for example:<br><br>
            <ul>
                <li>Your text may have the word <code>their</code> and the automated routine that listens to your
                    generated TTS interprets the word as <code>there</code>, aka a spelling difference.</li>
                <li>Your text may have <code>Examples are:</code> (note the colon) and the automated routine that
                    listens to your generated TTS interprets it as <code>Examples are</code> (no colon, as you
                    cannot sound out a colon in TTS), aka a punctuation difference.</li>
                <li>Your text may have <code>There are 100 items</code> and the automated routine that listens to your
                    generated TTS interprets it as <code>There are one hundred items</code>, aka digits vs the
                    number written out in words.</li>
                <li>There will be other examples, such as double quotes. As I say, please remember this is a <strong>best
                        effort</strong> to help you identify issues.<br></li>
            </ul>
        </li>
    </ul>
    <p>As such, there is a <code>% Accuracy</code> setting. This uses a couple of methods to try to find things that
        are similar. Taking the <code>their</code> and <code>there</code> example from above, it would identify that
        they both sound the same, so even if the text says <code>their</code> and the AI listening to the generated TTS
        interprets the word as <code>there</code>, it will realise that both sound the same/are similar, so there is no
        need to flag that as an error. However, there are limits to this, and some things may slip through or get picked
        up when you would prefer them not to be flagged.</p>
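    <p>AllTalk&#39;s exact comparison routine isn&#39;t detailed here, but the general idea of fuzzy matching can be
        illustrated with a small Python sketch. This uses a simple character-level similarity ratio from the standard
        library, not the actual Analyze TTS code:</p>

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Homophones differ in spelling but remain very similar as text; a phonetic
# comparison (as Analyze TTS attempts) would rate them closer still.
print(similarity("their", "there"))
# A punctuation-only difference scores very high.
print(similarity("Examples are:", "Examples are"))
# Digits vs words written out score much lower, so this case is harder.
print(similarity("100 items", "one hundred items"))
```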
    <p>The higher the accuracy you choose, the more things it will flag up, though you may also get more unwanted
        detections; the lower the accuracy, the fewer detections. Based on my few tests, accuracy settings between 96
        and 98 seem to generally give the best results. However, I would highly recommend you test a small 10-20 line
        text with the <strong>Analyze TTS</strong> button to get a feel for how it responds to different settings, as
        well as the things it flags up.</p>
    <p>You will be able to see the IDs and text (original and as interpreted) by looking at the terminal/command
        prompt window.</p>
    <p>The Analyze TTS feature uses the Whisper Large-v2 AI engine, which will download on first use if necessary. This
        will require about 2.5GB of disk space and could take a few minutes to download, depending on your
        internet connection.</p>
    <h4 id="⬜-tricks-to-get-the-model-to-say-things-correctly">⬜ Tricks to get the model to say things correctly</h4>
    <p>Sometimes the AI model won’t say something the way that you want it to. It could be because it’s a new word, an
        acronym or just something it’s not good at for whatever reason. There are some tricks you can use to improve the
        chances of it saying something correctly.</p>
    <p><strong>Adding pauses</strong><br>
        You can use semi-colons &quot;;&quot; and colons &quot;:&quot; to create a pause, similar to a period
        &quot;.&quot; which can be helpful with some splitting issues.</p>
    <p><strong>Acronyms</strong><br>
        Not all acronyms are going to be pronounced correctly. Let’s work with the word <code>ChatGPT</code>. We know it
        is pronounced <code>&quot;Chat G P T&quot;</code> but when presented to the model, it doesn’t know how to break
        it down correctly. So, there are a few ways we could get it to break out &quot;Chat&quot; and the G P and T.
        e.g.</p>
    <p><code>Chat G P T.</code>
        <code>Chat G,P,T.</code>
        <code>Chat G.P.T.</code>
        <code>Chat G-P-T.</code>
        <code>Chat gee pee tea</code>
    </p>
    <p>All bar the last one use ways within the English language to split out &quot;Chat&quot; into one word being
        pronounced and then split the G, P and T into individual letters. The final example, which uses phonetics,
        will sound perfectly fine, but clearly would look wrong as far as human-readable text goes. The phonetics
        method is very useful in edge cases where pronunciation is difficult.</p>
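    <p>If you find yourself applying the same fixes repeatedly, the tricks above can be automated before text is sent
        for generation. This is a hypothetical pre-processing sketch; the replacement table and helper name are
        illustrations, not part of AllTalk:</p>

```python
import re

# Hypothetical lookup of acronyms/words to speakable forms; extend as needed.
SPEAKABLE = {
    "ChatGPT": "Chat G P T.",
    "GPU": "G P U",
    "TTS": "T T S",
}

def make_speakable(text: str) -> str:
    """Replace known acronyms with forms the TTS model pronounces correctly."""
    for acronym, spoken in SPEAKABLE.items():
        # \b ensures whole-word matches only, so "GPU" inside another word is untouched.
        text = re.sub(rf"\b{re.escape(acronym)}\b", spoken, text)
    return text

print(make_speakable("I asked ChatGPT about my GPU"))
```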
    <h4 id="⬜-notes-on-usage">⬜ Notes on Usage</h4>
    <ul>
        <li>For seamless TTS generation, it&#39;s advised to keep text chunks under 250 characters, which you can
            control with the Chunk sizes.</li>
        <li>Generated audio can be played back from the list, which also highlights the currently playing chunk.</li>
        <li>The TTS Generator remembers your settings, so you can pick up where you left off even after refreshing the
            page.</li>
    </ul>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="🟠-api-suite-and-json-curl">🟠 API Suite and JSON-CURL</h3>
    <h3 id="🟠overview">🟠 Overview</h3>
    <p>The Text-to-Speech (TTS) Generation API allows you to generate speech from text input using various configuration
        options. This API supports both character and narrator voices, providing flexibility for creating dynamic and
        engaging audio content.</p>
    <h4 id="🟠-ready-endpoint">🟠 Ready Endpoint<br></h4>
    <p>Check if the Text-to-Speech (TTS) service is ready to accept requests.</p>
    <ul>
        <li>
            <p>URL: <code>http://127.0.0.1:7851/api/ready</code><br> - Method: <code>GET</code><br> </p>
            <p> <code>curl -X GET &quot;http://127.0.0.1:7851/api/ready&quot;</code></p>
            <p>Response: <code>Ready</code></p>
        </li>
    </ul>
    <h4 id="🟠-voices-list-endpoint">🟠 Voices List Endpoint<br></h4>
    <p>Retrieve a list of available voices for generating speech.</p>
    <ul>
        <li>
            <p>URL: <code>http://127.0.0.1:7851/api/voices</code><br> - Method: <code>GET</code><br></p>
            <p> <code>curl -X GET &quot;http://127.0.0.1:7851/api/voices&quot;</code></p>
            <p> JSON return:
                <code>{&quot;voices&quot;: [&quot;voice1.wav&quot;, &quot;voice2.wav&quot;, &quot;voice3.wav&quot;]}</code>
            </p>
        </li>
    </ul>
    <h4 id="🟠-current-settings-endpoint">🟠 Current Settings Endpoint<br></h4>
    <p>Retrieve the current settings of the TTS service.</p>
    <ul>
        <li>
            <p>URL: <code>http://127.0.0.1:7851/api/currentsettings</code><br> - Method: <code>GET</code><br></p>
            <p> <code>curl -X GET &quot;http://127.0.0.1:7851/api/currentsettings&quot;</code></p>
            <p> JSON return:
                <code>{&quot;models_available&quot;:[{&quot;name&quot;:&quot;Coqui&quot;,&quot;model_name&quot;:&quot;API TTS&quot;},{&quot;name&quot;:&quot;Coqui&quot;,&quot;model_name&quot;:&quot;API Local&quot;},{&quot;name&quot;:&quot;Coqui&quot;,&quot;model_name&quot;:&quot;XTTSv2 Local&quot;}],&quot;current_model_loaded&quot;:&quot;XTTSv2 Local&quot;,&quot;deepspeed_available&quot;:true,&quot;deepspeed_status&quot;:true,&quot;low_vram_status&quot;:true,&quot;finetuned_model&quot;:false}</code>
            </p>
            <p><code>name &amp; model_name</code> = listing the currently available models.<br>
                <code>current_model_loaded</code> = what model is currently loaded into VRAM.<br>
                <code>deepspeed_available</code> = was DeepSpeed detected on startup and available to be activated.<br>
                <code>deepspeed_status</code> = If DeepSpeed was detected, is it currently activated.<br>
                <code>low_vram_status</code> = Is Low VRAM currently enabled.<br>
                <code>finetuned_model</code> = Was a finetuned model detected. (XTTSv2 FT).<br>
            </p>
        </li>
    </ul>
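    <p>As a sketch, the JSON returned above can be inspected from code like this. The response body is the documented
        example from this page, parsed locally rather than fetched from a live server:</p>

```python
import json

# Example response body from /api/currentsettings, as documented above.
body = '''{"models_available":[{"name":"Coqui","model_name":"API TTS"},
{"name":"Coqui","model_name":"API Local"},
{"name":"Coqui","model_name":"XTTSv2 Local"}],
"current_model_loaded":"XTTSv2 Local","deepspeed_available":true,
"deepspeed_status":true,"low_vram_status":true,"finetuned_model":false}'''

settings = json.loads(body)
models = [m["model_name"] for m in settings["models_available"]]
print(models)                                # names of the available models
print(settings["current_model_loaded"])      # model currently loaded into VRAM
# DeepSpeed is only active when it was both detected and switched on.
print(settings["deepspeed_available"] and settings["deepspeed_status"])
```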
    <h4 id="🟠-preview-voice-endpoint">🟠 Preview Voice Endpoint</h4>
    <p>Generate a preview of a specified voice with hardcoded settings.</p>
    <ul>
        <li>
            <p>URL: <code>http://127.0.0.1:7851/api/previewvoice/</code><br> - Method: <code>POST</code><br> -
                Content-Type: <code>application/x-www-form-urlencoded</code><br></p>
            <p> <code>curl -X POST &quot;http://127.0.0.1:7851/api/previewvoice/&quot; -F &quot;voice=female_01.wav&quot;</code>
            </p>
            <p> Replace <code>female_01.wav</code> with the name of the voice sample you want to hear.</p>
            <p> JSON return:
                <code>{&quot;status&quot;: &quot;generate-success&quot;, &quot;output_file_path&quot;: &quot;/path/to/outputs/api_preview_voice.wav&quot;, &quot;output_file_url&quot;: &quot;http://127.0.0.1:7851/audio/api_preview_voice.wav&quot;}</code>
            </p>
        </li>
    </ul>
    <h4 id="🟠-switching-model-endpoint">🟠 Switching Model Endpoint<br></h4>
    <ul>
        <li>
            <p>URL: <code>http://127.0.0.1:7851/api/reload</code><br> - Method: <code>POST</code><br><br>
                <code>curl -X POST &quot;http://127.0.0.1:7851/api/reload?tts_method=API%20Local&quot;</code><br>
                <code>curl -X POST &quot;http://127.0.0.1:7851/api/reload?tts_method=API%20TTS&quot;</code><br>
                <code>curl -X POST &quot;http://127.0.0.1:7851/api/reload?tts_method=XTTSv2%20Local&quot;</code><br>
            </p>
            <p> Switch between the 3 models respectively.</p>
            <p> <code>curl -X POST &quot;http://127.0.0.1:7851/api/reload?tts_method=XTTSv2%20FT&quot;</code><br></p>
            <p> If you have a finetuned model in <code>/models/trainedmodel/</code> (will error otherwise)</p>
            <p> JSON return <code>{&quot;status&quot;: &quot;model-success&quot;}</code></p>
        </li>
    </ul>
    <h4 id="🟠-switch-deepspeed-endpoint">🟠 Switch DeepSpeed Endpoint<br></h4>
    <ul>
        <li>
            <p>URL: <code>http://127.0.0.1:7851/api/deepspeed</code><br> - Method: <code>POST</code><br><br>
                <code>curl -X POST &quot;http://127.0.0.1:7851/api/deepspeed?new_deepspeed_value=True&quot;</code>
            </p>
            <p> Replace True with False to disable DeepSpeed mode.</p>
            <p> JSON return <code>{&quot;status&quot;: &quot;deepspeed-success&quot;}</code></p>
        </li>
    </ul>
    <h4 id="🟠-switching-low-vram-endpoint">🟠 Switching Low VRAM Endpoint<br></h4>
    <ul>
        <li>
            <p>URL: <code>http://127.0.0.1:7851/api/lowvramsetting</code><br> - Method: <code>POST</code><br><br>
                <code>curl -X POST &quot;http://127.0.0.1:7851/api/lowvramsetting?new_low_vram_value=True&quot;</code>
            </p>
            <p> Replace True with False to disable Low VRAM mode.</p>
            <p> JSON return <code>{&quot;status&quot;: &quot;lowvram-success&quot;}</code></p>
        </li>
    </ul>
    <h3 id="🟠-tts-generation-endpoint-standard-generation">🟠 TTS Generation Endpoint (Standard Generation)</h3>
    <p>Streaming endpoint details are further down the page.</p>
    <ul>
        <li>URL: <code>http://127.0.0.1:7851/api/tts-generate</code><br> - Method: <code>POST</code><br> - Content-Type:
            <code>application/x-www-form-urlencoded</code><br>
        </li>
    </ul>
    <h3 id="🟠-example-command-lines-standard-generation">🟠 Example command lines (Standard Generation)</h3>
    <p>Standard TTS generation supports Narration and will generate a wav file/blob. Standard TTS speech example
        (standard text), generating a time-stamped file:<br></p>
    <p><code>curl -X POST &quot;http://127.0.0.1:7851/api/tts-generate&quot; -d &quot;text_input=All of this is text spoken by the character. This is text not inside quotes, though that doesn&#39;t matter in the slightest&quot; -d &quot;text_filtering=standard&quot; -d &quot;character_voice_gen=female_01.wav&quot; -d &quot;narrator_enabled=false&quot; -d &quot;narrator_voice_gen=male_01.wav&quot; -d &quot;text_not_inside=character&quot; -d &quot;language=en&quot; -d &quot;output_file_name=myoutputfile&quot; -d &quot;output_file_timestamp=true&quot; -d &quot;autoplay=true&quot; -d &quot;autoplay_volume=0.8&quot;</code><br>
    </p>
    <p>Narrator Example (standard text) generating a time-stamped file</p>
    <p><code>curl -X POST &quot;http://127.0.0.1:7851/api/tts-generate&quot; -d &quot;text_input=*This is text spoken by the narrator* \&quot;This is text spoken by the character\&quot;. This is text not inside quotes.&quot; -d &quot;text_filtering=standard&quot; -d &quot;character_voice_gen=female_01.wav&quot; -d &quot;narrator_enabled=true&quot; -d &quot;narrator_voice_gen=male_01.wav&quot; -d &quot;text_not_inside=character&quot; -d &quot;language=en&quot; -d &quot;output_file_name=myoutputfile&quot; -d &quot;output_file_timestamp=true&quot; -d &quot;autoplay=true&quot; -d &quot;autoplay_volume=0.8&quot;</code><br>
    </p>
    <p>Note that if your text that needs to be generated contains double quotes you will need to escape them with
        <code>\&quot;</code> (Please see the narrator example).
    </p>
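    <p>If you are calling the endpoint from code rather than curl, a URL-encoding helper takes care of the quote
        escaping for you. A minimal sketch using only the Python standard library, building the form body without
        sending it (POSTing it is left to whichever HTTP client you use):</p>

```python
from urllib.parse import urlencode

# Same parameters as the narrator curl example above.
params = {
    "text_input": '*This is text spoken by the narrator* "This is text spoken by the character". This is text not inside quotes.',
    "text_filtering": "standard",
    "character_voice_gen": "female_01.wav",
    "narrator_enabled": "true",
    "narrator_voice_gen": "male_01.wav",
    "text_not_inside": "character",
    "language": "en",
    "output_file_name": "myoutputfile",
    "output_file_timestamp": "true",
    "autoplay": "true",
    "autoplay_volume": "0.8",
}

# urlencode percent-encodes the double quotes (as %22) automatically.
body = urlencode(params)
# POST `body` to http://127.0.0.1:7851/api/tts-generate with
# Content-Type: application/x-www-form-urlencoded
print(body)
```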
    <h3 id="🟠-request-parameters">🟠 Request Parameters</h3>
    <p>🟠 <strong>text_input</strong>: The text you want the TTS engine to produce. Use escaped double quotes for
        character speech and asterisks for narrator speech if using the narrator function. Example:</p>
    <p><code>-d &quot;text_input=*This is text spoken by the narrator* \&quot;This is text spoken by the character\&quot;. This is text not inside quotes.&quot;</code>
    </p>
    <p>🟠 <strong>text_filtering</strong>: Filter for text. Options:</p>
    <ul>
        <li><strong>none</strong> No filtering. Whatever is sent will go over to the TTS engine as raw text, which may
            result in some odd sounds with some special characters.<br></li>
        <li><strong>standard</strong> Human-readable text with a basic level of filtering, just to clean up some
            special characters.<br></li>
        <li><strong>html</strong> HTML content, where you are using HTML entities like <code>&amp;quot;</code>.<br></li>
    </ul>
    <p><code>-d &quot;text_filtering=none&quot;</code><br>
        <code>-d &quot;text_filtering=standard&quot;</code><br>
        <code>-d &quot;text_filtering=html&quot;</code><br>
    </p>
    <p>Example:</p>
    <ul>
        <li><strong>Standard Example</strong>:
            <code>*This is text spoken by the narrator* &quot;This is text spoken by the character&quot; This is text not inside quotes.</code><br>
        </li>
        <li><strong>HTML Example</strong>:
            <code>&amp;ast;This is text spoken by the narrator&amp;ast; &amp;quot;This is text spoken by the character&amp;quot; This is text not inside quotes.</code><br>
        </li>
        <li><strong>None</strong>: Will just pass whatever characters/text you send at it.<br></li>
    </ul>
    <p>🟠 <strong>character_voice_gen</strong>: The WAV file name for the character&#39;s voice.<br></p>
    <p><code>-d &quot;character_voice_gen=female_01.wav&quot;</code></p>
    <p>🟠 <strong>narrator_enabled</strong>: Enable or disable the narrator function. If true, minimum text filtering is
        set to standard. Anything between double quotes is considered the character&#39;s speech, and anything between
        asterisks is considered the narrator&#39;s speech.</p>
    <p><code>-d &quot;narrator_enabled=true&quot;</code><br>
        <code>-d &quot;narrator_enabled=false&quot;</code>
    </p>
    <p>🟠 <strong>narrator_voice_gen</strong>: The WAV file name for the narrator&#39;s voice.</p>
    <p><code>-d &quot;narrator_voice_gen=male_01.wav&quot;</code></p>
    <p>🟠 <strong>text_not_inside</strong>: Specify the handling of lines not inside double quotes or asterisks, for the
        narrator feature. Options:</p>
    <ul>
        <li><strong>character</strong>: Treat as character speech.<br></li>
        <li><strong>narrator</strong>: Treat as narrator speech.<br></li>
    </ul>
    <p><code>-d &quot;text_not_inside=character&quot;</code><br>
        <code>-d &quot;text_not_inside=narrator&quot;</code>
    </p>
    <p>🟠 <strong>language</strong>: Choose the language for TTS. Options:</p>
    <p><code>ar</code> Arabic<br>
        <code>zh-cn</code> Chinese (Simplified)<br>
        <code>cs</code> Czech<br>
        <code>nl</code> Dutch<br>
        <code>en</code> English<br>
        <code>fr</code> French<br>
        <code>de</code> German<br>
        <code>hi</code> Hindi<br>
        <code>hu</code> Hungarian<br>
        <code>it</code> Italian<br>
        <code>ja</code> Japanese<br>
        <code>ko</code> Korean<br>
        <code>pl</code> Polish<br>
        <code>pt</code> Portuguese<br>
        <code>ru</code> Russian<br>
        <code>es</code> Spanish<br>
        <code>tr</code> Turkish<br>
    </p>
    <p><code>-d &quot;language=en&quot;</code><br></p>
    <p>🟠 <strong>output_file_name</strong>: The name of the output file (excluding the .wav extension).</p>
    <p><code>-d &quot;output_file_name=myoutputfile&quot;</code><br></p>
    <p>🟠 <strong>output_file_timestamp</strong>: Add a timestamp to the output file name. If true, each file will have
        a unique timestamp; otherwise, the same file name will be overwritten each time you generate TTS.</p>
    <p><code>-d &quot;output_file_timestamp=true&quot;</code><br>
        <code>-d &quot;output_file_timestamp=false&quot;</code>
    </p>
    <p>🟠 <strong>autoplay</strong>: Enable or disable playing the generated TTS to your standard sound output device at
        time of TTS generation.</p>
    <p><code>-d &quot;autoplay=true&quot;</code><br>
        <code>-d &quot;autoplay=false&quot;</code>
    </p>
    <p>🟠 <strong>autoplay_volume</strong>: Set the autoplay volume, between 0.1 and 1.0. It needs to be specified in
        the request even if autoplay is false.</p>
    <p><code>-d &quot;autoplay_volume=0.8&quot;</code></p>
    <h3 id="🟠-tts-generation-response">🟠 TTS Generation Response</h3>
    <p>The API returns a JSON object with the following properties:</p>
    <ul>
        <li><strong>status</strong> Indicates whether the generation was successful (generate-success) or failed
            (generate-failure).<br></li>
        <li><strong>output_file_path</strong> The on-disk location of the generated WAV file.<br></li>
        <li><strong>output_file_url</strong> The HTTP location for accessing the generated WAV file for browser
            playback.<br></li>
        <li><strong>output_cache_url</strong> The HTTP location for accessing the generated WAV file as a pushed
            download.<br></li>
    </ul>
    <p>Example JSON TTS Generation Response:</p>
    <p><code>{&quot;status&quot;:&quot;generate-success&quot;,&quot;output_file_path&quot;:&quot;C:\\text-generation-webui\\extensions\\alltalk_tts\\outputs\\myoutputfile_1704141936.wav&quot;, &quot;output_file_url&quot;:&quot;http://127.0.0.1:7851/audio/myoutputfile_1704141936.wav&quot;, &quot;output_cache_url&quot;:&quot;http://127.0.0.1:7851/audiocache/myoutputfile_1704141936.wav&quot;}</code>
    </p>
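    <p>A sketch of reading that response in Python, using the example values shown above rather than a live call:</p>

```python
import json

# Example response body from /api/tts-generate, as documented above.
response_body = ('{"status":"generate-success",'
                 '"output_file_path":"C:\\\\text-generation-webui\\\\extensions\\\\alltalk_tts\\\\outputs\\\\myoutputfile_1704141936.wav",'
                 '"output_file_url":"http://127.0.0.1:7851/audio/myoutputfile_1704141936.wav",'
                 '"output_cache_url":"http://127.0.0.1:7851/audiocache/myoutputfile_1704141936.wav"}')

result = json.loads(response_body)
if result["status"] == "generate-success":
    print(result["output_file_url"])    # HTTP URL for browser playback
    print(result["output_file_path"])   # on-disk location of the wav
else:
    print("Generation failed")
```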
    <h3 id="🟠-tts-generation-endpoint-streaming-generation">🟠 TTS Generation Endpoint (Streaming Generation)</h3>
    <p>Streaming TTS generation does NOT support Narration and will generate an audio stream. Streaming TTS speech
        JavaScript Example:<br></p>
    <ul>
        <li>URL: <code>http://localhost:7851/api/tts-generate-streaming</code><br> - Method: <code>POST</code><br> -
            Content-Type: <code>application/x-www-form-urlencoded</code><br><br></li>
    </ul>
    <pre><code>// Example parameters
const text = &quot;Here is some text&quot;;
const voice = &quot;female_01.wav&quot;;
const language = &quot;en&quot;;
const outputFile = &quot;stream_output.wav&quot;;
// Encode the text for the URL
const encodedText = encodeURIComponent(text);
// Create the streaming URL
const streamingUrl = `http://localhost:7851/api/tts-generate-streaming?text=${encodedText}&amp;voice=${voice}&amp;language=${language}&amp;output_file=${outputFile}`;
// Create and play the audio element
const audioElement = new Audio(streamingUrl);
audioElement.play(); // Play the audio stream directly
</code></pre>
    <ul>
        <li><strong>Text (text):</strong> This is the actual text you want to convert to speech. It should be a string
            and must be URL-encoded to ensure that special characters (like spaces and punctuation) are correctly
            transmitted in the URL. Example: <code>Hello World</code> becomes <code>Hello%20World</code> when
            URL-encoded.<br></li>
        <li><strong>Voice (voice):</strong> This parameter specifies the voice to be used for the TTS. The value
            should match one of the available voice files in AllTalk&#39;s voices folder. This is a string representing
            the file name, like <code>female_01.wav</code>.<br></li>
        <li><strong>Language (language):</strong> This setting determines the language in which the text should be
            spoken. A two-letter language code (like <code>en</code> for English, <code>fr</code> for French, etc.).<br>
        </li>
        <li><strong>Output File (output_file):</strong> This parameter names the output file where the audio will be
            streamed. It should be a string representing the file name, such as <code>stream_output.wav</code>. AllTalk
            will not save this as a file in its outputs folder.<br></li>
    </ul>
    <p><a href="#toc">Back to top of page</a></p>
    <hr>
    <h3 id="references"><strong>Thanks &amp; References</strong></h3>
    <h4>Coqui TTS Engine</h4>
    <ul>
        <li><a href="https://coqui.ai/cpml.txt" target="_blank" rel="noopener">Coqui License</a></li>
        <li><a href="https://github.com/coqui-ai/TTS" target="_blank" rel="noopener">Coqui TTS GitHub Repository</a>
        </li>
    </ul>

    <h4>Extension coded by</h4>
    <ul>
        <li><a href="https://github.com/erew123" target="_blank" rel="noopener">Erew123 GitHub Profile</a></li>
    </ul>

    <h4>Thanks to &amp; Text generation webUI</h4>
    <ul>
        <li><a href="https://github.com/oobabooga/text-generation-webui" target="_blank" rel="noopener">oobabooga
                GitHub Repository</a> (Portions of original Coqui_TTS extension)</li>
    </ul>

    <h4>Thanks to</h4>
    <ul>
        <li><a href="https://github.com/daswer123" target="_blank" rel="noopener">daswer123 GitHub Profile</a>
            (Assistance with CUDA-to-CPU model moving)</li>
        <li><a href="https://github.com/S95Sedan" target="_blank" rel="noopener">S95Sedan GitHub Profile</a>
            (Editing the Microsoft DeepSpeed v11.x installation files so they work)</li>
        <li><a href="https://github.com/kanttouchthis" target="_blank" rel="noopener">kanttouchthis GitHub
                Profile</a> (Portions of original Coqui_TTS extension)</li>
        <li><a href="https://github.com/Wuzzooy" target="_blank" rel="noopener">Wuzzooy GitHub Profile</a> (Trying
            out the code while in development)</li>
    </ul>

    <p><a href="#toc">Back to top of page</a></p>

</body>

</html>
