Keep a model loaded to reduce subsequent generation time #185

Answered by eginhard
linguistbro asked this question in Q&A
This is exactly what the tts = TTS(...) line does: the model is loaded once and kept in memory, so every subsequent call reuses it. Using:

import logging
from TTS.api import TTS

logging.basicConfig(level=logging.INFO)

# Load the TTS model
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cuda")

def test_function(text):
    # Each call reuses the already-loaded model; nothing is reloaded.
    tts.tts_to_file(
        text=text,
        file_path="tts_output.wav",
        speaker_wav="reference.wav",
        language="en"
    )

test_function('test 1')
test_function('test 2')
test_function('test 2')
test_function('test 3')

I get the following output on an RTX3090:

INFO:TTS.utils.manage:tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
INFO:TTS.tts.models:Using model: xtts
INFO:TTS.utils.synthesizer:Text …
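The load-once pattern above can also be made explicit with a cached loader, which is useful when the model is needed from several places in a larger program. A minimal sketch using only the standard library; FakeModel, get_model, and synthesize are hypothetical stand-ins for TTS(...) and tts.tts_to_file, not part of the TTS API:

```python
import functools

class FakeModel:
    """Stand-in for an expensive-to-load model such as TTS(...)."""
    load_count = 0  # counts how many times the "expensive" load ran

    def __init__(self):
        FakeModel.load_count += 1

    def generate(self, text):
        return f"audio for: {text}"

@functools.lru_cache(maxsize=1)
def get_model():
    # The expensive load runs only on the first call; every later
    # call returns the same cached instance.
    return FakeModel()

def synthesize(text):
    # Callers never load the model themselves; they go through the cache.
    return get_model().generate(text)
```

Calling synthesize repeatedly then triggers exactly one model load, which is the behavior the log above shows: the download/load messages appear once, and the per-call cost is generation only.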
