This application is client software for real-time voice conversion that supports various voice conversion models. This document covers voice conversion with RVC (Retrieval-based Voice Conversion) only.
In the following, the original Retrieval-based-Voice-Conversion-WebUI is referred to as original-RVC, and the RVC-WebUI created by ddPn08 as ddPn08-RVC.
- Model training must be done separately.
- If you want to train a model yourself, please use original-RVC or ddPn08-RVC.
- The recording app on GitHub Pages is convenient for preparing training audio in the browser.
- [Commentary video](https://youtu.be/s_GirFEGvaA)
- TIPS for training have been published, so please refer to them.
On Windows, unzip the downloaded zip file and run `start_http.bat`.
On Mac, extract the downloaded file and execute `startHttp.command`. If a dialog says the developer cannot be verified, hold the Control key and click the file again to run it (or right-click and run).
When connecting remotely, use the `.bat` file (Windows) or `.command` file (Mac) whose name has `http` replaced with `https`.
When you run the `.bat` file (Windows) or `.command` file (Mac), a screen like the following is displayed, and various data are downloaded from the internet at the first launch. Depending on your environment, this often takes one to two minutes.
Once the data required for launch has been downloaded, a Launcher screen like the following will appear. Please select RVC from this screen.
At startup, you can immediately perform voice conversion using the downloaded data. Select the microphone and speakers in (1) of the figure below, then press the start button in (2). After a few seconds of data loading, voice conversion will start. If you are not used to it, it is recommended to select client device in (1) when choosing the microphone and speakers. (The difference from server device is described later.)
The items that can be set with the GUI are divided into sections like the figure below. Each section can be opened and closed by clicking the title.
Icons are links.
Icon | Links to |
---|---|
Octocat | GitHub repository |
Question mark | Manual |
Spanner | Tools |
Coffee | Donation |
Initialize configuration.
Reload the window.
Return to launcher.
`start` starts the server; `stop` stops it.
Indicates the status of real-time conversion.
The lag from voicing to conversion is `buf + res` seconds. Adjust the settings so that `buf` is longer than `res`.
When using the device in server device mode, this display is not shown; it appears on the console side instead.
This is the volume after voice conversion.
This is the length (ms) of one section cut from the audio. Shortening the Input Chunk reduces this number.
This is the time it takes to convert data whose length is the sum of Input Chunk and Extra Data Length. Shortening both Input Chunk and Extra Data Length reduces this number.
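Putting `buf` and `res` together, here is a rough sketch of the latency arithmetic; the variable names and values are illustrative assumptions, not the app's internals.

```python
# Rough latency arithmetic for the monitor values described above.
buf_ms = 192   # buf: length of one audio chunk; shrinks with a smaller Input Chunk
res_ms = 120   # res: time to convert Input Chunk + Extra Data Length

lag_ms = buf_ms + res_ms        # total lag from voicing to converted output
assert buf_ms > res_ms, "tune the settings so that buf stays longer than res"
print(f"expected lag: {lag_ms} ms")
```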
You can switch between uploaded models. Information about the model is shown in [] under its name:
- Whether the model takes f0 (= pitch) into account:
  - f0: pitch is considered
  - nof0: pitch is not considered
- Sampling rate used to train the model
- Number of feature channels used by the model
- Client used for training:
  - org: the model was trained with original-RVC
  - webui: the model was trained with ddPn08-RVC
Buttons are provided here to perform operations on the model and server.
You can export an ONNX model. Converting a PyTorch model to ONNX can sometimes speed up inference.
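As a rough illustration of such a conversion, here is a minimal PyTorch-to-ONNX sketch using a stand-in model; the app performs the export internally, and all shapes and names here are assumptions.

```python
# Minimal sketch of exporting a PyTorch module to ONNX with torch.onnx.export.
# The Linear layer stands in for the voice model; shapes and names are illustrative.
import torch

model = torch.nn.Linear(256, 256)
model.eval()
dummy = torch.randn(1, 256)  # example input with the expected shape

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["feats"], output_names=["audio"],
    dynamic_axes={"feats": {0: "batch"}},  # allow a variable batch dimension
    opset_version=17,
)
```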
Download the model. It is mainly used to get the results of model merging.
You can choose which slot to load the model into. The loaded model can be switched with Switch Model in Server Control.
When setting up the model, you can choose to either load the file or download it from the internet. Depending on your choice, the available settings will change.
- file: Select a local file to load the model.
- from net: Download the model from the internet.
This item is displayed when you choose to load from a file.
Specify the trained model here. This field is required. You can choose either ONNX format (.onnx) or PyTorch format (.pth).
- If trained with original-RVC, it is in `/logs/weights`.
- If trained with ddPn08-RVC, it is in `/models/checkpoints`.
This item is displayed when you choose to load from a file.
This is an additional function that brings the features extracted by HuBERT closer to the training data. It is used in a pair with feature (.npy).
- If trained with original-RVC, it is in `/logs/your-experiment-name/total_fea.npy`.
- If trained with ddPn08-RVC, it is in `/models/checkpoints/your-model-name_index/your-model-name.0.big.npy`.
If you choose to download from the internet, the models available for download are displayed. Please check the link to the terms of use before using them.
Enter the default value for how much to shift the pitch of the voice. You can also change it during inference. Below is a guideline for the settings.
- +12 for male voice to female voice conversion
- -12 for female voice to male voice conversion
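These guidelines follow from simple arithmetic: a shift of n semitones multiplies the fundamental frequency by 2^(n/12), so +12 doubles the pitch (up one octave) and -12 halves it. A one-line sketch:

```python
# Semitone shift -> frequency ratio: 2 ** (n / 12).
def pitch_ratio(semitones: int) -> float:
    return 2.0 ** (semitones / 12.0)

print(pitch_ratio(12), pitch_ratio(-12))  # 2.0 (one octave up), 0.5 (one octave down)
```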
After setting the above items, press the button to make the model ready for use.
If you chose to download from the internet, the items shown above are displayed. After setting them, press the button to activate the model.
Adjust the pitch of your voice. Below is a guideline for the settings.
- +12 for male voice to female voice conversion
- -12 for female voice to male voice conversion
Specify the rate at which to blend in the features used during training. This takes effect when both feature and index are set in Model Setting. At 0 the HuBERT output is used as is; at 1 it is fully replaced by the original features. If the index ratio is greater than 0, the search may take longer.
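A minimal numpy sketch of this blend, assuming the retrieval step finds the nearest training feature for each frame (done here by brute force for illustration; the real app uses a prebuilt index, and all names are illustrative):

```python
# Blend HuBERT output with retrieved training features by the index ratio.
import numpy as np

def blend_features(hubert_feats, train_feats, ratio):
    """hubert_feats: (frames, ch); train_feats: (n, ch); ratio in [0, 1]."""
    if ratio == 0.0:
        return hubert_feats  # 0: use the HuBERT output as is
    # nearest training feature per frame (brute-force L2 search)
    dists = ((hubert_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=-1)
    retrieved = train_feats[dists.argmin(axis=1)]
    # 1: fully replace the output with the original training features
    return (1.0 - ratio) * hubert_feats + ratio * retrieved
```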
The volume threshold for audio conversion. If the RMS is smaller than this value, no voice conversion is performed and silence is returned instead. (In that case the conversion process is skipped, so the load is lower.)
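A sketch of this gate, with an assumed chunk-processing hook (the threshold and function names are illustrative):

```python
# Skip conversion when the chunk is quieter than the threshold.
import numpy as np

def convert_chunk(chunk: np.ndarray, threshold: float, convert) -> np.ndarray:
    rms = np.sqrt(np.mean(np.square(chunk)))
    if rms < threshold:
        return np.zeros_like(chunk)  # return silence; conversion is skipped
    return convert(chunk)            # otherwise run the voice conversion
```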
Decides how much audio is cut out and converted in one conversion. The higher the value, the more efficient the conversion, but the larger `buf` becomes and the longer the maximum wait before conversion starts. The approximate time is displayed as `buf`.
Determines how much past audio to include in the input when converting. The more past audio is included, the better the conversion accuracy, but the longer `res` becomes as the computation takes more time. (This is probably because the Transformer is the bottleneck: its self-attention compares every frame with every other, so computation time grows with the square of this length.)
Details are here.
If you have 2 or more GPUs, you can choose your GPU here.
Choose between client device mode and server device mode. You can only change it when the voice conversion is stopped.
For more details on each mode, please see here.
Choose an input device.
Choose an output device.
This is only displayed in client device mode.
Audio is recorded from when you press start until you press stop. Pressing this button does not start real-time conversion; for real-time conversion, use start in Server Control.
You can merge models. Set the component amount for each source model to be merged; a new model is created according to the ratio of these amounts.
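One common way to realize such a merge is a weighted average of the models' parameters. The sketch below assumes the source models share the same state_dict layout; the GUI's actual merge logic may differ.

```python
# Merge models as a weighted average of their parameters, by component amount.
import torch

def merge_models(state_dicts, amounts):
    total = sum(amounts)
    weights = [a / total for a in amounts]  # normalize the component amounts
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged
```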
Toggles the browser's built-in noise removal function on or off.
- input: increases or decreases the volume of the audio input to the model. The default value is 1.
- output: increases or decreases the volume of the audio output from the model. The default value is 1.
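Gain here is just a multiplier on the waveform; a minimal sketch (the clipping bound assumes float audio in [-1, 1]):

```python
import numpy as np

def apply_gain(audio: np.ndarray, gain: float) -> np.ndarray:
    # gain = 1.0 leaves the signal unchanged; clip to keep samples in range
    return np.clip(audio * gain, -1.0, 1.0)
```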
Choose an algorithm for extracting the pitch. You can choose from the following types.

- dio: lightweight
- harvest: highly accurate
- crepe: medium accuracy, runs on the GPU
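As an illustration, dio and harvest are available in the pyworld package; the sketch below assumes pyworld's API and float mono audio, and omits crepe (a separate, GPU-oriented package).

```python
# Extract f0 with the lightweight (dio) or accurate (harvest) algorithm.
import numpy as np
import pyworld

def extract_f0(audio: np.ndarray, sr: int, method: str) -> np.ndarray:
    x = audio.astype(np.float64)                # pyworld expects float64
    if method == "dio":
        f0, t = pyworld.dio(x, sr)              # fast, coarse estimate
        return pyworld.stonemask(x, f0, t, sr)  # refine it
    if method == "harvest":
        f0, _ = pyworld.harvest(x, sr)          # slower, more accurate
        return f0
    raise ValueError(f"unknown method: {method}")
```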
Records the input and output on the server side. For the input, the microphone audio is sent to the server and recorded as is; this can be used to check the communication path from the microphone to the server. For the output, the data output from the model is recorded on the server, so you can check how the model behaves (once you have verified that the input is correct).