Unofficial PyTorch (1.0+) implementation of the NIPS 2017 paper Universal Style Transfer via Feature Transforms.
The original Torch implementation by the authors can be found here.
Other implementations such as Pytorch_implementation1, Pytorch_implementation2, or Pytorch_implementation3 are also available.
This repository provides a pretrained model so that you can generate your own image from a content image and a style image.
If you have any questions, please feel free to contact me. (English, Japanese, or Chinese is fine!)
I also propose a structure-emphasized multimodal style transfer (SEMST); feel free to use it here.
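For reference, the core of the paper's method is a whitening and coloring transform (WCT) applied to VGG encoder features. The sketch below illustrates the idea on a single feature map; it is a minimal, simplified version written for this README, not the exact code used in this repository (the paper applies the transform at several encoder layers in a coarse-to-fine scheme).

```python
import torch

def wct(content_feat, style_feat, alpha=1.0, eps=1e-5):
    """Whitening and coloring transform (WCT) on one encoder feature map.

    content_feat, style_feat: tensors of shape (C, H, W), e.g. VGG relu
    features. Returns a blended feature map of the same shape.
    """
    c, h, w = content_feat.size()
    cf = content_feat.view(c, -1)            # C x (H*W)
    sf = style_feat.view(c, -1)

    # Center both feature sets
    c_mean = cf.mean(dim=1, keepdim=True)
    s_mean = sf.mean(dim=1, keepdim=True)
    cf_centered = cf - c_mean
    sf_centered = sf - s_mean

    # Whitening: remove the correlations of the content features
    c_cov = cf_centered @ cf_centered.t() / (cf_centered.size(1) - 1)
    c_cov += eps * torch.eye(c, device=cf.device)
    u_c, e_c, _ = torch.svd(c_cov)           # SVD of a symmetric PSD matrix
    whitened = u_c @ torch.diag(e_c.pow(-0.5)) @ u_c.t() @ cf_centered

    # Coloring: impose the covariance of the style features instead
    s_cov = sf_centered @ sf_centered.t() / (sf_centered.size(1) - 1)
    s_cov += eps * torch.eye(c, device=sf.device)
    u_s, e_s, _ = torch.svd(s_cov)
    colored = u_s @ torch.diag(e_s.pow(0.5)) @ u_s.t() @ whitened + s_mean

    # alpha blends the stylized features with the original content features
    return (alpha * colored + (1.0 - alpha) * cf).view(c, h, w)
```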
- Python 3.7
- PyTorch 1.0+
- TorchVision
- Pillow
An Anaconda environment is recommended!
- GPU environment (optional)
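For example, a fresh environment could be set up as follows (the environment name and the use of pip here are my own choices, not requirements of this repo):

```bash
conda create -n wct python=3.7
conda activate wct
pip install torch torchvision pillow
```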
- Clone this repository

```bash
git clone https://github.com/irasin/Pytorch_WCT
cd Pytorch_WCT
```
- Prepare your content image and style image. I provide some in the `content` and `style` directories, and you can try them easily.
- Download the pretrained model here and put it under the directory named `model_state`.
- Generate the output image. A transferred output image, a content_output_pair image, and a NST_demo_like image will be generated.
```bash
python test.py -c content_image_path -s style_image_path
```

```
usage: test.py [-h] [--content CONTENT] [--style STYLE] [--output_name OUTPUT_NAME]
               [--alpha ALPHA] [--gpu GPU] [--model_state_path MODEL_STATE_PATH]
```
If `output_name` is not given, the output file name will be a combination of the content image name and the style image name.
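As in the paper, `--alpha` (between 0 and 1) controls the strength of stylization: 1 uses the fully stylized features, while smaller values keep more of the content structure. For example (the image paths below are placeholders, not files shipped with the repo):

```bash
python test.py -c content/my_photo.jpg -s style/my_style.jpg --alpha 0.6 --gpu 0 --output_name result.jpg
```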
Some results using content images and my cat (named Sora) are shown below.