A TensorFlow implementation of fast neural style: convert the model file to a .pb file, then deploy it in an Android app!
A TensorFlow implementation of Perceptual Losses for Real-Time Style Transfer and Super-Resolution.
The Python code here is forked from hzy46/fast-neural-style-tensorflow.
The eval.py file was edited to produce a .pb model file while converting the image.
We then deployed the .pb model in Camera_transfer.py, which uses OpenCV and a camera to do real-time style transfer.
Create_pb_file_4Android.py creates the .pb model used in the AG_Group_tensorflow4Android project, which was built with Android Studio.
project | style | sample
---|---|---
Real-time style transfer | |
Android_project | |
- Python 2.7.x or Python 3.5.x
- Tensorflow >= 1.0
- python-opencv
- pyyaml
- Android Studio
- JDK
You can download all seven trained models from Baidu Drive.
To generate a sample from the model "wave.ckpt-done", run:

```shell
cd "fast-neural-style-train&test"
python eval.py --model_file <your path to wave.ckpt-done> --image_file img/test.jpg
```

Then check out generated/res.jpg for the transfer result and models/wave_small1.pb for the .pb model file.
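The export step in eval.py presumably "freezes" the checkpointed graph so that weights and graph structure end up in a single .pb file. A minimal sketch of the standard TensorFlow 1.x route (the function name and the output-node argument are hypothetical; the actual node names are whatever eval.py exports):

```python
def freeze_to_pb(sess, output_node_names, pb_path):
    """Fold the session's variables into constants and serialize the
    resulting GraphDef to a single .pb file (TensorFlow 1.x API).
    `output_node_names` is a list of graph node names, e.g. a
    hypothetical ['transform/output']."""
    import tensorflow as tf  # imported here so the sketch stays optional
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with open(pb_path, 'wb') as f:
        f.write(frozen.SerializeToString())
```

A frozen graph like this is what both Camera_transfer.py and the Android app can load without needing checkpoint files.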
To run real-time style transfer, set your model file path on line 16 of Camera_transfer.py, then run:

```shell
cd "fast-neural-style-train&test"
python Camera_transfer.py
```
To train a model from scratch, first download the VGG16 model from Tensorflow Slim and extract the file vgg_16.ckpt. Then copy it to the folder pretrained/:

```shell
cd "fast-neural-style-train&test"
mkdir pretrained
cp <your path to vgg_16.ckpt> pretrained/
```
Then download the COCO dataset and unzip it; you will get a folder named "train2014" containing many raw images. Create a symbolic link to it:

```shell
cd <this repo>
ln -s <your path to the folder "train2014"> train2014
```
Train the "wave" model:

```shell
python train.py -c conf/wave.yml
```
(Optional) Use TensorBoard:

```shell
tensorboard --logdir models/wave/
```
Checkpoints will be written to "models/wave/".
See the configuration file conf/wave.yml for details.
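The YAML configuration is where the style image, run name, and loss weights are set. A hypothetical sketch of the kind of fields such a file contains (field names and values are illustrative; consult the actual conf/wave.yml for the real schema):

```yaml
# Illustrative only -- check conf/wave.yml for the real field names.
style_image: img/wave.jpg   # the style target image
naming: wave                # run name; checkpoints go to models/wave/
style_weight: 220.0         # relative weight of the style loss
content_weight: 1.0         # relative weight of the content loss
image_size: 256             # training crop size
batch_size: 4
```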
To compile the Android project, open AG_Group_tensorflow4Android, set your SDK path, and click Run. If you want to use your own images or photos as style images, train a .ckpt model file, then run:

```shell
cd "fast-neural-style-train&test"
python Create_pb_file_4Android.py --model_file <your path to .ckpt> --image_file <path to any .jpg or .png picture>
```
Then put the .pb model file into AG_Group_tensorflow4Android/app/src/main/assets/ and add your button and Java code in MainActivity.java.
Remember to add a corresponding picture for your own button.
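Before bundling a new .pb into assets/, it can save an Android debug cycle to confirm that the frozen graph at least parses and to see what the node names are. A small sketch (TensorFlow 1.x API; the helper name is my own):

```python
def check_frozen_graph(pb_path):
    """Parse a frozen GraphDef from disk and return its node names,
    so the input/output names fed to the Android inference code can
    be verified (TensorFlow 1.x API)."""
    import tensorflow as tf  # imported here so the sketch stays optional
    graph_def = tf.GraphDef()
    with open(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    return [n.name for n in graph_def.node]
```

The names this returns are the ones the Java side must use when feeding the camera frame and fetching the stylized output.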
The AG logo button on the left lets you switch cameras and reset the app after the transfer is done.
On line 56 of MainActivity.java, I rotate the output by 90 degrees; some phones may not need this rotation, so delete that line or change the parameter to suit your phone.
If you like our project, please give us a star. Thanks!
Tip: this project was made for dear Sansa, my dear lover! (You can also change the text shown in the app on line 13 of activity_main.xml to surprise your girlfriend XD)