Arbitrary style transformation of an input image, video, or camera source.
This code base has been developed by ZKM | Hertz-Lab as part of the project »The Intelligent Museum«.
Please raise issues, ask questions, throw in ideas or submit code, as this repository is intended to be an open platform to collaboratively improve this project.
Copyright (c) 2022 ZKM | Karlsruhe.
Copyright (c) 2022 Dan Wilcox.
GPL v3 License.
Parts adapted from ofxTensorFlow2 example_style_transfer_arbitrary
under a BSD Simplified License.
Dependencies:

- openFrameworks
- openFrameworks addons:
  - ofxTensorFlow2
- CLI11 parser (included in `src`)
- pre-trained Arbitrary Style Transfer model (download separately)
Directory structure:

- `src/`: contains the openFrameworks C++ code
- `bin/data/model`: contains the model trained with TensorFlow2
- `bin/data/style`: input style images
- `bin/data/image`: input images
- `bin/data/video`: input videos
- `bin/data/output`: saved output images
Overview:
- Download the required TF2 model
- Follow the steps in the ofxTensorFlow2 "Installation & Build" section for your platform
- Generate the project files for this folder using the OF ProjectGenerator
- Build for your platform
A pre-trained style transfer model can be downloaded as a `model_style_transfer_arbitrary.zip` file from the public shared link here:

https://cloud.zkm.de/index.php/s/gfWEjyEr9X4gyY6

Unzip the file and place the `model` directory into the Styler `bin/data` directory.
To make this quick, a script is provided to download and install the model (requires a Unix shell, curl, and unzip):
./scripts/download_model.sh
Project files are not included so you will need to generate the project files for your operating system and development environment using the OF ProjectGenerator which is included with the openFrameworks distribution.
To (re)generate project files for an existing project:
- Click the "Import" button in the ProjectGenerator
- Navigate to the project's parent folder, i.e. "apps/myApps", select the base folder for the project, i.e. "Styler", and click the Open button
- Click the "Update" button
If everything went OK, you should now be able to open the generated project and build/run the app.
Note: Some platforms require additional steps before / after project file generation. See following sections.
On macOS, a couple of additional manual steps are required to use ofxTensorflow2:
- Enable C++14 (minimum) or C++17 in openFrameworks (only once, Xcode + Makefile). See the detailed steps in the ofxTensorflow2 readme.
- Generate Styler project files with the ProjectGenerator
- Close the Styler project in Xcode if it's open, then run `configure_xcode.sh` to configure the Xcode project (after every project regeneration, Xcode only)*:

scripts/configure_xcode.sh .

*Note: the `.` is important, it means "this directory."
For an Xcode build, open the Xcode project, select the "Styler Debug" scheme, and hit "Run".
For a Makefile build, build and run on the terminal:
make ReleaseTF2
make RunRelease
An additional step is required before generating the project files:
- Run `configure_makefile.sh` to configure the project (only needed once)*:

scripts/configure_makefile.sh .

- Generate Styler project files with the ProjectGenerator

*Note: the `.` is important, it means "this directory."
Next, build and run on the terminal:
make Release
make RunReleaseTF2
Styler applies a given style image onto an input image. The input image can come from one of three input sources: static image(s), video frames, or camera frames.
Styler starts in windowed-mode and uses the camera source by default.
Styler lists image and video paths automatically on start from the following directories:
- `bin/data/style`: style images, jpg or png (required)
- `bin/data/image`: input images, jpg or png, all paths added to playlist
- `bin/data/video`: input video files, mov or mp4 or avi, all paths added to playlist
Simply add/remove files from each and restart the application. Order is sorted by filename.
Note: a minimum of 1 image must be in the style directory, otherwise Styler will exit on start due to missing input. If the input image or video directories are empty, the respective source will be disabled.
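For reference, this kind of startup scan maps naturally onto openFrameworks' `ofDirectory` API. The following is a minimal sketch under that assumption; the helper name is hypothetical and this is not Styler's actual code:

```cpp
#include "ofMain.h"

// minimal sketch: collect sorted image paths from a directory
// relative to bin/data, e.g. "style" or "image" (hypothetical helper)
std::vector<std::string> listImagePaths(const std::string & dir) {
	ofDirectory d(dir);
	d.allowExt("jpg"); // accepted file extensions
	d.allowExt("png");
	d.listDir();       // read the directory contents
	d.sort();          // order is by filename
	std::vector<std::string> paths;
	for(std::size_t i = 0; i < d.size(); ++i) {
		paths.push_back(d.getPath(i));
	}
	return paths;
}
```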
While running, any images drag & dropped onto the Styler window will be loaded as a new style.
Styler can also accept new style images from the current input source or, optionally, a second live camera input. See the "Commandline Options" section for info on enabling the style camera.

If style saving is enabled, either via the commandline option or key command, the style input image is saved to the `bin/data/output-style` directory when a new style frame is taken. A red indicator is drawn in the upper right corner if style saving is on.
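As a rough illustration of the style-save step, writing a taken style frame to disk with openFrameworks could look like the sketch below; the function, flag, and timestamped filename are assumptions, not Styler's implementation:

```cpp
#include "ofMain.h"

// hypothetical sketch: save the current style frame to bin/data/output-style
// with a timestamped filename when style saving is enabled
void saveStyleFrame(const ofPixels & stylePixels, bool styleSaveEnabled) {
	if(!styleSaveEnabled) {return;}
	std::string path = "output-style/" + ofGetTimestampString() + ".png";
	ofSaveImage(stylePixels, path); // path is relative to bin/data
	ofLogNotice("Styler") << "saved style to " << path;
}
```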
Key commands:

- `d`: toggle debug mode, shows on-screen help
- `v`: video input
- `c`: camera input
- `i`: image input
- `m`: mirror camera / (shift) style camera
- `n`: flip camera / (shift) style camera
- `r`: restart video
- `f`: toggle fullscreen
- `s`: save output image to `bin/data/output` / (shift) toggle style save to `bin/data/output-style`
- `k`: toggle style input mode
- `p`: toggle style input pip (picture in picture)
- `a`: toggle auto style change after last frame
- `LEFT`: previous style
- `RIGHT`: next style
- `SPACE`: toggle playback / take style image
- `UP`: next frame, when paused / (shift) next video
- `DOWN`: previous frame, when paused / (shift) prev video
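For orientation, key handling in openFrameworks goes through `ofApp::keyPressed()`. The sketch below shows how a few of the commands above might be wired up; the member names (`debug`, `prevStyle()`, etc.) are placeholders, not Styler's actual source:

```cpp
// hypothetical excerpt of ofApp::keyPressed()
void ofApp::keyPressed(int key) {
	switch(key) {
		case 'd': debug = !debug; break;       // toggle debug / on-screen help
		case 'f': ofToggleFullscreen(); break; // toggle fullscreen
		case OF_KEY_LEFT:  prevStyle(); break; // previous style
		case OF_KEY_RIGHT: nextStyle(); break; // next style
		default: break;
	}
}
```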
Example: inputting style from main camera
- Press `c` to choose camera input
- Press `k` to enter style input mode, the camera feed will no longer be styled
- Present an object or image to the camera, try to cover most of the camera's view
- Press `SPACE` to take a snapshot as the style image
- (Optional) Press `s` to save the current styled output image
When enabled, automatic style change will go to the next style based on the input source:
- camera: every 20 seconds
- image: when changing from the last image to the first image
- video: when changing from the last frame to the first frame
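A minimal sketch of the camera-source timing, assuming a configurable `autoTime` member (20 seconds by default) and a hypothetical `nextStyle()` helper; this is illustrative, not Styler's actual update loop:

```cpp
// hypothetical excerpt of ofApp::update(): when auto style change is on
// and the camera is the input source, advance to the next style every
// autoTime seconds (set via --auto-time, default 20)
void ofApp::update() {
	if(autoChange && source == Source::CAMERA) {
		float now = ofGetElapsedTimef();
		if(now - lastStyleTime >= autoTime) {
			nextStyle();
			lastStyleTime = now;
		}
	}
	// ... grab the next input frame and run the style transfer model
}
```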
Additional runtime settings are available via commandline options, as shown in the `--help` flag output:
% bin/Styler --help
arbitrary style transformation of an input image, video, or camera source
Usage: Styler [OPTIONS]
Options:
-h,--help Print this help message and exit
-f,--fullscreen start in fullscreen
-a,--auto enable auto style change
--auto-time FLOAT set camera auto style change time in s, default 20
-p,--port INT OSC listen port, default none
-l,--list list camera devices and exit
-d,--dev INT camera device number, default 0
-r,--rate INT desired camera framerate, default 30
-s,--size TEXT desired camera size, default 640x480
--mirror mirror camera horizontally
--flip flip camera vertically
--static-size disable dynamic input -> output size handling
--style-dev INT optional second style camera device number
--style-rate INT desired style camera framerate, default 30
--style-size TEXT desired style camera size, default 640x480
--style-mirror mirror style camera horizontally
--style-flip flip style camera vertically
--style-save save style images when taking
--style-pip show style picture in picture
-v,--verbose verbose printing
--version print version and exit
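The options are parsed with the CLI11 library bundled in `src`. As a rough sketch of how a few of them could be declared (placeholder variable names, not Styler's actual source):

```cpp
#include "CLI11.hpp" // single-header CLI11 parser

int main(int argc, char **argv) {
	CLI::App app{"arbitrary style transformation of an input image, video, or camera source"};

	bool fullscreen = false;
	int dev = 0;                  // camera device number
	std::string size = "640x480"; // desired camera size
	float autoTime = 20;          // auto style change time in s

	app.add_flag("-f,--fullscreen", fullscreen, "start in fullscreen");
	app.add_option("-d,--dev", dev, "camera device number, default 0");
	app.add_option("-s,--size", size, "desired camera size, default 640x480");
	app.add_option("--auto-time", autoTime, "set camera auto style change time in s, default 20");

	CLI11_PARSE(app, argc, argv); // prints help and exits on -h/--help

	// ... hand the parsed values over to the openFrameworks app setup
	return 0;
}
```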
For macOS, the application binary can be invoked from within the .app bundle to pass commandline arguments:
bin/Styler.app/Contents/MacOS/Styler -h
or via the system `open` command using the `--args` flag:

open bin/Styler.app --args --dev 1
Note: open
will launch the app without a Terminal window, so console output cannot be read.
This approach can also be wrapped up into a shell alias to be added to the account's ~/.bash_profile
or ~/.zshrc
file:
alias styler="/Applications/Styler.app/Contents/MacOS/Styler"
Close and reopen the shell. The application can now be invoked via:
styler -v --dev 1
Another option is to use a wrapper script, such as the styler.sh
script included with this repo:
./styler.sh -v --dev 1
Note: The styler.sh
script uses the release build "Styler" .app naming. If you are testing with the debug build, edit the APP
variable name to "StylerDebug".
If Styler is started with an OSC port via the -p
or --port
flags, it will receive OSC messages.
Message specification:
- /style/take: take current style if in style input mode or using style camera
- /style/save: save current style image
- /output/save: save current output image
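For a quick test without extra hardware, one of these messages can be sent from another openFrameworks app via ofxOsc. A minimal sketch, assuming Styler was started with `-p 5005` on the same machine (the helper name is hypothetical):

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

// hypothetical helper: trigger a style take in a running Styler instance
void sendStyleTake() {
	ofxOscSender sender;
	sender.setup("localhost", 5005); // host and port match the -p 5005 example
	ofxOscMessage msg;
	msg.setAddress("/style/take");
	sender.sendMessage(msg, false);  // false: send as a single message, not a bundle
}
```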
For example, a hardware style take button can be made using an Arduino microcontroller and some Python scripting. When the button is pressed, a character is sent over the serial connection, which triggers sending an OSC message. Using the serial-button-osc project and its included Arduino sketch, run the Python script with the serial device path and the desired OSC address and port 5005:
Terminal 1:
./styler.sh -p 5005
Terminal 2:
cd serial-button-osc
./serial-button-osc -p 5005 /dev/tty.usbserial-310 /style/take
When Styler is in style input mode, pressing the button will take the current image as the new style image.
- Update changelog
- Update app version in src/config.h define and openFrameworks-Info.plist
- Tag the version commit, e.g. "0.3.0"
- Push commit and tags to server:
git push
git push --tags
- Set Signing & Capabilities Team
- Enable the Hardened Runtime capability, then check "Disable Library Validation" and "Camera"
- Notarize app
- Bundle app and data into Styler-version distribution folder
- Compress distribution folder into Styler-version.dmg disk image
- Sign disk image
Once steps 1-2 are done, the remaining steps are automated via Makefile-mac-dist.mk
using the scripts/release_dmg.sh
wrapper script:
./scripts/release_dmg.sh
If the build fails in Xcode 12 or 13 with a "The Legacy Build System will be removed in a future release." error, disable the associated warning via:
- Open File->Project Settings
- Check "Do not show a diagnostic issue about build system deprecation"
- Click "Done"
This is likely to be fixed via OF version 0.12.
When building for Debug, only the architecture of the system is built. When building for Release, multiple architectures will be targeted (Intel and Arm), however since libtensorflow builds are currently single arch only, the Release build will fail to link due to missing architectures.
The quick fix is to disable building for the non-system architecture in the Xcode project; for example, on an M1 system (arm64) disable building for Intel (x86_64), and vice versa:
- Click on the project in the top left of the project tree
- Click on the project Target, then the Build Settings tab, make sure "All" is selected
- Double-click on Excluded Architectures, and enter the non-system arch, i.e. "x86_64" for an M1 "arm64" system, etc.
Now a Release rebuild should hopefully finish.
An artistic-curatorial field of experimentation for deep learning and visitor participation
The ZKM | Center for Art and Media and the Deutsches Museum Nuremberg cooperate with the goal of implementing an AI-supported exhibition. Together with researchers and international artists, new AI-based works of art will be realized during the next four years (2020-2023). They will be embedded in the AI-supported exhibition in both houses. The project "The Intelligent Museum" is funded by the Digital Culture Programme of the Kulturstiftung des Bundes (German Federal Cultural Foundation), which is in turn funded by the Beauftragte der Bundesregierung für Kultur und Medien (Federal Government Commissioner for Culture and the Media).
As part of the project, digital curating will be critically examined using various approaches of digital art. Experimenting with new digital aesthetics and forms of expression enables new museum experiences and thus new ways of museum communication and visitor participation. The museum is transformed into a place of experience and critical exchange.