CycleGAN
CycleGAN is a method that can capture the characteristics of one image domain and figure out how these characteristics could be translated into another image domain, all in the absence of any paired training examples (e.g. transforming a horse into a zebra, or apples into oranges). While CycleGAN can potentially be used for any type of image-to-image translation, we illustrate that it can be used to predict what a fluorescent label would look like when imaged using a different imaging modality.
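The key idea that lets CycleGAN learn without paired examples is the cycle-consistency loss: translating an image to the other domain and back should recover the original. The following is a minimal toy sketch of that idea in plain Python; the lambda "generators" and 1-D pixel lists are hypothetical stand-ins for the real neural networks and images, not the notebook's actual implementation.

```python
# Toy sketch of CycleGAN's cycle-consistency loss (L1 distance between
# an image and its round-trip reconstruction A -> B -> A).

def cycle_consistency_loss(g_ab, g_ba, batch_a):
    """Mean L1 loss: a -> G_AB(a) -> G_BA(G_AB(a)) should recover a."""
    total = 0.0
    for a in batch_a:
        reconstructed = g_ba(g_ab(a))
        total += sum(abs(x - y) for x, y in zip(a, reconstructed))
    return total / len(batch_a)

# Hypothetical "generators": simple intensity shifts standing in for
# the real domain-translation networks ("horse -> zebra" and back).
g_ab = lambda img: [x + 0.5 for x in img]
g_ba = lambda img: [x - 0.5 for x in img]

batch = [[0.1, 0.2, 0.3]]
loss = cycle_consistency_loss(g_ab, g_ba, batch)
```

Because these toy generators invert each other exactly, the loss is (numerically) zero; during real training, minimising this term pushes the two learned generators towards being mutual inverses.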
Our CycleGAN notebook is based on the following paper:
The source code of the original CycleGAN PyTorch implementation can be found at: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
Please also cite the original paper when using or developing our notebook.
To train CycleGAN, you only need two folders containing PNG images. The images do not need to be paired. The provided training dataset is already split in two folders called Training_source and Training_target.
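Since training only requires two folders of PNG images, it is worth verifying the folder layout before launching the notebook. Below is a small standard-library sketch that checks both folders contain PNG files; the function name and return format are illustrative, not part of the notebook.

```python
from pathlib import Path

def check_training_folders(source_dir, target_dir):
    """Verify that each (unpaired) training folder contains PNG images.

    The folder names Training_source / Training_target match the example
    dataset described above, but any folder paths can be passed in.
    """
    counts = {}
    for name, folder in (("source", Path(source_dir)),
                         ("target", Path(target_dir))):
        pngs = sorted(p.name for p in folder.glob("*.png"))
        if not pngs:
            raise ValueError(f"No .png files found in {folder}")
        counts[name] = len(pngs)  # images need not be paired or equal in number
    return counts
```

Note that the two folders do not need to contain the same number of images, since CycleGAN trains on unpaired data.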
While you do not need paired images to train CycleGAN, we strongly recommend that you generate a paired dataset if possible. This means acquiring the same image under both conditions. These images can then be used to assess the quality of your trained model (quality control dataset). The quality control assessment can be done directly in the notebook.
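With a paired quality-control dataset, model predictions can be compared against the real target images using standard image-similarity metrics. As a simple illustration, here is a pure-Python peak signal-to-noise ratio (PSNR) on flat lists of pixel values; the notebook's built-in quality control uses its own metrics, so treat this only as a sketch of the idea.

```python
import math

def psnr(prediction, ground_truth, max_val=255.0):
    """Peak signal-to-noise ratio between a predicted image and the real
    acquired image, both given as flat lists of pixel values.

    Higher is better; identical images give infinity.
    """
    mse = sum((p - g) ** 2 for p, g in zip(prediction, ground_truth)) / len(prediction)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```

A paired dataset is what makes this kind of pixel-wise comparison meaningful: without it, there is no ground-truth image to score the prediction against.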
Please note that only .PNG files are currently supported!
| Network | Link to example training and test dataset | Direct link to notebook in Colab |
| --- | --- | --- |
| CycleGAN | here | Coming soon... |
or, to train CycleGAN in Google Colab:

1. Download our streamlined ZeroCostDL4Mic notebooks
2. Open Google Colab
3. Once the notebook is open, follow the instructions.