Glossary
Romain F. Laine edited this page Mar 14, 2020
| Term | Our definition of it |
|---|---|
| Neural network | An analysis network whose structure was inspired by networks found in the brain, namely connected nodes and communication channels. |
| Notebook | A notebook refers to a Jupyter notebook. Notebooks allow Python code to be run interactively, especially in the online environment provided by Google Colab. |
| Epoch | An epoch is one complete pass through the training dataset during the training of a network. Training runs as a sequence of epochs, during each of which the training data are used to improve the network's performance. Typically, at the end of each epoch the loss function is evaluated on the validation dataset. |
| Training | Training is the stage at which the network is presented with a so-called training dataset, from which it learns to perform the task at hand. During training, the network adjusts its own internal parameters and thereby improves its performance. |
| Training dataset | The training dataset allows the network to learn the task (the transformation) that is expected of it. In supervised learning, the training dataset is composed of paired images of the exact same field of view, acquired in the two modalities representing the source and the target of the transformation. For instance, in a denoising task, the source is a noisy image and the target is the equivalent low-noise image. In unsupervised learning, the training dataset can simply be made of examples of the data that will be fed to the network when performing the task. Often, the training dataset determines the type of task that the network will perform. |
| Validation dataset | A small subsection of the training dataset (typically 10-15% of it) used to evaluate the performance of the network after each epoch. It constitutes data that the network never "sees" during training and therefore helps assess how well the network generalises to unseen data. |
| Loss function | A mathematical function that quantitatively compares the network output with the expected target. This quantitative comparison is essential for the network to propagate the observed errors back through its layers and improve its performance. |
| Batches | A batch is a subset of the training dataset that is passed through the network together in a single step. The batch size determines how many samples are processed before the internal parameters are updated. |
| Patches | Patches are small sub-images cropped from the full training images, commonly used so that large microscopy images fit into GPU memory and to increase the number of training examples. |
| Steps | A step is a single update of the network's internal parameters, i.e. the processing of one batch. The number of steps per epoch is therefore the training dataset size divided by the batch size. |
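To show how these terms relate in practice, here is a minimal, self-contained sketch of a training loop in plain NumPy. It is illustrative only and not taken from any ZeroCostDL4Mic notebook; the toy one-layer "network", the data, and all variable names are hypothetical. It shows an epoch (a full pass over the data), batches and steps (parameter updates), a loss function (mean squared error), and the validation dataset held out from the training dataset.

```python
import numpy as np

# Illustrative toy example (NOT a real ZeroCostDL4Mic network): a single
# element-wise linear "layer" trained on paired source/target data.
rng = np.random.default_rng(0)

# Paired training data: noisy source samples and matching "clean" targets.
source = rng.normal(size=(100, 16))
target = source * 0.5

# Validation dataset: hold out ~15% of the training dataset.
n_val = 15
val_src, val_tgt = source[:n_val], target[:n_val]
train_src, train_tgt = source[n_val:], target[n_val:]

weights = np.zeros(16)   # the network's internal parameters
batch_size = 17          # a batch: samples processed per step
lr = 0.1

def loss_fn(pred, tgt):
    # Loss function: quantitative comparison of output and target (MSE here).
    return np.mean((pred - tgt) ** 2)

for epoch in range(5):   # an epoch: one complete pass over the training data
    for i in range(0, len(train_src), batch_size):  # each iteration is one step
        xb = train_src[i:i + batch_size]
        yb = train_tgt[i:i + batch_size]
        pred = xb * weights                             # forward pass
        grad = np.mean(2 * (pred - yb) * xb, axis=0)    # gradient of the loss
        weights -= lr * grad                            # update the parameters
    # At the end of each epoch, evaluate the loss on the validation dataset.
    val_loss = loss_fn(val_src * weights, val_tgt)
    print(f"epoch {epoch}: validation loss = {val_loss:.6f}")
```

The validation loss printed after each epoch should shrink as the parameters converge towards the true transformation (here, scaling by 0.5), which is exactly the signal the notebooks' quality-control steps rely on.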