2. Implemented Methods
All the supported methodologies can be placed into the following four categories.
We also mark supported methodologies with the following tags if they have special designs in the corresponding steps, compared to the standard classifier training process.
Title: Deep One-Class Classification
Method Description
- Pretrain: During this stage, we pretrain a deep convolutional autoencoder (DCAE) for anomaly detection.
- Train: During this stage, we first load the pretrained DCAE into the network and then train the model. The output includes the network and two hyperparameters, C and R, which represent the center and radius of the hypersphere.
- Test: During this stage, we evaluate the method with the AUROC metric.
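The hypersphere scoring above can be sketched as follows. This is a minimal numpy illustration, not OpenOOD's API: `embeddings` stands in for the output of the pretrained DCAE encoder, and the function names are hypothetical.

```python
import numpy as np

def fit_center(embeddings):
    """Center C of the hypersphere: mean of the training embeddings."""
    return embeddings.mean(axis=0)

def soft_boundary_radius(embeddings, c, nu=0.1):
    """Radius R: the (1 - nu)-quantile of training distances to the center."""
    dists = np.linalg.norm(embeddings - c, axis=1)
    return np.quantile(dists, 1.0 - nu)

def anomaly_score(x_emb, c, r):
    """Positive score -> outside the hypersphere (anomalous)."""
    return np.linalg.norm(x_emb - c, axis=1) ** 2 - r ** 2

# Toy usage: normal embeddings clustered at the origin, one far-away outlier.
rng = np.random.default_rng(0)
train_emb = rng.normal(0.0, 1.0, size=(500, 8))
c = fit_center(train_emb)
r = soft_boundary_radius(train_emb, c)
test_emb = np.vstack([rng.normal(0.0, 1.0, size=(1, 8)),   # in-distribution
                      np.full((1, 8), 10.0)])              # clear outlier
scores = anomaly_score(test_emb, c, r)
```

The outlier receives a much larger score than the in-distribution point, which is the ranking AUROC measures.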
OpenOOD Implementation
- `train_ad_pipeline.py`: pretrains the DCAE
- `train_dsvdd_pipeline.py`: trains and finally tests the model
- `dsvdd_net.py`: defines the DCAE and DSVDD networks
- `dsvdd_trainer.py`: trainer for DCAE and DSVDD
- `dsvdd_evaluator.py`: evaluator for DCAE and DSVDD
Script
Pretrain the DCAE:
`sh scripts/a_anomaly/0_dsvdd_pretrain.sh`
Train DSVDD:
`sh scripts/a_anomaly/0_dsvdd_train.sh`
Result
- Note: In the original DSVDD code, the training dataset is normalized with dataset-specific means and stds. For example, when the normal dataset is CIFAR-10 class 3, the normalization dict is means [-31.7975, -31.7975, -31.7975] and stds [42.8907, 42.8907, 42.8907]. In addition, a global_contrast_normalization method is used in the transform. The expected results are shown below.
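Global contrast normalization subtracts the per-image mean and rescales by a global contrast statistic. A minimal sketch of one common formulation (the exact transform in the original DSVDD code may differ in its scale choice):

```python
import numpy as np

def global_contrast_normalization(x, scale='l1'):
    """Normalize one image: zero-mean, then divide by a global contrast scale."""
    x = x.astype(np.float64)
    x = x - x.mean()                      # remove the image's mean intensity
    if scale == 'l1':
        norm = np.abs(x).mean()           # mean absolute deviation
    else:                                 # 'l2'
        norm = np.sqrt((x ** 2).mean())   # root-mean-square contrast
    return x / norm

img = np.arange(12, dtype=np.float64).reshape(3, 4)
out = global_contrast_normalization(img, scale='l1')
```

After this transform the image has zero mean and unit mean absolute deviation, which explains the unusual means/stds quoted above.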
| Normal class | 3 | 3 |
|---|---|---|
| Method | DCAE | DSVDD |
| Expected AUROC | 58.40 | 59.10 |
| AUROC | 63.43 | 60.44 |
Title: Multiresolution Knowledge Distillation for Anomaly Detection
Overview:
- Train: During the training stage, we introduce two VGG networks, one of which, called the source network, is pretrained. In each training epoch, ID data is fed in, the differences between specific layers of the clone network and the source network are computed, and the loss is formed from them. The clone network is optimized with SGD.
- Test: During the testing stage, we evaluate the anomaly detection method with AUROC.
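The per-layer distillation loss can be sketched as a value term (MSE between activations) plus a directional term (1 - cosine similarity), summed over the selected layers. This is an illustrative numpy sketch with hypothetical names, not the exact loss in `kdad_losses.py`:

```python
import numpy as np

def layer_distillation_loss(f_src, f_clone, lam=0.5):
    """Value term (MSE) plus directional term (1 - cosine similarity)
    for one pair of source/clone activations."""
    value = np.mean((f_src - f_clone) ** 2)
    s, c = f_src.ravel(), f_clone.ravel()
    cos = s @ c / (np.linalg.norm(s) * np.linalg.norm(c) + 1e-12)
    return value + lam * (1.0 - cos)

def total_loss(src_feats, clone_feats):
    """Sum the loss over the selected layers."""
    return sum(layer_distillation_loss(a, b)
               for a, b in zip(src_feats, clone_feats))

# Toy usage: three matched activation pairs.
rng = np.random.default_rng(0)
feats = [rng.normal(size=(4, 16)) for _ in range(3)]
perfect = total_loss(feats, [f.copy() for f in feats])            # identical clone
noisy = total_loss(feats, [f + rng.normal(size=f.shape) for f in feats])
```

A perfectly matching clone yields zero loss; any deviation from the source activations raises it, which is exactly the signal used to score anomalies at test time.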
Keypoints:
- `train_ad_pipeline.py`: training stage
- `ad_test_pipeline.py`: testing stage
- `kdad_trainer.py`: trainer
- `kdad_evaluator.py`: evaluator
- `vggnet.py`: source and clone networks
- `kdad_recorder.py`: recorder
- `kdad_losses.py`: defines the loss functions
Script
Train KDAD:
`sh scripts/a_anomaly/1_kdad_train.sh`
Test KDAD detection:
`sh scripts/a_anomaly/1_kdad_test_det.sh`
Result
| Normal class | 3 |
|---|---|
| Expected AUROC | 77.02 |
| AUROC | 86.08 |
Title: A Discriminatively Trained Reconstruction Embedding for Surface Anomaly Detection
Method Description
- Overview: DRAEM stands for Discriminatively trained Reconstruction Anomaly Embedding Model.
- Model Architecture: DRAEM is composed of a reconstructive and a discriminative sub-network.
- The reconstructive sub-network is an encoder-decoder architecture that converts the local patterns of an input image into patterns closer to the distribution of normal samples.
- The discriminative sub-network uses a U-Net-like architecture. Its input is the channel-wise concatenation of the reconstructive sub-network's output and the original input image.
- Training: DRAEM also introduces a method to generate anomalous images for training, which combines noise images produced by a Perlin noise generator with various anomaly source images.
- Inference:
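The anomaly-synthesis step above can be sketched as thresholding a noise image into a mask and blending an anomaly source texture into the normal image under that mask. The sketch below uses cheap bilinear value noise as a stand-in for a real Perlin generator, and all names are illustrative:

```python
import numpy as np

def value_noise(shape, grid=4, rng=None):
    """Stand-in for Perlin noise: bilinear upsampling of a coarse random grid."""
    rng = rng or np.random.default_rng()
    coarse = rng.random((grid, grid))
    ys = np.linspace(0, grid - 1, shape[0])
    xs = np.linspace(0, grid - 1, shape[1])
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, grid - 1), np.minimum(x0 + 1, grid - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = coarse[y0][:, x0] * (1 - wx) + coarse[y0][:, x1] * wx
    bot = coarse[y1][:, x0] * (1 - wx) + coarse[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def synthesize_anomaly(image, source, thresh=0.6, beta=0.3, rng=None):
    """Blend an anomaly-source texture into `image` where the noise
    exceeds `thresh`; `beta` controls the blend opacity."""
    mask = (value_noise(image.shape, rng=rng) > thresh).astype(image.dtype)
    blended = (1 - beta) * source + beta * image
    return (1 - mask) * image + mask * blended, mask

rng = np.random.default_rng(0)
img = np.zeros((32, 32))          # toy "normal" image
src = np.ones((32, 32))           # toy anomaly source texture
aug, mask = synthesize_anomaly(img, src, rng=rng)
```

The mask doubles as the ground-truth segmentation target for the discriminative sub-network, so training needs no real defect annotations.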
OpenOOD Implementation
- `train_ad_pipeline.py`: training pipeline for DRAEM
- `test_ad_pipeline.py`: testing pipeline for DRAEM
- `draem_preprocessor.py`: preprocessor for DRAEM, including the new augmentation method
- `draem_networks.py`: defines both DRAEM sub-networks
- `draem_loss.py`: defines the loss functions needed for training DRAEM
- `draem_evaluator.py`: defines the evaluation method, which evaluates on both good and anomalous samples
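The DRAEM paper trains the reconstructive sub-network with l2 and SSIM reconstruction losses and the discriminative sub-network with a focal segmentation loss. A simplified sketch (SSIM omitted, names hypothetical) of how such terms combine:

```python
import numpy as np

def focal_loss(prob, target, alpha=1.0, gamma=2.0, eps=1e-7):
    """Focal loss on per-pixel anomaly probabilities; down-weights easy pixels."""
    prob = np.clip(prob, eps, 1 - eps)
    pt = np.where(target == 1, prob, 1 - prob)    # probability of the true class
    return float(np.mean(-alpha * (1 - pt) ** gamma * np.log(pt)))

def draem_style_loss(image, recon, seg_prob, seg_target, seg_weight=1.0):
    """l2 reconstruction term + focal segmentation term (SSIM term omitted)."""
    l2 = np.mean((image - recon) ** 2)
    return l2 + seg_weight * focal_loss(seg_prob, seg_target)

# Toy usage: a small image with a 2x2 synthetic anomaly mask.
img = np.zeros((8, 8))
mask = np.zeros((8, 8)); mask[2:4, 2:4] = 1
good = draem_style_loss(img, img, np.where(mask == 1, 0.99, 0.01), mask)
bad = draem_style_loss(img, img + 1.0, np.where(mask == 1, 0.01, 0.99), mask)
```

Accurate reconstruction and confident, correct segmentation drive the loss toward zero; poor reconstruction and inverted predictions inflate it.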
Script
sh code
Result
| | AUROC | AP |
|---|---|---|
| bottle | 99.2 / 99.1 | 90.7 / 86.5 |
| carpet | 97.0 / 95.5 | 63.8 / 53.5 |
| leather | 97.9 / 98.6 | 70.2 / 75.3 |
Title: Towards Total Recall in Industrial Anomaly Detection
Method Description
- Overview: PatchCore detects anomalies by comparing locally aggregated patch features of a test image against a memory bank of patch features extracted from normal training images with a frozen, ImageNet-pretrained backbone.
- Training: Patch features from intermediate backbone layers are collected over the normal training set, and the memory bank is reduced by greedy coreset subsampling to keep inference tractable.
- Inference: Each test patch is scored by the distance to its nearest neighbor in the memory bank; the image-level anomaly score is derived from the maximum patch score.
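The memory-bank construction and nearest-neighbor scoring at the heart of PatchCore can be sketched as follows (greedy k-center coreset plus 1-NN patch scoring). This is an illustrative numpy sketch with hypothetical names, not OpenOOD's implementation:

```python
import numpy as np

def greedy_coreset(feats, m, rng=None):
    """Greedy k-center coreset: repeatedly pick the feature farthest
    from everything selected so far."""
    rng = rng or np.random.default_rng()
    selected = [int(rng.integers(len(feats)))]
    min_d = np.linalg.norm(feats - feats[selected[0]], axis=1)
    while len(selected) < m:
        idx = int(np.argmax(min_d))              # farthest remaining point
        selected.append(idx)
        min_d = np.minimum(min_d, np.linalg.norm(feats - feats[idx], axis=1))
    return feats[selected]

def patch_scores(test_feats, memory_bank):
    """Score each test patch by distance to its nearest memory-bank feature."""
    d = np.linalg.norm(test_feats[:, None, :] - memory_bank[None, :, :], axis=-1)
    return d.min(axis=1)

# Toy usage: normal patch features near the origin, one anomalous patch far away.
rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(1000, 16))
bank = greedy_coreset(normal_feats, 100, rng=rng)
test = np.vstack([rng.normal(0.0, 1.0, size=(1, 16)),   # normal-looking patch
                  np.full((1, 16), 8.0)])               # anomalous patch
s = patch_scores(test, bank)
image_score = s.max()     # image-level score: max over patch scores
```

The coreset keeps the bank small while preserving coverage of the normal feature distribution, which is what makes nearest-neighbor scoring affordable at test time.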