
THESE FILES BELONG TO THE SUPPLEMENTARY MATERIAL PROVIDED WITH THE ICIP 2016 PAPER
Deep Neural Networks Under Stress


You can find supplementary results and high resolution images on Micael Carvalho's webpage.


Hello!

Thank you for downloading our stress framework. Please feel free to contact us if you have any questions or comments: micael.carvalho[at]lip6.fr

Here we provide a simple implementation of our stresses. We define a file format that you can use, or you can create your own format and reimplement the functions inside common.m to work with it.

GETTING STARTED

Inside the source folder you will find 4 source files and 1 folder, described below:

  • common.m Provides the basic input/output functions to read and save feature vectors;
  • A_generate_dimensionality_reduction.m Generates a .mat file that describes a dimensionality reduction transformation;
  • A_generate_quantization.m Generates a .mat file that describes a quantization transformation;
  • B_apply_transformation.m Using one .mat file generated by the other files, it applies the transformation to feature vectors;
  • extra A folder containing a simple code for an image descriptor.

PREPARING YOUR DATASET

To prepare your dataset, you can start by describing your images with any available descriptor (a deep model, bag-of-visual-words, etc.). Make sure your feature vector files are in .mat format, with a variable named "feature_vector" inside. If you want, you can use the VGG-M descriptor, provided in the "extra" folder, to do so.

After that, you just have to put your .mat files in a folder.
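
As a minimal sketch of what this looks like (the descriptor function my_descriptor and all paths here are hypothetical; only the variable name "feature_vector" is required by the framework), you could build such a folder like this:

% Hypothetical sketch: save one feature vector per image, each as a .mat file
% containing a variable named "feature_vector".
image_files = dir(fullfile('dataset', 'images', 'train', '*.jpg'));
for i = 1:numel(image_files)
    img = imread(fullfile('dataset', 'images', 'train', image_files(i).name));
    feature_vector = my_descriptor(img); % e.g. the VGG-M descriptor from the "extra" folder
    [~, name, ~] = fileparts(image_files(i).name);
    save(fullfile('dataset', 'features', 'train', [name '.mat']), 'feature_vector');
end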

STRESSING IT

First, you have to decide which stress you are applying:

  1. Dimensionality Reduction;
  2. Quantization;
  3. Both.

FOR (1)

For (1), you have to run the following command in MATLAB:

A_generate_dimensionality_reduction(
output_folder,
experiment_name,
path_to_train_folder_with_feature_vector_files,
number_of_the_experiment,
number_of_dimensions_to_keep,
base_reduction);

Where:

  • output_folder = path to save the transformation files, necessary for applying the transformation later;
  • experiment_name = name of the experiment;
  • path_to_train_folder_with_feature_vector_files = path to the train folder; the transformation will be calculated from it;
  • number_of_the_experiment = random (1) or PCA-based (2);
  • number_of_dimensions_to_keep = number of desired dimensions to preserve;
  • base_reduction = full path to the transformation file of a previous reduction, used to ensure the new reduction is contained in the previous one (optional).

This command generates a .mat file describing the desired transformation, which can later be applied to the feature vectors.

Make sure the third parameter points to the TRAIN folder; otherwise you will mix train and test images and contaminate the experiment.

After running the command, you will find in the output_folder a file named <experiment_name>.mat. This is the transformation file, keep it safe for now.
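
As a hypothetical example (all paths and the experiment name are placeholders), a PCA-based reduction to 128 dimensions could be generated with:

% Hypothetical example: PCA-based reduction (experiment 2) keeping 128 dimensions.
A_generate_dimensionality_reduction('transformations', 'pca_128', ...
    'dataset/features/train', 2, 128);

This would produce transformations/pca_128.mat.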

Now, to actually apply the transformation to your feature vectors, just call the following command:

B_apply_transformation(
files_output_folder,
path_to_input_folder_or_single_feature_vector,
full_path_to_the_transformation_file);

Where:

  • files_output_folder = where to save the transformed descriptors;
  • path_to_input_folder_or_single_feature_vector = path to a folder or to a single image descriptor; the transformation will be applied to all files in the folder, or to the single indicated descriptor;
  • full_path_to_the_transformation_file = full path to the transformation file generated in step A; it should be a .mat file.

After running this command, you will find the stressed feature vectors in your files_output_folder. And there you go! They're ready for your classifier.
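
Continuing the hypothetical example above (paths are placeholders), applying the reduction to a folder of test descriptors could look like this:

% Hypothetical example: apply the reduction to every .mat file in the test folder.
B_apply_transformation('dataset/features_reduced/test', ...
    'dataset/features/test', 'transformations/pca_128.mat');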

Finally: if you are running the random experiment multiple times, we recommend removing dimensions gradually. When generating the transformation files, you can pass an extra parameter with the full path to your previous transformation file; this ensures that a subset of the already selected dimensions is kept. This is important for reducing the effects of randomness, so that the observed performance drop better reflects the reduction of dimensionality.
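
For instance, a sketch of two chained random reductions (all names and dimensions are arbitrary) could be:

% Hypothetical example: a random reduction to 256 dimensions, followed by a random
% reduction to 128 dimensions constrained to reuse the already selected dimensions.
A_generate_dimensionality_reduction('transformations', 'rand_256', ...
    'dataset/features/train', 1, 256);
A_generate_dimensionality_reduction('transformations', 'rand_128', ...
    'dataset/features/train', 1, 128, 'transformations/rand_256.mat');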


FOR (2)

For (2), you have to run the following command in MATLAB:

A_generate_quantization(
output_folder,
experiment_name,
path_to_train_folder_with_feature_vector_files,
number_of_the_experiment,
number_of_values_to_keep);

Where:

  • output_folder = path to save the transformation files, necessary for applying the transformation later;
  • experiment_name = name of the experiment;
  • path_to_train_folder_with_feature_vector_files = path to the train folder; the transformation will be calculated from it;
  • number_of_the_experiment = single-dictionary (1) or multiple-dictionaries (2);
  • number_of_values_to_keep = number of desired values in the dictionary.

After running the command, you will find in the output_folder a file named <experiment_name>.mat. This is the transformation file, keep it safe for now.
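
As a hypothetical example (paths and the experiment name are placeholders), a single-dictionary quantization with 16 values could be generated with:

% Hypothetical example: single-dictionary quantization (experiment 1) with 16 values.
A_generate_quantization('transformations', 'quant_16', ...
    'dataset/features/train', 1, 16);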

Now, to actually apply the transformation to your feature vectors, just call the following command:

B_apply_transformation(
files_output_folder,
path_to_input_folder_or_single_feature_vector,
full_path_to_the_transformation_file);

Where:

  • files_output_folder = where to save the transformed descriptors;
  • path_to_input_folder_or_single_feature_vector = path to a folder or to a single image descriptor; the transformation will be applied to all files in the folder, or to the single indicated descriptor;
  • full_path_to_the_transformation_file = full path to the transformation file generated in step A; it should be a .mat file.

After running this command, you will find the stressed feature vectors in your files_output_folder. And there you go! They're ready for your classifier.
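
Continuing the hypothetical example (paths are placeholders), the quantization could be applied with:

% Hypothetical example: quantize every .mat file in the test folder.
B_apply_transformation('dataset/features_quantized/test', ...
    'dataset/features/test', 'transformations/quant_16.mat');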


FOR (3)

For (3), you have to start by running the following command:

A_generate_dimensionality_reduction(
output_folder,
experiment_name,
path_to_train_folder_with_feature_vector_files,
number_of_the_experiment,
number_of_dimensions_to_keep,
base_reduction);

Where:

  • output_folder = path to save the transformation files, necessary for applying the transformation later;
  • experiment_name = name of the experiment;
  • path_to_train_folder_with_feature_vector_files = path to the train folder; the transformation will be calculated from it;
  • number_of_the_experiment = random (1) or PCA-based (2);
  • number_of_dimensions_to_keep = number of desired dimensions to preserve;
  • base_reduction = full path to the transformation file of a previous reduction, used to ensure the new reduction is contained in the previous one (optional).

This command generates a .mat file describing the desired transformation, which can later be applied to the feature vectors.

Make sure the third parameter points to the TRAIN folder; otherwise you will mix train and test images and contaminate the experiment.

After running the command, you will find in the output_folder a file named <experiment_name>.mat. This is the transformation file, keep it safe for now.

Now, to actually apply the transformation to your feature vectors, just call the following command:

B_apply_transformation(
files_output_folder,
path_to_input_folder_or_single_feature_vector,
full_path_to_the_transformation_file);

Where:

  • files_output_folder = where to save the transformed descriptors;
  • path_to_input_folder_or_single_feature_vector = path to a folder or to a single image descriptor; the transformation will be applied to all files in the folder, or to the single indicated descriptor;
  • full_path_to_the_transformation_file = full path to the transformation file generated in step A; it should be a .mat file.

After running this command, you will find the feature vectors stressed with (1) in your files_output_folder. Now we apply the second transformation by running the following command:

A_generate_quantization(
output_folder,
experiment_name,
path_to_train_folder_with_feature_vector_files,
number_of_the_experiment,
number_of_values_to_keep);

Where:

  • output_folder = path to save the transformation files, necessary for applying the transformation later;
  • experiment_name = name of the experiment;
  • path_to_train_folder_with_feature_vector_files = path to the train folder with the files that were ALREADY TRANSFORMED BY THE DIMENSIONALITY REDUCTION; the transformation will be calculated from it;
  • number_of_the_experiment = single-dictionary (1) or multiple-dictionaries (2);
  • number_of_values_to_keep = number of desired values in the dictionary.

After running the command, you will find in the output_folder a file named <experiment_name>.mat. This is the transformation file, keep it safe for now.

Now, to actually apply the transformation to your feature vectors, just call the following command:

B_apply_transformation(
files_output_folder,
path_to_input_folder_or_single_feature_vector,
full_path_to_the_transformation_file);

Where:

  • files_output_folder = where to save the transformed descriptors;
  • path_to_input_folder_or_single_feature_vector = path to a folder or to a single image descriptor ALREADY TRANSFORMED BY THE DIMENSIONALITY REDUCTION; the transformation will be applied to all files in the folder, or to the single indicated descriptor;
  • full_path_to_the_transformation_file = full path to the transformation file generated in step A; it should be a .mat file.

After running this command, you will find the stressed feature vectors in your files_output_folder. And there you go! They're ready for your classifier.
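
Putting the two stresses together, a hypothetical end-to-end run of (3) could look like this (all paths, names, and sizes are placeholders):

% 1) Generate the dimensionality reduction on the train features and apply it.
A_generate_dimensionality_reduction('transformations', 'pca_128', ...
    'dataset/features/train', 2, 128);
B_apply_transformation('dataset/features_reduced/train', ...
    'dataset/features/train', 'transformations/pca_128.mat');
B_apply_transformation('dataset/features_reduced/test', ...
    'dataset/features/test', 'transformations/pca_128.mat');

% 2) Generate the quantization on the ALREADY REDUCED train features and apply it.
A_generate_quantization('transformations', 'pca_128_quant_16', ...
    'dataset/features_reduced/train', 1, 16);
B_apply_transformation('dataset/features_final/train', ...
    'dataset/features_reduced/train', 'transformations/pca_128_quant_16.mat');
B_apply_transformation('dataset/features_final/test', ...
    'dataset/features_reduced/test', 'transformations/pca_128_quant_16.mat');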
