
The predicted results are very different between using Python and ImageJ? #1

Open
WeisongZhao opened this issue Aug 9, 2019 · 7 comments
Labels: bug (Something isn't working)

WeisongZhao commented Aug 9, 2019

Hi, I have trained a network with Keras and exported it to a .h5 file. I used the code in this repo to generate the .pb model and tried to predict images in ImageJ with your plug-in. It appears that the Python results and the ImageJ predictions are very different. Can you tell me what the potential mistake is? Many thanks!

import keras
from keras import backend as K
import tensorflow as tf

# Export the trained Keras model (.h5) as a TensorFlow SavedModel (.pb).
path2network = './model.h5'
K.set_learning_phase(0)  # inference mode, so BatchNorm uses its moving statistics
model = keras.models.load_model(path2network)
print(model.summary())

OUTPUT_DIR = "./tf_model"
builder = tf.saved_model.builder.SavedModelBuilder(OUTPUT_DIR)
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'input': model.input},
    outputs={'output': model.output})
signature_def_map = {
    tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
}
builder.add_meta_graph_and_variables(
    K.get_session(), [tf.saved_model.tag_constants.SERVING],
    signature_def_map=signature_def_map)
builder.save()
start preprocessing 
end preprocessing 0.02025772
Extract Patch 0.172213ms
Build Tensor 7.995988ms
Session feed 0.026279999999999998ms
Session fetch0 0.031226999999999998ms
Session run 154.684742ms
Convert output 4.818864ms
Create ImageJ object 14.75155ms
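
One quick way to rule out the export step itself (a minimal sketch, assuming TF 1.x and the paths from the snippet above) is to reload the SavedModel in Python and compare it against the original Keras model on the same random patch:

import numpy as np
import tensorflow as tf
import keras

keras_model = keras.models.load_model('./model.h5')
x = np.random.rand(1, 128, 128, 1).astype(np.float32)  # input in [0, 1], matching the network

y_keras = keras_model.predict(x)

# Reload the exported SavedModel in a fresh graph and run the same patch.
with tf.Session(graph=tf.Graph()) as sess:
    meta = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], './tf_model')
    sig = meta.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    y_tf = sess.run(sig.outputs['output'].name,
                    feed_dict={sig.inputs['input'].name: x})

print('max abs diff:', np.abs(y_keras - y_tf).max())  # should be ~0

If the two outputs match, the discrepancy must come from the ImageJ side (for example, preprocessing) rather than from the .pb itself.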


WeisongZhao commented Aug 9, 2019

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_7 (InputLayer)            (None, 128, 128, 1)  0                                            
__________________________________________________________________________________________________
conv2d_43 (Conv2D)              (None, 64, 64, 64)   1088        input_7[0][0]                    
__________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU)      (None, 64, 64, 64)   0           conv2d_43[0][0]                  
__________________________________________________________________________________________________
conv2d_44 (Conv2D)              (None, 32, 32, 128)  131200      leaky_re_lu_22[0][0]             
__________________________________________________________________________________________________
leaky_re_lu_23 (LeakyReLU)      (None, 32, 32, 128)  0           conv2d_44[0][0]                  
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, 32, 32, 128)  512         leaky_re_lu_23[0][0]             
__________________________________________________________________________________________________
conv2d_45 (Conv2D)              (None, 16, 16, 256)  524544      batch_normalization_37[0][0]     
__________________________________________________________________________________________________
leaky_re_lu_24 (LeakyReLU)      (None, 16, 16, 256)  0           conv2d_45[0][0]                  
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, 16, 16, 256)  1024        leaky_re_lu_24[0][0]             
__________________________________________________________________________________________________
conv2d_46 (Conv2D)              (None, 8, 8, 512)    2097664     batch_normalization_38[0][0]     
__________________________________________________________________________________________________
leaky_re_lu_25 (LeakyReLU)      (None, 8, 8, 512)    0           conv2d_46[0][0]                  
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, 8, 8, 512)    2048        leaky_re_lu_25[0][0]             
__________________________________________________________________________________________________
conv2d_47 (Conv2D)              (None, 4, 4, 512)    4194816     batch_normalization_39[0][0]     
__________________________________________________________________________________________________
leaky_re_lu_26 (LeakyReLU)      (None, 4, 4, 512)    0           conv2d_47[0][0]                  
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, 4, 4, 512)    2048        leaky_re_lu_26[0][0]             
__________________________________________________________________________________________________
conv2d_48 (Conv2D)              (None, 2, 2, 512)    4194816     batch_normalization_40[0][0]     
__________________________________________________________________________________________________
leaky_re_lu_27 (LeakyReLU)      (None, 2, 2, 512)    0           conv2d_48[0][0]                  
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, 2, 2, 512)    2048        leaky_re_lu_27[0][0]             
__________________________________________________________________________________________________
conv2d_49 (Conv2D)              (None, 1, 1, 512)    4194816     batch_normalization_41[0][0]     
__________________________________________________________________________________________________
leaky_re_lu_28 (LeakyReLU)      (None, 1, 1, 512)    0           conv2d_49[0][0]                  
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, 1, 1, 512)    2048        leaky_re_lu_28[0][0]             
__________________________________________________________________________________________________
up_sampling2d_22 (UpSampling2D) (None, 2, 2, 512)    0           batch_normalization_42[0][0]     
__________________________________________________________________________________________________
conv2d_50 (Conv2D)              (None, 2, 2, 512)    4194816     up_sampling2d_22[0][0]           
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, 2, 2, 512)    2048        conv2d_50[0][0]                  
__________________________________________________________________________________________________
concatenate_19 (Concatenate)    (None, 2, 2, 1024)   0           batch_normalization_43[0][0]     
                                                                 batch_normalization_41[0][0]     
__________________________________________________________________________________________________
up_sampling2d_23 (UpSampling2D) (None, 4, 4, 1024)   0           concatenate_19[0][0]             
__________________________________________________________________________________________________
conv2d_51 (Conv2D)              (None, 4, 4, 512)    8389120     up_sampling2d_23[0][0]           
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, 4, 4, 512)    2048        conv2d_51[0][0]                  
__________________________________________________________________________________________________
concatenate_20 (Concatenate)    (None, 4, 4, 1024)   0           batch_normalization_44[0][0]     
                                                                 batch_normalization_40[0][0]     
__________________________________________________________________________________________________
up_sampling2d_24 (UpSampling2D) (None, 8, 8, 1024)   0           concatenate_20[0][0]             
__________________________________________________________________________________________________
conv2d_52 (Conv2D)              (None, 8, 8, 512)    8389120     up_sampling2d_24[0][0]           
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, 8, 8, 512)    2048        conv2d_52[0][0]                  
__________________________________________________________________________________________________
concatenate_21 (Concatenate)    (None, 8, 8, 1024)   0           batch_normalization_45[0][0]     
                                                                 batch_normalization_39[0][0]     
__________________________________________________________________________________________________
up_sampling2d_25 (UpSampling2D) (None, 16, 16, 1024) 0           concatenate_21[0][0]             
__________________________________________________________________________________________________
conv2d_53 (Conv2D)              (None, 16, 16, 256)  4194560     up_sampling2d_25[0][0]           
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, 16, 16, 256)  1024        conv2d_53[0][0]                  
__________________________________________________________________________________________________
concatenate_22 (Concatenate)    (None, 16, 16, 512)  0           batch_normalization_46[0][0]     
                                                                 batch_normalization_38[0][0]     
__________________________________________________________________________________________________
up_sampling2d_26 (UpSampling2D) (None, 32, 32, 512)  0           concatenate_22[0][0]             
__________________________________________________________________________________________________
conv2d_54 (Conv2D)              (None, 32, 32, 128)  1048704     up_sampling2d_26[0][0]           
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, 32, 32, 128)  512         conv2d_54[0][0]                  
__________________________________________________________________________________________________
concatenate_23 (Concatenate)    (None, 32, 32, 256)  0           batch_normalization_47[0][0]     
                                                                 batch_normalization_37[0][0]     
__________________________________________________________________________________________________
up_sampling2d_27 (UpSampling2D) (None, 64, 64, 256)  0           concatenate_23[0][0]             
__________________________________________________________________________________________________
conv2d_55 (Conv2D)              (None, 64, 64, 64)   262208      up_sampling2d_27[0][0]           
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, 64, 64, 64)   256         conv2d_55[0][0]                  
__________________________________________________________________________________________________
concatenate_24 (Concatenate)    (None, 64, 64, 128)  0           batch_normalization_48[0][0]     
                                                                 leaky_re_lu_22[0][0]             
__________________________________________________________________________________________________
up_sampling2d_28 (UpSampling2D) (None, 128, 128, 128 0           concatenate_24[0][0]             
__________________________________________________________________________________________________
conv2d_56 (Conv2D)              (None, 128, 128, 1)  2049        up_sampling2d_28[0][0]           
==================================================================================================
Total params: 41,837,185
Trainable params: 41,828,353
Non-trainable params: 8,832

Here is my model summary; the input is a float in the range 0–1 and the output is also in 0–1.
The libtensorflow_jni-1.13.1.jar and libtensorflow-1.13.1.jar are used, and the CARE-deconvolution model works fine.
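
For reference, the kind of min-max scaling this implies (an illustrative helper, not code from the repo) would be:

import numpy as np

def to_unit_range(img):
    # Scale an image to [0, 1]; the same scaling has to be applied on the
    # ImageJ side, or the two predictions will diverge.
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)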

@WeisongZhao
Author

Hi, I have tried models exported with TensorFlow 1.12.0, and they seem to work well. So the problem is possibly caused by a difference between the TensorFlow 1.13.1 and 1.12.0 versions.

@deepimagej
Owner

Thank you @WeisongZhao! Do you mean that your input is a float between 0 and 1? We did not test with TensorFlow (TF) 1.12.0. Did you train your model with that version, or did you train it with TF 1.13.1?


WeisongZhao commented Aug 26, 2019

Yes, 1.13.1 (fail) and 1.12.0 (success) were used, and my input and output are floats in 0–1.
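
A way to narrow down such version-dependent failures (a sketch, assuming the exported ./tf_model directory from above) is to list the op types the exported graph actually contains and look for ops introduced between the two releases:

import tensorflow as tf

# Load the SavedModel into a fresh graph and print every op type it uses.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], './tf_model')
    print(sorted({op.type for op in sess.graph.get_operations()}))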


deepimagej commented Aug 29, 2019

We are working on the development of the plugin. Would you mind giving us some feedback about the plugin and trying the current version?
Once you have converted your model to TensorFlow, you can create the bundled models (similar to https://deepimagej.github.io/deepimagej/models.html) with the plugin for developers (https://github.com/deepimagej/deepimagej-plugin/blob/master/DeepImageJ_developer.zip).
Any comment is more than welcome!

@deepimagej deepimagej reopened this Aug 29, 2019
@deepimagej deepimagej self-assigned this Aug 29, 2019
@deepimagej deepimagej added bug Something isn't working good first issue Good for newcomers and removed good first issue Good for newcomers labels Aug 29, 2019

WeisongZhao commented Aug 30, 2019

I have tried exporting with TensorFlow 1.12.0 while still using the libtensorflow_jni-1.13.1 / libtensorflow-1.13.1 / proto-1.13.1 / protobuf-java-3.6.1 / DeepImageJ_-.jars, because if I use libtensorflow-1.12.0, the LeakyReLU op is not defined.
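
If anyone needs to stay on an older runtime, one workaround sketch (not code from this repo) is to express leaky ReLU with primitive ops so the exported graph avoids the fused LeakyRelu kernel:

import tensorflow as tf
from keras.layers import Lambda

def leaky_relu_compat(x, alpha=0.2):
    # Only Mul/Maximum ops end up in the graph, which older runtimes register.
    return tf.maximum(alpha * x, x)

# Hypothetical usage inside the model definition, replacing LeakyReLU(alpha=0.2):
# x = Lambda(leaky_relu_compat)(x)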

It works well for now, but there seems to be a 'normalization' problem: in my usage, small patches (128x128) work well, while larger patches (256x256) cause some saturation.
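
To illustrate what I suspect (a toy example, not the plugin's actual code): if normalization is computed per patch, the scaling changes with the patch size, so the same pixel can be mapped to different values:

import numpy as np

def norm(p):
    return (p - p.min()) / (p.max() - p.min() + 1e-8)

img = (np.random.rand(512, 512) * 4000).astype(np.float32)  # fake raw intensities
small = norm(img[:128, :128])
large = norm(img[:256, :256])
print(small[0, 0], large[0, 0])  # generally differs between the two patch sizes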

I will give the developer plugin a try.

Some small questions:
I have compiled the source code with Ant (it works). Since ImageJ is built with Maven, why is deepimagej not built with Maven? What is the difficulty in supporting stack prediction? BTW, looking forward to your publications.

@deepimagej
Owner

Good, thanks! If you have a trained network and an example image, we can put it on the web page and advertise it as well. Just write to us at [email protected]
