Reproducing the accuracy results #36
Could you tell me which results you are trying to reproduce, in reference to the paper? I don't remember doing anything special for two of the four networks.
It's Table 6 of the paper. I meant that since getAccuracy() was not working in the original code, I tried to implement it myself, and I only got two of the networks right. So I just want to know how Table 6 was produced. Thanks!
Those results were produced by running the network within MPC but reconstructing the output in the clear and then processing it. Some useful scripts should be in the scripts folder. It's great that you reproduced at least two of them using a secure-computation getAccuracy() implementation. Can you tell me what is not working in the LeNet/MiniONN results?
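For reference, the clear-text post-processing described above amounts to taking an argmax over the reconstructed outputs and comparing against the labels. A minimal sketch (the function name and data layout are assumptions for illustration, not this repository's API):

```python
def accuracy_from_reconstructed(logits, labels):
    """Accuracy of clear-text reconstructed logits vs. true labels.

    logits: list of per-sample score rows (one float per class).
    labels: list of integer class labels, same length as logits.
    """
    correct = 0
    for row, label in zip(logits, labels):
        # Predicted class = index of the largest reconstructed score.
        pred = max(range(len(row)), key=row.__getitem__)
        if pred == label:
            correct += 1
    return correct / len(labels)

# Tiny example: 3 samples, 4 classes, 2 of 3 predicted correctly.
logits = [[0.1, 0.9, 0.0, 0.0],
          [0.8, 0.1, 0.0, 0.1],
          [0.2, 0.2, 0.5, 0.1]]
labels = [1, 0, 3]
print(accuracy_from_reconstructed(logits, labels))  # 2/3
```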
The scripts produce the same results as the new getAccuracy(). Generally, the LeNet/MiniONN results look random, with only 10+ samples correctly classified.
I am interested in how you edited the code to reproduce the SecureML result, as I have spent quite a lot of time but still cannot reproduce it... Thank you very much!
Right, then it seems that it is producing random results. Can you narrow down where the error might be? Is it in the getAccuracy() function or the trained network? It would be great if you could help @AndesPooh258 with his question too.
Any updates? @HuangPZ, could you help point out the parts of the code that you edited to get reasonable results for SecureML and Sarda? Below is the accuracy figure for MNIST+SecureML that I got with the current master-branch code.
Hi, it's complicated to make a pull request from my code now since I made other changes, but here's something I did to get the accuracy:
You can try these and let me know if they also work for you. @AndesPooh258 @llCurious
@HuangPZ Got it!
Based on my current results, it seems that some overflow errors occur during the secure training, which affects the accuracy (I haven't had time to debug further yet). But anyway, thank you for your advice!
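The kind of overflow mentioned above is typical of fixed-point arithmetic over a bounded ring: once intermediate values exceed the representable range, they wrap around and decode as garbage. A generic sketch, not this repository's code (the 32-bit ring and 13-bit fractional precision are assumptions chosen for illustration):

```python
RING_BITS = 32           # assumed share bit-width
FRAC_BITS = 13           # assumed fixed-point fractional bits
MOD = 1 << RING_BITS

def encode(x):
    """Map a real number to a fixed-point ring element."""
    return int(round(x * (1 << FRAC_BITS))) % MOD

def decode(v):
    """Map a ring element back to a real, treating the top half as negative."""
    if v >= MOD // 2:
        v -= MOD
    return v / (1 << FRAC_BITS)

# With 32 - 13 - 1 = 18 bits left for the integer part, values beyond
# ~262144 cannot be represented. 600 * 600 = 360000 exceeds that, so even
# a correctly truncated product wraps around the ring.
a, b = 600.0, 600.0
prod = ((encode(a) * encode(b)) >> FRAC_BITS) % MOD
print(decode(prod))  # a negative value instead of 360000.0
```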
Hi,
So I saw the previous discussion in #9 and made some edits to the functions so that I could get reasonable results for SecureML and Sarda (~96% each). However, I still could not get good results for LeNet and MiniONN. So I just want to know how the accuracy results in the paper were produced. Was there something like a quantization script? Thanks!