Some questions about the test result #2
Maybe there should be "model.model.eval()" before it? (Line 107 in ff58514)
Hi IItaly:
You're totally right. "model.eval()" is necessary because there are BatchNorm and Dropout layers in the network, so the result at epoch 24 should be correct. My fault. I'll fix it in the latest commit. Thanks for your comments!
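For anyone hitting the same problem, below is a minimal sketch of the kind of fix being discussed, assuming a standard PyTorch evaluation routine. The names `model`, `test_loader`, and `device` are illustrative, not the repository's actual code; in this repo the network is wrapped in a trainer object, which is why the line referenced above calls `model.model.eval()`.

```python
import torch

def evaluate(model, test_loader, device):
    # Switch BatchNorm to its running statistics and disable Dropout;
    # without this, test metrics depend on the composition of each batch
    # and can fluctuate badly between epochs.
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():  # no gradients needed during evaluation
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    model.train()  # restore training mode before the next epoch
    return correct / total
```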
Thanks for your work. @yyk-wew
Because the ff++ dataset is too big to download, I downloaded celebdf and organized the dataset as described in the readme to train the model.
I set the max epoch to 25, and then I got this strange result:
[2021-04-18 01:00:02,327][DEBUG] (Test @ epoch 24) auc: 0.9596398896802123, r_acc: 0.8729227761485826, f_acc:0.9508928571428571
[2021-04-18 01:00:14,197][DEBUG] loss: 0.001041474868543446 at step: 40960
[2021-04-18 01:00:26,063][DEBUG] loss: 0.00437780749052763 at step: 41000
[2021-04-18 01:00:37,918][DEBUG] loss: 0.00041091835009865463 at step: 41040
[2021-04-18 01:00:49,778][DEBUG] loss: 0.0002549117198213935 at step: 41080
[2021-04-18 01:01:14,447][DEBUG] (Val @ epoch 24) auc: 0.9573432691169508, r_acc: 0.8555871212121212, f_acc:0.9265625
[2021-04-18 01:01:29,453][DEBUG] (Test @ epoch 24) auc: 0.9628076560536238, r_acc: 0.8797653958944281, f_acc:0.9419642857142857
[2021-04-18 01:01:39,855][DEBUG] loss: 0.00019928392430301756 at step: 41120
[2021-04-18 01:01:51,724][DEBUG] loss: 0.00016936950851231813 at step: 41160
[2021-04-18 01:02:03,599][DEBUG] loss: 0.00015224494563881308 at step: 41200
[2021-04-18 01:02:15,466][DEBUG] loss: 0.0005151446093805134 at step: 41240
[2021-04-18 01:02:33,977][DEBUG] (Test @ epoch 25) auc: 0.4082194351347577, r_acc: 0.458455522971652, f_acc:0.49375
Testing at epoch 25, the result is quite different from what we expected.
And I selected 'Both' to train the model.
I wonder what the reason is. Or is epoch 24's result the final one? Thank you.
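For context, metrics like the auc / r_acc / f_acc values in the log above could be computed roughly as follows. This is only an illustrative sketch using scikit-learn, not the repository's actual evaluation code, and the convention that label 1 means "fake" is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def report_metrics(labels, scores, threshold=0.5):
    # labels: ground truth (assumed 1 = fake, 0 = real)
    # scores: predicted probability that each sample is fake
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    preds = (scores >= threshold).astype(int)
    auc = roc_auc_score(labels, scores)       # area under the ROC curve
    r_acc = (preds[labels == 0] == 0).mean()  # accuracy on real samples
    f_acc = (preds[labels == 1] == 1).mean()  # accuracy on fake samples
    return auc, r_acc, f_acc
```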