question about the loss #4

When training on the KITTI dataset, two kinds of printed losses always increased (mean loss and mean box loss), while the other types decreased. Have you ever encountered such a problem?
Thanks~

Comments
I've never seen a problem like this. Have you changed the code?
Thanks for the reply. Actually, I didn't change the code and used the provided data (./kitti/gt_database/train_gt_database_3level_Car.pkl) for training...
You can check log_train.txt in the log_kitti folder; that's the training log for 200 epochs. I also trained for another 65 epochs to check the code. The problem you mentioned did not come up in either of these two experiments. You can try cloning the code again.
Sorry to bother you again. I re-downloaded the code without any changes and tried several times, but I still get the same result... If possible, could I get the latest version of the code you used yesterday via email at [email protected]? Thanks so much.
I have sent the code to you.
Has this been solved? I'm having the same problem.
Why would this problem come up? It has never appeared in my training here.
Change num_workers in the dataloader in train.py to 32! Anything other than 0 will do. I don't really know the reason, though; I've only just started with PyTorch and was using TF before...
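For readers following along, this is roughly the kind of change being suggested, assuming train.py builds a standard torch.utils.data.DataLoader; the dataset, shapes, and batch size below are placeholders rather than the project's actual code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


def main():
    # Stand-in dataset; in the real train.py this would be the project's KITTI dataset object.
    train_dataset = TensorDataset(torch.randn(256, 1024, 3), torch.randint(0, 2, (256,)))

    # Setting reported to cause the problem: num_workers=0 (all loading in the main process).
    # train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True, num_workers=0)

    # Suggested change from the comment above: any num_workers > 0, e.g. 32.
    train_loader = DataLoader(
        train_dataset,
        batch_size=16,      # placeholder; keep whatever train.py already uses
        shuffle=True,
        num_workers=32,
        pin_memory=True,
    )

    for points, labels in train_loader:
        pass  # the usual training step would run here


if __name__ == "__main__":  # guard is required when num_workers > 0 on spawn-based platforms
    main()
```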
What could be the reason for this? How would num_workers affect training? Could you email me the log of the non-converging loss so I can see what the problem is?
What's your email address? I'll send you my log. I think it's probably an issue with how the loss function was adjusted for KITTI, but I haven't figured it out yet either. I've cut out two snippets here for everyone to look at: box_loss and center_loss keep accumulating.
Snippet 2
Later, I modified eval.py. Regardless of whether I use…
When I… you can refer to this issue,
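As an illustration of the symptom described above (this is not the repository's actual logging code), a running total that is printed without dividing by the number of batches will look like an ever-growing "mean" loss:

```python
# Illustration only: a running sum printed as a "loss" keeps growing,
# while the properly normalized per-epoch mean can stay flat or decrease.
batch_losses = [1.0, 0.9, 0.85, 0.8, 0.78]  # made-up per-batch box losses

box_loss_sum = 0.0
for i, batch_loss in enumerate(batch_losses, start=1):
    box_loss_sum += batch_loss
    print(f"batch {i}: accumulated box_loss = {box_loss_sum:.2f}, "
          f"mean box_loss = {box_loss_sum / i:.2f}")
```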
I deleted it earlier; I'll retrain and then send it to you. Apart from that, I wrote it wrong before: it can't be 0. I've corrected it.
I looked it up; num_workers is a CPU/GPU memory-access setting, so in theory it should only affect training time. Why would it affect training accuracy?
Not sure, but you can try it yourself: change num_workers from 0 to 32, keep everything else the same, and watch how the loss changes. The loss trend you just posted is the same as mine was before.