Lessons learned from the 9/22/23 demo:
The latest experiments were testable via PyTorch Lightning's validation pipeline during training, so we have been able to collect performance metrics from training runs.
There was friction, however, when pulling out a "predict" function to plug into the live system.
This wasn't done via Lightning for the 9/22/23 demo, because Lightning's predict call was not working properly. A predict call was instead hand-crafted, and we hit bumps such as not normalizing pixel-wise distances properly for the TCN's feature vector, and using [tlbr] instead of [xywh] to capture bounding box position and size.
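To make the two bounding-box bumps concrete, here is a minimal sketch of the conversion and normalization that were missed. The function names are hypothetical, and it assumes [tlbr] means [top, left, bottom, right] in pixels and [xywh] means [x_center, y_center, width, height]:

```python
import numpy as np

def tlbr_to_xywh(tlbr):
    """Convert a [top, left, bottom, right] box (pixels) to
    [x_center, y_center, width, height] (pixels)."""
    t, l, b, r = tlbr
    return np.array([(l + r) / 2.0, (t + b) / 2.0, r - l, b - t])

def normalize_xywh(xywh, img_w, img_h):
    """Normalize pixel-space [x, y, w, h] to [0, 1] by the image
    dimensions, so the feature vector is resolution-independent."""
    x, y, w, h = xywh
    return np.array([x / img_w, y / img_h, w / img_w, h / img_h])
```

Applying both steps at feature-extraction time keeps the live-system inputs consistent with what the TCN saw during training.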
But calling any of our trained TCN models via Lightning should be possible with a single call.
https://stackoverflow.com/questions/65807601/output-prediction-of-pytorch-lightning-model
Let's get that "predict.py" call ready before the next time we need it, ideally in a way that generalizes to any of our feature vector versions.