Hello, I used the AMASS dance motion dataset and the Jukebox audio extractor to extract audio features and feed them into model training, but the generated results were not very good. Is the model unsuitable for audio-driven dance generation tasks, or could there be some other reason?
In addition, I noticed that when computing the loss, only rot_mse is calculated and the last few losses are never computed. Is there a reasonable explanation for this?
I think the results are more sensitive to the motion representation and the number of training iterations. EDGE, which you're referring to, should have already handled this, so it shouldn't be an issue with the framework itself; we are all building on top of MDM. As for the losses: I simply didn't use the other MDM losses such as the foot-contact and velocity terms. The losses for each joint are still computed; see the code for details.
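For context, here is a minimal sketch of how MDM-style training code typically gates the auxiliary terms behind zero-valued weights, so that only the rotation MSE is active by default. The names (`lambda_vel`, `lambda_fc`) and tensor shapes are assumptions for illustration, not the exact code of this repository:

```python
import torch

def training_losses(model_output, target, lambda_vel=0.0, lambda_fc=0.0):
    """Hypothetical MDM-style loss dict. With the auxiliary weights at
    their default of zero, only rot_mse (and the total) are computed.
    Tensors are assumed to be [batch, joints, feats, frames]."""
    terms = {}

    # Per-joint rotation MSE, averaged over all dimensions: every joint
    # still contributes to this single scalar term.
    terms["rot_mse"] = ((model_output - target) ** 2).mean()

    # Velocity loss on consecutive-frame differences. Skipped when
    # lambda_vel == 0, which is why it never appears in the logs.
    if lambda_vel > 0.0:
        vel_pred = model_output[..., 1:] - model_output[..., :-1]
        vel_gt = target[..., 1:] - target[..., :-1]
        terms["vel_mse"] = ((vel_pred - vel_gt) ** 2).mean()

    # A foot-contact loss would be gated the same way:
    # if lambda_fc > 0.0: terms["fc"] = ...

    terms["loss"] = (terms["rot_mse"]
                     + lambda_vel * terms.get("vel_mse", 0.0)
                     + lambda_fc * terms.get("fc", 0.0))
    return terms
```

With both weights at zero, the returned dict contains only rot_mse and the total loss, which matches what you observed during training.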