
The generation of dance movements #32

Open
JingYC001 opened this issue Jan 18, 2024 · 1 comment

@JingYC001

Hello, I used the AMASS dance motion dataset and the Jukebox audio extractor to extract audio features and fed them into model training, but the generated results were not very good. Is the model not well suited to audio-driven dance generation, or could there be some other reason?
In addition, I noticed that when computing the loss, apart from rot_mse, the last few losses are not computed. Is there a reasonable explanation for this?

@YoungSeng
Owner

I think the results may be more sensitive to the motion representation and the number of training iterations, and EDGE, which you mention, should already have addressed this. It should not be an issue with the framework, since we are all building on top of MDM. As for the losses, I simply did not use the other MDM losses such as the foot-contact and velocity terms; the loss for each joint is still computed. See the code for details.
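
For illustration, here is a minimal sketch of how an MDM-style `training_losses` typically assembles these terms (the function name, weight names such as `lambda_vel`/`lambda_fc`, and the tensor layout are assumptions for the sketch, not this repo's exact code). With the optional weights left at zero, only `rot_mse` contributes to the total loss, while the MSE itself still covers every joint:

```python
import torch

def training_losses(model_output, target, lambda_vel=0.0, lambda_fc=0.0):
    """Sketch of MDM-style loss assembly (illustrative names and shapes).

    model_output / target: (batch, joints, feats, frames) motion tensors.
    lambda_vel / lambda_fc: weights for the optional velocity and
    foot-contact terms; leaving them at 0 reproduces "rot_mse only".
    """
    terms = {}

    # Per-sample reconstruction MSE over all joints, features and frames.
    # Every joint contributes here, even though only one scalar is logged.
    terms["rot_mse"] = ((target - model_output) ** 2).mean(dim=(1, 2, 3))

    if lambda_vel > 0:
        # Optional frame-to-frame velocity term (skipped when lambda_vel == 0).
        vel_gt = target[..., 1:] - target[..., :-1]
        vel_pred = model_output[..., 1:] - model_output[..., :-1]
        terms["vel_mse"] = ((vel_gt - vel_pred) ** 2).mean(dim=(1, 2, 3))

    # A foot-contact term would be added analogously when lambda_fc > 0.

    terms["loss"] = terms["rot_mse"] + lambda_vel * terms.get(
        "vel_mse", torch.zeros_like(terms["rot_mse"])
    )
    return terms
```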
