
How can I apply a smoothing to the video results? #33

Open
linxiang4200 opened this issue Aug 7, 2023 · 34 comments

@linxiang4200

Thank you for your great work! While running the video demo, I noticed some slight jitter in the movements of the model's limbs. Is this due to the lack of smoothing?

@carlosedubarreto

Hello @linxiang4200
I would suggest two things.
If you imported it into a 3D software, remove the Z data from the pelvis.

And if you are in Blender, the Graph Editor has a smooth option that could help you with this problem.

I noticed that, depending on the footage, you get much less jitter.

For example this video.
https://github.com/shubham-goel/4D-Humans/assets/4061130/27c813a6-7dfd-40be-945a-ed9176fcfa64

I had much less jitter; take a look at the video below.

blender_hXWWl67Fit.mp4

I'm not sure, but I feel that if there is good contrast between the character and the background in the footage, the result will be better.

@linxiang4200

> Hello @linxiang4200 I would suggest 2 things. If you imported it in a 3d software, remove the Z data from the pelvis.
>
> and if you are in blender, you have a smooth option on the graph editor that could help you with this problem.
>
> I noticed that depending on the footage, you get much less jitters.
>
> For example this video. https://github.com/shubham-goel/4D-Humans/assets/4061130/27c813a6-7dfd-40be-945a-ed9176fcfa64
>
> I had much less jitter, take a look on the video bellow.
>
> blender_hXWWl67Fit.mp4
>
> I'm not sure but I feel that if you have a good contrast between the character on the footage and the background, the result will be better

Thanks for your excellent advice, but I'm not good at Blender. In the end I found another way: using SmoothNet, like this repo.
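
SmoothNet itself is a learned temporal model, but the underlying operation it replaces, filtering each pose parameter along the time axis, can be sketched with a plain moving average. This is only an illustration of the idea; the `(num_frames, num_params)` layout is an assumption, not the actual pkl format produced by track.py:

```python
import numpy as np

def moving_average(poses: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth a (num_frames, num_params) array along the time axis.

    The sequence is padded with its first/last frame so the output
    keeps the same number of frames as the input.
    """
    pad = window // 2
    padded = np.concatenate([
        np.repeat(poses[:1], pad, axis=0),
        poses,
        np.repeat(poses[-1:], pad, axis=0),
    ])
    kernel = np.ones(window) / window
    # Filter each parameter channel independently.
    return np.stack(
        [np.convolve(padded[:, c], kernel, mode="valid") for c in range(poses.shape[1])],
        axis=1,
    )

# A jittery sine track: smoothing reduces frame-to-frame variation.
t = np.linspace(0, 2 * np.pi, 120)
noisy = np.stack([np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(120)], axis=1)
smooth = moving_average(noisy, window=7)
print(smooth.shape)  # (120, 1)
```

A learned smoother such as SmoothNet adapts to motion data instead of using a fixed kernel, which is why it preserves fast legitimate motion better than this naive filter.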

@carlosedubarreto

Hello @linxiang4200, you mean that you were able to use the code that starts here?
https://github.com/haofanwang/CLIFF/blob/f38cad87cb9df34bf377129f9e5ebaaae54af51a/demo.py#L355C19-L355C20

It looks pretty interesting, I'll try it out. Thanks for pointing it out.

@carlosedubarreto

carlosedubarreto commented Aug 12, 2023

@linxiang4200 I must say that your idea was great; with it, most of the results I've got were much better.

Let me show a couple of tests I made.

The 0.05 is the default output from 4D Humans, and the 0.06 is with SmoothNet.

20230812.0.06.wip.compare.TWITTER.mp4
blender_wCgdEwH52K.mp4
blender_aEm7wjouvx.mp4

@k-a-s-o-u

k-a-s-o-u commented Aug 14, 2023

> Hello @linxiang4200 , you mean that you were able use the code that start here? https://github.com/haofanwang/CLIFF/blob/f38cad87cb9df34bf377129f9e5ebaaae54af51a/demo.py#L355C19-L355C20
>
> Its looks preety interesting, I'll try it out. thanks for pointing out

Hi, how did you fix the `from mmhuman3d.utils.demo_utils import smooth_process` part? I combined it with track.py but this error came out:

    from mmhuman3d.utils.demo_utils import smooth_process
    ModuleNotFoundError: No module named 'mmhuman3d'

@carlosedubarreto

> Hello @linxiang4200 , you mean that you were able use the code that start here? https://github.com/haofanwang/CLIFF/blob/f38cad87cb9df34bf377129f9e5ebaaae54af51a/demo.py#L355C19-L355C20
> Its looks preety interesting, I'll try it out. thanks for pointing out
>
> Hi, how did you fix `from mmhuman3d.utils.demo_utils import smooth_process` part? I combined it with track.py but this error came out.
>
> from mmhuman3d.utils.demo_utils import smooth_process ModuleNotFoundError: No module named 'mmhuman3d'

I used track.py, and after the tracking I executed that smooth code from CLIFF.

But to make it work I had to install mmcv 1.6 (I think) and download the source code of mmhuman3d and of pytorch3d. You don't need to build pytorch3d, but you need the pytorch3d folder inside the directory where you are going to run the code, and the mmhuman3d source code there too.

@k-a-s-o-u

> Hello @linxiang4200 , you mean that you were able use the code that start here? https://github.com/haofanwang/CLIFF/blob/f38cad87cb9df34bf377129f9e5ebaaae54af51a/demo.py#L355C19-L355C20
> Its looks preety interesting, I'll try it out. thanks for pointing out
>
> Hi, how did you fix `from mmhuman3d.utils.demo_utils import smooth_process` part? I combined it with track.py but this error came out.
> from mmhuman3d.utils.demo_utils import smooth_process ModuleNotFoundError: No module named 'mmhuman3d'

> I used the track and after the track a executed that smooth code from cliff.
>
> But to kake it work i had to install mmcv 1.6 (i think) and download the source code from mmhuman3d, and the source of pytorch3d. You dont need to build pytorch but have the oytorch folder inside where you are goi g to run the code, but mmhuan3d source code there too

Thank you. I tried it, but I didn't understand how to place the pytorch3d and mmhuman3d source in the conda env without installing them.
I will wait until someone integrates this function with 4D Humans!

@carlosedubarreto

You can place it in the same folder where you will run the script to smooth things out.

For example, on my installation you'll find it this way. When calling `from mmhuman3d.utils.demo_utils import smooth_process`, it works because there is a folder called mmhuman3d there; with that folder present, it doesn't need to be installed.

You can install it too, but I was having a hard time trying to install it.

image

And here is the code I made to do the smoothing:
smooth.zip
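
For anyone hitting the `ModuleNotFoundError` above: the "source folder next to the script" trick works because Python resolves imports against the running script's directory. The same thing can be done explicitly with `sys.path`; the package name below is a throwaway stand-in, not the real mmhuman3d:

```python
import os
import sys
import tempfile

def add_local_source(folder: str) -> None:
    """Prepend a source checkout's parent directory to sys.path so that
    `import <package>` resolves without pip-installing the package."""
    parent = os.path.dirname(os.path.abspath(folder))
    if parent not in sys.path:
        sys.path.insert(0, parent)

# Self-contained demo with a throwaway package standing in for mmhuman3d:
root = tempfile.mkdtemp()
pkg = os.path.join(root, "fake_mmhuman3d")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("ANSWER = 42\n")

add_local_source(pkg)
import fake_mmhuman3d  # found via the path we just added

print(fake_mmhuman3d.ANSWER)  # 42
```

With a real checkout you would point `add_local_source` at the mmhuman3d (and pytorch3d) source folders, after which `from mmhuman3d.utils.demo_utils import smooth_process` should resolve.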

@carlosedubarreto

In the code I'm loading with pickle instead of joblib, because I made a Blender addon for it and didn't want to install joblib in Blender, since pickle comes with Python by default.

@k-a-s-o-u

k-a-s-o-u commented Aug 15, 2023

Thank you! However, I got an error in smooth.py:

    File "D:\4D-Humans\smooth.py", line 13, in
      b = pickle.load(handle)
    _pickle.UnpicklingError: invalid load key, 'x'.

I googled on Stack Overflow and GitHub but couldn't find a solution.
I tried it on both Python 3.9 and 3.10 but got the same error.
I installed mmcv 1.6.0 in the activated 4D-Humans conda env
and placed the Pytorch3D and mmhuman3d source code folders in 4D-Humans as in the picture.

May I ask if you have any clue about this?

2023-08-15 18 45 03

(4D-Humans) D:\4D-Humans>pip list
Package Version Editable project location


absl-py 1.4.0
accelerate 0.21.0
addict 2.4.0
aiofiles 23.2.1
aiohttp 3.8.5
aiosignal 1.3.1
altair 5.0.1
annotated-types 0.5.0
antlr4-python3-runtime 4.9.3
anyio 3.7.1
asttokens 2.2.1
async-timeout 4.0.2
attrs 23.1.0
av 10.0.0
backcall 0.2.0
black 23.7.0
boto3 1.26.124
botocore 1.29.124
braceexpand 0.1.7
Brotli 1.0.9
cachetools 5.3.1
certifi 2023.7.22
charset-normalizer 3.2.0
chumpy 0.70
click 8.1.6
cloudpickle 2.2.1
colorama 0.4.6
colorlog 6.7.0
contourpy 1.1.0
cycler 0.11.0
Cython 3.0.0
decorator 5.1.1
detectron2 0.6
diffusers 0.19.3
dill 0.3.7
einops 0.6.1
encodec 0.1.1
exceptiongroup 1.1.2
executing 1.2.0
fastapi 0.101.0
ffmpy 0.3.1
filelock 3.9.0
fonttools 4.41.1
freetype-py 2.4.0
frozenlist 1.4.0
fsspec 2023.4.0
funcy 2.0
fvcore 0.1.5.post20221221
google-auth 2.22.0
google-auth-oauthlib 1.0.0
gradio 3.40.1
gradio_client 0.4.0
grpcio 1.56.2
h11 0.14.0
hmr2 0.0.0 d:\4d-humans
httpcore 0.17.3
httpx 0.24.1
huggingface-hub 0.14.1
hydra-colorlog 1.2.0
hydra-core 1.3.2
hydra-submitit-launcher 1.2.0
idna 3.4
imageio 2.31.1
importlib-metadata 6.6.0
importlib-resources 6.0.1
iopath 0.1.9
ipython 8.14.0
jedi 0.18.2
Jinja2 3.1.2
jmespath 1.0.1
joblib 1.3.2
jsonschema 4.19.0
jsonschema-specifications 2023.7.1
kiwisolver 1.4.4
lazy_loader 0.3
lightning-utilities 0.9.0
linkify-it-py 2.0.2
Markdown 3.4.4
markdown-it-py 2.2.0
MarkupSafe 2.1.3
matplotlib 3.7.2
matplotlib-inline 0.1.6
mdit-py-plugins 0.3.3
mdurl 0.1.2
mmcv 1.6.0
mmhuman3d 0.11.0
mpmath 1.2.1
multidict 6.0.4
mypy-extensions 1.0.0
networkx 3.0
numpy 1.23.0
oauthlib 3.2.2
omegaconf 2.3.0
opencv-python 4.8.0.74
orjson 3.9.4
packaging 23.1
pandas 2.0.3
parso 0.8.3
pathspec 0.11.2
phalp 0.1.3
pickleshare 0.7.5
Pillow 9.3.0
pip 23.2.1
platformdirs 3.10.0
plyfile 1.0.1
portalocker 2.7.0
prompt-toolkit 3.0.38
protobuf 4.23.4
pure-eval 0.2.2
pyasn1 0.5.0
pyasn1-modules 0.3.0
pycocotools 2.0.6
pycparser 2.21
pydantic 2.1.1
pydantic_core 2.4.0
pydub 0.25.1
pyglet 2.0.9
Pygments 2.15.1
PyOpenGL 3.1.0
pyparsing 3.0.9
pyre-extensions 0.0.29
pyrender 0.1.45
pyrootutils 1.0.4
PySocks 1.7.1
PySoundFile 0.9.0.post1
python-dateutil 2.8.2
python-dotenv 1.0.0
python-multipart 0.0.6
pytorch-lightning 2.0.6
pytorch3d 0.7.4
pytube 15.0.0
pytz 2023.3
PyWavelets 1.4.1
pywin32 306
PyYAML 6.0
referencing 0.30.2
regex 2023.3.23
requests 2.31.0
requests-oauthlib 1.3.1
rich 13.5.0
rpds-py 0.9.2
rsa 4.9
Rtree 1.0.1
s3transfer 0.6.0
safetensors 0.3.2
scenedetect 0.6.2
scikit-image 0.21.0
scikit-learn 1.3.0
scipy 1.10.1
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 68.0.0
six 1.16.0
smplx 0.1.28
sniffio 1.3.0
stack-data 0.6.2
starlette 0.27.0
submitit 1.4.5
suno-bark 0.0.1a0
sympy 1.11.1
tabulate 0.9.0
tensorboard 2.13.0
tensorboard-data-server 0.7.1
termcolor 2.3.0
threadpoolctl 3.2.0
tifffile 2023.7.18
timm 0.9.2
tokenizers 0.13.3
tomli 2.0.1
toolz 0.12.0
torch 2.0.1
torchaudio 2.0.2+cu118
torchmetrics 1.0.1
torchvision 0.15.2
tqdm 4.65.0
traitlets 5.9.0
transformers 4.31.0
trimesh 3.22.5
typing_extensions 4.7.1
typing-inspect 0.8.0
tzdata 2023.3
uc-micro-py 1.0.2
urllib3 1.26.16
uvicorn 0.23.2
vedo 2023.4.6
wcwidth 0.2.6
webdataset 0.2.48
websockets 11.0.3
Werkzeug 2.3.6
wheel 0.41.0
win-inet-pton 1.1.0
xformers 0.0.20
yacs 0.1.8
yapf 0.40.1
yarl 1.9.2
zipp 3.15.0

@carlosedubarreto

@k-a-s-o-u Probably your problem is that you are using my loading process, which uses pickle instead of joblib.

You can probably make it work by switching the data import to joblib.

For example, I wrote this code to convert to pickle:

    import joblib
    import pickle
    import os

    # Paths are relative to this addon file; adjust to your own layout.
    path_addon = os.path.dirname(os.path.abspath(__file__))
    base_file = os.path.join(path_addon, '4D-Humans-main', 'outputs', 'results')
    file = os.path.join(base_file, 'demo_video.pkl')
    file_converted = os.path.join(base_file, 'demo_video_converted.pkl')

    # Load the joblib-written results and re-save them as a plain pickle.
    results = joblib.load(file)
    with open(file_converted, 'wb') as handle:
        pickle.dump(results, handle, protocol=pickle.HIGHEST_PROTOCOL)

    # with open('filename.pickle', 'rb') as handle:
    #     b = pickle.load(handle)

Probably you'll just have to use `results = joblib.load(file)` instead.
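
The `invalid load key, 'x'` error reported above is what `pickle` raises when the file is not a plain pickle; joblib can write a compressed container whose first byte is `x` (a zlib header). A defensive loader can try pickle first and fall back to joblib (a sketch; joblib is assumed available only on the fallback path):

```python
import os
import pickle
import tempfile

def load_results(path: str):
    """Load tracking results that may be a plain pickle or a joblib dump.

    joblib is imported lazily so the plain-pickle path still works in
    environments without joblib (e.g. Blender's bundled Python).
    """
    try:
        with open(path, "rb") as handle:
            return pickle.load(handle)
    except pickle.UnpicklingError:
        import joblib  # only needed for joblib-written (compressed) files
        return joblib.load(path)

# Plain pickles load directly:
tmp = os.path.join(tempfile.mkdtemp(), "demo.pkl")
with open(tmp, "wb") as f:
    pickle.dump({"frames": 3}, f)
print(load_results(tmp))  # {'frames': 3}
```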

@k-a-s-o-u

@carlosedubarreto
I just created the pkl file with 4D-Humans' track.py, and used your import in blender.py when importing the pkl into Blender.
I don't even have a demo_video_converted.pkl file, and I can't combine your suggested code with smooth.py.
I struggled for a few hours, but I gave up; I'll wait until someone opens a pull request for this and integrates it in the next update!
Thank you so much for your help anyway!

    import joblib
    import pickle
    import os

    path_addon = os.path.dirname(os.path.abspath(__file__))
    base_file = os.path.join(path_addon,'4D-Humans-main','outputs','results')
    file = os.path.join(base_file,'demo_video.pkl')
    file_converted = os.path.join(base_file,'demo_video_converted.pkl')
    results = joblib.load(file)

    with open(file_converted, 'wb') as handle:
        pickle.dump(results, handle, protocol=pickle.HIGHEST_PROTOCOL)

    # with open('filename.pickle', 'rb') as handle:
    #    b = pickle.load(handle)

@carlosedubarreto

@k-a-s-o-u
Sorry about that. The code I sent is part of the addon I made, so it won't work just by executing it; you would need to analyze the output and adapt it in the code. The good part is that the most complicated part of the process is already in the code.

I hope someone does a pull request adding SmoothNet to this repo, but maybe that won't happen.

If you allow me to suggest: it would be great if you keep trying. If you liked the result you've seen and want it to be even better, you will only gain from that.

And you have one big advantage, me ☺️. I say that because I suffered a lot to write the code I did, and I'm willing to share the knowledge I've gained from what I learned.
I can point out things that you could do to make it work for you.

But one thing I won't do is make it "plug and play", as I do in the addons I make.

I do that because the addons I make are for people who don't want to understand the code and just use it, and because of that I charge them (not all the time, but most of the time lately ☺️; after all, it takes me weeks of coding so people don't have to think about the complexity of things). When people want to learn, I think that is a great thing, and I'm always trying to help others if they want to learn.

So if you want to learn, if you want to go further, you can count on me, seriously.
Maybe it will take a bit of time for me to answer, because I help a lot of people and have a job, freelance work, and a business I'm building, but I will answer ☺️

I hope you understand and don't take what I said the negative way; it is meant as a positive answer on my part 😀

@k-a-s-o-u

k-a-s-o-u commented Aug 17, 2023

@carlosedubarreto
I appreciate your warm-hearted, productive suggestion!
Eventually I got it working and obtained a smoothed pkl!

This is the result movie
https://github.com/shubham-goel/4D-Humans/assets/29495485/635bc6cd-e41e-4c77-b18c-515810972bfd

By the way, I have one last question.
I got this error, and to fix it I simply created a new folder in the directory and copied smoothnet_windowsize8.py there, as the error indicated, because I couldn't find which file to change the smoothnet_windowsize8.py loading directory in, even when searching in VSCode.
I just want to know how you fixed that: by editing some code?

    File "D:\miniconda3\envs\4D-Humans\lib\site-packages\mmcv\utils\path.py", line 23, in check_file_exist
      raise FileNotFoundError(msg_tmpl.format(filename))
    FileNotFoundError: file "D:\4D-Humans\configs_cliff_base_\post_processing\smoothnet_windowsize8.py" does not exist

Anyway, thank you so much for giving me the opportunity to seriously face the code and learn 😄, and for your big help!
I solved the errors one by one, patiently, over many hours, and was impressed when it ran!!

@carlosedubarreto

hello @k-a-s-o-u

I'm very happy that you were able to make it work.
I thought you were on the right path; I just wanted to give a little push, so that you would get two great things: the result, and the happiness of making it by yourself. I must congratulate you, it's not an easy job 👏

Sorry for that error, I'm sure it was my fault.

Let me explain. As I was integrating the CLIFF code with the 4D Humans code, I changed the config folder name so I could tell where the code was coming from. 😅

You could rename it to the original name, probably "configs" instead of "configs_cliff_base".

That smoothing makes things better most of the time, but in some cases you will see that the translation becomes messy. You can do this: enable it only for the pose data, and you'll be good to go.

On my addon I made an option for people to apply the smoothing to the pose, the translation, or both. By default it applies to both, but if the movement becomes strange, the user can apply it only to the pose and it will work.
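
The "smooth the pose but leave the translation alone" option can be sketched as selective filtering over a dict of per-frame arrays. The key names and shapes here are assumptions for illustration, not the addon's actual pkl layout:

```python
import numpy as np

def smooth_channels(data: dict, keys: tuple, window: int = 5) -> dict:
    """Box-filter only the selected keys along the frame axis.

    Everything not listed in `keys` (e.g. the translation) is passed
    through untouched, which avoids the messy-translation problem.
    """
    out = dict(data)
    pad = window // 2
    kernel = np.ones(window) / window
    for key in keys:
        arr = data[key]
        padded = np.concatenate(
            [np.repeat(arr[:1], pad, axis=0), arr, np.repeat(arr[-1:], pad, axis=0)]
        )
        out[key] = np.stack(
            [np.convolve(padded[:, c], kernel, mode="valid") for c in range(arr.shape[1])],
            axis=1,
        )
    return out

track = {
    "pose": np.random.default_rng(1).standard_normal((60, 72)),  # SMPL-like pose params
    "translation": np.zeros((60, 3)),
}
smoothed = smooth_channels(track, keys=("pose",))  # translation left untouched
```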

Oh, another thing: I'm planning to try another code to smooth the result, DarkPose, which was suggested to me by Pose2Sim creator David Pagnon. If it works successfully, I'll post some info here.

@k-a-s-o-u

k-a-s-o-u commented Aug 22, 2023

Hello @carlosedubarreto

> And with that smooth it makes things better most of the time, but on some you will see that the tranlation will becamo messy, you can do this, enable it only for the pose data, and you'll be good to go.
>
> On my addon I made an option for people to apply the smooth on the pose, the translation or on both. By default, it will apply on both, but if the movement become strange, the user can apply only on the pose and it will work.

I saw your add-on version of this. It's awesome work! I want to make something like it someday.

> Oh another thing, I'm planning to try another code to smooth the result, the darkpose that was suggested to me from pose2sim creator David Pagnon. If it works successfully, I'll put some info here.

It sounds exciting! I look forward to it!

By the way, I noticed that the armature of the 4D-Humans motion-captured doll has a strange orientation: the bones stand perpendicular to the body (left in the picture), whereas an armature should normally follow the anatomy, as in the model on the right.
I researched manually aligning armature bones without breaking the animation in Blender, but found no solution for this weird case.
Do you know if this issue can be fixed?
2023-08-23 01 05 05

@carlosedubarreto

carlosedubarreto commented Aug 22, 2023

This is a "normal" behavior of Blender when importing FBX.
There is an option in Blender where you can choose to align the bones automatically, but I think I did a test in the past and it messed with the result.

My honest opinion: don't bother with that.
You can always retarget to another armature, or maybe use another armature as input to get the movements. If you have another character with the same bone structure and the same bone names, it might work when importing the motion in Blender. I didn't test it, but I don't see why it would not work. Maybe that is the best solution for what you want.

@smandava98

Hi @carlosedubarreto, I'm trying to use your smoothing file, but doing it at the end affects how it is visualized on the image: it messes up the keypoint projections. Is there a better way to get the smoothed output visualized on the image/video?

@carlosedubarreto

@smandava98, can you show some examples?
I couldn't picture what you are saying.

@smandava98

smandava98 commented Oct 27, 2023

Basically, my issue was that I could not visualize the smoothed 4D-Humans results on my video.

I figured out the issue: I was using PHALP, not 4D-Humans directly (PHALP implements 4D-Humans). They use the 'camera' parameter to mean the 2D camera; I had to use 'camera_bbox', which was the 3D camera parameter they stored when creating the pkl file.
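
The distinction matters because a weak-perspective camera only projects correctly in the coordinate frame it was estimated in (the bbox crop vs. the full image); reusing bbox-space parameters against full-image coordinates misplaces every keypoint. A generic weak-perspective projection looks like this; the `[s, tx, ty]` layout is an assumption for illustration, not necessarily how PHALP packs its parameters:

```python
import numpy as np

def weak_perspective_project(joints3d: np.ndarray, cam: np.ndarray) -> np.ndarray:
    """Project (N, 3) joints to 2D with a weak-perspective camera.

    `cam` is assumed to be [s, tx, ty]: a uniform scale plus x/y
    translation, valid only in the frame the camera was estimated in.
    """
    s, tx, ty = cam
    return s * (joints3d[:, :2] + np.array([tx, ty]))

joints = np.array([[0.0, 0.0, 1.0], [0.5, -0.5, 1.0]])
cam_bbox = np.array([2.0, 0.1, 0.2])
projected = weak_perspective_project(joints, cam_bbox)
# rows: [0.2, 0.4] and [1.2, -0.6]
```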

@carlosedubarreto

@smandava98 sorry, I didn't even remember that I shared the code LOL.

I think the best would be to visualize the result in Blender, for example. Using SmoothNet I had some problems when there is too much movement in the original data: the pose for the character goes elsewhere in parts of the animation.

I actually don't know what to tell you, since my work was basically about handling the result inside Blender. I don't use the rendered video at all (I don't even look at it).

So in the end, what I suggest is not watching the video result but importing the animation into 3D space, in Blender for example.

And to do that you can even get the free addon I made, called CEB 4D Humans.
With the free version you can create the data using the 4D Humans code, and there is an option to import it in Blender.
The problem is that this free version doesn't have SmoothNet, but if you create another pkl file, there is an option to import it using the addon (I'm just not sure if it will work with your pkl after smoothing).

Anyway, even with SmoothNet working, there is another smoothing method that uses native tools in Blender; I received the tip from a user called giacomo spaconi.

The idea is to bake the animation, so that the animation curves are created in Blender, and then use the smooth tool that Blender has for animation. This approach brought the best results for smoothing the animation. I would suggest you try it.

@smandava98

@carlosedubarreto Thanks Carlos. I ended up fixing my issue (I edited my comment above). Your code works great!

@carlosedubarreto

@smandava98 That's great, thanks a lot for letting me know.

@timtensor

@carlosedubarreto - sorry to ask you this; I don't have much experience, but is there any other software that could be easier (Blender seems to have a big learning curve) for quickly importing the pkl file to get some animations going?

@carlosedubarreto

> @carlosedubarreto - sorry to ask you this , i have not much idea but is there any other software which could be easier ( blender seems to have a big learning curve) to quickly import the pckl file to get some animations going ?

Hello @timtensor, the main problem, I think, is "converting" the pkl data to something usable in 3D software.

Actually, Blender might be the easiest, because you can use an addon I made exactly for that.

You can get it for free here: https://carlosedubarreto.gumroad.com/l/ceb4dhumans?layout=profile

@timtensor

@carlosedubarreto thank you for the feedback. My idea was to do a 360-degree panning camera view around the SMPL files. I will see what can be done and how difficult it would be to get it running.

@carlosedubarreto

@timtensor, a 360 around the animation result, right?

You might need to apply some smoothing, because the result of 4D Humans is very good but has some jitter.

@timtensor

@carlosedubarreto - thanks. For now I just want a quick demo, even with the jitter. I will try to find the instructions and see if I can get it up and running.
Is there also an Ubuntu version of it?

@carlosedubarreto

@timtensor, the Blender addon I suggested is Windows-only, sorry.

@jackgeng19

> @linxiang4200 I must say that your idea was great, with it, most of the results I've got were much better.
>
> Let me show a couple of tests I made.
>
> the 0.05 is the default output from 4d humans, and the 0.06 is with smoothnet
>
> 20230812.0.06.wip.compare.TWITTER.mp4
> blender_wCgdEwH52K.mp4
> blender_aEm7wjouvx.mp4

I am having the same problem and am trying to reproduce this. But what parameter does 0.05 stand for in PHALP?

@carlosedubarreto

@jackgeng19

Wow, that is an old issue; I didn't remember most of it and actually ended up reading it all again.

The 0.05 is probably not actually a parameter in PHALP; it was the version of the addon I was working on.

At that time it was on version 0.05. Now it's on version 1.06, integrating not only 4D Humans but also WHAM and SLAHMR (which is not 100% working).

And about the smoothing: if you want to try, I think the best would be to apply the smoothing that an amazing user suggested to me (I loved the idea so much that I added it to the addon and added his name to that type of smooth :D).

Here is the method:
https://youtu.be/o3wxe5Wrs7g

@jackgeng19

Hey Carlos,

Thanks for your reply. While reproducing this, I found that our videos involve multi-person tracking of NBA game recordings in a large dataset, so Blender might not be a good option for me. Let me give you a quick look at the output we got from 4D Humans:

processed.mp4

We also got an output pkl file every time we process a video.

Here is what I thought:

Apply your smooth.py script to our pkl file to get a processed pkl file, then use the processed pkl to reconstruct a new video.

Are there any foreseeable obstacles, or do you have any other suggestions for our current situation?

Thanks!

@jackgeng19

Screenshot 2024-04-08 at 20 17 02

If that video doesn't work, here's a screenshot for your reference

@carlosedubarreto

Hello @jackgeng19
Actually, I still think that going through Blender could be a good idea, because you can write a script to run everything you need in headless mode.

The smooth script that I shared works well, but the smoothing in Blender is far superior; I was suggesting it in case you want better results.

Can it work with SmoothNet? Yes it can, but there will be moments where the motion won't be good.
Actually, depending on your needs, even the Blender smoothing won't be good enough.

I mean, 4D Humans is an amazing project and it outputs great results (with jitter), but the real problem is the jitter.

To get a great output, what I found is that the best would be to select the important frames for a character and interpolate the in-betweens. Or maybe use Cascadeur to help with that jitter cleaning (I didn't test the Cascadeur path yet).
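
The "keep the important frames and interpolate the in-betweens" idea can be sketched as linear interpolation between hand-picked keyframes, which discards the jitter between them by construction (a crude stand-in for what Blender's bake-and-clean workflow does):

```python
import numpy as np

def keyframe_interpolate(curve: np.ndarray, keyframes: list) -> np.ndarray:
    """Keep only the values at `keyframes` and linearly interpolate the
    frames in between; per-frame jitter between keyframes disappears."""
    frames = np.arange(len(curve))
    return np.interp(frames, keyframes, curve[keyframes])

rng = np.random.default_rng(0)
jittery = np.linspace(0.0, 1.0, 11) + 0.1 * rng.standard_normal(11)
clean = keyframe_interpolate(jittery, [0, 5, 10])
# `clean` matches `jittery` exactly at frames 0, 5 and 10 and is
# piecewise-linear everywhere else.
```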

So, to make my answer shorter:
It's possible to use Blender even with lots of people to track; I also sent a script in an issue that you could use to load the pkl in Blender.
And using Blender you get a better smoothing option that works even faster than SmoothNet.
