Can you publish the eval code? #29
Comments
Hi, I met the same problem. Maybe you can try the evaluation code provided by FreiHAND:
The evaluation ground truth is also released:
Note that the code needed some modifications in my case:

```python
import base64
import open3d as o3d

# Build an Open3D point cloud from a vertex array; gt and pr below are
# point clouds built the same way from ground-truth and predicted vertices.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(verts)

# Point-to-point distances in both directions
d1 = gt.compute_point_cloud_distance(pr)
d2 = pr.compute_point_cloud_distance(gt)

# Embed an image in an HTML report as a base64 string
data_uri1 = base64.b64encode(open(img_path, 'rb').read())
data_uri1 = data_uri1.decode("utf-8")  # bytes to str, b'123' to '123'
```

If you have a file import problem, add:

```python
import sys
sys.path.append('..')
```
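A minimal sketch of how d1 and d2 could be folded into one symmetric error value, assuming coordinates in metres; the helper name and the averaging scheme are illustrative, not necessarily FreiHAND's exact metric:

```python
import numpy as np

# Hypothetical helper: symmetric mean cloud-to-cloud distance in millimetres.
# Assumes gt and pr are o3d.geometry.PointCloud objects with coordinates in metres.
def mesh_error_mm(gt, pr):
    d1 = np.asarray(gt.compute_point_cloud_distance(pr))  # gt -> pr
    d2 = np.asarray(pr.compute_point_cloud_distance(gt))  # pr -> gt
    return 1000.0 * 0.5 * (d1.mean() + d2.mean())         # metres -> mm
```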
Can I have a look at your pred.py file? Thank you.
In fact, I didn't write a pred.py myself. Here is the snippet I use to dump results:
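As a sketch of what such a dump can look like, assuming the FreiHAND convention that pred.json is a single JSON list holding the xyz and verts predictions; the function name and exact layout here are assumptions:

```python
import json

def dump(pred_out_path, xyz_pred_list, verts_pred_list):
    """Save predictions into a JSON file in the layout the evaluation expects (assumed)."""
    # Convert numpy arrays to plain lists so they are JSON-serialisable
    xyz_pred_list = [x.tolist() for x in xyz_pred_list]
    verts_pred_list = [x.tolist() for x in verts_pred_list]
    # One JSON list with two entries: 3D joints and mesh vertices per sample
    with open(pred_out_path, 'w') as fo:
        json.dump([xyz_pred_list, verts_pred_list], fo)
```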
OK, thank you. May I ask how to get the corresponding MANO parameters for the images in the test set?
I'm not sure, but maybe it's in the official eval zip file: FreiHAND_pub_v2_eval.zip (referenced at line 28 in 5ea4ab9). Maybe you can try this.
But that is the _mano file in the training directory. The training set has this file, but only images, K.json, and scale.json are available in the evaluation directory. What bothers me is how to use these files to predict xyz.json. If you are free, please help me. Thank you.
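For context, a minimal sketch of what those two files are typically used for, assuming FreiHAND's evaluation layout: K holds the camera intrinsics for projecting a 3D prediction into the image, and scale restores the metric size of a root-relative, scale-normalised prediction. The file names follow the FreiHAND split, and the prediction array is a placeholder:

```python
import json
import numpy as np

# Assumed file names from the FreiHAND evaluation split
K_list = json.load(open('evaluation_K.json'))          # one 3x3 intrinsic matrix per image
scale_list = json.load(open('evaluation_scale.json'))  # one metric scale value per image

i = 0
K = np.array(K_list[i])
scale = scale_list[i]

# Placeholder for a network's root-relative, scale-normalised 21x3 joint prediction
xyz_rel = np.zeros((21, 3))
xyz_rel[:, 2] = 1.0  # keep points in front of the camera so the projection is valid

xyz = scale * xyz_rel               # restore metric scale; this is what goes into xyz.json
uv_hom = (K @ xyz.T).T              # pinhole projection into the image plane
uv = uv_hom[:, :2] / uv_hom[:, 2:]  # pixel coordinates (u, v)
```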
NONONO! FreiHAND has already released its evaluation annotations on the official dataset website: the zip file contains _mano, _verts, _xyz, ... for the evaluation set.
But that pred.py requires writing your own prediction code. Thank you for your advice. The official release does provide the annotations, but I would like to ask how to predict xyz myself:
Sorry for not getting the point. What HandMesh does is just copy the code for converting prediction results to JSON from the official FreiHAND code.
My question is:
There is so much research about ...
OK, thank you for your suggestion. May I ask about the model in this project: is it trained? Can pose_hand directly predict xyz and verts? What I don't understand here is the pose_hand MANO parameter: is it a 61-dimensional MANO parameter vector, and how is it obtained from a test image?
I think this is getting far from the original issue title, so this is my last reply.
If this means:
Sorry, I really can't understand it...
You can see this: https://github.com/hassony2/manopth
I guess it can only be regressed from ground-truth hand vertices.
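To make that last point concrete, a minimal sketch of using manopth: a MANO layer maps pose/shape parameters to hand vertices and joints, and the parameters can be regressed from ground-truth vertices by simple optimisation. The layer options, iteration count, and learning rate below are assumptions for illustration:

```python
import torch
from manopth.manolayer import ManoLayer

# MANO layer: full axis-angle pose (3 global + 45 joint) plus 10 shape betas
mano = ManoLayer(mano_root='mano/models', use_pca=False, flat_hand_mean=False)

pose = torch.zeros(1, 48, requires_grad=True)   # pose parameters to regress
shape = torch.zeros(1, 10, requires_grad=True)  # shape parameters (betas) to regress

# Placeholder target; use real ground-truth vertices in the same units as the layer output
gt_verts = torch.rand(1, 778, 3)

# Fit the parameters to the target vertices by gradient descent
opt = torch.optim.Adam([pose, shape], lr=1e-2)
for _ in range(200):
    verts, joints = mano(pose, shape)
    loss = ((verts - gt_verts) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```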
Hi, I want to test my results on your CodaLab, but it seems something is wrong and I can't get the score. Can you publish the evaluation code so that I can get my eval results?