
Hello, after reading your paper, may I ask why you chose 178 for the CelebA crop size? #178

Open
Matt-V50 opened this issue Apr 19, 2022 · 0 comments


@Matt-V50

Here is what the paper describes:

With CelebA, we cropped the center 178x178 of the images, then resized them to 256x256 using bilinear interpolation. For Paris StreetView, since the images in the dataset are elongated (936 x 537), we separate each image into three: 1) left 537 x 537, 2) middle 537 x 537, 3) right 537 x 537 of the image. These images are scaled down to 256x256 for our model, totaling 44,700 images.
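For reference, here is a minimal sketch of the CelebA preprocessing as I understand it from that passage, using Pillow. The function name `center_crop_resize` is my own; the paper only specifies the 178x178 center crop and bilinear resize to 256x256, so please correct me if your actual pipeline differs.

```python
from PIL import Image

def center_crop_resize(img, crop=178, size=256):
    # Hypothetical helper mirroring the paper's description:
    # take the central crop x crop region, then bilinearly resize.
    w, h = img.size
    left = (w - crop) // 2
    top = (h - crop) // 2
    img = img.crop((left, top, left + crop, top + crop))
    return img.resize((size, size), Image.BILINEAR)

# CelebA aligned images are 178x218, so the 178 crop spans the full width
# and keeps only the central 178 rows (mostly the face region).
```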

After a little testing, I feel this number has a big impact on the results.

So maybe you have some experience with choosing this value. Could you share it? I would really appreciate it.
