This feature is interesting, but the results seem a bit disappointing. What is the difference between it and Fooocus zoom? #22
Hi @chenpipi0807, thanks for your interest. As I've mentioned in many places, DemoFusion is proposed for high-resolution generation, and one potential application is using a real image as the initialization. However, it's still a generation process, and the generated results correspond strongly to SDXL's prior knowledge. For your needs, you should look to super-resolution (SR) methods. SR is exactly the term we avoid using, to prevent giving our readers that misimpression. I'm also bummed that there seems to be such a misunderstanding on social media right now about the motivation of our work :(
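For anyone wondering how "using a real image as the initialization" differs from SR, the general idea behind img2img-style initialization can be sketched with a toy NumPy example. This is a simplification, not DemoFusion's actual code: `img2img_init` and its linear noise schedule are hypothetical, but the principle matches standard diffusion img2img, where a `strength` parameter controls how much of the denoising schedule is rerun and therefore how far the output may drift from the input.

```python
import numpy as np

def img2img_init(image, strength, num_steps=50, rng=None):
    """Toy sketch of img2img initialization (hypothetical, simplified).

    The input image is noised to an intermediate point in the schedule
    chosen by `strength`; a real pipeline would then denoise from there.
    Lower strength keeps more of the original; strength=1.0 is the same
    as starting from pure noise, i.e. full regeneration.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    start_step = round(num_steps * strength)   # how much of the schedule is rerun
    alpha = 1.0 - start_step / num_steps       # toy signal-retention factor
    noise = rng.standard_normal(image.shape)
    latent = np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise
    return latent, start_step

image = np.ones((4, 4))
latent_lo, steps_lo = img2img_init(image, strength=0.3)  # stays close to the input
latent_hi, steps_hi = img2img_init(image, strength=0.9)  # mostly regenerated
```

This is why a generation process can never be a faithful "enlarger" of a specific photo: any nonzero strength injects noise that the model resolves using its prior, not the photo's true high-frequency content.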
Is it possible to add new content and more detail on the basis of the original image? I've found that image super-resolution (SR) cannot add more detail, or adds very little.
Providing some sample outputs I've made here to give a good real-world example for anyone curious what this is useful for. Simply put, the level of detail I'm able to get out of this pipeline is amazing! But I'm generating from new ideas and concepts. While I can do img2img and ControlNet with it, I'll never get the original with more details, because of what it is (a generation process, as @RuoyiDu mentioned earlier).

I wouldn't say the results are disappointing at all! I don't intend to run defense, but when used as intended the results are absolutely astounding. I do hope the confusion propagating on social media quiets down. When generating new works (or derivative works using ControlNet), I haven't found anything else that can output at this resolution with this level of detail.

There's even a pipeline I use to optimize generated images: thanks to its three-step output process, I can upscale the smaller intermediate generations to help repair oddities or replications that might appear in the last step, if I even need to. It's no hyperbole to say it has revolutionized how I approach and create generative AI images!

Example Images
good work
Its similarity to the original is too low. I thought I could use it to enlarge my girlfriend's photo. Is it more like how a refiner works?