-
Yes. This is planned. I already started working on it, but it will take some time before it's done.
-
This is available now.
-
It seems that HCP-Diffusion supports prompt tuning during fine-tuning via a pivotal tuning approach, I suppose, but it lacks a good interface and documentation. Adding this method would make OneTrainer a better "all in one" trainer, because you wouldn't need to train LoRAs one by one, and it is very much in line with OneTrainer's objective of being a one-stop solution for all your Stable Diffusion training needs. A minimal sketch of the idea is below.
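For anyone unfamiliar with the approach: the core of pivotal tuning is that a new token embedding (textual inversion) and low-rank adapters (LoRA) are optimized jointly from the same loss, instead of being trained one by one. Here is a minimal PyTorch sketch of that idea; the `LoRALinear` class, the learning rates, and the dummy loss are illustrative assumptions, not OneTrainer's or HCP-Diffusion's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of pivotal tuning: a new concept embedding
# (textual inversion) and a low-rank adapter (LoRA) are optimized
# jointly, so the two components do not have to be trained one by one.
# LoRALinear, the learning rates, and the dummy loss below are
# assumptions for demonstration, not OneTrainer's actual API.

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)        # base model stays frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)     # LoRA starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

embed_dim = 768                            # e.g. SD 1.x text encoder width
concept_embedding = nn.Parameter(torch.randn(embed_dim) * 0.01)
lora_layer = LoRALinear(nn.Linear(embed_dim, embed_dim))

# One optimizer, two parameter groups: the point of pivotal tuning
# is that both groups receive gradients from the same loss.
optimizer = torch.optim.AdamW([
    {"params": [concept_embedding], "lr": 5e-3},
    {"params": [p for p in lora_layer.parameters() if p.requires_grad],
     "lr": 1e-4},
])

# Stand-in for one real training step (the actual loss would come
# from the diffusion model's noise-prediction objective).
target = torch.zeros(embed_dim)
loss = F.mse_loss(lora_layer(concept_embedding), target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

A common design choice in pivotal tuning setups is to give the embedding a noticeably higher learning rate than the LoRA weights, since it is a single vector being pulled toward the new concept while the adapters only make small corrections around it.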