
Question about the time encoding #2

Open · violet-sto opened this issue Nov 9, 2023 · 1 comment

@violet-sto

Hi, I have a question about the time encoding. As DiffPreT is trained like a diffusion model, does the backbone network GearNet explicitly encode the time step?

@Oxer11
Collaborator

Oxer11 commented Nov 27, 2023

Hi, thanks for the question. This is essentially a difference between diffusion models for generation and for pre-training. In pre-training, we do not explicitly encode the time step, both to accommodate different encoder architectures and to keep consistency between pre-training and fine-tuning, since no noise is introduced during fine-tuning. Instead, the noise level (time step) is encoded implicitly by a perturbed distance encoder.

# MLP that embeds the perturbed pairwise distance into edge features; the noise level is thus encoded implicitly
self.dist_mlp = layers.MLP(1, [output_dim] * (num_mlp_layer - 1) + [output_dim])
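
For intuition, here is a minimal sketch of this idea in plain PyTorch. It is not the DiffPreT/torchdrug implementation; the class name PerturbedDistanceEncoder and its signature are illustrative only. The encoder never sees the time step t directly: distances are computed from the noised coordinates, so the amount of perturbation itself carries the noise level into the edge features.

import torch
from torch import nn

class PerturbedDistanceEncoder(nn.Module):
    """Embed pairwise distances of noised coordinates; the noise level is implicit."""

    def __init__(self, output_dim, num_mlp_layer=2):
        super().__init__()
        dims = [1] + [output_dim] * num_mlp_layer
        mlp = []
        for i in range(num_mlp_layer):
            mlp.append(nn.Linear(dims[i], dims[i + 1]))
            if i < num_mlp_layer - 1:
                mlp.append(nn.ReLU())
        self.dist_mlp = nn.Sequential(*mlp)

    def forward(self, noised_coords, edge_index):
        # noised_coords: (num_nodes, 3), coordinates after adding diffusion noise at step t
        # edge_index: (2, num_edges), source/target node indices
        src, dst = edge_index
        dist = (noised_coords[src] - noised_coords[dst]).norm(dim=-1, keepdim=True)
        return self.dist_mlp(dist)  # per-edge features fed to the backbone (e.g. GearNet)

# Toy usage: larger noise -> more perturbed distances -> different edge features
coords = torch.randn(10, 3)
edges = torch.randint(0, 10, (2, 30))
feat = PerturbedDistanceEncoder(output_dim=64)(coords, edges)  # shape (30, 64)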
