A question about the fundamental #394

Open
drxxy98 opened this issue Aug 21, 2024 · 0 comments

Comments


drxxy98 commented Aug 21, 2024

Can CLIP's image feature extraction module be used as the encoder that maps images from pixel space to latent space? During VAE training, one would only need to use CLIP's image feature extraction module as the encoder and train it jointly with the decoder. The advantage of this approach, I believe, is that the text and image encodings consumed by the U-Net would live in the same latent space. In the current code, CLIP and the VAE are clearly trained independently. My confusion is whether text and image encodings that do not reside in the same latent space could lead to an information mismatch. Can the U-Net effectively link the information from these two feature spaces? I look forward to your response and interpretation. Thank you very much.
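To make the mismatch I am asking about concrete, below is a minimal sketch of what the two encoders actually produce. It is my own illustration, not this repository's code; the Hugging Face `transformers`/`diffusers` APIs and the checkpoint names are assumptions chosen for the example.

```python
import torch
from transformers import CLIPVisionModel, CLIPImageProcessor
from diffusers import AutoencoderKL

image = torch.rand(1, 3, 512, 512)  # dummy RGB batch with values in [0, 1]

# CLIP's vision tower: trained contrastively against text, has no decoder.
clip = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
inputs = processor(images=image, return_tensors="pt", do_rescale=False)
with torch.no_grad():
    clip_out = clip(**inputs)
print(clip_out.pooler_output.shape)      # [1, 1024]: one global image vector
print(clip_out.last_hidden_state.shape)  # [1, 257, 1024]: CLS + 16x16 patch tokens

# Stable Diffusion's VAE encoder: trained with a decoder for pixel reconstruction.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
with torch.no_grad():
    latents = vae.encode(image * 2.0 - 1.0).latent_dist.sample()  # VAE expects [-1, 1]
print(latents.shape)  # [1, 4, 64, 64]: the spatial latent the U-Net denoises
```

As the shapes show, CLIP's output is a set of text-aligned tokens with no paired decoder, while the VAE latent is a spatial tensor trained for reconstruction, which is what motivates the joint training I describe above.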

drxxy98 changed the title from "A question regarding the underlying principles" to "A question about the fundamental" on Aug 21, 2024