Some questions about the appearance space and the structure space #37
@BossBobxuan I also have the same questions. My hypothesis is that we need to make the generated image have a high predicted probability for the ID from both spaces, since both contain ID-related information.
Hi @BossBobxuan, @Zonsor. However, we did not do it. The main reason is that the structure space is relatively low-level and is used to reconstruct the image. Thus, if we wanted to extract a high-level feature from it, we would need one more ResNet, which would introduce extra parameters. So we made a tradeoff and learn the structure info, e.g., hair, hat, bag and body size, on the appearance embedding as well.
Hello, I couldn't find in the code how the fine-grained classification is done.
We just use two classifiers, which do not share weights.
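As a minimal sketch (the layer sizes and names below are illustrative, not the exact ones in reIDmodel.py), two classifier heads can sit on the same backbone feature while keeping their own, separate parameters:

```python
import torch.nn as nn
from torchvision import models

class TwoHeadReID(nn.Module):
    """Illustrative sketch: one shared backbone, two classifier heads
    that do NOT share weights (dimensions are made up, not DG-Net's exact ones)."""
    def __init__(self, num_ids=751, feat_dim=2048):
        super().__init__()
        resnet = models.resnet50(pretrained=True)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # global-pooled feature
        # two independent heads: parameters are created separately, so nothing is shared
        self.classifier1 = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, num_ids))
        self.classifier2 = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, num_ids))

    def forward(self, x):
        f = self.backbone(x).flatten(1)            # shared base feature
        return self.classifier1(f), self.classifier2(f)
```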
So ft_netAB has shared base parameters serving two purposes: encoding appearance information for generation and learning discriminative reID features.
I have a naive doubt: will the shared base parameters be a tradeoff between having nice appearance information and having discriminative reID information? Can we use a different model for learning p?
Yes. The generation somewhat affects the reID, so I applied the detach at https://github.com/NVlabs/DG-Net/blob/master/reIDmodel.py#L142
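In general terms (a toy illustration, not the linked code), detaching a shared feature before one branch stops that branch's gradients from reaching the backbone:

```python
import torch

# Toy example of what .detach() does for a shared feature:
# gradients from the detached branch never reach the backbone parameters.
backbone = torch.nn.Linear(8, 8)
head_a = torch.nn.Linear(8, 4)   # branch we want to shield the backbone from
head_b = torch.nn.Linear(8, 4)

x = torch.randn(2, 8)
f = backbone(x)

loss_a = head_a(f.detach()).sum()   # detached: backprop stops at f
loss_b = head_b(f).sum()            # not detached: gradients flow into the backbone
(loss_a + loss_b).backward()

# backbone.weight.grad now only contains contributions from loss_b
```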
Thanks for your quick response. Your work is quite inspiring and I am learning a lot from your paper.
Thanks for your response.
Hello, in your paper, I think that if we use the appearance space of ID I and the structure space of ID J to generate an image, the ID of the generated image should be J. So I think the structure space encodes the ID information.
This loss uses the ID of the appearance space to determine the ID of the generated image.
But this loss uses the ID of the structure space to determine the ID of the generated image.
And you also mentioned in the paper that
And obviously the structure space encodes the hair, hat, bag and body size. Therefore, the structure space encodes the ID.
It is confusing to use the appearance space to discriminate the ID. Could you please explain this?
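For reference, the two losses discussed above can be written as two cross-entropy terms that only differ in which ID they use as the target. The function and tensor names below are hypothetical and simply illustrate that distinction; they are not the DG-Net training code:

```python
import torch
import torch.nn.functional as F

def generated_image_id_losses(logits_primary, logits_fine, id_appearance, id_structure):
    """Hypothetical sketch of the two losses on a generated image:
    one supervises it with the appearance-provider's ID, the other with the
    structure-provider's ID. Names and weighting are illustrative only."""
    loss_app = F.cross_entropy(logits_primary, id_appearance)   # ID taken from the appearance source
    loss_str = F.cross_entropy(logits_fine, id_structure)       # ID taken from the structure source
    return loss_app, loss_str

# usage with dummy tensors: 4 generated images, 751 identities
logits_primary = torch.randn(4, 751)
logits_fine = torch.randn(4, 751)
id_app = torch.randint(0, 751, (4,))
id_str = torch.randint(0, 751, (4,))
print(generated_image_id_losses(logits_primary, logits_fine, id_app, id_str))
```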