
Plotting features using UMAP #9

Open
fabriziojpiva opened this issue Apr 29, 2021 · 4 comments


fabriziojpiva commented Apr 29, 2021

Hello, thanks for such a great contribution to the field; it is really groundbreaking work.

I was trying to reproduce the feature plot in Figure 5 of the main manuscript using UMAP. How did you determine which features belong to those specific classes (building, traffic sign, pole, and vegetation)? From the output we can determine which class each pixel belongs to, but how did you do it in the feature space? By resizing the logits back to the feature-space shape, then taking the argmax to determine the correspondence?


super233 commented May 2, 2021

ProDA/calc_prototype.py, lines 119 to 122 in 9ba80c7:

```python
s = feat_cls[n] * outputs_pred[n][t]
# if (torch.sum(outputs_pred[n][t] * labels_expanded[n][t]).item() < 30):
#     continue
s = F.adaptive_avg_pool2d(s, 1) / scale_factor[n][t]
```

The network has two outputs, `feat` and `out`; note that `feat` and `out` have the same shape. The process is as follows:

  1. Get pseudo labels by applying `argmax` to `out`.
  2. For each class, select the corresponding `feat` pixels using the pseudo labels, then perform `F.adaptive_avg_pool2d` on the selected `feat` to get image-level features for each class.
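The two steps above can be sketched roughly as follows. This is a minimal sketch, not the repo's exact code: the function name, tensor shapes, and the mean-normalization (modeled on the `scale_factor` division in the `calc_prototype.py` snippet) are assumptions.

```python
import torch
import torch.nn.functional as F

def per_class_features(feat: torch.Tensor, out: torch.Tensor) -> torch.Tensor:
    """Average-pool `feat` per pseudo-labelled class (hypothetical helper).

    feat: [B, C, H, W] backbone features; out: [B, K, H, W] logits.
    Returns [K, B, C] image-level features (zeros for absent classes).
    """
    num_classes = out.shape[1]
    pseudo = out.argmax(dim=1)                         # step 1: pseudo labels, [B, H, W]
    feats = []
    for c in range(num_classes):
        mask = (pseudo == c).unsqueeze(1).float()      # [B, 1, H, W] binary mask
        pooled = F.adaptive_avg_pool2d(feat * mask, 1)  # step 2: pool masked features
        # Divide by the pooled mask so the result is a mean over this class's
        # pixels only, mirroring the scale_factor division in calc_prototype.py.
        denom = F.adaptive_avg_pool2d(mask, 1).clamp(min=1e-6)
        feats.append((pooled / denom).flatten(1))      # [B, C]
    return torch.stack(feats)                           # [K, B, C]
```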


fabriziojpiva commented Jun 21, 2021

2\. For each class, select the corresponding `feat` pixels using the pseudo labels, then perform `F.adaptive_avg_pool2d` on the selected `feat` to get image-level features for each class.

Why is adaptive average pooling needed? To my understanding, if I were to plot the features I would do the following:

  1. Get pseudo labels by applying `argmax` to `out`. The resulting tensor `out_argmax` has shape `[batch_size, h, w]`, which I flatten into a one-dimensional vector `class_ids` of size `[N]`, where `N = batch_size*h*w`.
  2. Reshape the features `feat` to match `class_ids`: from a feature tensor of shape `[batch_size, depth, h, w]` to a new shape `[N, depth]`. Let's call the resulting reshaped tensor `feats_r`.
  3. Store `class_ids` from 1) and `feats_r` from 2) in a pandas dataframe. All the class ids and reshaped features are accumulated into a dataframe `df` with `depth + 1` columns, where the first `depth` columns hold the features and the last one the class ids.
  4. Use UMAP to reduce all but the last column of `df`, and plot the resulting embeddings, coloring each point by its class id.
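Steps 1 and 2 above can be sketched as follows. This is a minimal sketch: the function name and shapes are assumptions, and the pandas/umap portion (steps 3 and 4) is shown only in comments since `pandas` and `umap-learn` are assumed extra dependencies.

```python
import numpy as np

def flatten_features(feat: np.ndarray, out: np.ndarray):
    """Flatten per-pixel features and pseudo labels (hypothetical helper).

    feat: [B, C, H, W] features; out: [B, K, H, W] logits.
    Returns (feats_r of shape [N, C], class_ids of shape [N]), N = B*H*W.
    """
    # Step 1: pseudo labels via argmax over the class axis, then flatten
    class_ids = out.argmax(axis=1).reshape(-1)           # [N]
    # Step 2: move channels last so each row is one pixel's feature vector
    B, C, H, W = feat.shape
    feats_r = feat.transpose(0, 2, 3, 1).reshape(-1, C)  # [N, C]
    return feats_r, class_ids

# Steps 3-4 (assuming pandas and umap-learn are installed):
# import pandas as pd, umap
# df = pd.DataFrame(feats_r); df["class_id"] = class_ids
# emb = umap.UMAP(n_components=2).fit_transform(df.drop(columns="class_id"))
```

Both outputs are ordered row-major over `(B, H, W)`, so row `i` of `feats_r` corresponds to `class_ids[i]`.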

@fabriziojpiva fabriziojpiva reopened this Jun 21, 2021
@fabriziojpiva

ProDA/calc_prototype.py, lines 119 to 122 in 9ba80c7:

```python
s = feat_cls[n] * outputs_pred[n][t]
# if (torch.sum(outputs_pred[n][t] * labels_expanded[n][t]).item() < 30):
#     continue
s = F.adaptive_avg_pool2d(s, 1) / scale_factor[n][t]
```

The network has two outputs, `feat` and `out`; note that `feat` and `out` have the same shape. The process is as follows:

1. Get pseudo labels by applying `argmax` to `out`.

2. For each class, select the corresponding `feat` pixels using the pseudo labels, then perform `F.adaptive_avg_pool2d` on the selected `feat` to get image-level features for each class.

I just tried this approach, storing all these vectors `s` in a dataframe and then reducing them to 2D representations using UMAP, but I obtained very dense clusters compared to the figures in the manuscript, where the point clouds look sparser. Could you please provide more information about these feature representations:

  1. Are these features computed on the training split of Cityscapes?
  2. What parameters are used for UMAP (`n_neighbors`, etc.)?
  3. Are these feature vectors computed per batch or per image?

Would be glad to hear from you. Thanks!


xylzjm commented May 19, 2023

no reply, right?
