I have two images img1 and img2, how to get the matched points in img2 given points in img1 #51

I have two images, img1 and img2. How can I get the matched points in img2 given some points (e.g. [px1,py1], [px2,py2], …) in img1? That is, I only want to match some specific points between the two images, not the full image.

Comments
Hi @ddongcui, you can use the dense feature map from the network and index only the features you want from the source image. Then you can search the dense feature map of the other image for the most similar features. You can also use the match refinement module in case you need refined matching.
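For reference, the repository also ships off-the-shelf matching entry points; here is a minimal sketch, assuming the torch.hub entry point and method names shown in the repository README (match_xfeat for sparse matching, match_xfeat_star for the semi-dense pipeline that uses the refinement module mentioned above):

```python
import numpy as np
import torch

# Placeholder images; in practice load real images, e.g. with cv2.imread.
im1 = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
im2 = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)

# Hub entry point and keyword names as shown in the repository README (assumption).
xfeat = torch.hub.load('verlab/accelerated_features', 'XFeat', pretrained=True, top_k=4096)

mkpts_0, mkpts_1 = xfeat.match_xfeat(im1, im2)       # sparse matching
mkpts_0, mkpts_1 = xfeat.match_xfeat_star(im1, im2)  # semi-dense matching + refinement
```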
Hello, I have read the code carefully. I found that the model generates the dense feature map in xfeat.py, where self.net(x) is an instantiation of the XFeatModel class.
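For reference, a minimal sketch of pulling the dense map out of the backbone, assuming the torch.hub entry point from the README and the (M1, K1, H1) unpacking used inside xfeat.py (verify both against your checkout):

```python
import torch

# Load XFeat via torch.hub (entry point name taken from the repository README).
xfeat = torch.hub.load('verlab/accelerated_features', 'XFeat', pretrained=True)

# Placeholder input; xfeat.py additionally preprocesses/pads real inputs,
# so image sizes that are not multiples of 32 may need that step first.
img = torch.rand(1, 3, 480, 640)

with torch.no_grad():
    # xfeat.net is the XFeatModel instance mentioned above; its forward pass
    # returns the dense descriptor map plus keypoint and reliability heads.
    M1, K1, H1 = xfeat.net(img)  # M1: (B, 64, H/8, W/8) dense feature map
```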
Once you have M1, you can index it with your source keypoints, then perform a similarity search over the dense feature map of the target image (call it M2). Here is an example (please consider this carefully; I did not check this code, it is just reference code I generated with an LLM, for you to get the idea):

```python
import torch

# Assumptions: M1 is the (B, C, H, W) dense feature map of the source image,
# M2 is the (B, C, H, W) dense feature map of the target image, and src_kpts
# is an (N, 2) tensor of (x, y) keypoint coordinates in the feature-map grid.
B, C, H, W = M1.shape
N = src_kpts.shape[0]

# Step 1: Extract sparse features (S1) at the provided coordinates.
# grid_sample expects coordinates normalized to the range [-1, 1].
src_kpts_norm = src_kpts.clone().float()
src_kpts_norm[:, 0] = src_kpts[:, 0] / (W - 1) * 2 - 1  # normalize x-coordinates
src_kpts_norm[:, 1] = src_kpts[:, 1] / (H - 1) * 2 - 1  # normalize y-coordinates

# Reshape to the (B, N, 1, 2) grid layout grid_sample expects.
grid = src_kpts_norm.unsqueeze(0).unsqueeze(2).expand(B, -1, 1, -1)

# Sample features at the keypoint locations: (B, C, N, 1) -> (B, C, N).
S1 = torch.nn.functional.grid_sample(M1, grid, mode='bilinear', align_corners=True)
S1 = S1.view(B, C, N)

# Step 2: Flatten the target feature map to (B, C, H*W).
S2 = M2.view(B, C, -1)

# Step 3: Similarity search via dot product
# (optionally L2-normalize S1 and S2 along C first for cosine similarity):
# (B, N, C) @ (B, C, H*W) -> (B, N, H*W)
similarity = torch.bmm(S1.permute(0, 2, 1), S2)

# Step 4: Recover target coordinates from the highest-similarity indices.
_, max_indices = torch.max(similarity, dim=-1)  # (B, N)
max_y = max_indices // W  # flattened index -> row
max_x = max_indices % W   # flattened index -> column

# (B, N, 2): matched (x, y) coordinates in the target feature map.
matched_coords = torch.stack([max_x, max_y], dim=-1)
```
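One caveat about the sketch above: XFeat's dense map is at 1/8 of the input resolution, so keypoints given in image pixels must be scaled into feature-map coordinates before Step 1, and the matched coordinates scaled back to pixels after Step 4. A minimal sketch, assuming a backbone stride of 8:

```python
STRIDE = 8  # XFeat's dense feature map is 1/8 of the input resolution

# Image pixels -> feature-map coordinates (apply to src_kpts before Step 1).
src_kpts_feat = src_kpts.float() / STRIDE

# Feature-map coordinates -> image pixels (apply to matched_coords after Step 4).
matched_coords_px = matched_coords.float() * STRIDE
```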