
Documentation and library on ICoordinateMapper #80

Open
KonstantinosAng opened this issue Feb 4, 2020 · 10 comments

Comments

@KonstantinosAng

KonstantinosAng commented Feb 4, 2020

I have found how to use most of the ICoordinateMapper functions using PyKinectV2. For anyone struggling with this, I wrote a file in my repository that is free to use:

https://github.com/KonstantinosAng/PyKinect2-Mapper-Functions
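
For reference, here is a rough usage sketch, assuming the file is saved as mapper.py next to your script (the function names and arguments are the ones shown in the examples later in this thread):

from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime
import mapper  # the file from the repository above, saved locally as mapper.py

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

# Wait until at least one depth and one color frame have been received
while not (kinect.has_new_depth_frame() and kinect.has_new_color_frame()):
    pass

# Map depth pixels to world (camera-space) points, and align the depth frame to the color frame
print(mapper.depth_points_2_world_points(kinect, _DepthSpacePoint, [[100, 100], [200, 200], [300, 300]]))
aligned = mapper.depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=False, return_aligned_image=True)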

@KonstantinosAng
Author

If you can't understand how to use these functions, ask me and I will show you an example.

@AlexCardaillac

If you can't understand how to use these functions, ask me and I will show you an example.

In the main of your example, I get [-inf, -inf] when trying to get the depth point of a color point.
How can I fix this?

@KonstantinosAng
Author

Kinect returns -inf for values that have too much noise and cannot be mapped from one frame to another.

First, try to tilt or move the camera slightly, and clean the lens to reduce noise. Also, if you are close to a window, light from the sun might interfere with the sensor. Make sure to avoid direct sunlight and use artificial light to get a clear view. Also change the if statement to:
if kinect.has_new_depth_frame() and kinect.has_new_color_frame():

to make sure that the Kinect has retrieved at least one depth frame and one color frame.
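
As a rough sketch of that check (using the same MapColorFrameToDepthSpace call as the examples further down in this thread; the color pixel (960, 540) is just an illustration):

import ctypes
import math
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

while True:
    # Only map once at least one depth AND one color frame have been received
    if kinect.has_new_depth_frame() and kinect.has_new_color_frame():
        # Map the full color frame into depth space
        points_type = _DepthSpacePoint * int(1920 * 1080)
        points = ctypes.cast(points_type(), ctypes.POINTER(_DepthSpacePoint))
        kinect._mapper.MapColorFrameToDepthSpace(ctypes.c_uint(512 * 424), kinect._depth_frame_data, ctypes.c_uint(1920 * 1080), points)
        p = points[540 * 1920 + 960]  # depth-space point for color pixel (960, 540)
        # Skip pixels the SDK could not map (they come back as -inf)
        if not (math.isinf(p.x) or math.isinf(p.y)):
            print(p.x, p.y)
            break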

Try all this and let me know if anything changed.

@AlexCardaillac

It works, thank you very much.
Your repo is really helpful.

@KonstantinosAng
Author

Thank you very much; feel free to ask me any time. I am working on a big project with the Kinect v2 in Python and have learned a lot of its functions. I cannot share the code yet, but I can help with any question about pykinect2, not only the mapper functions.

@hardik-uppal

Hey, I tried your repo to map depth images to color space with already-captured images, but the depth_2_color_space function returns arrays of zeros. Can you help me out with it?
Thanks

@KonstantinosAng
Author

KonstantinosAng commented May 11, 2020

It returns arrays of zeros because the Kinect device is not connected and running. To map a depth image to the color space, you need the Kinect's depth values, which represent the distance of the object in meters. A saved depth image or color image only has pixel values from 0 to 255, so it cannot be used to produce a mapped image. You can try it with the code below, but I don't think it will produce accurate results without the Kinect running:

import numpy as np
import ctypes
import cv2
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

""" import your images here """
depth_img = cv2.imread('path_to_your_depth_frame')
align_depth_img = cv2.imread('path_to_your_color_frame')

# Ask the mapper for one depth-space point per color pixel (1920 x 1080)
color2depth_points_type = _DepthSpacePoint * int(1920 * 1080)
color2depth_points = ctypes.cast(color2depth_points_type(), ctypes.POINTER(_DepthSpacePoint))
kinect._mapper.MapColorFrameToDepthSpace(ctypes.c_uint(512 * 424), kinect._depth_frame_data, ctypes.c_uint(1920 * 1080), color2depth_points)
# Copy the mapped (x, y) depth coordinates into a NumPy array of shape (1080, 1920, 2)
depthXYs = np.copy(np.ctypeslib.as_array(color2depth_points, shape=(kinect.color_frame_desc.Height * kinect.color_frame_desc.Width,)))
depthXYs = depthXYs.view(np.float32).reshape(depthXYs.shape + (-1,))
depthXYs += 0.5  # round to the nearest depth pixel
depthXYs = depthXYs.reshape(kinect.color_frame_desc.Height, kinect.color_frame_desc.Width, 2).astype(int)
depthXs = np.clip(depthXYs[:, :, 0], 0, kinect.depth_frame_desc.Width - 1)
depthYs = np.clip(depthXYs[:, :, 1], 0, kinect.depth_frame_desc.Height - 1)
# Sample the depth image at the mapped coordinates to build the aligned image
align_depth_img[:, :] = depth_img[depthYs, depthXs, :1]
cv2.imshow('Aligned Image', cv2.resize(cv2.flip(align_depth_img, 1), (int(1920 / 2.0), int(1080 / 2.0))))
cv2.waitKey(0)

Without accessing the Kinect device you cannot map the color pixels to the depth pixels.

Also, if you want to map depth frames to color space, you should use the color_2_depth function, or the code below:

import numpy as np
import ctypes
import cv2
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

""" import your images here """
color_img = cv2.imread('path_to_your_color_frame')
align_color_img = cv2.imread('path_to_your_depth_frame')

# Ask the mapper for one color-space point per depth pixel (512 x 424)
depth2color_points_type = _ColorSpacePoint * int(512 * 424)
depth2color_points = ctypes.cast(depth2color_points_type(), ctypes.POINTER(_ColorSpacePoint))
kinect._mapper.MapDepthFrameToColorSpace(ctypes.c_uint(512 * 424), kinect._depth_frame_data, kinect._depth_frame_data_capacity, depth2color_points)
# Copy the mapped (x, y) color coordinates into a NumPy array of shape (424, 512, 2)
colorXYs = np.copy(np.ctypeslib.as_array(depth2color_points, shape=(kinect.depth_frame_desc.Height * kinect.depth_frame_desc.Width,)))
colorXYs = colorXYs.view(np.float32).reshape(colorXYs.shape + (-1,))
colorXYs += 0.5  # round to the nearest color pixel
colorXYs = colorXYs.reshape(kinect.depth_frame_desc.Height, kinect.depth_frame_desc.Width, 2).astype(int)
colorXs = np.clip(colorXYs[:, :, 0], 0, kinect.color_frame_desc.Width - 1)
colorYs = np.clip(colorXYs[:, :, 1], 0, kinect.color_frame_desc.Height - 1)
# Sample the color image at the mapped coordinates to build the aligned image
align_color_img[:, :] = color_img[colorYs, colorXs, :]
cv2.imshow('img', cv2.flip(align_color_img, 1))
cv2.waitKey(0)

But again, I don't think it will produce anything useful.

@5av10

5av10 commented May 8, 2021

Thank you very much; feel free to ask me any time. I am working on a big project with the Kinect v2 in Python and have learned a lot of its functions. I cannot share the code yet, but I can help with any question about pykinect2, not only the mapper functions.

How do I extract the real-time X, Y, Z coordinates of a detected blob?
I am using OpenCV with cv2.SimpleBlobDetector_create() to detect and track the ball in the depth video, but I cannot manage to sample a point from the detected blob and get its x, y, z coordinates. The code is:

depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_img, alpha=255 / clipping_distance), cv2.COLORMAP_JET)
g = cv2.cvtColor(depth_colormap, cv2.COLOR_BGR2GRAY)
detector = cv2.SimpleBlobDetector_create(params)
points = detector.detect(g)
blank = np.zeros((1, 1, 1))
blobs = cv2.drawKeypoints(g, points, np.array([]), (0, 255, 0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

@KonstantinosAng
Author

Using my mapper repo, you can use the following code to get the world points of the detected blobs:


import mapper

# kinect is the PyKinectRuntime instance; depth_img, clipping_distance and params are defined as in your snippet
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_img, alpha=255 / clipping_distance), cv2.COLORMAP_JET)
g = cv2.cvtColor(depth_colormap, cv2.COLOR_BGR2GRAY)
detector = cv2.SimpleBlobDetector_create(params)
points = detector.detect(g)

for point in points:
    """ Get depth x, y points of detected blobs """
    depth_x, depth_y = point.pt[0], point.pt[1]
    """ Get world points of your depth x, y """
    world_x, world_y, world_z = mapper.depth_point_2_world_point(kinect, _DepthSpacePoint, [depth_x, depth_y])

blank = np.zeros((1, 1, 1))
blobs = cv2.drawKeypoints(g, points, np.array([]), (0, 255, 0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
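
The returned world_x, world_y, world_z are camera-space coordinates in meters. As a small hypothetical follow-up, reusing the variables above, you could label each blob with its distance:

# Hypothetical follow-up using the same variables as above:
# label each detected blob with its distance from the sensor (world_z is in meters).
for point in points:
    x, y = int(point.pt[0]), int(point.pt[1])
    _, _, world_z = mapper.depth_point_2_world_point(kinect, _DepthSpacePoint, [x, y])
    cv2.putText(blobs, '%.2f m' % world_z, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imshow('Blobs with distance', blobs)
cv2.waitKey(1)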

@songjiahao-wq

Kinect returns -inf for values that have too much noise and cannot be mapped from one frame to another.

First, try to tilt or move the camera slightly, and clean the lens to reduce noise. Also, if you are close to a window, light from the sun might interfere with the sensor. Make sure to avoid direct sunlight and use artificial light to get a clear view. Also change the if statement to: if kinect.has_new_depth_frame() and kinect.has_new_color_frame()

to make sure that the Kinect has retrieved at least one depth frame and one color frame.

Try all this and let me know if anything changed.

With color_point_2_depth_point, I always get [0, 0]. Here is my code:
if __name__ == '__main__':
    """
    Example of some usages
    """
    from pykinect2 import PyKinectV2
    from pykinect2.PyKinectV2 import *
    from pykinect2 import PyKinectRuntime
    import cv2
    import numpy as np

    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

    while True:
        if kinect.has_new_depth_frame() and kinect.has_new_color_frame():
            color_frame = kinect.get_last_color_frame()
            colorImage = color_frame.reshape((kinect.color_frame_desc.Height, kinect.color_frame_desc.Width, 4)).astype(np.uint8)
            colorImage = cv2.flip(colorImage, 1)
            cv2.imshow('Test Color View', cv2.resize(colorImage, (int(1920 / 2.5), int(1080 / 2.5))))
            depth_frame = kinect.get_last_depth_frame()
            depth_img = depth_frame.reshape((kinect.depth_frame_desc.Height, kinect.depth_frame_desc.Width)).astype(np.uint8)
            depth_img = cv2.flip(depth_img, 1)
            cv2.imshow('Test Depth View', depth_img)
            # print('*' * 80)
            # print(kinect._depth_frame_data)
            print(color_point_2_depth_point(kinect, _DepthSpacePoint, kinect._depth_frame_data.contents, [100, 150]))
            # print('*' * 80)
            print(depth_points_2_world_points(kinect, _DepthSpacePoint, [[100, 100], [200, 200], [300, 300]]))
            # print(color_2_world(kinect, kinect._depth_frame_data, _CameraSpacePoint, as_array=True))
            # print(intrinsics(kinect).FocalLengthX, intrinsics(kinect).FocalLengthY, intrinsics(kinect).PrincipalPointX, intrinsics(kinect).PrincipalPointY)
            # print(intrinsics(kinect).RadialDistortionFourthOrder, intrinsics(kinect).RadialDistortionSecondOrder, intrinsics(kinect).RadialDistortionSixthOrder)
            # print(world_point_2_depth(kinect, _CameraSpacePoint, [0.550, 0.325, 2]))
            img = depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=False, return_aligned_image=True)
            # depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=True)
            # img = color_2_depth_space(kinect, _ColorSpacePoint, kinect._depth_frame_data, show=True, return_aligned_image=True)

        # Quit using q
        if cv2.waitKey(1) & 0xff == ord('q'):
            break

    cv2.destroyAllWindows()
