Trouble while using the calibration matrices #134

Open · Theopetitjean opened this issue Jan 31, 2024 · 4 comments

@Theopetitjean

Hello, thank you for making your code public. I'm having some problems using the calibration. Here are the details; I hope you can help me!

I'm trying to calibrate a Fanuc CR-7iAL robot with a Kinect camera. Here is the launch file used to launch easy_handeye:

<?xml version="1.0" ?>
<launch>

    <arg name="namespace_prefix" default="fanuc_kinect_handeyecalibration" />
    <arg name="eye"             default="left"/>

   <!-- Start the Kinect_v1/Xtion with OpenNI -->

    <include file="$(find openni_launch)/launch/openni.launch" >

        <arg name="depth_registration" value="true" />

    </include>

   <!-- start ArUco -->
    <node name="aruco_single" pkg="aruco_ros" type="single">
        <remap from="/camera_info" to="/camera/rgb/camera_info" />
        <remap from="/image" to="/camera/rgb/image_rect_color" />
        <param name="image_is_rectified" value="true"/>
        <param name="marker_size"        value="0.14"/>
        <param name="marker_id"          value="26"/>
        <param name="reference_frame"    value="camera_link"/>
        <param name="camera_frame"       value="camera_rgb_optical_frame"/>
        <param name="marker_frame"       value="camera_marker" />
    </node>

   <!-- Start the CR7IAL FANUC robot -->

    <include file="$(find fanuc_cr7ial_moveit_config)/launch/moveit_planning_execution.launch">

        <arg name="sim" value="false"/>
        <arg name="robot_ip" value="192.168.0.9" doc="Adresse IP du robot Fanuc CR-7iAL" />

    </include>


   <!-- start easy_handeye -->
    <include file="$(find easy_handeye)/launch/calibrate.launch" >
        <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
        <arg name="eye_on_hand" value="false" />

        <arg name="robot_base_frame" value="base_link" />
        <arg name="robot_effector_frame" value="tool0" />
        <arg name="tracking_base_frame" value="camera_link" />
        <arg name="tracking_marker_frame" value="camera_marker" />


        <arg name="freehand_robot_movement" value="false" />
        <arg name="robot_velocity_scaling" value="0.5" />
        <arg name="robot_acceleration_scaling" value="0.2" />


    </include>

</launch>

When launching this file, the robot seems to do the work as expected, and I get some results such as:

[image]

Sometimes the detection of the ArUco marker is lost; it appears I should probably stick it to a solid support, because it sometimes gets deformed. We then obtain samples such as the following. (While writing this, I am also wondering: are you supposed to take a sample between each pair of poses? Should I do that?)

[image]

It seems like I'm doing something wrong, but I haven't been able to find out what. What should I do to use this code properly? Am I forgetting something when using the package?

My main objective is to find the calibration matrices in order to transform the information detected by the camera into a reachable pose for the robot. To make things short, after using the calibration package I get both the quaternion and the translation, and I apply the resulting transform directly to the XYZ point obtained from the camera. So what should be done to get the correct R|t transform matrices? Am I forgetting something in the easy_handeye process?

@Theopetitjean (Author)

Okay, hello again,

Here is an update on my problem. While taking the samples and computing the results, I started wondering about the reported position of my marker.

[image]

As we can see in this image, the marker is detected far below in the RViz visualizer, whereas it should in fact be near tool0, since the marker is mounted on the tool of the robot.
Is that a problem for the final calibration? Can it be fixed? It appears that the marker position is somehow shifted by a 90-degree angle...

Thanks a lot for your support!

@Theopetitjean (Author)

Hello again,
I decided to ignore the previous error and just kept going.
I obtained the following results:

  eye_on_hand: true
  freehand_robot_movement: false
  move_group: manipulator
  move_group_namespace: /
  namespace: /fanuc_kinect_handeyecalibration_eye_on_hand/
  robot_base_frame: base
  robot_effector_frame: tool0
  tracking_base_frame: camera_depth_optical_frame
  tracking_marker_frame: camera_marker
transformation:
  qw: 0.6729530917701868
  qx: -0.03934630893158204
  qy: 0.016416276976428446
  qz: 0.7384554895866374
  x: 0.0344229908306081
  y: -0.078848650464033
  z: -0.056035976895112756

I assumed that the rotation part of the result is expressed as a quaternion; is that correct?

I am now wondering about the usage of this calibration. I have a Kinect camera and a YOLO object detector that gives me the centre of a bounding box in the RGB image, so I get X and Y coordinates in pixels. Then, thanks to the Kinect sensor, I get the corresponding Z from the raw data of the depth sensor. I then apply the calibration to the 3D point obtained this way, like this:

import numpy as np
from scipy.spatial.transform import Rotation as Ro


def CoordonneesDeB(Ptx, Pty, Ptz, T_values, R_values):
    # R_values: calibration quaternion, T_values: calibration translation vector.
    MatRotation = Ro.from_quat(R_values)
    A = np.array([Ptx, Pty, Ptz])
    # Apply the rigid transformation: B = R * A + t
    B = MatRotation.apply(A) + T_values
    print('The value of B is:')
    print(B)

    return B[0], B[1], B[2]
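
For reference, this is roughly how I call it with the values from the result above (the point coordinates are placeholders). One thing I am not sure about is the quaternion ordering: scipy's Rotation.from_quat expects [qx, qy, qz, qw] (scalar last), while the result file above lists qw first, so I reorder it here:

# Translation and quaternion copied from the calibration result above.
T_values = np.array([0.0344229908306081, -0.078848650464033, -0.056035976895112756])
# Reordered to [qx, qy, qz, qw] for scipy (the YAML above lists qw first).
R_values = [-0.03934630893158204, 0.016416276976428446, 0.7384554895866374, 0.6729530917701868]

# Placeholder 3D point measured in the camera frame, in metres.
Bx, By, Bz = CoordonneesDeB(0.10, -0.05, 0.80, T_values, R_values)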

So here is my question: should I feed in the Z coordinate obtained from the depth sensor when computing the new coordinates of my point, or should I do something like [X, Y, 0]? Once the calibration has been obtained, what is the right way to use it? Should I provide only the X and Y from the RGB sensor, or the entire 3D point?

I'm having a lot of trouble using these results, so any help you can provide would be much appreciated!

Thanks for the hard work !

@marcoesposito1988 (Collaborator)

Hi @Theopetitjean,

I'm not sure I have the full picture of your current status, but here are a couple of points that caught my attention:

  • when using an AR marker, it is crucial that it is absolutely flat and rigid. Otherwise the output of the marker tracking will be very noisy, and useless for a calibration
  • all calibration methods boil down to forming a system of equations that constrains the set of possible solutions for the parameters of the rigid transformation between the camera and the robot frame. These methods are designed to factor out the unknown transform between the marker and the other end of the kinematic chain of the robot. If this unknown were not there, you could just directly measure the relative position of the camera with respect to the robot by chaining their outputs; instead, you need multiple independent samples. You can picture this like three planes determining a single point in 3D space. If two of them are parallel, your solution will not be a single point, but zero or infinitely many (or rather, a random point chosen by your optimizer to be a solution, which will actually be garbage). The closer your acquired samples are to this pathological case, the worse your calibration result. As a simple strategy to avoid this, you should acquire your samples by starting with the camera looking straight at the marker, aligned to the camera image. Then you rotate your end effector as much as possible about one axis (e.g. the marker will move horizontally), then stop and acquire a sample. Then you go back to the starting position and rotate about an axis perpendicular to it (e.g. the marker will move vertically). For the third sample, you should rotate about the third, missing axis (i.e. the marker would rotate about its normal in the camera image). Tsai recommends minimizing the translation of the effector during the process. If you are curious about the math behind this, you can read the paper cited in the README.
  • the result of the process will indeed be the geometric transformation between the camera frame and the robot effector frame for the eye-in-hand case, and between the camera frame and the base frame for the eye-on-base case, which seems to be yours. I don't remember in which direction; you can just try inverting the matrix to see if that is what you expect instead. The result is expressed as a translation vector and a quaternion, consistent with tf; you can easily generate the rotation matrix from it with the Python transforms3d package (a short sketch of this follows this list).
  • you can use the calibration result directly in your code, but to make your life simpler you could first use tf to its full extent. After computing the calibration and saving it to file, you can use the publish launch file to have easy_handeye publish your camera frame in tf in the correct position. You can then publish the position of your objects of interest in tf, and query tf for the position of these objects in the robot base frame to plan your motions; MoveIt will do all of this automatically (see the second sketch after this list).
  • related to the point above: please note that the position of the camera during the calibration is completely wrong and arbitrary. It is just set to some value so that it will be shown in RViz, and you can check if the marker moves with respect to the camera when you expect it to. The camera should be in the correct position only when using publish.launch after calibrating.
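
To make the third point concrete, here is a minimal sketch of turning the saved translation + quaternion into a 4x4 homogeneous matrix and inverting it, assuming the transforms3d package and the values from your result above (adapt as needed):

import numpy as np
from transforms3d.quaternions import quat2mat
from transforms3d.affines import compose

# Calibration result as saved by easy_handeye (quaternion in w, x, y, z order).
q = [0.6729530917701868, -0.03934630893158204, 0.016416276976428446, 0.7384554895866374]
t = [0.0344229908306081, -0.078848650464033, -0.056035976895112756]

R = quat2mat(q)                    # 3x3 rotation matrix
T = compose(t, R, np.ones(3))      # 4x4 homogeneous transform
T_inv = np.linalg.inv(T)           # same transform, opposite direction

And for the last two points, a minimal sketch of querying tf for the pose of a detected object in the robot base frame, assuming publish.launch is already running, that the detection is expressed in camera_rgb_optical_frame and that the base frame is called base (frame names and pose values are placeholders to adapt to your setup):

import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with tf2
from geometry_msgs.msg import PoseStamped

rospy.init_node('object_in_base_frame_example')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)  # let the buffer fill up

# Detected object, expressed in the camera optical frame (placeholder values).
obj = PoseStamped()
obj.header.frame_id = 'camera_rgb_optical_frame'
obj.header.stamp = rospy.Time(0)
obj.pose.position.x = 0.10
obj.pose.position.y = -0.05
obj.pose.position.z = 0.80
obj.pose.orientation.w = 1.0

# tf chains the published calibration with the robot's kinematics.
obj_in_base = tf_buffer.transform(obj, 'base', timeout=rospy.Duration(1.0))
print(obj_in_base.pose.position)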

@Theopetitjean (Author)

Hi @marcoesposito1988,

First of all, thank you for your very detailed answer.

In the meantime I've continued to experiment in order to obtain a usable calibration, but unfortunately I'm still stuck. I freely admit that I must be making a lot of mistakes, as I'm not particularly comfortable with ROS. So here's exactly where I'm at:

  • I've printed a bigger tag again and fixed it to a rigid support to eliminate any possible distortion.
  • I run an Eye-in-Hand calibration for an object picking task.

Here is my launch:

<?xml version="1.0" ?>
<launch>

    <arg name="namespace_prefix" default="fanuc_kinect_handeyecalibration" />
    <arg name="eye"             default="left"/>

   <!-- Start the Kinect_v1/Xtion with OpenNI -->

    <include file="$(find openni_launch)/launch/openni.launch" >

        <arg name="depth_registration" value="true" />

    </include>

   <!-- start ArUco -->
    <node name="aruco_single" pkg="aruco_ros" type="single">
        <remap from="/camera_info" to="/camera/rgb/camera_info" />
        <remap from="/image" to="/camera/rgb/image_rect_color" />
        <param name="image_is_rectified" value="true"/>
        <param name="marker_size"        value="0.192"/>
        <param name="marker_id"          value="26"/>
        <param name="reference_frame"    value="camera_link"/>
        <param name="camera_frame"       value="camera_optical_rgb_frame"/>
        <param name="marker_frame"       value="camera_marker" />
    </node>

   <!-- Start the CR7IAL FANUC robot -->

    <include file="$(find fanuc_cr7ial_moveit_config)/launch/moveit_planning_execution.launch">

        <arg name="sim" value="false"/>
        <arg name="robot_ip" value="192.168.0.9" doc="Adresse IP du robot Fanuc CR-7iAL" />

    </include>


   <!-- start easy_handeye -->
    <include file="$(find easy_handeye)/launch/calibrate.launch" >
        <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
        <arg name="eye_on_hand" value="true" />

        <arg name="robot_base_frame" value="base" />
        <arg name="robot_effector_frame" value="tool0" />
        <arg name="tracking_base_frame" value="camera_link" />
        <arg name="tracking_marker_frame" value="camera_marker" />


        <arg name="freehand_robot_movement" value="false" />
        <arg name="robot_velocity_scaling" value="0.5" />
        <arg name="robot_acceleration_scaling" value="0.2" />


    </include>

</launch>

I have a few questions about this launch because some of the points are a bit unclear to me:

  1. When loading this parameter, as shown in the example in your code, I load:
<param name="reference_frame" value="camera_link"/>

<param name="camera_frame" value="camera_optical_rgb_frame"/>

Does this define the origin in which the ArUco marker pose is expressed? Should a point detected later by the RGB camera therefore be expressed in the same way, using the same origin? Is this correct?
From my understanding, camera_optical_rgb_frame should be used with a Kinect camera, but why? What is the main difference from the other frame that is also available, camera_rgb_frame?

  2. In a similar way, when loading the robot origin:
<arg name="robot_base_frame" value="base" />
<arg name="robot_effector_frame" value="tool0" />

My question is the following: the Fanuc CR-7iAL is a collaborative robot, so it has two origins, base_link and base. base_link is located on the force sensor, while base is located at the base of the robot's first axis. It seems that most of the time, on the Fanuc side, the origin used by the robot is base. Given this, should I use base or base_link? Have you ever had similar cases?

Then I do the calibration, taking between 5 and 10 samples depending on their quality. The first position is always directly in front of the tag; for the following ones I use the movements proposed in the UI that you provide.

How can I find the orientation of my camera marker? In the RViz feedback it seems to me that the ArUco tag inverts X, although I'm not sure about this point.
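
I suppose I could check the transform published by the aruco_single node directly, for example with something like the following (frame names taken from my launch file above; I am not sure this is the right way to look at it):

import rospy
import tf2_ros
from scipy.spatial.transform import Rotation as Ro

rospy.init_node('marker_orientation_check')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)

# camera_link -> camera_marker, as published by aruco_ros.
tfs = tf_buffer.lookup_transform('camera_link', 'camera_marker', rospy.Time(0), rospy.Duration(2.0))
q = tfs.transform.rotation
print(Ro.from_quat([q.x, q.y, q.z, q.w]).as_euler('xyz', degrees=True))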

Anyway, I compute the following transformation matrix using Tsai-Lenz. Of course, this result remains uncertain in view of the various issues I raised earlier.

[image]

Here are my questions about the use of such results. First of all, I'm not using tf for a simple and perhaps slightly silly reason: I have no idea how to integrate my vision algorithms properly into ROS. I'm a PhD student in computer vision, and robotics, as I said earlier, really isn't my forte. I know that I could publish the result in tf; I've read the many examples you give and gone through many issues of this repo, but after doing so I would have no idea how to use it with my own code. This is why I am trying to do it directly in Python.

As indicated in my Python code, I detect an object with an AI model, then I get the coordinates of the centre of this object (expressed as X [width in pixels], Y [height in pixels] and Z [distance from the 3D sensor]). So I get an X, Y, Z point in the camera frame.
I convert the quaternion into a rotation matrix so that I can apply it to my point, then I perform an operation such as:
B = (R*point) + t

I have some questions about this new point. Is it an offset to be applied to the current coordinates of my end effector, or is it the raw coordinate of my point in the robot frame?
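
To make my question concrete, here is a sketch of the chain as I currently understand it. I am assuming here that the pixel coordinates first have to be deprojected with the camera intrinsics (fx, fy, cx, cy, taken from the camera_info topic; the values and function names below are just placeholders), which is exactly the step I am unsure about:

import numpy as np
from scipy.spatial.transform import Rotation as Ro

# Placeholder intrinsics; the real ones come from the camera_info K matrix.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def pixel_to_camera_point(u, v, depth):
    # Standard pinhole deprojection of pixel (u, v) with depth (metres, along
    # the optical axis) to a 3D point in the camera optical frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_robot_point(point_cam, T_values, R_values):
    # Apply the hand-eye calibration result: B = R * point + t
    return Ro.from_quat(R_values).apply(point_cam) + np.asarray(T_values)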

I freely admit that I don't really understand why this doesn't work. I may have made a mistake, but I can't find where...

Sorry for writing such a big message again; I'm trying to be as concise as possible :D
Thank you again for your reply and your work.
