Trouble while using the calibration matrices #134
Okay, hello again. Here is an update on my problem. While taking the samples and computing the results, I started wondering about the expected position of my marker. As we can see in this image, the marker is detected down low in the RViz visualizer, but it should actually be near tool0 of the robot, since the marker is mounted on the robot's tool. Thanks a lot for your support!
Hello again,
I assumed that the rotation result is expressed as a quaternion, is that correct? I am now wondering about the usage of this calibration matrix. I have a Kinect camera and a YOLO object detector that gives me the center of a bounding box in the RGB image, so I get X and Y coordinates from the RGB image. Then, thanks to the Kinect sensor, I get the corresponding Z from the raw data of the depth sensor. I then apply the calibration matrix to the resulting 3D point, like this:
So here is my question: should I feed in the Z coordinate obtained from the depth sensor when computing the new coordinates of my point, or should I do something like [X, Y, 0]? Once the calibration has been obtained, what is the right way to use it? Should I provide only the X and Y from the RGB sensor, or the entire 3D point? I'm having a lot of trouble using these results, so any help would be much appreciated. Thanks for the hard work!
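For reference, the pixel coordinates (u, v) from the RGB image are not metric, so a common approach is to first back-project the pixel with the camera intrinsics into a 3D point in the camera's optical frame, and only then apply the hand-eye transform. Below is a minimal sketch under that assumption; `deproject_pixel` and the intrinsics values are illustrative placeholders (in ROS the intrinsics come from the `camera_info` topic), not part of easy_handeye itself.

```python
# Minimal sketch (not the package's own API): back-project a pixel plus a depth
# reading into a metric 3D point in the camera optical frame (pinhole model).
# fx, fy, cx, cy are placeholders; in ROS they come from the camera_info topic
# (K = [fx 0 cx; 0 fy cy; 0 0 1]).
import numpy as np

def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Return a 3D point [X, Y, Z] in the camera optical frame, in meters."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with made-up intrinsics and a 1.2 m depth reading at pixel (320, 240):
point_cam = deproject_pixel(320, 240, 1.2, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point_cam)
```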
Hi @Theopetitjean, I'm not sure I have the full picture of your current status, but here are a couple of points that caught my attention:
First of all, thank you for your very detailed answer. In the meantime I've continued experimenting in order to obtain a usable calibration, but unfortunately I'm still stuck. I freely admit that I must be making a lot of mistakes, as I'm not particularly comfortable with ROS. So here's exactly where I'm at:
Here is my launch:
I have a few questions about this launch because some of the points are a bit unclear to me:
Does this give the origin of the ArUco marker points? Should a point detected by the RGB camera later therefore be expressed in a similar way, using the same origin? Is this correct?
My question is as follows: the Fanuc CR-7iA/L is a collaborative robot, so it has two origins, base_link and base. base_link is located on the force sensor, while base is located at the base of the robot's first axis. It seems that, most of the time, the origin reported on the Fanuc side is base. Given that, should I use base or base_link? Have you ever had similar cases?

Then I do the calibration, taking between 5 and 10 samples depending on their quality. The first position is always well in front of the tag; for the following ones I use the movements proposed in the UI that you provide. How can I find the orientation of my camera marker? Because in the RViz feedback it seems to me that the ArUco tag flips X, although I'm not sure about this point. Anyway, we compute the following transformation matrix using Tsai-Lenz. Of course, this result remains uncertain in view of the various issues I raised earlier.

Here are my questions about the use of such results. First of all, I'm not using tf, for a simple and perhaps slightly silly reason: I have no idea how to apply my vision algorithms properly in ROS. I'm a PhD student in computer vision, and robotics, as I said earlier, really isn't my forte. I know that I could publish the results on tf; I've read the many examples you give and gone through many issues of this repo, but after doing so I have no idea how to use it with my own code. This is why I am trying to do it straightforwardly in Python.

As indicated in my Python code, I detect an object via an AI model, then I get the coordinates of the center of this object (expressed as X [width in pixels], Y [height in pixels], and Z [distance from the 3D sensor]). So I get an X, Y, Z point in the camera frame. I have some questions about this new point: is it an offset to be applied to the current coordinates of my end effector, or is it the raw coordinate of my point in the robot frame? I freely admit that I don't really understand why this doesn't work. I may have made a mistake but I can't find where... Sorry to write such a big message again, I'm trying to be as concise as possible :D
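For reference, here is a minimal sketch of how a detector output point could be transformed into the robot frame from a plain Python script using tf2, assuming the calibration result is being published on tf (for example via the easy_handeye publisher launch) and that the frame names `camera_rgb_optical_frame` and `base_link` match your setup; both names are placeholders to adjust to your own tf tree.

```python
#!/usr/bin/env python
# Minimal sketch, assuming the hand-eye calibration is already published on tf
# and the frame names below exist in the tf tree of this particular setup.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped conversions with tf2
from geometry_msgs.msg import PointStamped

rospy.init_node("camera_point_to_robot_frame")

tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)
rospy.sleep(1.0)  # give the listener time to fill the buffer

# 3D point produced by the detector, expressed in the camera optical frame.
p_cam = PointStamped()
p_cam.header.frame_id = "camera_rgb_optical_frame"  # placeholder frame name
p_cam.header.stamp = rospy.Time(0)                   # latest available transform
p_cam.point.x, p_cam.point.y, p_cam.point.z = 0.05, -0.02, 1.20

# Transform the point into the robot base frame. The result is the point's
# absolute position in that frame, not an offset from the end-effector pose.
p_base = tf_buffer.transform(p_cam, "base_link", rospy.Duration(1.0))
print(p_base.point)
```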
Hello, thank you for making your code public. I'm having some problems using the calibration. Here are the details, I hope you can help me!
I'm trying to calibrate a Fanuc CR-7iA/L robot with a Kinect camera. Here is the launch file used to launch easy_handeye:
When launching this file, the robot seems to do the work as expected, giving results such as:
Sometimes the detection of the ArUco marker is lost, but it seems I should probably stick it to a rigid support, because it sometimes gets deformed. We then obtain samples such as: (while writing this, I am wondering whether you are supposed to take a sample between each pose? Should I do this?)
It seems like I'm doing something the wrong way, but I haven't been able to find out what. What should I do to use this code properly? Am I forgetting something while using the package?
My main objective is to find the calibration matrices in order to transform the information detected by the camera into a reachable pose for the robot. To make things short, after using the calibration package, I get both the quaternion and the translation, and I apply the resulting transform matrix directly to the XYZ point obtained from the camera. So what should be done to get the correct R|t transform matrices? Am I forgetting something in the easy_handeye process?
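For reference, a minimal sketch of applying a quaternion-plus-translation result as a homogeneous R|t matrix to a camera-frame point in plain Python. It assumes the result is the transform whose parent is the robot base frame and whose child is the camera frame, so a camera-frame point maps into the base frame with a single multiplication; if your result is expressed the other way around, invert the matrix first. The numeric values below are placeholders, not a real calibration.

```python
# Minimal sketch of using a hand-eye result (translation + quaternion) directly,
# without tf. All numeric values are placeholders.
import numpy as np
from scipy.spatial.transform import Rotation

t = np.array([0.80, -0.10, 0.55])    # translation (x, y, z) in meters
q = np.array([0.0, 0.0, 0.0, 1.0])   # quaternion (x, y, z, w)

# Build the 4x4 homogeneous transform from the base frame to the camera frame.
T = np.eye(4)
T[:3, :3] = Rotation.from_quat(q).as_matrix()
T[:3, 3] = t

p_cam = np.array([0.05, -0.02, 1.20, 1.0])  # homogeneous camera-frame point
p_base = T @ p_cam                          # same point in the robot base frame
print(p_base[:3])
```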