
sensor fusion + ptam + IMU setup issue #21

Open
Benjaminmar8 opened this issue Jan 17, 2014 · 8 comments

@Benjaminmar8

Hi, All,

Apologies if my questions seem silly or have already been answered in other threads. I am trying to use sensor fusion + PTAM on my quadrotor (not AscTec) with a front-facing camera. There are related questions here (#20 and #8) asked by other users, but I still can't get it to work, as there seems to be no consolidated answer. Here are my questions:

  1. What coordinate frame does the camera use? I assume the Z-axis points out of the camera lens and then use the right-hand rule to deduce the X and Y axes. Is this correct?

  2. For the IMU, I am using a MicroStrain unit, which uses the NED frame. I edited the ROS driver so that it outputs /imu/data in the ENU frame (a conversion sketch follows this list). Assuming the IMU's X-axis points forward (same direction as the front-facing camera) and its Z-axis points up out of the quadrotor's body, what is the correct camera-to-IMU transformation in the yaml file?

  3. Once PTAM is initialized, PTAM's camera frame seems arbitrarily oriented with respect to the world frame. How can I make my camera frame gravity-aligned, assuming my quadrotor rarely rolls/pitches much? Or should the world frame be gravity-aligned?

  4. Regarding the scale /lambda: my understanding is that P_metric * /lambda = P_sensor, where P_metric is the position from /ssf_core/pose and P_sensor is the position from /vslam/pose. Is this the correct interpretation?

  5. My goal is to use this setup to navigate a quadrotor in a small indoor environment. The obvious input to use is /ssf_core/pose, since it gives the quadrotor's position in metric units. I notice there is no frame_id on /ssf_core/pose; is it expressed in the world frame used by PTAM? If so, how can I use it for control if the world frame is not gravity-aligned?
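
For question 2, here is a minimal sketch (not from the framework) of the IMU re-mapping described above, assuming the MicroStrain driver publishes NED-style body axes (x forward, y right, z down) and we want x forward, y left, z up: a 180-degree flip about x, i.e. negating y and z. The input topic /imu/data_ned is hypothetical; adapt it to your driver. Note that ssf_core's prediction consumes linear_acceleration and angular_velocity; if your pipeline also uses the orientation field, it additionally needs the matching NED-to-ENU change of the reference frame (not shown here).

```python
#!/usr/bin/env python
# Hypothetical NED-to-ENU body-axis flip for sensor_msgs/Imu (sketch).
import rospy
from sensor_msgs.msg import Imu

def flip_cb(msg):
    # 180-degree rotation about x: negate the y and z components
    msg.linear_acceleration.y = -msg.linear_acceleration.y
    msg.linear_acceleration.z = -msg.linear_acceleration.z
    msg.angular_velocity.y = -msg.angular_velocity.y
    msg.angular_velocity.z = -msg.angular_velocity.z
    pub.publish(msg)

rospy.init_node('imu_ned_to_enu')
pub = rospy.Publisher('/imu/data', Imu, queue_size=10)
rospy.Subscriber('/imu/data_ned', Imu, flip_cb)  # hypothetical input topic
rospy.spin()
```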

Thanks a lot for the help!

@stephanweiss
Contributor

Hi Benjamin,

There are no silly questions. We're always glad when someone uses our framework, and the fastest way to get it running is to ask about what is unclear :-)

To your questions:

  1. Yes, z points out of the camera and the axes follow the right-hand rule. Although this is probably not the information you are looking for -> see the answer to 3).

2a) I would need to know a second axis of the camera to answer this, since I do not know how the camera is mounted with respect to its yaw around the camera z-axis. To determine the camera x and y axes, run PTAM and display the GUI (where you see the 3D point cloud). This will confirm your camera z-axis and also show you which axes are x and y. Then you can compute the transform to your IMU.
2b) To verify your ENU output: if the helicopter is horizontal, the z-output should be +9.81 m/s^2. Similarly, if you tilt the helicopter such that its positive x-axis points to the ceiling, the x-output should be +9.81 (same for y).
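
A minimal sketch to watch these values live (topic name as used above; the printed z should hover around +9.81 m/s^2 when the vehicle is level):

```python
#!/usr/bin/env python
# Sanity check for the z-up IMU convention (sketch).
import rospy
from sensor_msgs.msg import Imu

def cb(msg):
    a = msg.linear_acceleration
    rospy.loginfo("accel [m/s^2]: x=%+.2f y=%+.2f z=%+.2f", a.x, a.y, a.z)

rospy.init_node('imu_enu_check')
rospy.Subscriber('/imu/data', Imu, cb)
rospy.spin()
```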

  3. That is correct: the PTAM frame (or vision frame, as I call it in my papers) sits arbitrarily in space (I prefer "PTAM frame" since the constructed map is a "world" of its own, not the camera frame). The difficulty is to find the initial alignment between the PTAM frame (not the camera frame) and a gravity-aligned world (i.e. navigation) frame, and to initialize the state q_vw accordingly. Once q_vw is well initialized, the filter takes care of fine-adjusting the world frame so that it is well gravity-aligned. So there is no need to actually transform any frame to align it with gravity (a sketch of the alignment idea follows this list).

  4. That is essentially correct (as long as the PTAM frame is gravity-aligned); e.g. with lambda = 5, a metric displacement of 0.2 m appears as 1.0 in PTAM units. In detail, there is also the rotation between the PTAM and world frames as well as the cam-IMU transformation to consider. You can find the detailed equation in pose_sensor.cpp, where the position residual is computed.

  5. Currently we do not publish any TF, just a PoseWithCovarianceStamped for control (i.e. /ssf_core/pose). This is metric and expressed in a gravity-aligned navigation frame whose origin coincides with the origin of the PTAM frame (i.e. the PTAM and world frames differ only in rotation).
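
On point 3, here is a minimal numpy sketch of the idea behind initializing the alignment: express the rest-accelerometer gravity direction in the PTAM frame (by chaining it through the cam-IMU extrinsics and PTAM's camera pose) and build the smallest rotation mapping it onto the world z-axis. Whether this quaternion or its inverse is the q_vw the filter expects depends on the conventions in pose_sensor.cpp; the gravity vector below is a placeholder, not a measured value.

```python
import numpy as np

def quat_from_two_vectors(u, v):
    """Smallest rotation (w, x, y, z) taking direction u onto direction v."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    w = 1.0 + np.dot(u, v)
    if w < 1e-8:  # u nearly opposite v: 180-degree turn about any orthogonal axis
        axis = np.cross(u, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(u, np.array([0.0, 1.0, 0.0]))
        q = np.concatenate(([0.0], axis / np.linalg.norm(axis)))
    else:
        q = np.concatenate(([w], np.cross(u, v)))
    return q / np.linalg.norm(q)

g_vision = np.array([0.1, -0.95, 0.05])  # placeholder gravity direction in the PTAM frame
q_align = quat_from_two_vectors(g_vision, np.array([0.0, 0.0, -1.0]))  # world gravity = -z
print("alignment quaternion (w, x, y, z):", q_align)
```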

So you see: although our framework allows a front-looking camera setup, at the moment it is not trivial to initialize. The difficulty is finding the initial rotation between the PTAM map and the gravity-aligned world frame. If you are motivated, I can guide you through how this could be done, so that we can contribute it to the code once it is robust. As a first step I suggest you bring the framework to run properly with the camera looking down; this reduces error sources at the beginning.

I hope this helps.
Best
Stephan



@Benjaminmar8
Author

Hi, Stephan,

Thanks a lot for the clarification. Here is what I have found out:

  1. Related to 3-5; see below.

2a) I ran PTAM with a front-facing camera (no sensor fusion yet) and took a few screenshots. From the pictures, for a front-facing camera, it seems that the Z-axis points out (facing front), the X-axis points to the right of the camera, and the Y-axis points down along gravity. However, the grid in the PTAM GUI window seems to be different every time.
[screenshots: ptam_1, ptam_2, ptam_3]

2b) I have verified the ENU output using the method you described, and the outputs are correct.

3-5) I think these are all related, namely to finding the initial alignment between the PTAM frame and the gravity-aligned world. The world referred to by q_wv is the gravity-aligned world, not PTAM's "world", while the vision frame referred to by q_wv is in fact PTAM's "world" shown in the PTAM GUI (the vision frame in your paper). Is that right?

I am interested in making this work. I will configure a down-looking camera now and update you once I have results.

Thanks a lot!

Ben

@Benjaminmar8
Author

Hi, Stephan,

I have tried the down-looking camera. The results "seem" better, though a few things are still unclear. One obvious improvement: with the front-facing configuration I always got the "fuzzy tracking" warning (I know this indicates a setup issue and means the filter is just doing forward integration). With the down-looking camera I rarely get this warning, except once or twice at start-up.

My setup is as follows: I fixed the down-looking camera at about 1.1 m height on a movable table. The IMU is fixed on the same table with a small translation offset. I have tried my best to align the axes of the camera and the IMU; the picture below shows the setup.
[setup photo: config]

Since the IMU's Z-axis points up and the camera's Z-axis points down, I assume there is a 180-degree rotation around the camera's X-axis, so in the yaml file I changed q_ci to [0, -1, 0, 0]. Is this correct? Is there any way I can verify the configuration and results? When I push the table along the IMU's positive x-axis, should I expect an increase or a decrease in the filter's x reading (/ssf_core/pose x)?
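
For a quick numeric check of that quaternion, here is a small sketch (note that q and -q encode the same rotation, so [0, -1, 0, 0] and [0, 1, 0, 0] are equivalent; whether q_ci maps the IMU frame to the camera frame or vice versa should be verified against pose_sensor.cpp):

```python
import numpy as np

def rotmat(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

R = rotmat(np.array([0.0, -1.0, 0.0, 0.0]))  # 180 degrees about x
print(R @ np.array([0.0, 0.0, 1.0]))  # -> [0, 0, -1]: z flips, as expected
print(R @ np.array([0.0, 1.0, 0.0]))  # -> [0, -1, 0]: y flips too
print(R @ np.array([1.0, 0.0, 0.0]))  # -> [1, 0, 0]: x is preserved
```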

Thanks

Ben

@stephanweiss
Contributor

Ben,

Good to hear that the setup is working(ish) in the down-looking configuration. Fuzzy tracking should never occur; it means something is seriously wrong with the initialization or the map tracking. If you verify that the PTAM map is initialized (roughly) gravity-aligned before initializing the filter (down-looking configuration, q_vw initialized to the unit quaternion), then fuzzy tracking should not occur (unless the cam-IMU initialization is wrong, but based on your image q_ci(w,x,y,z) = (0,-1,0,0) should be correct). If you know that you are 1.1 m above ground, you can also use "set height" to start the filter; this automatically computes a correct scale initialization.

To verify correct functioning you can use "rosrun ssf_core plot_relevant", which plots

  • the IMU biases (should be within about ±0.5 m/s^2)
  • the scale (can be arbitrary, but should not change after convergence, even under abrupt rotation and strong motion)
  • the position and velocity (should be gravity-aligned: the z-position should reflect the actual metric height, i.e. 1.1 m; the xy coordinates depend on the PTAM init; the plots should be smooth)
    Note that the plots are only updated as long as PTAM tracks the map well (i.e. each time a measurement arrives).
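
For reference, a minimal sketch (not part of the framework) that watches the filtered height against the known 1.1 m while you move the rig; topic and message type as described above:

```python
#!/usr/bin/env python
# Watch the filter's metric height (sketch): it should converge to ~1.1 m
# and stay there even under motion.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

EXPECTED_Z = 1.1  # known camera-to-floor height in metres

def cb(msg):
    z = msg.pose.pose.position.z
    rospy.loginfo("ssf z = %.3f m (error %+.3f)", z, z - EXPECTED_Z)

rospy.init_node('height_check')
rospy.Subscriber('/ssf_core/pose', PoseWithCovarianceStamped, cb)
rospy.spin()
```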

Once you have it working robustly with the down-looking setup, we can proceed to the front-looking one.

Best
Stephan



@Benjaminmar8
Author

Stephan,

I obtained some results with the down-looking camera configuration. Here is the plot:
[plot: plot2]

  1. As you can see, the z-axis (height) did not converge to 1.1 m (the fixed camera-to-floor height). Does this mean my configuration is still wrong?

  2. I tried the set_height method to initialize the filter, with the initial height set to 1 m; it converged to 1.1 m quickly. However, after some movement (translation + rotation), the height started to drop to 0.6 m. This suggests the scale estimate keeps changing (see the scale-monitor sketch after this list).

  3. Sometimes when I initialize the filter, I get errors like

=== ERROR === prediction p: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === update: NAN at index 0
=== ERROR === prediction p: NAN at index 0
=== ERROR === update: NAN at index 0

I have to check the init box one more time to get rid of them; after that the filter initializes properly. Is there something I am missing here?
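
On point 2, here is a rough way to watch the implied scale directly, using the relation P_metric * lambda = P_sensor from earlier in the thread. This sketch compares z components only, which is meaningful only while the PTAM frame is gravity-aligned, and it assumes /vslam/pose is a PoseWithCovarianceStamped like /ssf_core/pose (adapt the type if your PTAM publishes something else):

```python
#!/usr/bin/env python
# Rough scale monitor (sketch): the printed ratio should stay constant once
# the scale estimate has converged.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

last = {}

def make_cb(name):
    def cb(msg):
        last[name] = msg.pose.pose.position.z
        if len(last) == 2 and abs(last['metric']) > 1e-3:
            rospy.loginfo("implied scale lambda ~ %.4f", last['vision'] / last['metric'])
    return cb

rospy.init_node('scale_watch')
rospy.Subscriber('/vslam/pose', PoseWithCovarianceStamped, make_cb('vision'))
rospy.Subscriber('/ssf_core/pose', PoseWithCovarianceStamped, make_cb('metric'))
rospy.spin()
```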

Thanks a lot for the help.

Ben

@stephanweiss
Contributor

Ben,

To 1): this might be because the scale is initialized too far from the true value, so I recommend using only the set-height initialization option.

To 2): when you say the scale estimate seems to keep changing, please verify this in the corresponding plot.
A couple of things you can do:
a) Make sure you do not corrupt/lose the map while moving. Check the map quality by visualizing the PTAM output with ptam_visualizer or the native PTAM GUI. I see in the image that you might have a low-texture surface covered with checkerboards; such a setup is prone to making PTAM fail. Please make sure you have rich, distinctive, high-contrast texture in your test area (e.g. put some stickers, newspapers, or carpets on the floor).

b) Unlikely, but as a sanity check: check whether PTAM drifts that fast in z. To do so, plot the 3D map points either in RViz using the ptam_visualizer or in the native PTAM GUI map view. The point cloud should always lie in the xy-plane of the PTAM map (z = 0); a sketch automating this check follows after c).

c) Check whether the extrinsics are correct: "rosrun ssf_core plot_calib" plots the cam-IMU extrinsics. Verify that they are correct. If they are not, their initialization was far off the true value. If they "seem" correct, augment the noise on p_ic and q_ic separately (e.g. to 0.005) and apply rotational motion (be careful not to corrupt/lose the map while doing so!). If the states change, your initialization was wrong.
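
For b), here is a sketch of an automated flatness check on the map points. The point-cloud topic name below is a placeholder; point it at whatever your ptam_visualizer publishes as a sensor_msgs/PointCloud2:

```python
#!/usr/bin/env python
# Map flatness check (sketch): with a planar floor, the PTAM map points should
# lie close to the map's xy-plane (z ~ 0).
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def cb(msg):
    zs = [p[2] for p in pc2.read_points(msg, field_names=('x', 'y', 'z'), skip_nans=True)]
    if zs:
        rospy.loginfo("map points: z mean=%.3f, spread=%.3f", sum(zs) / len(zs), max(zs) - min(zs))

rospy.init_node('map_flatness_check')
rospy.Subscriber('/vslam/pc2', PointCloud2, cb)  # placeholder topic name
rospy.spin()
```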

Hope that helps,
Stephan

Btw, to let our readers know where our framework is being used, and if you are allowed to share, may I ask your affiliation? And do you have a link to the project in which you plan to use the framework?



@Benjaminmar8
Author

Stephan,
I made a mistake about my camera's x/y axes. Anyway, the picture below shows the correct setup.
[setup photo: config3]
2a) I now put newspaper on the floor for all testing, as shown below:
[photo: desktop]
2b) I have checked: PTAM does not drift along the z-axis, even under abrupt motion. The point cloud stays in the xy-plane.
2c) This is tricky. I ran many tests and noticed that if I move the table slowly, the z-axis reading does not drift. However, as soon as I move the table fast (all kinds of wild movement), the z-axis reading drifts away quite quickly. You can see this in the two plots below. The first shows slow movement while I push the table forward (notice that the z-axis velocity reading is non-zero even though I am pushing the table in one direction; this might be due to the IMU mounting); pose z stays around 1.05 m in this case. The second shows wild movement; again the z-axis velocity is non-zero, and pose z drifts to 0.5 m quite fast. In both cases /vslam/pose z does not drift at all, so I guess PTAM did not lose track in either case. Could it be my IMU, or is my configuration still incorrect?
[plots: plot3, plot4]

I also added noise to p_ic and q_ic; the above observations hold in that case as well.

I am affiliated with the Autonomous and Control System Centre, Nanyang Polytechnic, a Singapore institute. I don't have a URL for the project yet, but we will put one up eventually.

Thanks a lot for the help.

Ben

@Razbotics

Hello,
It looks like this thread is old, but I will still try my luck here: does PTAM + SSF require a hardware-synchronized camera and IMU pair?

3 participants