Mapping

rajat edited this page Oct 5, 2021 · 1 revision

Unity3D SONAR

We have emulated a multi-beam sonar on our Unity3D vehicle to send depth information over our ROSBridge. This is done using the Raycast method of Unity's Physics package. The steps required for this implementation are as follows:

  1. Create a RaycastHit array sized to the number of rays you want.
  2. Check each ray for a collision with an object.
  3. If a collision occurs, save the hit distance to the corresponding depth array element.
  4. If not, assign a sufficiently large value to the element; the ROS decoder later replaces this sentinel with infinity when building the /scan message.
        RaycastHit hit;
        // Visualize this beam in the Scene view (pointing left, 30 m long).
        Debug.DrawRay(auv.transform.position, auv.transform.right * -30);

        // Cast the beam up to the 30 m maximum range.
        if (Physics.Raycast(auv.transform.position, auv.transform.right * -1, out hit, 30))
        {
            // Hit: store the measured distance for this beam.
            depthLog[320] = hit.distance;
        }
        else
        {
            // Miss: store the max range as a sentinel; the ROS decoder
            // substitutes infinity for it in the /scan message.
            depthLog[320] = 30;
        }
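On the ROS side, the substitution described in step 4 amounts to mapping the sentinel back to infinity before the scan message is filled. A minimal sketch in Python (`MAX_RANGE` and `decode_depths` are illustrative names, not the actual decoder's API):

```python
import math

# Any reading at or beyond the sonar's max range is treated as "no return"
# and replaced with infinity, which is how an out-of-range beam is
# conventionally represented in a sensor_msgs/LaserScan.
MAX_RANGE = 30.0  # must match the sentinel used in the Unity script

def decode_depths(raw_depths):
    """Replace max-range sentinel readings with infinity for the /scan message."""
    return [math.inf if d >= MAX_RANGE else d for d in raw_depths]
```

For example, `decode_depths([12.5, 30.0, 4.2])` keeps the two valid ranges and turns the sentinel into infinity.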

This concludes the Unity3D implementation, and we can now move on to the Octomap module.

Octomap Mapping

As stated on the OctoMap homepage: "The OctoMap library implements a 3D occupancy grid mapping approach, providing data structures and mapping algorithms in C++ particularly suited for robotics." We use OctoMap to recreate the environment scanned by our underwater vehicle and save the resulting map for later use. This is done in several steps:

  1. Using the position published by the Unity3D simulator, we build the vehicle's transform with the tf2 library. A tf broadcaster then broadcasts it for our OctoMap node to consume later on.
  2. A string decoder subscribes to the depth data sent from Unity3D and republishes it as a float array.
  3. The LaserScan publisher converts the decoded depth float array into a LaserScan topic, using the field-of-view topic to obtain the scan angle.
  4. The OctoMap mapping tool needs depth data as a point cloud, so we convert the laser scan to a point cloud and publish it on the /laserPointCloud topic.
  5. Once the point cloud and transforms are ready, we feed them into our OctoMap mapping node and use the rviz tool to visualize the recreated map.
  6. The final node map of the whole system is represented by the following figure.
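Steps 3 and 4 above boil down to a polar-to-Cartesian conversion: each beam's angle follows from the field of view, and each finite range becomes a point. A standalone sketch, assuming evenly spaced beams (the function and variable names are illustrative, not the actual node code):

```python
import math

def scan_to_points(depths, fov):
    """Convert a decoded depth array plus field of view (radians) into 2D
    points, the way the laser-scan-to-point-cloud step does. Infinite
    readings (no return within range) contribute no point."""
    n = len(depths)
    angle_min = -fov / 2.0          # LaserScan angle_min
    increment = fov / (n - 1)       # LaserScan angle_increment
    points = []
    for i, r in enumerate(depths):
        if math.isinf(r):
            continue                # no obstacle within range
        a = angle_min + i * increment
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```

For instance, a three-beam scan over a 180° field of view places the first and last returns at right angles to each other, and the infinite middle beam is dropped.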
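Inside the OctoMap node, incoming point clouds are fused into the occupancy grid with per-voxel log-odds updates. A small illustrative sketch of that update rule (0.7 and 0.4 are OctoMap's default hit/miss probabilities; the function names here are ours, not the library's API):

```python
import math

def logodds(p):
    """Log-odds of a probability, the form OctoMap stores per voxel."""
    return math.log(p / (1.0 - p))

def update(l, hit, l_hit=logodds(0.7), l_miss=logodds(0.4)):
    """Fuse one observation into a voxel's log-odds value: hits add a
    positive increment, misses a negative one."""
    return l + (l_hit if hit else l_miss)

def prob(l):
    """Recover the occupancy probability from a log-odds value."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Starting from the uninformed prior (log-odds 0, i.e. probability 0.5), a single hit raises a voxel's occupancy to 0.7 and a single miss lowers it toward free space.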

Node Map

