RangeImagePlanar + Kinect


RangeImagePlanar + Kinect

Matt Bell
Hi,
I recently noticed the existence of RangeImagePlanar, which, as its documentation points out, is tailor-made for working with cameras like the Kinect.

I'm not clear on its usage, though. What are the recommended parameters to use when converting from a Kinect-generated point cloud to a RangeImagePlanar? (I'm assuming di_width and di_height are the same as the input point cloud, but the other parameters are less clear.) I'd like to have a 1:1 mapping between original point cloud points and range image points.

pcl::RangeImagePlanar::createFromPointCloudWithFixedSize

Parameters:
    point_cloud the source point cloud
    di_width the disparity image width
    di_height the disparity image height
    di_center_x the x-coordinate of the camera's center of projection
    di_center_y the y-coordinate of the camera's center of projection
    di_focal_length_x the camera's focal length in the horizontal direction
    di_focal_length_y the camera's focal length in the vertical direction
    sensor_pose the pose of the virtual depth camera
    coordinate_frame the used coordinate frame of the point cloud
    noise_level the typical noise level of the sensor, used for averaging in the z-buffer
    min_range the minimum range at which to consider points

-------

Also, does noise_level do z-buffer averaging only when multiple points end up in a given bin, or does it average across nearby bins in the range image?

Also, since range images only support X, Y, Z, and range but not RGB, what's the best way of integrating the newly z-smoothed or otherwise altered range image XYZ data back into the XYZRGB point cloud? Will the range image operations preserve a 1:1 mapping between the points in the range image and the points in the original point cloud, in terms of their ordering in the "points" data structure, so long as I choose the proper original camera viewpoint?

Thanks,
Matt

Re: [Ros-kinect] RangeImagePlanar + Kinect

Yohei Kakiuchi
Hi Matt,

Here is sample code for using createFromPointCloudWithFixedSize.

It uses two messages, named point_cloud and camera_info.

The code builds a pcl::RangeImagePlanar object corresponding to the image described by camera_info.

The parameters (di_width, di_height, di_center_x, di_center_y, di_focal_length_x, di_focal_length_y, sensor_pose) are calculated from camera_info. The others (coordinate_frame, noise_level, min_range) are not necessary.

The created range image has a 1:1 mapping to the image described by camera_info.

If you want to find the original points corresponding to a pixel of the range image, you have to search for the nearest point in the original point cloud.

Regards,
Yohei Kakiuchi.

<<<<< sample code
      int width = camera_info->width;
      int height = camera_info->height;
      float fx = camera_info->P[0];
      float cx = camera_info->P[2];
      float tx = camera_info->P[3];
      float fy = camera_info->P[5];
      float cy = camera_info->P[6];

      Eigen::Affine3f sensorPose;
      {
        tf::StampedTransform transform;
        try {
          tf_listener_.lookupTransform(point_cloud->header.frame_id,
                                       camera_info->header.frame_id,
                                       camera_info->header.stamp, transform);
        }
        catch (tf::TransformException &e) {
          ROS_ERROR("%s", e.what());
        }
        tf::Vector3 p = transform.getOrigin();
        tf::Quaternion q = transform.getRotation();
        sensorPose = (Eigen::Affine3f)Eigen::Translation3f(p.getX(), p.getY(), p.getZ());
        Eigen::Quaternion<float> rot(q.getW(), q.getX(), q.getY(), q.getZ());
        sensorPose = sensorPose * rot;

        // A non-zero tx in P means an offset camera (e.g. the right
        // camera of a stereo pair); shift the pose accordingly.
        if (tx != 0.0) {
          Eigen::Affine3f trans = (Eigen::Affine3f)Eigen::Translation3f(-tx / fx, 0, 0);
          sensorPose = sensorPose * trans;
        }
      }

      pcl::PointCloud<pcl::PointXYZ> pointCloud;
      pcl::RangeImagePlanar rangeImageP;

      // This is the intended way of making a RangeImagePlanar,
      // but it does not work correctly ...
      //pcl::fromROSMsg(*point_cloud, pointCloud);
      //rangeImageP.createFromPointCloudWithFixedSize (pointCloud, width, height,
      //                                               cx, cy, fx, fy, sensorPose);

      {
        // Workaround: there seems to be a bug in RangeImagePlanar, so
        // transform the cloud into the sensor frame by hand and pass
        // an identity pose instead.
        pcl::PointCloud<pcl::PointXYZ> tmp_cloud;
        pcl::fromROSMsg(*point_cloud, tmp_cloud);

        pcl::transformPointCloud (tmp_cloud, pointCloud, sensorPose.inverse());

        Eigen::Affine3f dummytrans;
        dummytrans.setIdentity();
        rangeImageP.createFromPointCloudWithFixedSize (pointCloud, width, height,
                                                       cx, cy, fx, fy, dummytrans);
      }
>>>>> end of sample

_______________________________________________
Ros-kinect mailing list
[hidden email]
https://code.ros.org/mailman/listinfo/ros-kinect

Re: [Ros-kinect] RangeImagePlanar + Kinect

Bastian Steder
In reply to this post by Matt Bell
Hi Matt,

have a look at
pcl_tutorials/src/range_image_live_viewer.cpp

With the command line parameter -s you can control what is used to create the range image: a point cloud, a disparity image, or a depth image. In the latter two cases a RangeImagePlanar is used, so you don't have to go through a point cloud from the Kinect. In the code, the parameters are set according to the calibration data in the message.

noise_level only averages points that fall into the same bin.

If you create the image from a Kinect depth image and set the resolution to -1, it will create the RangeImagePlanar with the original dimensions, and you can later map the colors back to the pixels. If you use a lower resolution, pixels will be skipped during creation.

Cheers,
Bastian



--
Bastian Steder

Albert-Ludwigs-University
Institute of Computer Science
Autonomous Intelligent Systems
Georges-Koehler-Allee 079
D-79110 Freiburg, Germany

Phone:  +49 (761) 203-8013
Fax  :  +49 (761) 203-8007

Re: [Ros-kinect] RangeImagePlanar + Kinect

Matt Bell 2
In reply to this post by Yohei Kakiuchi
Hi,

Thanks for the sample code. It appears to work, except that (strangely) the range image created is slightly misaligned from the original point cloud. If I look at the two point clouds together, the points from the range image are shifted by half a pixel width in both x and y from the original point cloud. This probably isn't a big deal, as the RGB misalignment relative to the depth data is larger than this, but I was curious as to its source. It looks like I can correct it by changing the center_x and center_y parameters by 0.5, but now I'm curious which position value is more authoritative -- the one using the provided parameters from camera_info, or my "corrected" centers.

Thanks,
Matt


Re: [Ros-kinect] RangeImagePlanar + Kinect

Bastian Steder
Hi Matt,

well, that's weird. I just 'stole' the code from the Kinect driver.
Might be that I screwed it up or that something was changed in the
driver afterwards. I don't have the time to go over it right now, but
I'll check/fix it at some point.
Thanks for the information!

Cheers,
Basti

