Point clouds with other cameras: Where to start?


Robin
Hi all,

I'd like to put together a ROS package that can create point clouds using the depth information from the Kinect, but the RGB values from another, higher-quality camera. I know how to calibrate the cameras to each other, but I'm not sure how to efficiently create the point clouds in ROS.
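
To be concrete, the per-pixel mapping I have in mind is roughly the following (just a sketch with placeholder names; the intrinsics and the R/t between the two cameras come from my calibration, nothing here is from the ni stack):

#include <Eigen/Core>

// Sketch only: placeholder names and types.
struct Intrinsics { float fx, fy, cx, cy; };

// Back-project a depth pixel (u, v) with depth Z (in meters) into 3D in the
// Kinect depth-camera frame, move it into the external RGB camera frame using
// the calibrated rotation R and translation t, and return the RGB pixel it
// projects to (still needs a bounds check before sampling the color).
inline Eigen::Vector2f
depthPixelToRgbPixel (int u, int v, float Z,
                      const Intrinsics& depth_K, const Intrinsics& rgb_K,
                      const Eigen::Matrix3f& R, const Eigen::Vector3f& t)
{
  Eigen::Vector3f p_depth ((u - depth_K.cx) * Z / depth_K.fx,
                           (v - depth_K.cy) * Z / depth_K.fy,
                           Z);
  Eigen::Vector3f p_rgb = R * p_depth + t;
  return Eigen::Vector2f (rgb_K.fx * p_rgb.x () / p_rgb.z () + rgb_K.cx,
                          rgb_K.fy * p_rgb.y () / p_rgb.z () + rgb_K.cy);
}

What I'm not sure about is where a loop like this should live so that the depth and RGB frames reach it efficiently.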

Could someone please point me to a .cpp file in the ni stack that contains the existing code for creating and publishing the current point clouds? I figure it would be very useful as a starting point.

Best
~Robin

Re: [Ros-kinect] Point clouds with other cameras: Where to start?

rusu (Administrator)

https://kforge.ros.org/openni/openni_ros/file/5414d278b8a5/openni_camera/src/openni_nodelet.cpp

;)

Cheers,
Radu.
--
http://pointclouds.org


Re: [Ros-kinect] Point clouds with other cameras: Where to start?

Robin
Thanks, Radu!

I have a couple more questions, if you don't mind.

When I make these custom point clouds, would it be sufficient to write a ROS package that takes in the /camera/depth/image topic and the topic from my other RGB camera, combines them, and then outputs the point cloud? Or should I write a lower-level driver, like the one found in openni_nodelet.cpp? In other words, would you expect there to be a large drop in framerate from using the processed depth feed published through ROS, rather than the one coming directly from the OpenNI drivers?

Also, when one sets the resolution of the Kinect RGB feed to SXGA, is the resolution of the point cloud automatically also set to SXGA? I feel like that's the case based on visual assessment, but I don't see it happening in openni_nodelet.cpp.

Best
~Robin

Re: [Ros-kinect] Point clouds with other cameras: Where to start?

rusu (Administrator)
Robin,

> When I make these custom point clouds, would it be sufficient to write a ROS
> package that takes in the /camera/depth/image topic and the topic from my other
> RGB camera, combines them, and then outputs the point cloud? Or should I write
> a lower-level driver, like the one found in openni_nodelet.cpp?

I suggest you design it similarly to the openni_nodelet.cpp class, so that it can be used as a nodelet together with
the driver. There can be a huge performance penalty if you copy and de/serialize data at high frame rates across
multiple nodes.
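
To make that concrete, here is the sort of skeleton I mean (just a rough, untested sketch; apart from the /camera/depth/image topic you mentioned, the class, namespace, and topic names are placeholders):

#include <ros/ros.h>
#include <nodelet/nodelet.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud2.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <boost/bind.hpp>

namespace depth_rgb_fusion
{
  class DepthRgbFusionNodelet : public nodelet::Nodelet
  {
    typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image> SyncPolicy;

    virtual void onInit ()
    {
      ros::NodeHandle& nh = getNodeHandle ();
      // Depth from the Kinect driver, color from the external camera (topic names are placeholders)
      depth_sub_.subscribe (nh, "camera/depth/image", 5);
      rgb_sub_.subscribe (nh, "external_camera/image_color", 5);
      // The two streams are not hardware-synchronized, so pair them by approximate time stamp
      sync_.reset (new message_filters::Synchronizer<SyncPolicy> (SyncPolicy (10), depth_sub_, rgb_sub_));
      sync_->registerCallback (boost::bind (&DepthRgbFusionNodelet::imageCb, this, _1, _2));
      pub_cloud_ = nh.advertise<sensor_msgs::PointCloud2> ("points_hires", 1);
    }

    void imageCb (const sensor_msgs::ImageConstPtr& depth_msg, const sensor_msgs::ImageConstPtr& rgb_msg)
    {
      // Build the XYZRGB cloud here, much like openni_nodelet.cpp does, and publish it through pub_cloud_.
    }

    message_filters::Subscriber<sensor_msgs::Image> depth_sub_, rgb_sub_;
    boost::shared_ptr<message_filters::Synchronizer<SyncPolicy> > sync_;
    ros::Publisher pub_cloud_;
  };
}
// Plus the usual PLUGINLIB export macro and nodelet plugin XML so the class can be
// loaded into the same nodelet manager as the openni driver.

Loaded into the same nodelet manager as the driver, the images are handed over as shared pointers instead of being serialized, which is where the framerate savings come from.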

> Also, when one sets the resolution of the Kinect RGB feed to SXGA, is the
> resolution of the point cloud automatically also set to SXGA?

Nope, the cloud stays at VGA (that's the maximum size of the depth image).

Cheers,
Radu.

Re: [Ros-kinect] Point clouds with other cameras: Where to start?

Robin
Hi Radu,

It seems like the simplest thing would be to just copy the openni_nodelet.cpp file into a new project and modify it for my purposes. Does that mean I should copy all of the openni_camera package into a new package? Or should I try to make a new package with my version of openni_nodelet.cpp and link it to the files in ~/ni/ni/openni_camera?

I'm thinking it would be nice to create something that links to the files in the ni stack so I can take advantage of improvements in the future. But I'm new to nodelets, so I'm unclear whether that's feasible.

Best
~Robin

Re: [Ros-kinect] Point clouds with other cameras: Where to start?

rusu (Administrator)
Robin,

Copying the files over is definitely a better start than assuming that certain files will always be present in another
package (so that you can include them). I would start with that, and then see over time whether it makes sense to merge
your work into the NI stack. As always, let us know how we can help! :)

Cheers,
Radu.
--
http://pointclouds.org


Re: [Ros-kinect] Point clouds with other cameras: Where to start?

Robin
It definitely seems like having an "official" option to set the RGB point cloud resolution to SXGA could be very useful. Please let me know the best way to share the code I end up writing, and maybe you can evaluate whether it should be merged, as you suggested.

Best
~Robin

Re: [Ros-kinect] Point clouds with other cameras: Where to start?

Robin
In case you're interested, this is the only code I had to modify. The published point cloud refreshes slowly, but at least it's high-res.

void OpenNINodelet::publishXYZRGBPointCloud (const sensor_msgs::ImageConstPtr& depth_msg, const sensor_msgs::ImageConstPtr& rgb_msg) const
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_msg (new pcl::PointCloud<pcl::PointXYZRGB> ());

  int depth_width  = depth_msg->width;
  int depth_height = depth_msg->height;

  // The cloud takes the (higher) RGB resolution. Cap its height so that the
  // upsampled depth rows never index past the last row of the depth image.
  int cloudWidth  = rgb_msg->width;
  int cloudHeight = (int)(((float)cloudWidth / (float)depth_width) * (float)depth_height);
  if (cloudHeight > (int)rgb_msg->height)
    cloudHeight = rgb_msg->height;

  cloud_msg->header.stamp    = rgb_msg->header.stamp;
  cloud_msg->header.frame_id = rgb_frame_id_;
  cloud_msg->height          = cloudHeight;
  cloud_msg->width           = cloudWidth;
  cloud_msg->is_dense        = false;

  // do not publish if the rgb image is smaller than the depth image -> seg fault
  if (rgb_msg->height < depth_msg->height || rgb_msg->width < depth_msg->width)
  {
    // we don't want to flood the terminal with warnings
    static unsigned warned = 0;
    if (warned % 100 == 0)
      NODELET_WARN ("rgb image smaller than depth image... skipping point cloud for this frame rgb:%dx%d vs. depth:%3dx%d",
                    rgb_msg->width, rgb_msg->height, depth_msg->width, depth_msg->height);
    ++warned;
    return;
  }
  cloud_msg->points.resize (cloud_msg->height * cloud_msg->width);

  float constant = 1.0f / device_->getImageFocalLength (cloud_msg->width);
  float centerX = (cloud_msg->width >> 1) - 0.5f;
  float centerY = (cloud_msg->height >> 1) - 0.5f;
  const float* depth_buffer = reinterpret_cast<const float*>(&depth_msg->data[0]);
  const uint8_t* rgb_buffer = &rgb_msg->data[0];

  // rgb_msg may be higher resolution than the depth image, so the depth is
  // upsampled (nearest neighbor) by stepping through it at a fractional rate.
  unsigned color_step = 3;  // bytes per RGB pixel
  unsigned color_skip = 0;  // cloud width == rgb width, so no extra bytes to skip per row

  float depth_step = (float)depth_width / (float)cloud_msg->width;  // depth pixels per cloud pixel
  float depth_line_idx = 0; // fractional depth row for the current cloud row
  float depth_idx = 0;      // fractional index into the depth buffer
  int rounded_depth_idx = 0;

  int color_idx = 0;
  pcl::PointCloud<pcl::PointXYZRGB>::iterator pt_iter = cloud_msg->begin ();
  for (int v = 0; v < (int)cloud_msg->height; ++v, color_idx += color_skip, depth_line_idx += depth_step)
  {
    // jump to the start of the depth row that this cloud row maps to
    depth_idx = floor (depth_line_idx) * depth_width;
    for (int u = 0; u < (int)cloud_msg->width; ++u, color_idx += color_step, depth_idx += depth_step, ++pt_iter)
    {
      pcl::PointXYZRGB& pt = *pt_iter;

      rounded_depth_idx = (int)floor (depth_idx);

      float Z = depth_buffer[rounded_depth_idx];

      // Check for invalid measurements
      if (std::isnan (Z))
      {
        pt.x = pt.y = pt.z = Z;
      }
      else
      {
        // Fill in XYZ
        pt.x = (u - centerX) * Z * constant;
        pt.y = (v - centerY) * Z * constant;
        pt.z = Z;
      }

      // Fill in color
      RGBValue color;
      color.Red   = rgb_buffer[color_idx];
      color.Green = rgb_buffer[color_idx + 1];
      color.Blue  = rgb_buffer[color_idx + 2];
      color.Alpha = 0;
      pt.rgb = color.float_value;
    }
  }

  pub_point_cloud_rgb_.publish (cloud_msg);
}
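
A note on what the modification does, for anyone adapting it: the cloud takes the width of the RGB image, its height is capped so the upsampled depth rows never run past the end of the depth image (with SXGA color at 1280x1024 against the 640x480 depth that works out to a 1280x960 cloud), and the depth is upsampled by nearest neighbor, so each 2x2 block of color pixels shares a single depth sample.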