[Ros-kinect] Point-cloud projection in a second pass, for CPU-limited systems


Mac Mason
I have a netbook that will be combined with a Kinect on a mobile robot; some quick benchmarking indicates that this machine (using openni_camera) can't produce PointCloud2 messages at much more than 5 Hz. However, it can read (and bag) the RGB and depth images at the full 30 Hz.

Is there a straightforward way to bag the raw data and then generate the PointCloud2 messages offline, or add them to the bag in a second pass? Since I'll be teleoperating the robot, I don't need the point cloud to appear online.

What's the best way to approach this?

Thanks!

        --Mac

--
Julian "Mac" Mason      [hidden email]      www.cs.duke.edu/~mac


Re: [Ros-kinect] Point-cloud projection in a second pass, for CPU-limited systems

Jack O'Quin
On Mon, Feb 14, 2011 at 5:48 PM, Mac Mason <[hidden email]> wrote:
> [...]
>
> What's the best way to approach this?

The rosbag tool is pretty flexible. You may be able to do everything with it.

 1) rosbag record the raw image data to a bag on-line
 2) rosbag play it back off-line, with the --clock option setting ROS time
 3) generate the PointCloud2 messages from the played-back images
 4) rosbag record the resulting point cloud to a second bag

The timestamps should match (due to rosbag play --clock), so it should
be possible to play back *both* bags with their times interleaved.
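
For concreteness, the whole round trip might look something like this. This is only a sketch: the topic names are assumptions that depend on how openni_camera is launched, so check rostopic list on your machine first.

  # Pass 1, on the robot: record only the raw images and camera info.
  rosbag record -O raw.bag \
      /camera/rgb/image_raw /camera/rgb/camera_info \
      /camera/depth/image_raw /camera/depth/camera_info

  # Pass 2, off-line: replay with simulated ROS time while a
  # cloud-generating node is running, and record its output.
  rosparam set use_sim_time true
  rosbag play --clock raw.bag
  rosbag record -O clouds.bag /camera/depth/points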
--
 joq

Re: [Ros-kinect] Point-cloud projection in a second pass, for CPU-limited systems

Mac Mason
On Feb 14, 2011, at 7:02 PM, Jack O'Quin wrote:
> The rosbag tool is pretty flexible. You may be able to do everything with it.
> [...]

Ah, excellent. That solves half of the problem (the storing-the-data part). The remaining part is how to turn the raw Kinect data into a PointCloud2 in a second pass (rather than having openni_camera do it for me online).

Thanks!

        --Mac

--
Julian "Mac" Mason      [hidden email]      www.cs.duke.edu/~mac


Re: [Ros-kinect] Point-cloud projection in a second pass, for CPU-limited systems

Jack O'Quin
On Mon, Feb 14, 2011 at 6:10 PM, Mac Mason <[hidden email]> wrote:
> [...]
>
> Ah, excellent. That solves half of the problem (the storing-the-data part). The remaining part is how to turn the raw Kinect data into a PointCloud2 in a second pass (rather than having openni_camera do it for me online).

I think it does both (unless I misunderstood your original question).
You play back the saved data, while generating a point cloud from it.

We frequently do similar things with Velodyne 3D LIDAR data.
--
 joq

Re: [Ros-kinect] Point-cloud projection in a second pass, for CPU-limited systems

rusu
Mac,

Good point. We'll add a node(let) that can take the Kinect depth/RGB data and produce PointCloud2 messages, separate from the driver.

Cheers,
Radu.
--
http://pointclouds.org

On 02/14/2011 03:48 PM, Mac Mason wrote:

> [...]

Re: [Ros-kinect] Point-cloud projection in a second pass, for CPU-limited systems

garratt
Hi Radu,

Was this ever done? I notice that you can take the depth image and the RGB image and call publishXYZRGBPointCloud, but you need the variable 'constant'.

Thanks
Garratt

On Tue, Feb 15, 2011 at 1:13 PM, Radu Bogdan Rusu <[hidden email]> wrote:
> Mac,
>
> Good point. We'll add a node(let) that can take the Kinect depth/RGB data
> and produce PointCloud2 messages, separate from the driver.
> [...]

Re: [Ros-kinect] Point-cloud projection in a second pass, for CPU-limited systems

Suat Gedikli
The best way is to bag the disparity and RGB images. The disparity image contains all the information necessary to calculate the point cloud afterwards (the constant can be calculated from the baseline and the focal length).
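
To make that concrete, here is a minimal sketch of the math, assuming the standard stereo/pinhole model. Everything below is illustrative rather than the driver's actual code: the intrinsics would come from the bagged CameraInfo (whose K matrix is [fx 0 cx; 0 fy cy; 0 0 1]), the focal length and baseline numbers are nominal guesses, and the Kinect's raw 11-bit disparity values need a device-specific conversion to pixel disparities first.

  import numpy as np

  def cloud_from_disparity(disparity, fx, fy, cx, cy, baseline):
      # Depth from disparity (this is where the 'constant' comes from):
      #   Z = fx * baseline / d
      with np.errstate(divide='ignore'):
          z = fx * baseline / disparity        # zero disparity -> inf
      # Pinhole back-projection of every pixel (u, v) at depth Z.
      h, w = disparity.shape
      u, v = np.meshgrid(np.arange(w), np.arange(h))
      x = (u - cx) * z / fx
      y = (v - cy) * z / fy
      return np.dstack((x, y, z))              # organized XYZ cloud

  # Nominal Kinect-like numbers, for illustration only.
  disp = np.full((480, 640), 30.0)             # fake disparity image
  cloud = cloud_from_disparity(disp, fx=525.0, fy=525.0,
                               cx=319.5, cy=239.5, baseline=0.075)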

-Suat