[Ros-kinect] Kinect datasets

5 messages

[Ros-kinect] Kinect datasets

Stéphane Magnenat
Dear list,

We (Juergen Sturm, others, and I) are combining forces to create Kinect
datasets along with ground truth, suitable for developing, testing,
and benchmarking perception, navigation, and mapping algorithms.

To keep the datasets a reasonable size (GB, not TB), we want to
record as little data as possible and to provide a program to
regenerate the rest. We therefore plan to record depth maps instead of
point clouds.

We plan to record:
- raw color camera image (un-de-bayered)
- registered depth map (float)
- raw disparity map (uint16?) or unregistered depth map (float)
- camera_info (both intrinsic) + ground-truth as tf
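To illustrate the "record little, regenerate the rest" idea: a float depth map could be regenerated from a recorded uint16 disparity map with the standard stereo relation z = f·b/d. This is only a sketch; the focal length and baseline below are placeholder values, not calibrated Kinect parameters:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=580.0, baseline_m=0.075):
    """Convert a uint16 disparity map to a float32 depth map in meters.

    Uses the generic stereo relation z = f * b / d. The default focal
    length and baseline are rough placeholders, not calibration values.
    Pixels with zero disparity (no measurement) become NaN.
    """
    d = disparity.astype(np.float32)
    depth = np.full_like(d, np.nan)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Example: a tiny synthetic disparity map
disp = np.array([[0, 87], [435, 290]], dtype=np.uint16)
depth = disparity_to_depth(disp)
```

The real regeneration program would of course take f and b from the recorded camera_info rather than hard-coding them.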

However, there are some questions regarding the current ROS kinect driver:
- The driver does not allow recording both the registered depth map and
the unregistered one (or the corresponding disparity map) simultaneously.
- The driver does not provide the raw RGB camera data on a topic.

WG, do you think it is a good idea to add these? If yes, would you like
to do so? I am worried about fragmentation if we begin to fork the
"official" ROS driver. I would like our dataset work to be part of the
core tools of the ROS/Kinect ecosystem.

Kind regards,

Stéphane

--
Dr Stéphane Magnenat
http://stephane.magnenat.net
_______________________________________________
Ros-kinect mailing list
[hidden email]
https://code.ros.org/mailman/listinfo/ros-kinect

Re: [Ros-kinect] Kinect datasets

rusu
Administrator
Stéphane,

Gary Bradski is already doing this as part of the NIST Perception Challenge -- try coordinating with him. In fact, NIST
has already recorded a lot of datasets that they will present at ICRA as "standard benchmarks". It's good to have
someone like them do this because it's a big burden to get everything right, and NIST is an organization with a
proven track record of nitpicking on things like this :)

Regarding the ROS driver comments:

  * using OpenNI you cannot have both "unregistered" RGB-depth _and_ "registered" RGB-depth, as that is a switch in the
firmware not the software running on our PCs. This means that you have to choose one before you start recording;

  * using libfreenect you have no registration, which means you're leaving it up to the users to calibrate their
cameras. That is a pain and can produce much worse results if not done appropriately;

  * the latest driver has image_raw for both depth and RGB, so you should be able to register the bayer images.
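Registering from the bayer images yourself starts with demosaicing. A minimal nearest-neighbor demosaic sketch, assuming an RGGB 2x2 pattern (an assumption -- check the actual sensor layout), might look like:

```python
import numpy as np

def demosaic_rggb_nearest(raw):
    """Nearest-neighbor demosaic of a single-channel Bayer image.

    Assumes an RGGB 2x2 pattern (an assumption; verify against the
    actual sensor). Each 2x2 cell's R, G, and B samples are replicated
    across the cell, giving a full RGB image with blocky resolution.
    Expects even image dimensions.
    """
    h, w = raw.shape
    rgb = np.empty((h, w, 3), dtype=raw.dtype)
    r = raw[0::2, 0::2]   # red samples
    g = raw[0::2, 1::2]   # one of the two green samples per cell
    b = raw[1::2, 1::2]   # blue samples
    # replicate each sample over its 2x2 cell
    rgb[..., 0] = np.repeat(np.repeat(r, 2, axis=0), 2, axis=1)
    rgb[..., 1] = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(b, 2, axis=0), 2, axis=1)
    return rgb

raw = np.array([[10, 20], [30, 40]], dtype=np.uint8)  # one RGGB cell
rgb = demosaic_rggb_nearest(raw)
```

Production use would rather use a proper interpolating demosaic (e.g. the one in image_proc), but this shows why recording the raw bayer image loses nothing.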



One thing to keep in mind though is that we found a bit of variance in several Kinect units at Willow Garage, which
means that they do not 100% coincide ;) This might or might not affect your benchmarks. It's a $150 sensor, so they
don't care too much about the variance of the errors, as long as it does the job in their standard depth benchmarks for
the XBox.

Cheers,
Radu.
--
http://pointclouds.org

On 04/08/2011 02:33 AM, Stéphane Magnenat wrote:


Re: [Ros-kinect] Kinect datasets

Stéphane Magnenat
Hi Radu,

> Gary Bradski is already doing this as part of the NIST Perception
> Challenge -- try coordinating with him. In fact, NIST has already
> recorded a lot of datasets that they will present at ICRA as "standard
> benchmarks". It's good to have someone like them do this because it's a
> big big burden to make everything correctly, and NIST is an organization
> with a proven record on nitpicking on things like this :)

That is a good idea indeed. Do you know whether he is also using the
Kinect? Is he on this mailing list?

> Regarding the ROS driver comments:
>
> * using OpenNI you cannot have both "unregistered" RGB-depth _and_
> "registered" RGB-depth, as that is a switch in the firmware not the
> software running on our PCs. This means that you have to choose one
> before you start recording;
>
> * using libfreenect you have no registration, which means you're leaving
> it up to the users to calibrate their cameras. That is a pain and can
> provide much worse results if not done appropriately;
>
> * the latest driver has image_raw for both depth and RGB, so you should
> be able to register the bayer images.
>
> One thing to keep in mind though is that we found a bit of variance in
> several Kinect units at Willow Garage, which means that they do not 100%
> coincide ;) This might or might not affect your benchmarks. It's a $150
> sensor, so they don't care too much about the variance of the errors, as
> long as it does the job in their standard depth benchmarks for the XBox.

Do you have some quantitative idea of this variance? It would be interesting
to know whether it is worth recording datasets without calibration to
allow end-users to do the calibration themselves, or not.

Thank you, have a nice day,

Stéphane

--
Dr Stéphane Magnenat
http://stephane.magnenat.net

Re: [Ros-kinect] Kinect datasets

Patrick Mihelich
Hi Stéphane,

I'm in the midst of refactoring the openni_camera ROS driver one last (hopefully) time. You can track progress in the 'bullet' branch of openni_kinect. This probably won't be done for a couple weeks, but I believe the changes will help you a lot. The goals are:

 * Full support for calibrating the Kinect cameras, and registering depths to the RGB image based on calibration (separate from OpenNI).
 * Break the current monolithic driver nodelet into several smaller, tightly focused nodelets. The driver nodelet will then produce only the raw depth and RGB images and CameraInfo messages, while colorization, rectification, registration, conversion to point clouds/disparity images, etc. will be handled by other nodelets.

This makes your program for regenerating point cloud data trivial - you'll just need to hook the appropriate nodelets together. By bypassing the OpenNI registration, you can have both registered and unregistered depth maps, although this does require calibrating your Kinect.

The best archival format for Kinect data has been discussed in other threads. By recording just the rgb/image_raw (Bayer) and depth/image_raw (uint16) topics, you're down to a svelte 3 bytes per pixel. Everything else can be recreated from these, plus the camera_info topics once the cameras are calibrated, of course.
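As a rough sketch of that regeneration step, back-projecting a uint16 depth image (assumed here to be in millimeters) into a point cloud with a pinhole model could look like this. The intrinsics are placeholder values, not real Kinect calibration -- in practice they would come from the camera_info topic:

```python
import numpy as np

def depth_to_points(depth_mm, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a uint16 depth image (millimeters) to an Nx3 point
    cloud in meters, using a pinhole camera model. The default
    intrinsics are placeholders; real use would read them from the
    camera_info topic. Zero-depth (invalid) pixels are dropped.
    """
    h, w = depth_mm.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth_mm.astype(np.float32) / 1000.0   # mm -> m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Storage arithmetic behind the "3 bytes per pixel" figure:
# Bayer RGB (1 byte/pixel) + uint16 depth (2 bytes/pixel) = 3 bytes/pixel
bytes_per_frame = 640 * 480 * 3
```

At 30 Hz that is roughly 26 MB/s of raw data per sensor, which is what makes the compact Bayer + uint16 encoding worthwhile.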

Cheers,
Patrick

2011/4/8 Stéphane Magnenat <[hidden email]>


Re: [Ros-kinect] Kinect datasets

rusu
Administrator
In reply to this post by Stéphane Magnenat
Stéphane,


On 04/08/2011 08:37 AM, Stéphane Magnenat wrote:

> Hi Radu,
>
>> Gary Bradski is already doing this as part of the NIST Perception
>> Challenge -- try coordinating with him. In fact, NIST has already
>> recorded a lot of datasets that they will present at ICRA as "standard
>> benchmarks". It's good to have someone like them do this because it's a
>> big big burden to make everything correctly, and NIST is an organization
>> with a proven record on nitpicking on things like this :)
>
> That is a good idea indeed. Do you know whether he is also using the
> Kinect? Is he on this mailing list?

Yes, NIST/Gary are using the Kinect as the main sensor.

Gary might or might not be on the mailing list -- he uses a very complicated randomized algorithm for reading e-mails,
so it would be hard to anticipate whether he replies or not. We could have a private discussion with him offline however :)

> Do you have some quantitative idea of this variance? It would be interesting
> to know whether it is worth recording datasets without calibration to
> allow end-users to do the calibration themselves, or not.

No idea yet. The calibration might not help -- we're talking about different sensing errors in the depth and RGB data
due to variances in the lenses used, alignment, quality control, etc. Again, they might or might not influence your
results, depending on what you are trying to achieve.

Cheers,
Radu.
--
http://pointclouds.org