Best edge-aware smoothing techniques for Kinect depth information?
I'd like to be able to smooth out the depth data in the Kinect point cloud in order to improve point cloud registration performance.
However, the problem seems to be less straightforward than just applying an edge-aware blur.
Depth quantization errors are much larger at greater distances, so I'd like to apply a large smoothing radius at those distances while doing little to no smoothing close up, and I don't want to blur across depth discontinuities.

It seems relatively trivial to check for substantial depth discontinuities and avoid blurring across them, but the case of not wanting to blur out a corner point (e.g. the corner of a box pointing toward the camera, with all three adjacent faces visible) seems trickier.
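To make the idea concrete, here's a minimal brute-force sketch of what I have in mind, assuming depth in metres with 0 marking invalid pixels; `sigma_r` and `k` are hypothetical tuning parameters (the spatial radius grows with depth, and a Gaussian range weight suppresses contributions from across a discontinuity):

```python
import numpy as np

def adaptive_bilateral(depth, sigma_r=0.03, k=2.0):
    """Depth-adaptive, edge-aware smoothing (brute-force sketch).

    depth   : 2-D array of range values in metres (0 = invalid).
    sigma_r : range sigma; depth differences much larger than this
              get near-zero weight, so discontinuities are preserved.
    k       : scales the spatial radius with distance (assumed tuning).
    """
    out = np.zeros_like(depth)
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            d = depth[y, x]
            if d <= 0:
                continue
            # Spatial radius grows with depth, since Kinect
            # quantization error grows with range.
            r = max(1, int(round(k * d)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            win = depth[y0:y1, x0:x1]
            valid = win > 0
            # Range weight: near-zero across large depth jumps.
            wgt = np.exp(-((win - d) ** 2) / (2 * sigma_r ** 2)) * valid
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out
```

This preserves a step edge (the weights across the jump are effectively zero) but, as noted below, it does nothing special for a convex corner, where all neighbours are at similar depths yet belong to three different planes.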
The question of how to make a variable-radius edge-aware blur fast is also an interesting one. Most of the blur speed optimizations I know about wouldn't work in this scenario.
Has anyone written something like this? I want to make sure I don't duplicate someone else's work before doing it.
Also, given that I'm starting from the Kinect's point cloud, what would be the most efficient way to do this? I could project the cloud back into a range image and then operate on it like a monochrome image.
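For reference, the projection step I'm imagining looks something like this sketch; the intrinsics (`FX`, `FY`, `CX`, `CY`) and the 640x480 resolution are assumed Kinect-like values, not measured ones:

```python
import numpy as np

# Assumed Kinect-like pinhole intrinsics; calibrate for real use.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
W, H = 640, 480

def cloud_to_range_image(points):
    """Project an (N, 3) point cloud (camera coordinates, metres)
    into a range image. Pixels with no point stay 0; if several
    points land on one pixel, the nearest one wins (z-buffer style)."""
    img = np.zeros((H, W), dtype=np.float64)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0
    u = np.round(x[valid] * FX / z[valid] + CX).astype(int)
    v = np.round(y[valid] * FY / z[valid] + CY).astype(int)
    # Store Euclidean range, not just z, so it's a true range image.
    r = np.sqrt(x[valid] ** 2 + y[valid] ** 2 + z[valid] ** 2)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, r = u[inside], v[inside], r[inside]
    # Write farthest points first so the nearest overwrites them.
    order = np.argsort(-r)
    img[v[order], u[order]] = r[order]
    return img
```

The resulting 2-D array could then be fed to whatever smoothing filter comes out of the discussion above, and re-projected to a cloud afterwards.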