Unified treatment of sparse and dense data in graph-based least squares
In this paper, we present a novel method for incorporating dense (e.g., depth, RGB-D) data in a general-purpose least-squares graph optimization framework. Rather than employing a loosely coupled, layered design where dense data is first used to estimate a compact SE(3) transform that then forms a link in the optimization graph, as in previous approaches [28, 10, 26], we use a tightly coupled approach that jointly optimizes over each individual (i.e., per-pixel) dense measurement (on the GPU) and all other traditional sparse measurements (on the CPU). Concretely, we use Kinect depth data and KinectFusion-style point-to-plane ICP measurements. In particular, this allows our approach to handle cases where neither dense nor sparse measurements separately define all degrees of freedom (DoF), while taken together they complement each other and yield the overall maximum-likelihood solution. Nowadays it is common practice to flexibly model various sensors, measurements, and variables to be estimated in least-squares frameworks. Our intention is to extend this flexibility to applications with dense data. Computationally, the key is to combine the many dense measurements efficiently on the GPU and to communicate only the results to the sparse framework on the CPU in a way that is mathematically equivalent to the full least-squares system. This results in less than 20 ms for a full optimization run. We evaluate our approach on a humanoid robot, where in a first experiment we fuse Kinect data and odometry in a laboratory setting, and in a second experiment we fuse with an unusual “sensor”: exploiting the embodiment of the robot, we estimate elasticities in the kinematic chain, modeled as unknown, time-varying joint offsets, while it moves its arms in front of a tabletop manipulation workspace. In both experiments, only the tightly coupled optimization localizes the robot correctly.
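The GPU-to-CPU communication described above can be illustrated with a small numerical sketch. The idea is that for a 6-DoF pose, the many per-pixel residuals r_i with 1x6 Jacobian rows J_i contribute to the normal equations only through the sums H = Σ J_iᵀJ_i (6x6) and b = Σ J_iᵀr_i (6x1), so a parallel reduction on the GPU can pass just (H, b) to the sparse framework without losing information. The sketch below uses random stand-in values for the residuals and Jacobians (the actual point-to-plane ICP expressions are not reproduced here) and checks that the accumulated blocks match the full stacked system:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10000   # number of per-pixel dense measurements (illustrative)
D = 6       # an SE(3) pose has 6 degrees of freedom

# Stand-in per-pixel Jacobian rows J_i (1x6) and residuals r_i;
# in the real system these would come from point-to-plane ICP.
J = rng.normal(size=(N, D))
r = rng.normal(size=N)

# "GPU-side" reduction: accumulate the 6x6 and 6x1 normal-equation
# blocks over all pixels (in practice a parallel sum on the GPU).
H = np.zeros((D, D))
b = np.zeros(D)
for Ji, ri in zip(J, r):
    H += np.outer(Ji, Ji)
    b += Ji * ri

# The "CPU-side" sparse framework only ever sees (H, b), which is
# mathematically equivalent to the full stacked least-squares system.
assert np.allclose(H, J.T @ J)
assert np.allclose(b, J.T @ r)

# The pose update from the reduced system equals the full solution.
delta = np.linalg.solve(H, b)
```

In a real implementation the loop would be a single GPU reduction kernel, and (H, b) would be inserted into the sparse graph as one factor summarizing all dense measurements.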