Yun Li. 7 July, 2016.
Communicated by Andrew Chien.


The emergence of commodity depth sensors provides rich depth data that enables mobile devices to perform Simultaneous Localization and Mapping (SLAM) for applications such as VR and AR. However, the limited computational resources on mobile devices are a bottleneck to widespread and flexible use. This paper provides an analytical model of the computational cost of a SLAM algorithm. We model the total number of instructions required by each kernel as a function of the input depth resolution, the volume resolution, and scene-dependent information. To validate the model, we ran SLAM with both the publicly available, noise-free ICL-NUIM dataset and data we collected with a Kinect V2, varying the frame rate, depth resolution, and volume resolution to test different configurations. The model matches the measured baseline with an error within 10%. In addition, we propose a Distributed SLAM framework (DSLAM), which allows the computation kernels of SLAM to be processed by different devices in a pipelined fashion, lowering the computational requirement on any single device. We also analyzed the data communicated between kernels and identified the communication bottleneck: the volume data transferred between the integration stage and the raycasting stage. We address it by sending only the updated voxels instead of the whole volume, which reduces volume data communication by around 90%. We believe the analytical model is a possible tool for intelligently distributing SLAM elements across available devices to maximize flexibility and performance, particularly battery life.
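The voxel-delta idea in the abstract can be sketched as follows. This is a minimal illustration under assumed details (a dense float32 TSDF volume, NumPy arrays, and the helper names `volume_delta` and `apply_delta`); it is not the paper's implementation.

```python
import numpy as np

def volume_delta(prev, curr):
    """Return indices and values of voxels that changed between two
    snapshots of a TSDF volume (illustrative, not the paper's code)."""
    changed = np.flatnonzero(prev != curr)
    return changed, curr.flat[changed]

def apply_delta(volume, indices, values):
    """Apply a received delta to the raycasting stage's volume copy."""
    volume.flat[indices] = values

# Toy example: a 64^3 volume where one frame updates ~1% of the voxels.
rng = np.random.default_rng(0)
prev = rng.standard_normal((64, 64, 64)).astype(np.float32)
curr = prev.copy()
touched = rng.choice(prev.size, size=prev.size // 100, replace=False)
curr.flat[touched] = 0.0

idx, vals = volume_delta(prev, curr)

# Transferring (index, value) pairs instead of the full volume:
full_bytes = curr.nbytes                 # whole-volume transfer
delta_bytes = idx.nbytes + vals.nbytes   # delta transfer
```

Reconstructing the receiver's copy with `apply_delta(receiver_copy, idx, vals)` yields the same volume as sending it whole, while the bytes moved shrink roughly in proportion to the fraction of voxels touched per frame.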

Original Document

The original document is available in PDF (uploaded 7 July, 2016 by Andrew Chien).