MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders

¹S-Lab, Nanyang Technological University, Singapore ²College of Computer Science and Technology, Zhejiang University of Technology, China ³UCAS-Terminus AI Lab, University of Chinese Academy of Sciences, China
*Corresponding author.

NeurIPS 2024

Abstract

Monocular 3D object detection aims for precise 3D localization and identification of objects from a single-view image. Despite recent progress, it often struggles with pervasive object occlusions, which complicate and degrade the prediction of object dimensions, depths, and orientations. We design MonoMAE, a monocular 3D detector inspired by Masked Autoencoders that addresses the occlusion issue by masking and reconstructing objects in the feature space. MonoMAE consists of two novel designs. The first is depth-aware masking, which selectively masks parts of non-occluded object queries in the feature space to simulate occluded queries for network training, adaptively balancing the masked and preserved query portions according to the depth information. The second is a lightweight query completion network that works with the depth-aware masking to learn to reconstruct and complete the masked object queries. With the proposed feature-space occlusion and completion, MonoMAE learns enriched 3D representations that achieve superior monocular 3D detection performance, both qualitatively and quantitatively, for occluded and non-occluded objects alike. In addition, MonoMAE learns generalizable representations that transfer well to new domains.

Method Overview: Overall Framework of MonoMAE

Overall Framework of MonoMAE.

The framework of MonoMAE training: given a single-view image, the 3D Backbone extracts 3D object query features, which are grouped into non-occluded and occluded query features by the Non-Occluded Query Grouping. The Depth-Aware Masking then masks the non-occluded query features to simulate object occlusions adaptively based on object depth, and the Completion Network learns to reconstruct the masked queries. Finally, the completed and the occluded query features are concatenated to train the 3D Detection Head for 3D predictions.
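The training pipeline above can be sketched in a few lines of NumPy. This is a toy illustration only: the feature dimensions, the linear mask-ratio schedule, and the single-matrix "completion network" are placeholder assumptions standing in for the paper's actual backbone, masking schedule, and lightweight completion module.

```python
import numpy as np

rng = np.random.default_rng(0)

def complete_queries(masked, W):
    # Placeholder for the lightweight Completion Network: a single linear
    # map standing in for the paper's module (architecture assumed here).
    return masked @ W

# Toy batch: 6 object queries of dimension 8, with occlusion flags and depths
# (in meters) as would come from the 3D Backbone and query grouping.
queries = rng.normal(size=(6, 8))
occluded = np.array([True, False, False, True, False, False])
depths = np.array([10.0, 5.0, 40.0, 25.0, 15.0, 55.0])

# 1) Group queries by occlusion status.
non_occ = queries[~occluded]

# 2) Depth-aware masking of non-occluded queries: the closer the object,
#    the larger the mask ratio (linear schedule assumed for illustration).
ratios = 0.7 - 0.6 * np.clip(depths[~occluded] / 60.0, 0.0, 1.0)
masked = non_occ.copy()
for i, r in enumerate(ratios):
    k = int(round(r * masked.shape[1]))
    masked[i, rng.choice(masked.shape[1], k, replace=False)] = 0.0

# 3) The completion network reconstructs the masked queries
#    (weights untrained in this sketch).
W = rng.normal(size=(8, 8)) * 0.1
completed = complete_queries(masked, W)

# 4) Concatenate completed and occluded queries for the 3D Detection Head.
head_input = np.concatenate([completed, queries[occluded]], axis=0)
```

In the full method the masking and completion are trained jointly with the detection head, so the completion network learns reconstructions that benefit downstream 3D prediction rather than feature fidelity alone.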

Depth-Aware Masking

Illustration of the Depth-Aware Masking.

Illustration of the Depth-Aware Masking. (a) Objects farther away are usually smaller, capturing less visual information. (b) The Depth-Aware Masking determines the mask ratio of an object according to its depth: the closer the object, the larger the mask ratio applied, thereby compensating for the information deficiency of objects at larger distances from the camera.
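A minimal sketch of this depth-to-ratio mapping, assuming a simple linear schedule (the constants `d_max`, `r_min`, `r_max` and the linear form are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def depth_aware_mask_ratio(depth, d_max=60.0, r_min=0.1, r_max=0.7):
    """Map object depth to a mask ratio: closer objects (small depth)
    get a larger ratio, since they carry richer visual information.
    All constants are illustrative assumptions."""
    d = np.clip(depth, 0.0, d_max)
    return r_max - (r_max - r_min) * (d / d_max)

def mask_query(query, ratio, rng):
    """Zero out a random `ratio` fraction of one query feature vector,
    simulating occlusion in the feature space."""
    n = query.shape[0]
    k = int(round(ratio * n))
    idx = rng.choice(n, size=k, replace=False)
    masked = query.copy()
    masked[idx] = 0.0
    return masked
```

For example, an object at 5 m receives a notably larger mask ratio than one at 50 m, so nearby (information-rich) queries are occluded more aggressively during training.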

Detection Visualization

Detection visualization over the KITTI val set.

Detection visualization over the KITTI val set. Ground-truth annotations are highlighted by red boxes, and predictions by MonoMAE and two state-of-the-art methods are highlighted by green boxes. Red arrows highlight objects with very different predictions across the compared methods. The ground-truth LiDAR point clouds are shown for visualization only and are not used in MonoMAE training.

BibTeX


@article{jiang2024monomae,
  title={MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders},
  author={Jiang, Xueying and Jin, Sheng and Zhang, Xiaoqin and Shao, Ling and Lu, Shijian},
  journal={Advances in Neural Information Processing Systems},
  year={2024}
}