Masked Discrimination for Self-Supervised Learning on Point Clouds

By Haotian Liu et al.
Published on Aug. 1, 2022

Table of Contents

1. Introduction
2. Related Work
3. Approach

Summary

This paper introduces Masked Discrimination, a self-supervised learning approach for point clouds. The authors propose MaskPoint, a discriminative mask-pretraining Transformer framework that addresses the challenges of applying masked autoencoding to point cloud understanding. Rather than reconstructing masked coordinates, MaskPoint represents the point cloud as discrete occupancy values and performs binary classification between real query points sampled from the masked object and fake query points sampled as random noise. Distinguishing real from fake queries lets the model learn rich semantic representations without human supervision. Architecturally, a Transformer encoder encodes the visible point patches, and a decoder performs the discriminative classification over the query points. The approach is simple yet effective: pretrained models achieve state-of-the-art results on downstream tasks such as 3D shape classification, segmentation, and object detection, while offering a significant pretraining speedup over prior methods like Point-BERT. Overall, the paper contributes a new perspective on self-supervised learning for point clouds.
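The core pretext task described above can be sketched in a few lines: sample "real" queries from the object's points and "fake" queries as uniform noise from its bounding box, then ask a classifier to tell them apart. The snippet below is a minimal NumPy illustration of that query construction only, not the authors' implementation; the function name `make_discrimination_queries`, the sphere-shaped toy point cloud, and the 64/64 query split are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point cloud: 1024 points on a unit sphere, standing in for an object
# surface (the paper uses real shapes such as ShapeNet models).
n_points = 1024
pts = rng.normal(size=(n_points, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

def make_discrimination_queries(points, n_real=64, n_fake=64, rng=rng):
    """Build the binary-classification query set: 'real' queries lie on the
    object (occupancy 1), 'fake' queries are uniform noise drawn from the
    object's bounding box (occupancy 0, with high probability)."""
    real = points[rng.choice(len(points), size=n_real, replace=False)]
    lo, hi = points.min(axis=0), points.max(axis=0)
    fake = rng.uniform(lo, hi, size=(n_fake, 3))
    queries = np.concatenate([real, fake], axis=0)
    labels = np.concatenate([np.ones(n_real), np.zeros(n_fake)])
    return queries, labels

queries, labels = make_discrimination_queries(pts)
print(queries.shape, labels.shape)  # → (128, 3) (128,)
```

In the full method these queries are fed to the Transformer decoder, which classifies each one using the encoder's features from the visible (unmasked) patches; the binary occupancy labels supply the training signal, so no human annotation is needed.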