Point Transformer

By H. Zhao et al.

Table of Contents

1. Introduction
2. Related Work
3. Point Transformer
3.1. Background
3.2. Point Transformer Layer
3.3. Position Encoding
3.4. Point Transformer Block
3.5. Network Architecture
4. Experiments

Summary

The Point Transformer applies self-attention networks to 3D point cloud processing. Because a point cloud is an unordered set, self-attention, which is itself a set operator, is a natural fit; the paper designs point-specific self-attention layers and builds networks from them, achieving significant improvements on semantic scene segmentation, object part segmentation, and object classification. The architecture is composed of point transformer blocks and transition modules for downsampling and upsampling, enabling efficient feature encoding and decoding. Experiments on S3DIS, ModelNet40, and ShapeNetPart demonstrate that the design sets new benchmarks on these tasks.
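The core of the method is a vector self-attention layer: for each point, attention is computed over a local neighborhood, using the subtraction relation between projected features plus a learned relative position encoding. The sketch below illustrates this in NumPy under stated assumptions: the learned linear maps (phi, psi, alpha, theta in the paper) are replaced by random projections, and the paper's attention MLP gamma is omitted (identity stand-in), so this is a minimal illustration of the computation pattern, not the trained model.

```python
import numpy as np

def point_transformer_layer(feats, coords, k=4, seed=0):
    """Minimal sketch of a Point Transformer (vector self-attention) layer.

    For each point i, attention runs over its k nearest neighbors j:
        y_i = sum_j softmax_j(phi(x_i) - psi(x_j) + delta) * (alpha(x_j) + delta)
    where delta = theta(p_i - p_j) is a relative position encoding.
    All linear maps are random stand-ins for learned weights (assumption);
    the paper's attention MLP gamma is omitted here for brevity.
    """
    n, d = feats.shape
    rng = np.random.default_rng(seed)
    # Random projections standing in for the learned linear layers phi, psi, alpha.
    W_phi, W_psi, W_alpha = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    W_pos = rng.standard_normal((3, d)) * 0.1      # theta: R^3 -> R^d

    # k-nearest neighbors by Euclidean distance (each point includes itself).
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    knn = np.argsort(dist, axis=1)[:, :k]          # (n, k) neighbor indices

    q = feats @ W_phi                              # queries, (n, d)
    kf = feats @ W_psi                             # keys,    (n, d)
    v = feats @ W_alpha                            # values,  (n, d)

    out = np.empty_like(feats)
    for i in range(n):
        j = knn[i]
        delta = (coords[i] - coords[j]) @ W_pos    # (k, d) relative position encoding
        logits = q[i] - kf[j] + delta              # vector attention: one weight per channel
        a = np.exp(logits - logits.max(axis=0))    # softmax over the k neighbors, per channel
        a /= a.sum(axis=0)
        out[i] = (a * (v[j] + delta)).sum(axis=0)  # position-encoded weighted aggregation
    return out
```

Note that, unlike scalar dot-product attention, the attention weight here is a vector with one entry per feature channel, which is what "vector self-attention" refers to in the paper.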