Thursday, December 19, 2024 - 09:00 am
Online

DISSERTATION DEFENSE

Department of Computer Science and Engineering
University of South Carolina
Author: Pingping Cai
Advisor: Dr. Song Wang

 
Date: Dec 19, 2024
Time: 9:00 am – 10:30 am
 
Place: Teams Link
 
Meeting ID: 240 720 185 444
Passcode: Lj6ot2X7 

Abstract

  3D computer vision is a promising research field with the potential to revolutionize future lifestyles. Among various 3D representation formats, point clouds stand out for their efficiency in representing 3D objects as a set of coordinates, enabling advancements in fields such as autonomous driving, virtual reality, and robotics. Due to the limitations of sensor fields of view and scanning trajectories, however, the collected point clouds are usually sparse, noisy, and incomplete, impeding the performance of many downstream applications. Thus, low-level point cloud processing tasks have been proposed to refine raw scans into dense, clean, and complete point clouds.

To accomplish these tasks, traditional algorithms rely on manually designed rules for processing point clouds in 3D coordinate space, but they often struggle with new or complex shapes. In contrast, neural network-based algorithms extract and manipulate geometric features in a high-dimensional feature space and have made substantial progress in point cloud processing. Nevertheless, outputs from existing neural networks frequently exhibit ambiguous shapes and excessive noise, indicating significant room for improvement. Therefore, we focus on advancing neural network-based low-level point cloud processing algorithms, including upsampling, completion, and denoising. A key contribution of this dissertation is the integration of task-specific properties, such as geometric surface constraints and 3D shape knowledge, into neural networks, resulting in significant improvements over previous methods.
We begin our research with point cloud upsampling, a fundamental problem in 3D analysis. A number of existing methods achieve this goal by establishing a point-to-point mapping function via deep neural networks. However, these approaches are prone to producing outlier points due to the lack of explicit surface-level constraints. To solve this problem, we introduce a novel surface regularizer into the upsampling network, forcing it to learn the underlying parametric surface, represented by bicubic and rotation functions, so that newly generated points are constrained to lie on that surface.
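The surface constraint can be illustrated with a minimal NumPy sketch (not the actual network): fit a bicubic height field to a local patch by least squares, then place new points exactly on the fitted surface so they cannot drift off as outliers. All function names and the toy surface are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def fit_bicubic_patch(points):
    """Least-squares fit of a bicubic height field z = f(x, y) to a local
    patch of points. A toy stand-in for the learned parametric surface;
    the real method learns this with a neural network."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Bicubic basis: all monomials x^i * y^j with 0 <= i, j <= 3 (16 terms).
    basis = np.stack([x**i * y**j for i in range(4) for j in range(4)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, z, rcond=None)
    return coeffs

def sample_on_patch(coeffs, xy):
    """Generate upsampled points exactly on the fitted surface, enforcing
    the surface-level constraint described in the abstract."""
    x, y = xy[:, 0], xy[:, 1]
    basis = np.stack([x**i * y**j for i in range(4) for j in range(4)], axis=1)
    z = basis @ coeffs
    return np.column_stack([xy, z])

# Toy example: samples from z = x^2 + y, a surface the bicubic basis can
# represent exactly, so the fitted patch matches the true surface.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
pts = np.column_stack([xy, xy[:, 0]**2 + xy[:, 1]])
coeffs = fit_bicubic_patch(pts)
new_xy = rng.uniform(-1, 1, size=(50, 2))
upsampled = sample_on_patch(coeffs, new_xy)   # 50 new on-surface points
```

Because every generated point is evaluated from the fitted surface, the outlier failure mode of unconstrained point-to-point mappings cannot occur in this sketch.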
Then, we focus on point cloud shape completion, which aims to reconstruct the missing regions of an incomplete point cloud with accurate shapes. Prior approaches address this task by generating a coarse but complete seed point cloud through an encoder-decoder network. However, the encoded features often suffer from information loss in the missing portions, preventing the decoder from reconstructing the seed point cloud with detailed geometric features. To overcome this challenge, we propose a novel dictionary-guided shape completion network. It employs orthogonal dictionaries that learn shape priors from training samples, compensating for the information loss in the missing portions during inference and enhancing the representation capability of the seed points.
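The dictionary mechanism can be sketched schematically in NumPy: a bank of orthonormal atoms stands in for the learned shape priors, and a lossy seed feature is matched against the atoms and augmented with the retrieved prior component. The dictionary here is random (via QR) purely for illustration; in the dissertation the atoms are learned from training samples, and the layer names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shape-prior dictionary: K orthonormal atoms in a D-dim
# feature space. QR factorization yields orthonormal columns; the real
# network learns these atoms from training data.
D, K = 32, 8
dictionary, _ = np.linalg.qr(rng.standard_normal((D, K)))

def dictionary_enhance(seed_feature, dictionary):
    """Match a (lossy) seed feature against the orthogonal shape priors
    and add the retrieved prior back in. A schematic sketch of the idea,
    not the actual network layer."""
    coeffs = dictionary.T @ seed_feature   # correlate with each prior atom
    prior = dictionary @ coeffs            # reconstruct the prior component
    return seed_feature + prior            # compensate the seed feature

feat = rng.standard_normal(D)              # stand-in for an encoded feature
enhanced = dictionary_enhance(feat, dictionary)
```

Orthogonality keeps the atoms non-redundant: each coefficient measures agreement with one prior independently of the others, which is why the projection-and-add step cleanly amplifies the prior-aligned part of the feature.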
Finally, we continue our research with point cloud denoising, where the denoised point cloud should faithfully represent the underlying object shape and exhibit a better point distribution on the object surface. Previous methods iteratively shift noisy points toward the underlying surface along fixed directions, resulting in poor efficiency and distribution. To address this problem, we introduce a novel direction-guided denoising pipeline, where each point is shifted to the underlying surface along an optimally predicted direction by a predicted distance. It includes newly designed direction-guided projection blocks, built on neural implicit functions, to facilitate efficient point movement.
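The projection step can be illustrated with a toy NumPy sketch: each point moves along a per-point predicted direction by a per-point predicted distance. In the dissertation these predictions come from neural implicit functions; here a unit sphere serves as the underlying surface, so the ideal direction (radial) and distance (signed distance to the sphere) can be written in closed form. All names are illustrative.

```python
import numpy as np

def denoise_step(points, predict_direction, predict_distance):
    """One direction-guided projection step: shift every point along its
    own predicted unit direction by its predicted distance. Schematic of
    the pipeline's idea, not the actual projection block."""
    d = predict_direction(points)          # (N, 3) unit directions
    t = predict_distance(points)[:, None]  # (N, 1) signed distances
    return points + t * d

# Toy oracle for a unit sphere: the optimal move is purely radial.
def sphere_direction(p):
    r = np.linalg.norm(p, axis=1, keepdims=True)
    return -p / r                          # toward the center

def sphere_distance(p):
    return np.linalg.norm(p, axis=1) - 1.0  # signed distance to the surface

rng = np.random.default_rng(2)
clean = rng.standard_normal((500, 3))
clean /= np.linalg.norm(clean, axis=1, keepdims=True)   # on the unit sphere
noisy = clean + 0.05 * rng.standard_normal((500, 3))
denoised = denoise_step(noisy, sphere_direction, sphere_distance)
```

With an ideal direction and distance, a single step lands every point exactly on the surface; with fixed directions, the same accuracy would require many small iterative shifts, which is the inefficiency the pipeline aims to avoid.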