Learning Representations and Generative Models for 3D Point Clouds

Original article: https://blog.csdn.net/e2297192638/article/details/89299545


Abstract

Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.
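The abstract's fidelity measures rest on matchings between point clouds; the standard building block for such comparisons (and for point-cloud reconstruction losses) is the Chamfer distance. A minimal NumPy sketch of the symmetric Chamfer distance, not the paper's exact implementation:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer (pseudo-)distance between two point clouds.

    a: (N, 3) array, b: (M, 3) array. For each point in one cloud we
    take the squared distance to its nearest neighbor in the other
    cloud, and average both directions.
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Two independent noisy samplings of the unit sphere should be close.
rng = np.random.default_rng(0)
a = rng.normal(size=(512, 3)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(512, 3)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(chamfer_distance(a, b))
```

A cloud compared against itself gives distance 0, and translating one cloud away from the other increases the value, which is the behavior the matching-based fidelity metrics rely on.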

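The best-performing generative model in the abstract is a GMM fitted in the AE's fixed latent space. A minimal scikit-learn sketch of that pipeline, using random vectors as stand-ins for real AE latent codes (the latent dimension and component count here are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in for AE bottleneck codes: in the paper each row would be the
# encoding of one training shape; random data only illustrates the flow.
rng = np.random.default_rng(0)
latent_codes = rng.normal(size=(500, 32))

# Fit a full-covariance GMM in the fixed latent space.
gmm = GaussianMixture(n_components=10, covariance_type="full",
                      random_state=0).fit(latent_codes)

# Sample new latent vectors; passing them through the AE's decoder
# (not shown) would yield novel point clouds.
samples, _ = gmm.sample(n_samples=5)
print(samples.shape)  # (5, 32)
```

Because the GMM is trained on frozen latent codes, generation is decoupled from the AE: the same fitted mixture can be resampled arbitrarily often and only the decoder is needed at sampling time.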
