Author: Ma Xueqi
Source: WeChat official account [Visual Computing Research Center, Shenzhen University]