[Live Stream Preview] The 13th CSIG International Online Seminar on Image and Graphics Technology Will Be Held on November 22

Academic · 2024-11-15 18:06 · Beijing


Appearance modeling and rendering is the core technology for bringing virtual worlds to life, conveying fine-grained realism, and delivering "true-to-life" visual experiences. However, the diversity of real-world materials, and the distinctive appearance details they exhibit under different lighting conditions and viewpoints, make efficient, high-fidelity dynamic multi-view appearance rendering under complex, changing illumination a highly challenging research topic.

To this end, the China Society of Image and Graphics (CSIG) will hold the 13th CSIG International Online Seminar on Image and Graphics Technology at 10:00 AM on Friday, November 22, 2024. This session focuses on the latest research in appearance modeling.

The seminar has invited three outstanding researchers from different fields across industry and academia. They have carried out deep, systematic explorations of core problems in appearance rendering, including appearance detail reconstruction, photorealistic rendering, and lighting adaptation. In their talks, they will share their latest results, to appear at SIGGRAPH Asia 2024, a premier conference in computer graphics. We sincerely invite colleagues from academia and industry to participate and to discuss together the future development and applications of appearance modeling and rendering technology.

Organizers

Hosted by: China Society of Image and Graphics (CSIG)
Organized by: CSIG International Cooperation and Exchange Working Committee

Schedule


Time: 10:00 AM, Friday, November 22, 2024

Live stream: Follow the official CSIG video channel and click "Reserve" to watch when the broadcast begins.


Speakers

Mingming He

Netflix Eyeline Studios

Mingming He is a Senior Research Scientist at Netflix’s Eyeline Studios. She earned her Ph.D. in Computer Science and Engineering from the Hong Kong University of Science and Technology (HKUST) in 2018, after completing her M.S. and B.E. degrees in Computer Science at Zhejiang University in 2014 and 2011, respectively. Her research interests lie in the fields of Computer Graphics and Computer Vision, with a particular focus on computational photography, image and video processing, as well as visual synthesis and manipulation for both human faces and general content in 2D and 3D domains.

Talk title: DifFRelight: Diffusion-Based Facial Performance Relighting

Abstract: We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation. Leveraging a subject-specific dataset containing diverse facial expressions captured under various lighting conditions, including flat-lit and one-light-at-a-time (OLAT) scenarios, we train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs. Our framework includes spatially aligned conditioning of flat-lit captures and random noise, along with integrated lighting information for global control, utilizing prior knowledge from the pre-trained Stable Diffusion model. This model is then applied to dynamic facial performances captured in a consistent flat-lit environment and reconstructed for novel-view synthesis using a scalable dynamic 3D Gaussian Splatting method to maintain quality and consistency in the relit results. In addition, we introduce unified lighting control by integrating a novel area lighting representation with directional lighting, allowing for joint adjustments in light size and direction. We also enable high dynamic range imaging (HDRI) composition using multiple directional lights to produce dynamic sequences under complex lighting conditions. Our evaluations demonstrate the model's efficiency in achieving precise lighting control and generalizing across various facial expressions while preserving detailed features such as skin texture and hair. The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency, advancing photorealism within our framework.
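The abstract describes two conditioning paths into the denoiser: the flat-lit capture enters spatially aligned with the noise, while the lighting parameters act as a global control signal. The toy PyTorch module below sketches only that conditioning pattern; it is not the authors' architecture (which builds on a pre-trained Stable Diffusion backbone), and RelightingDenoiser, light_dim, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of the conditioning pattern described in the abstract:
# the flat-lit image is concatenated channel-wise with the noisy target
# (spatial alignment), and a lighting vector is injected globally.
# Hypothetical names and sizes; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelightingDenoiser(nn.Module):
    def __init__(self, img_channels=3, light_dim=4, hidden=64):
        super().__init__()
        # Sees the noisy target and the flat-lit condition in pixel alignment.
        self.encoder = nn.Conv2d(img_channels * 2, hidden, 3, padding=1)
        # Lighting parameters (e.g., direction + area size) become a
        # per-channel bias shared across all spatial locations.
        self.light_mlp = nn.Sequential(
            nn.Linear(light_dim, hidden), nn.SiLU(), nn.Linear(hidden, hidden)
        )
        self.decoder = nn.Conv2d(hidden, img_channels, 3, padding=1)

    def forward(self, noisy_relit, flat_lit, light_params):
        x = torch.cat([noisy_relit, flat_lit], dim=1)   # spatially aligned
        h = F.silu(self.encoder(x))
        h = h + self.light_mlp(light_params)[:, :, None, None]  # global control
        return self.decoder(h)                          # predicted noise

# One denoising step on a toy batch.
model = RelightingDenoiser()
noise_pred = model(torch.randn(2, 3, 64, 64),   # noisy relit target
                   torch.rand(2, 3, 64, 64),    # flat-lit capture
                   torch.rand(2, 4))            # lighting parameters
print(noise_pred.shape)  # torch.Size([2, 3, 64, 64])
```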


Yixin Zeng

Zhejiang University

Yixin Zeng is a master's student in the CAD&CG Lab at Zhejiang University. Her current research focuses on computational imaging and computer graphics, with a particular emphasis on differentiable geometry and material reconstruction.

Talk title: GS³: Efficient Relighting with Triple Gaussian Splatting

Abstract: Digital shape and appearance modeling is crucial for novel lighting and view synthesis. Over the past five years, methods like Neural Radiance Fields (NeRF) have achieved impressive results but often come with high computational costs, limiting practical application. In this webinar, I will introduce our upcoming paper at SIGGRAPH Asia, where we present a new spatial and angular Gaussian-based representation and a triple splatting process for real-time, high-quality novel lighting-and-view synthesis from multi-view point-lit input images. The effectiveness of our representation is demonstrated on 30 samples with a wide variation in geometry (from solid to fluffy) and appearance (from translucent to anisotropic), as well as using different forms of input data, including rendered images of synthetic/reconstructed objects, photographs captured with a handheld camera and a flash, or from a professional lightstage. Achieving training times between 40 and 70 minutes and rendering speeds of 90 fps on a single GPU, our results compare favorably with state-of-the-art techniques in terms of quality and performance.
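The representation named in the abstract pairs spatial Gaussians (geometry) with angular Gaussians (reflectance). As a rough sketch of the angular half only, the NumPy snippet below shades one point under a point light with a mixture of spherical-Gaussian lobes evaluated at the half vector; the triple-splatting pipeline, shadowing, and optimization are omitted, and every name here is an illustrative assumption rather than the paper's code.

```python
# Minimal sketch (not the paper's code) of shading with a mixture of
# angular (spherical) Gaussian lobes: each point stores lobe axes,
# sharpnesses, and colors; its radiance under a point light is the sum
# of lobes evaluated at the half vector between light and view.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def angular_gaussian_shade(lobe_axes, lobe_sharpness, lobe_colors,
                           light_dir, view_dir):
    """lobe_axes: (K,3) unit vectors; lobe_sharpness: (K,); lobe_colors: (K,3)."""
    h = normalize(light_dir + view_dir)        # half vector
    cos = lobe_axes @ h                        # (K,) alignment of each lobe
    w = np.exp(lobe_sharpness * (cos - 1.0))   # spherical-Gaussian weights
    return w @ lobe_colors                     # (3,) blended RGB

# Three hypothetical lobes: one sharp specular, two broad tinted ones.
axes = normalize(np.array([[0.0, 0.0, 1.0],
                           [0.3, 0.0, 0.95],
                           [-0.3, 0.0, 0.95]]))
rgb = angular_gaussian_shade(
    axes,
    np.array([50.0, 20.0, 20.0]),
    np.array([[1.0, 0.9, 0.8], [0.2, 0.2, 0.3], [0.2, 0.3, 0.2]]),
    normalize(np.array([0.0, 0.0, 1.0])),   # light from above
    normalize(np.array([0.5, 0.0, 1.0])),   # camera at an angle
)
print(rgb)
```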


Di Luo

Nankai University

Di Luo is a master's student in the PCA Lab at Nankai University. His current research focuses on material reconstruction and generation.

Talk title: Correlation-aware Encoder-Decoder with Adapters for SVBRDF Acquisition

Abstract: In recent years, capturing real-world materials has gained significant attention as an alternative to labor-intensive manual material authoring. However, extracting high-fidelity Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) maps from a limited number of captured images remains a challenging task due to inherent ambiguities. While prior methods have aimed to tackle this issue by utilizing generative models with latent space optimization or feature extraction through various encoder-decoder networks, they often struggle with decomposition accuracy, resulting in noticeable inconsistencies when rendering under novel views or lighting conditions. In our upcoming paper, we introduce a correlation-aware encoder-decoder network designed to address this challenge by explicitly modeling the correlation in SVBRDF acquisition. The capability of our full solution is demonstrated on two datasets by comparison with state-of-the-art (SOTA) single/multi-image SVBRDF recovery approaches; our method outperforms them on both synthetic and real data.
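To make the input/output contract of SVBRDF acquisition concrete, the minimal PyTorch encoder-decoder below maps one photograph to the four standard reflectance maps (diffuse albedo, normals, roughness, specular albedo). It deliberately omits the correlation-aware modeling and adapters that are the paper's actual contribution; SVBRDFNet and all layer sizes are hypothetical.

```python
# Minimal sketch of the SVBRDF acquisition contract: photo in, four
# per-pixel reflectance maps out. Hypothetical network; the paper's
# correlation-aware components and adapters are not reproduced here.
import torch
import torch.nn as nn

class SVBRDFNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden * 2, hidden, 4, stride=2, padding=1),
            nn.ReLU(),
            # 10 channels: diffuse RGB (3) + normal XYZ (3, stored in [0,1]
            # as in a normal map) + roughness (1) + specular RGB (3).
            nn.ConvTranspose2d(hidden, 10, 4, stride=2, padding=1),
        )

    def forward(self, photo):
        maps = torch.sigmoid(self.decoder(self.encoder(photo)))
        return maps.split([3, 3, 1, 3], dim=1)  # diffuse, normal, rough, spec

# A 256x256 capture yields four same-resolution maps.
diffuse, normal, rough, spec = SVBRDFNet()(torch.rand(1, 3, 256, 256))
print(diffuse.shape, normal.shape, rough.shape, spec.shape)
```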


Moderator

Meng Zhang

Nanjing University of Science and Technology

Meng Zhang is an Associate Professor at Nanjing University of Science and Technology, China. Before that, she spent three years as a postdoctoral researcher at UCL, UK. She received her Ph.D. from the Department of Computer Science and Technology at Zhejiang University. Her research interests are in computer graphics, with a recent focus on applying deep learning to dynamic modeling, rendering, and editing. Her work mainly concerns hair modeling, garment simulation, and animation.




