DSA Seminar | Efficient Deep Neural Architecture Design and Training




2024 DSA Seminar



TITLE

Efficient Deep Neural Architecture Design and Training



TIME

Aug. 23, 2024, Fri.

09:30 AM – 10:30 AM (Beijing Time)


VENUE

E1-201


ZOOM ID

Zoom Meeting ID: 997 2793 7315
Passcode: dsat


ABSTRACT

The growing complexity of machine learning models has introduced significant challenges in artificial intelligence, necessitating substantial computational resources, memory, and energy. Model compression algorithms have emerged as a critical solution in both academia and industry, forming a common pipeline for developing more efficient models. In this talk, I will present my research in two key areas of model compression: (1) Neural architecture search (NAS): I will introduce my pioneering work in enhancing the search precision and efficiency of NAS by refining the sampling strategy of architectures, leading to more optimal model designs with reduced computational overhead. (2) Knowledge distillation (KD): I will discuss my research on exploring and improving KD in the context of modern models and training strategies, addressing the unique challenges posed by the substantial capacity gap between teacher and student models. Additionally, I will briefly touch upon my investigations into the efficiency of large foundation models, highlighting emerging trends that are driving the future of efficient AI.
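For context on one of the techniques mentioned above, the following is a minimal sketch of a standard temperature-scaled knowledge-distillation loss (in the spirit of Hinton et al.'s original formulation). It is a generic illustration only, not the speaker's specific method; the function name kd_loss and the parameters T and alpha are illustrative choices.

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic knowledge-distillation loss: cross-entropy on ground-truth
    labels plus KL divergence between temperature-softened teacher and
    student distributions (illustrative, not the speaker's method)."""
    # Hard-label term: standard cross-entropy with the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL(teacher || student) at temperature T; the T^2
    # factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * soft

How such a loss behaves when the teacher is much larger than the student is one aspect of the capacity-gap problem the abstract refers to.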


SPEAKER BIO

Tao HUANG

University of Sydney

Tao Huang is a final-year PhD candidate in the School of Computer Science at the University of Sydney. Prior to this, he obtained his Bachelor's degree in Computer Science from Huazhong University of Science and Technology and worked as a Researcher at SenseTime Research. His primary research interests lie in efficient machine learning, particularly knowledge distillation, neural architecture design, and efficient training algorithms. Tao has published over 15 papers in top-tier conferences such as CVPR, NeurIPS, ICLR, and ECCV, including 9 as first author. He developed the OpenMMLab Model Compression Toolbox, MMRazor, which integrates multiple model compression algorithms from his research and has garnered over 1,400 stars on GitHub. In industry, his model compression algorithms have been integrated into SenseTime's Intelligent Cabin products, which have been adopted by more than 30 leading domestic and international partner companies, with designated mass-production projects covering over 13 million vehicles.



Follow us for more information

DSA Thrust



