Theory Center Frontier Lecture Series | Online Lecture: Regularization and Optimal Multiclass Learning

2024-08-20 12:00, Beijing


The 19th session of the Microsoft Research Asia Theory Center Frontier Lecture Series will be held on Wednesday, August 21, from 10:30 a.m. to 12:00 noon.


In this session, we have invited Shang-Hua Teng, Professor of Computer Science at the University of Southern California, to give a talk on "Regularization and Optimal Multiclass Learning." You are welcome to join via Teams!



The Theory Center Frontier Lecture Series is an ongoing lecture series hosted by Microsoft Research Asia. It invites researchers working at the frontier of theoretical research around the world to present their findings, covering theoretical advances in big data, artificial intelligence, and related fields. The lectures are delivered as online livestreams combined with in-person seminars. Through this series, we hope to explore the latest frontiers of theoretical research together with you and to build an active theory research community.


Faculty and students interested in theoretical research are welcome to attend the lectures and join the community (see below for how to join), so that together we can advance theoretical research, strengthen interdisciplinary collaboration, help break through bottlenecks in AI development, and drive substantive progress in computer technology!



How to Attend


You are welcome to join the meeting via Teams and interact with the speaker.

Meeting link:

https://www.microsoft.com/ja-jp/microsoft-teams/join-a-meeting?rtc=1

Meeting ID: 290 139 973 096

Meeting passcode: kFEwZC

Meeting time: Wednesday, August 21, 10:30 a.m. - 12:00 noon


Lecture Information


Shang-Hua Teng

University of Southern California

Professor


Shang-Hua Teng is a University Professor and the Seeley G. Mudd Professor of Computer Science and Mathematics at USC. He is a fellow of SIAM, ACM, and the Alfred P. Sloan Foundation, and has twice won the Gödel Prize: first in 2008 for developing smoothed analysis, and then in 2015 for designing the breakthrough scalable Laplacian solver. Citing him as "one of the most original theoretical computer scientists in the world," the Simons Foundation named him a 2014 Simons Investigator to pursue long-term, curiosity-driven fundamental research. He has also received the 2009 Fulkerson Prize, the 2023 Science & Technology Award for Overseas Chinese from the China Computer Federation, the 2022 ACM SIGecom Test of Time Award (for settling the complexity of computing a Nash equilibrium), the 2021 ACM STOC Test of Time Award (for smoothed analysis), the 2020 Phi Kappa Phi Faculty Recognition Award for his book Scalable Algorithms for Data and Network Analysis, and the 2011 ACM STOC Best Paper Award (for improving maximum-flow minimum-cut algorithms). In addition, he and his collaborators developed the first optimal well-shaped Delaunay mesh generation algorithms for arbitrary three-dimensional domains, settled the Rousseeuw-Hubert regression-depth conjecture in robust statistics, and resolved two long-standing complexity-theoretic questions regarding the Sprague-Grundy theorem in combinatorial game theory. For his industry work with Xerox, NASA, Intel, IBM, Akamai, and Microsoft, he has received fifteen patents in areas including compiler optimization, Internet technology, and social networks. Dedicated to teaching his daughter to speak Chinese as the sole Chinese-speaking parent in an otherwise English-speaking family and environment, he has also become fascinated with children's bilingual learning.


Talk title:

Regularization and Optimal Multiclass Learning


The quintessential learning algorithm of empirical risk minimization (ERM) is known to fail in various settings for which uniform convergence does not characterize learning. Relatedly, the practice of machine learning is rife with considerably richer algorithmic techniques, perhaps the most notable of which is regularization. Nevertheless, no such technique or principle has broken away from the pack to characterize optimal learning in these more general settings. The purpose of this work is to precisely characterize the role of regularization in perhaps the simplest setting for which ERM fails: multiclass learning with arbitrary label sets. Using one-inclusion graphs (OIGs), we exhibit optimal learning algorithms that dovetail with tried-and-true algorithmic principles: Occam’s Razor as embodied by structural risk minimization (SRM), the principle of maximum entropy, and Bayesian inference. We also extract from OIGs a combinatorial sequence we term the Hall complexity, which is the first to characterize a problem’s transductive error rate exactly. Lastly, we introduce a generalization of OIGs and the transductive learning setting to the agnostic case, where we show that optimal orientations of Hamming graphs – judged using nodes’ outdegrees minus a system of node-dependent credits – characterize optimal learners exactly. We demonstrate that an agnostic version of the Hall complexity again characterizes error rates exactly, and exhibit an optimal learner using maximum entropy programs.


Joint work with Julian Asilis, Siddartha Devic, Shaddin Dughmi, and Vatsal Sharan
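

For readers less familiar with the terminology, the following is a generic textbook formulation of the contrast the abstract draws, not the specific regularizers or one-inclusion-graph constructions analyzed in the talk: ERM returns a hypothesis minimizing training error, while regularization, in the structural-risk-minimization spirit of Occam's Razor mentioned above, adds a complexity penalty to the objective.

$$
\hat{h}_{\mathrm{ERM}} \in \arg\min_{h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\big[h(x_i) \neq y_i\big],
\qquad
\hat{h}_{\mathrm{reg}} \in \arg\min_{h \in \mathcal{H}} \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\big[h(x_i) \neq y_i\big] + \lambda\,\mathrm{pen}(h)
$$

Here $\mathrm{pen}(h)$ measures the complexity of the hypothesis $h$ and $\lambda > 0$ trades off empirical fit against complexity; the talk asks when and how such penalized objectives can achieve optimal error rates in multiclass learning with arbitrary label sets.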



Recap of the Previous Lecture


In the previous lecture, we invited Wei Hu, Assistant Professor of Computer Science and Engineering at the University of Michigan, to give a talk on "Toward Demystifying Grokking," exploring the "grokking" phenomenon in neural networks and its connections.


For details of past lectures, please click "Read More" at the end of this article or visit the link below:

https://www.microsoft.com/en-us/research/event/msr-asia-theory-lecture-series/



Join the Theory Research Community


Scan the QR code to join the theory research community, where you can exchange ideas with researchers interested in theoretical research; the latest information about the Microsoft Research Asia Theory Center Frontier Lecture Series will also be shared in the group.


[WeChat group QR code]



You can also subscribe to lecture updates by sending an email with the subject "Subscribe the Lecture Series" to MSRA.TheoryCenter@outlook.com.


About the Microsoft Research Asia Theory Center

The Microsoft Research Asia Theory Center was officially established in December 2021. By building a hub for international academic exchange and collaboration, the center aims to promote the deep integration of theoretical research with big data and artificial intelligence technologies, advance theoretical research, strengthen interdisciplinary collaboration, help break through bottlenecks in AI development, and achieve substantive progress in computer technology. The center currently brings together members from different teams and research backgrounds within Microsoft Research Asia, focusing on fundamental problems in areas including deep learning, reinforcement learning, dynamical-system learning, and data-driven optimization.


To learn more about the Theory Center, please visit:
https://www.microsoft.com/en-us/research/group/msr-asia-theory-center/



