SIME Lecture | Neural-Network Mixed Logit Choice Model: Statistical and Optimality Guarantees

Education | 2024-12-18 22:53 | Shanghai

TIME

December 26, 2024, 10:00-11:30

VENUE

Lecture Hall 102, School of Information Management and Engineering

SPEAKER

Shuang Li (李爽) is an Assistant Professor at the School of Data Science, The Chinese University of Hong Kong, Shenzhen. She received her Ph.D. in Industrial Engineering from the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology in 2019. She was then a postdoctoral fellow working with Dr. Susan Murphy in the Department of Statistics at Harvard University. She has published in top-tier machine learning conferences and journals, including ICML, NeurIPS, and JMLR, and her work has been selected for oral and spotlight presentations at NeurIPS. She was a finalist in the INFORMS Quality, Statistics, and Reliability (QSR) Best Student Paper Competition and the Social Media Analytics Best Student Paper Competition. She has served as an area chair for conferences including NeurIPS, ICLR, and ICML.


TITLE

Neural-Network Mixed Logit Choice Model: Statistical and Optimality Guarantees


ABSTRACT

The mixed logit model, widely used in operations, marketing, and econometrics, represents choice probabilities as mixtures of multinomial logits. This study investigates the effectiveness of representing the mixed logit as a single-hidden-layer neural network, which approximates the mixture distribution with an equally weighted distribution over a finite number of consumer types. Despite its simple architecture, the model's statistical and computational properties have not been thoroughly examined. From a statistical perspective, we demonstrate that the approximation error of the neural network does not suffer from the curse of dimensionality, and that overparameterization does not lead to overfitting when proper regularization is applied. From an optimization perspective, we prove that the noisy stochastic gradient descent algorithm can find the global optimizer of the entropy-regularized non-convex parameter learning problem with a nearly optimal convergence rate. Experiments on synthetic and real datasets validate our theoretical findings, highlighting the potential of overparameterized neural network representations, coupled with efficient training algorithms, to effectively learn choice models with strong performance guarantees.
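
The abstract describes representing a mixed logit as a single-hidden-layer neural network whose hidden units are a finite number of equally weighted consumer types. As a rough illustration of that representation (not the speaker's implementation; all names, dimensions, and the NumPy framing below are assumptions made for this sketch), the following Python snippet computes choice probabilities as an equal-weight average of K multinomial-logit components:

import numpy as np

def softmax(u):
    # Numerically stable softmax over the last axis.
    u = u - u.max(axis=-1, keepdims=True)
    e = np.exp(u)
    return e / e.sum(axis=-1, keepdims=True)

def mixed_logit_probs(X, betas):
    # X:     (n_alternatives, d) attribute matrix for one choice set.
    # betas: (K, d) taste vectors, one per consumer type ("hidden unit";
    #        illustrative parameters, not taken from the talk).
    # Each type k induces multinomial-logit probabilities softmax(X @ betas[k]);
    # averaging them with equal weights 1/K gives the finite-mixture
    # approximation of the mixing distribution described in the abstract.
    utilities = X @ betas.T          # (n_alternatives, K) type-specific utilities
    per_type = softmax(utilities.T)  # (K, n_alternatives) logit probabilities per type
    return per_type.mean(axis=0)     # equal 1/K mixture weights

# Toy usage: 4 alternatives with 3 attributes, K = 8 consumer types.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
betas = rng.normal(size=(8, 3))
p = mixed_logit_probs(X, betas)
print(p, p.sum())  # a probability vector over the choice set, summing to 1

The snippet covers only the forward representation; the guarantees discussed in the talk concern fitting the taste vectors by noisy stochastic gradient descent on an entropy-regularized likelihood, which is not shown here.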


Edited and reviewed by: 王震, 江波
