
bat365在线平台 (School and Institute) 2020 Academic Lecture Series, No. 72: Academician Weinan E, Princeton University

Posted: 2020-06-17


Title: The Mathematical Theory of Machine Learning

Speaker: Academician Weinan E, Princeton University

Time: 09:00-10:00 AM, June 19, 2020

Venue: Tencent Meeting, ID: 212 239 221

Or join the meeting directly via the link:

https://meeting.tencent.com/s/jPIMgNcWrprA

Campus contact: Zhang Ran, zhangran@jlu.edu.cn

Abstract:

The heart of modern machine learning is the approximation of high-dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality. We will discuss representations and approximations that overcome this difficulty, as well as gradient flows that can be used to find the optimal approximation. We will see that at the continuous level, machine learning can be formulated as a series of reasonably nice variational and PDE-like problems. Modern machine learning models/algorithms, such as the random feature and shallow/deep neural network models, can be viewed as special discretizations of such continuous problems. At the theoretical level, we will present a framework suited for analyzing machine learning models and algorithms in high dimensions, and present results that are free of the curse of dimensionality. Finally, we will discuss the fundamental reasons for the success of modern machine learning, as well as the subtleties and mysteries that remain to be understood.
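To make the continuum viewpoint in the abstract concrete, the following is a minimal worked sketch in the notation standard in the Barron-space and mean-field literature; the symbols σ, ρ, R, and the Barron norm are supplied here for illustration and are not taken from the announcement itself.

```latex
% A two-layer model in integral (continuous) form: an expectation of the
% feature a\,\sigma(b^{\top}x + c) over a probability measure \rho.
f(x;\rho) = \int a\,\sigma(b^{\top}x + c)\,\rho(\mathrm{d}a\,\mathrm{d}b\,\mathrm{d}c)

% A shallow neural network with m neurons is its Monte Carlo discretization:
f_m(x) = \frac{1}{m}\sum_{j=1}^{m} a_j\,\sigma(b_j^{\top}x + c_j)

% The Monte Carlo error decays like 1/m regardless of the input dimension d,
% which is the sense in which such models escape the curse of dimensionality
% (for targets f of finite Barron norm \|f\|_{\mathcal{B}}):
\inf_{f_m} \| f - f_m \|_{L^2}^{2} \lesssim \frac{\|f\|_{\mathcal{B}}^{2}}{m}

% At the same continuum level, training corresponds to a gradient flow of the
% risk functional R(\rho) over the measure \rho (a Wasserstein gradient flow):
\partial_t \rho_t = \nabla \cdot \Big( \rho_t \, \nabla \frac{\delta R}{\delta \rho}(\rho_t) \Big)
```

In this reading, the random feature model fixes the samples (b_j, c_j) in advance and optimizes only the coefficients a_j; it is thus a different discretization of the same continuous problem.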

About the Speaker:

Weinan E is a mathematician whose research spans machine learning, computational mathematics, and applied mathematics, together with their applications in mechanics, physics, chemistry, and engineering. He became a professor in the Department of Mathematics and the Program in Applied and Computational Mathematics at Princeton University in 1999, was elected a member of the Chinese Academy of Sciences in 2011, and became a Fellow of the American Mathematical Society in 2012. His honors include the Collatz Prize of the International Council for Industrial and Applied Mathematics (ICIAM), the inaugural Presidential Early Career Award for Scientists and Engineers, the Feng Kang Prize for Scientific Computing, and the Peter Henrici Prize awarded jointly by SIAM and ETH Zürich.