Università della Svizzera italiana - INF

Machine Learning: Capabilities, Limitations and Misconceptions

 
 
 

Host: Prof. Miroslaw Malek

 

Thursday, 07.07.2022, 11:00 - 12:30
USI Campus EST, room D0.02, Sector D - Online on MS Teams
 

Vladimir Cherkassky
Dept. of Electrical & Computer Engineering, University of Minnesota

Abstract: This talk presents an overview of predictive data-analytic methods using the VC-theoretical framework. This framework helps to better understand the technical limitations of modern approaches (such as Deep Learning) and to separate the conceptual, theoretical, and computational aspects of machine learning. On the technical side, I will show that the ‘double descent’ phenomenon recently observed in Deep Learning can be fully explained by classical Vapnik-Chervonenkis (VC) theory. On the methodological side, I will present examples of ‘non-standard’ learning problems, such as Universum Learning and Learning Using Privileged Information (LUPI). These new learning settings are appropriate for many real-life applications that have complex data in addition to the labeled samples used for traditional supervised learning. Overall, this talk will emphasize the practical importance of a scientific and conceptual framework for machine learning, rather than brute-force computational approaches to Big Data.

Biography: Vladimir Cherkassky is Professor of Electrical and Computer Engineering at the University of Minnesota, Twin Cities. He received his MS in Operations Research from the Moscow Aviation Institute in 1976 and his PhD in Electrical and Computer Engineering from the University of Texas at Austin in 1985. He has worked on the theory and applications of statistical learning since the late 1980s and co-authored the monograph Learning from Data (Wiley-Interscience), now in its second edition. He is also the author of a new textbook, Predictive Learning; see www.VCtextbook.com.
He has served on the editorial boards of IEEE Transactions on Neural Networks (TNN), Neural Networks, Natural Computing, and Neural Processing Letters, and was a Guest Editor of the IEEE TNN Special Issue on VC Learning Theory and Its Applications in 1999. Dr. Cherkassky was Director of the NATO Advanced Study Institute (ASI) From Statistics to Neural Networks: Theory and Pattern Recognition Applications, held in France in 1993. He received the IBM Faculty Partnership Award in 1996 and 1997 for his work on learning methods for data mining. In 2007, he became a Fellow of the IEEE for ‘contributions and leadership in statistical learning’. In 2008, he received the A. Richard Newton Breakthrough Award from Microsoft Research for ‘development of new methodologies for predictive learning’.