Prof. Zhi-Hua Zhou
ACM, AAAI, IEEE Fellow
Nanjing University, China
Zhi-Hua Zhou is a Professor of Computer Science and Artificial Intelligence at Nanjing University. His research interests are mainly in machine learning and data mining, with significant contributions to ensemble learning, weakly supervised learning, and multi-label learning. He has authored the books "Ensemble Methods: Foundations and Algorithms" and "Machine Learning", among others, and published more than 200 papers in top-tier journals and conferences. Many of his inventions have been successfully transferred to industry. He founded ACML (the Asian Conference on Machine Learning), and has served as Program Chair for AAAI-19 and IJCAI-21, General Chair for ICDM'16 and SDM'22, and Senior Area Chair for NeurIPS and ICML. He is the series editor of Springer LNAI, serves on the advisory board of AI Magazine, and is an associate editor of AIJ, MLJ, IEEE TPAMI, and ACM TKDD. He is a Fellow of the ACM, AAAI, AAAS, and IEEE, and a recipient of the National Natural Science Award of China and the IEEE Computer Society Edward J. McCluskey Technical Achievement Award.
Speech Title: The Long March of Theoretical Exploration of Boosting
Abstract: AdaBoost is a famous mainstream ensemble learning approach that has greatly influenced machine learning and related areas. A fundamental and fascinating mystery of AdaBoost lies in the phenomenon that it seems resistant to overfitting, which has inspired many theoretical investigations. In this talk, we will briefly introduce the long history of learning-theoretic studies and debates about Boosting, where the recently concluded result discloses the importance of minimizing the margin variance while maximizing the margin mean during the learning process, providing new inspiration for the design of powerful learning algorithms such as ODMs (Optimal margin Distribution Machines).
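The margin-distribution view mentioned above can be made concrete with a small sketch (illustrative only, not the ODM algorithm itself): for a weighted voting ensemble over +/-1 labels, compute each example's margin and then the margin mean and variance that the theory says should be traded off.

```python
# Illustrative sketch: the margin distribution of a weighted voting
# ensemble on +/-1 labels. Function names are ours, for illustration.

def margins(votes, weights, labels):
    """votes[i][j] is the +/-1 prediction of base learner j on example i;
    the margin of example i is y_i times the normalized weighted vote."""
    total = sum(weights)
    out = []
    for row, y in zip(votes, labels):
        score = sum(w * v for w, v in zip(weights, row)) / total
        out.append(y * score)  # margin lies in [-1, 1]
    return out

def margin_stats(ms):
    """Margin mean (to maximize) and variance (to minimize)."""
    n = len(ms)
    mean = sum(ms) / n
    var = sum((m - mean) ** 2 for m in ms) / n
    return mean, var
```

A large margin mean with small margin variance corresponds to the favorable margin distributions identified by the theory.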
Prof. Jie Lu
IEEE Fellow, IFSA Fellow
University of Technology Sydney, Australia
Distinguished Professor Jie Lu is an internationally renowned scientist in computational intelligence, primarily known for her work in fuzzy transfer learning, concept drift detection, decision support systems, and recommender systems. She is an IEEE Fellow, IFSA Fellow, and Australian Laureate Fellow; the Director of the Australian Artificial Intelligence Institute (AAII), which has over 250 staff and students working on 50+ research projects; and the Associate Dean (Research Excellence) in the Faculty of Engineering and Information Technology at the University of Technology Sydney (UTS). She has published six research books and over 400 papers in Artificial Intelligence, IEEE TPAMI, IEEE Transactions on Cybernetics, IEEE TNNLS, IEEE TFS, and other leading journals, as well as in leading conference proceedings such as ICML, NeurIPS, IJCAI, AAAI, and KDD; has won 20 ARC Discovery and Linkage projects and large industry projects; and has supervised 50 PhD students to completion. She serves as Editor-in-Chief of Knowledge-Based Systems (Elsevier) and of the International Journal of Computational Intelligence Systems. She has delivered over 40 keynote speeches at international conferences and chaired 20 international conferences. She has received national and international awards, including the IEEE Transactions on Fuzzy Systems Outstanding Paper Award (2019 and 2022).
Speech Title: Concept Drift Detection, Understanding and Adaptation
Abstract: Concept drift is an unforeseeable change in the underlying streaming data distribution over time. It has been recognized as the root cause of decreased effectiveness in many decision-related applications. A promising solution for coping with persistent environmental change and avoiding system performance degradation is to build a detection and adaptation system. This talk will present a set of methods and algorithms that can effectively and accurately detect, understand, and adapt to concept drift. The main contents include: (1) two novel competence models that indirectly measure variations in data distribution through changes in competence; by detecting changes in competence, differences in data distribution can be accurately detected and quantified, and then further described in unstructured data streams; (2) algorithms for determining a drift region to identify when and where a concept drift takes place in a data stream, together with a local drift degree measurement that can continuously monitor regional density changes; and (3) a fuzzy adaptive regression approach that dynamically recognizes, trains, and stores patterns, assigning the membership degrees of upcoming examples to these patterns to identify which pattern the current examples belong to during the modelling process. The new algorithms and techniques can be applied to data-driven prediction in complex real-world environments.
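As a point of reference for the detection problem the talk addresses, a minimal windowed drift check can be sketched as follows. This is a generic baseline, not the competence-based models of the talk: it flags drift when the mean of the current window moves far from a reference window, measured in reference standard deviations.

```python
# Minimal two-window drift check (an illustrative baseline, not the
# competence models described in the talk).

import statistics

def drift_detected(reference, current, threshold=3.0):
    """Flag drift when the current window's mean deviates from the
    reference window's mean by more than `threshold` reference
    standard deviations."""
    mu_r = statistics.fmean(reference)
    mu_c = statistics.fmean(current)
    sd = statistics.pstdev(reference) or 1e-12  # guard constant streams
    return abs(mu_c - mu_r) / sd > threshold
```

Real drift detectors must also handle distributional changes that leave the mean unchanged, which is one motivation for the competence-based measures presented in the talk.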
Prof. James Tin-Yau Kwok
Hong Kong University of Science and Technology, Hong Kong, China
Prof. Kwok is a Professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. He received his B.Sc. degree in Electrical and Electronic Engineering from the University of Hong Kong and his PhD degree in Computer Science from the Hong Kong University of Science and Technology. Prof. Kwok has served or is serving as an Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems, Neural Networks, Neurocomputing, the Artificial Intelligence Journal, and the International Journal of Data Science and Analytics; as an Editorial Board Member of Machine Learning; and as a Governing Board Member and Vice President for Publications of the Asia Pacific Neural Network Society. He has also served or is serving as Senior Area Chair / Area Chair of major machine learning / AI conferences including NeurIPS, ICML, ICLR, IJCAI, AAAI, and ECML. He received a Most Influential Scholar Award Honorable Mention for "outstanding and vibrant contributions to the field of AAAI/IJCAI between 2009 and 2019". He is an IEEE Fellow.
Speech Title: Automated Machine Learning
Abstract: Automated machine learning (AutoML) aims to automatically construct machine learning solutions from data. In this talk, we discuss several different applications of AutoML, from the learning of entity/relation embeddings in knowledge graphs, to the design of sample selection schedules for robust learning from noisy labels, to the search for data-specific deep networks in neural architecture search. By carefully designing the underlying search space and efficient solvers for the resultant optimization problems, we demonstrate the effectiveness of AutoML in all these scenarios, and show that the searched embeddings, schedules, and neural architectures can significantly outperform their manually designed counterparts.
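The common loop underlying these applications (search space, candidate evaluation, best-so-far selection) can be sketched in its simplest form, random search over a toy configuration space. The space and scorer below are invented for illustration; the talk's methods use far more carefully designed spaces and solvers.

```python
# A toy AutoML loop: sample configurations from a search space and keep
# the best under a validation score. SPACE and score_fn are illustrative.

import random

SPACE = {
    "depth": [2, 4, 8],
    "width": [16, 32, 64],
    "activation": ["relu", "tanh"],
}

def random_search(score_fn, n_trials=20, seed=0):
    """Return the best-scoring sampled configuration and its score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        s = score_fn(cfg)  # in practice: train/validate a model
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score
```

Gradient-based and evolutionary searches replace the random sampler, but evaluate candidates against the same kind of objective.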
Prof. Ivor W Tsang
Prof. Ivor W. Tsang has been Director of the A*STAR Centre for Frontier AI Research (CFAR) since January 2022. Previously, he was a Professor of Artificial Intelligence at the University of Technology Sydney (UTS) and Research Director of the Australian Artificial Intelligence Institute (AAII), the largest AI institute in Australia and a key driver of UTS ranking 10th globally and 1st in Australia for AI research in the latest AI Research Index. Prof. Tsang works at the forefront of big data analytics and artificial intelligence. His research focuses on transfer learning, deep generative models, weakly supervised learning, and big data analytics for data with extremely high dimensionality in features, samples, and labels. His work is recognised internationally for its outstanding contributions to those fields.
Prof. Tsang serves on the editorial boards of the Journal of Machine Learning Research, Machine Learning, the Journal of Artificial Intelligence Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Artificial Intelligence, IEEE Transactions on Big Data, and IEEE Transactions on Emerging Topics in Computational Intelligence. He serves as a Senior Area Chair/Area Chair for NeurIPS, ICML, AAAI, and IJCAI, and on the steering committee of ACML. Recently, Prof. Tsang was elevated to IEEE Fellow for his outstanding contributions to large-scale machine learning and transfer learning.
Speech Title: Robust Rank Aggregation and Its Application
Abstract: In rank aggregation (RA), a collection of preferences from different users is summarized into a total order under the assumption of user homogeneity. Model misspecification arises in RA when this homogeneity assumption fails to hold in complex real-world situations. Existing robust RA methods usually resort to augmenting the ranking model to account for additional noise, so that the collected preferences can be treated as a noisy perturbation of idealized preferences. Since the majority of robust RA methods rely on specific perturbation assumptions, they cannot generalize well to agnostic noise-corrupted preferences in the real world. In this talk, I first summarize the literature on robust RA methods, and then present CoarsenRank, which is robust against model misspecification. Specifically, the properties of CoarsenRank are as follows: (1) CoarsenRank is designed for mild model misspecification, assuming that ideal preferences (consistent with the model assumption) exist in a neighborhood of the actual preferences. (2) CoarsenRank performs regular RA over a neighborhood of the preferences rather than over the original data set directly, and therefore enjoys robustness against model misspecification within that neighborhood. (3) The neighborhood of the data set is defined via its empirical data distribution. (4) CoarsenRank is further instantiated as Coarsened Thurstone, Coarsened Bradley-Terry, and Coarsened Plackett-Luce, using three popular probabilistic ranking models, and tractable optimization strategies are introduced for each instantiation. Finally, I present applications of RA in neuroscience, deep generative models, and contrastive learning.
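For readers new to RA, the basic aggregation task under the homogeneity assumption can be illustrated with the classic Borda count, which sums positional scores across users' rankings. This is a simple baseline for context only; CoarsenRank's coarsened probabilistic models are beyond this sketch.

```python
# Borda count: a classic rank aggregation baseline under the
# homogeneity assumption (illustrative; not CoarsenRank).

def borda_aggregate(rankings):
    """Each ranking lists items from most to least preferred.
    An item at position p in a ranking of n items scores n - 1 - p;
    the aggregate total order sorts items by descending total score
    (ties broken alphabetically)."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - 1 - pos)
    return sorted(scores, key=lambda it: (-scores[it], it))
```

When users are heterogeneous or preferences are noise-corrupted, such position-sum baselines can be badly misled, which is the failure mode the robust RA methods in the talk are designed to address.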