Title: VC dimensions and bounds
Speaker: SHI Qinfeng (School of Computer Science, The University of Adelaide, SA, Australia; Senior Research Associate)
Host: Prof. LI Ying
In statistical learning theory, or sometimes computational learning theory, the VC dimension (for Vapnik–Chervonenkis dimension) is a measure of the capacity of a statistical classification algorithm, defined as the cardinality of the largest set of points that the algorithm can shatter; a set of points is shattered if, for every possible labelling of those points, some classifier in the family realizes that labelling. It is a core concept in Vapnik–Chervonenkis theory, and was originally defined by Vladimir Vapnik and Alexey Chervonenkis.
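The definition of shattering can be checked directly by enumeration. The sketch below (not from the talk; the threshold family is an illustrative choice) brute-forces whether a family of 1-D threshold classifiers realizes all 2^n labellings of a point set:

```python
def shatters(family, points):
    """Brute-force check: does `family` realize all 2^n labellings of `points`?"""
    realized = {tuple(f(x) for x in points) for f in family}
    return len(realized) == 2 ** len(points)

# Illustrative family of 1-D threshold classifiers: x -> (x >= t)
thresholds = [lambda x, t=t: x >= t for t in (-0.5, 0.5, 1.5)]

print(shatters(thresholds, [0.0]))       # True: both labels of a single point are achievable
print(shatters(thresholds, [0.0, 1.0]))  # False: no threshold labels 0 positive and 1 negative
```

Since one point can be shattered but no two points can, the VC dimension of 1-D thresholds is 1.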
Informally, the capacity of a classification model is related to how complicated it can be. For example, consider thresholding a high-degree polynomial: if the polynomial evaluates above zero at a point, that point is classified as positive, otherwise as negative. A high-degree polynomial can be wiggly, so it can fit a given set of training points well; but one can expect it to make errors on other points, precisely because it is so wiggly. Such a polynomial has high capacity. A much simpler alternative is to threshold a linear function. This classifier may not fit the training set well, because it has low capacity. This talk will cover the growth function, VC dimensions, and the VC bound and its proof.
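The growth function mentioned above counts, for each sample size n, the maximum number of distinct labellings (dichotomies) a classifier family can realize on n points. A minimal sketch, again using the illustrative 1-D threshold family (not the talk's example): for thresholds the count grows only linearly, which reflects their finite VC dimension.

```python
def growth(family, points):
    """Number of distinct labellings (dichotomies) `family` realizes on `points`."""
    return len({tuple(f(x) for x in points) for f in family})

# A dense set of 1-D thresholds (illustrative choice)
thresholds = [lambda x, t=t: x >= t for t in [i / 10 for i in range(-10, 50)]]

for n in range(1, 5):
    # On n distinct points, thresholds realize exactly n + 1 labellings
    print(n, growth(thresholds, list(range(n))))
```

A family with infinite VC dimension would instead realize all 2^n labellings for every n; the polynomial-versus-exponential behavior of the growth function is what the VC bound exploits.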
SHI Qinfeng received his BSc and MSc degrees in Computer Science and Technology from Northwestern Polytechnical University in 2003 and 2006, respectively, and his PhD degree in Computer Science from the Australian National University (ANU) in 2009.
SHI Qinfeng's main research interests include machine learning, compressive sensing, and image and video analysis, particularly structured model learning, kernel methods, PAC-Bayes bounds, compressibility analysis, and transfer learning.
Dr. SHI has published more than ten papers in international journals such as IEEE TPAMI and IJCV, and at international conferences such as CVPR and NIPS, in the fields of pattern recognition and computer vision.