SEMINAR ABSTRACT

Nonparametric Learning in High Dimensions

Han Liu, Department of Machine Learning, Carnegie Mellon University

Despite the high dimensionality and complexity of many modern datasets, some problems have hidden structure that makes efficient statistical inference feasible. Examples of such hidden structure include additivity, sparsity, low-dimensional manifold structure, smoothness, copula structure, and conditional independence relations.
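
To make one of these structures concrete, additivity and sparsity can be combined in a sparse additive model, in which the regression function is a sum of smooth univariate components, only a few of which are active. Below is a minimal sketch in illustrative notation (the symbols Y, X_j, f_j, S, and d are not defined in the abstract itself):

% Sparse additive model, illustrative sketch: only the small index set
% S is active, and each component f_j is a smooth one-dimensional
% function, so the d-dimensional problem reduces to |S| one-dimensional
% estimation problems.
\[
  Y = \sum_{j \in S} f_j(X_j) + \epsilon,
  \qquad S \subset \{1, \dots, d\}, \quad |S| \ll d .
\]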

In this talk, I will describe efficient nonparametric learning algorithms that exploit such hidden structures to overcome the curse of dimensionality. These algorithms have strong theoretical guarantees and provide practical methods for many fundamental learning problems, ranging from unsupervised exploratory data analysis to supervised predictive modeling.

I will use two examples, high-dimensional graph estimation and multi-task regression, to illustrate the principles of developing high-dimensional nonparametric methods. The theoretical results are presented in terms of risk consistency, estimation consistency, and model selection consistency. The practical performance of the algorithms is illustrated on examples from genomics and cognitive neuroscience and compared to state-of-the-art parametric competitors.
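
In the graph estimation example, the copula structure mentioned above can be sketched through a Gaussian copula model, in which the observed variables are monotone transforms of a latent Gaussian vector and the graph is read off the latent inverse covariance. The following is a hedged sketch under that assumption; the notation Z, f_j, Sigma, and Omega is illustrative and the talk's exact formulation may differ:

% Gaussian copula sketch (illustrative; assumed, not stated in the
% abstract): observations are coordinate-wise transforms of a latent
% Gaussian vector.
\[
  X = \bigl( f_1(Z_1), \dots, f_d(Z_d) \bigr),
  \qquad Z \sim N(0, \Sigma),
\]
% with each f_j monotone. The conditional independence relations are
% encoded by the zeros of \Omega = \Sigma^{-1}: \Omega_{jk} = 0 exactly
% when X_j and X_k are conditionally independent given the remaining
% variables.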

This work is joint with John Lafferty and Larry Wasserman.
