Nonparametric Bayes Workshop at ICML/UAI/COLT 2008

**FOR THE NIPS 2009 WORKSHOP ON NPBAYES, PLEASE GO TO npbayes-2009.wikidot.com**

One of the major problems driving current research in statistical machine learning is the search for ways to exploit highly structured models that are both expressive and tractable. Nonparametric Bayesian methodology provides significant leverage on this problem. In the nonparametric Bayesian framework, the prior distribution is not a fixed parametric form but rather a general stochastic process: a distribution over a possibly uncountably infinite collection of random variables. This generality makes it possible to work with prior and posterior distributions on objects such as trees of unbounded depth and breadth, graphs, partitions, sets of monotone functions, sets of smooth functions, and sets of general measures.
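
For concreteness, the canonical example of such a prior over general measures is the Dirichlet process, whose stick-breaking construction makes the infinite collection of random variables explicit:

```latex
G = \sum_{k=1}^{\infty} \pi_k \, \delta_{\theta_k},
\qquad
\pi_k = v_k \prod_{j<k} (1 - v_j),
\qquad
v_k \sim \mathrm{Beta}(1, \alpha),
\qquad
\theta_k \sim G_0 .
```

A draw $G \sim \mathrm{DP}(\alpha, G_0)$ is thus an almost-surely discrete measure with infinitely many atoms; closely related constructions yield priors over partitions (the Chinese restaurant process) and over other combinatorial structures.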

Applications of nonparametric Bayesian methods have begun to appear in disciplines such as information retrieval, natural language processing, machine vision, computational biology, cognitive science and signal processing. Because of their flexibility, these methods can also express prior knowledge without restricting it to small parametric classes. Furthermore, research on nonparametric Bayesian models has served to enhance the links between statistical machine learning and a number of other mathematical disciplines, including stochastic processes, algorithms, optimization, combinatorics and knowledge representation.

There have been several previous workshops on nonparametric Bayesian methods at machine learning conferences, including workshops at NIPS in 2003 and 2005 and a workshop at ICML in 2006. This workshop aims to build on the success of these earlier workshops to catalyze further research. There are many problem areas that need additional attention, and this workshop is intended to bring together the growing community of nonparametric Bayesian researchers to explore these issues. Three themes are of particular interest:

  • **Software.** The development of general software packages is not only of obvious practical significance but can also lead to important theoretical advances. Building nonparametric inference software will help us understand where we face fundamental limits and where merely engineering challenges, just as efforts toward general-purpose Bayesian network algorithms led to the discovery of the importance of treewidth. While most current algorithms are highly model-specific, striving for general-purpose methods will help bring about a theoretical framework in which to discuss nonparametric models as a family, and a language in which to describe their various combinations. Last but not least, such software would allow a much larger community to reap the benefits of this research. In return, that field experience would quickly highlight the strengths and weaknesses of current methods and draw attention to the most pressing needs.
  • **Bridging communities.** This field attracts researchers from a broad range of disciplines, from theoretical statisticians and probabilists to people building highly specialized applications. It is important that we communicate our advances and needs effectively so as to better focus our efforts. Theoreticians need to know which methods are used in practice and why, and which common tricks seem to help, while applied researchers will want to hear about the latest models and inference algorithms. It is especially important to bring statisticians and machine learning researchers together: the two communities work on closely related topics but have complementary strengths, often use different terminology, and focus on different application areas.
  • **Efficient inference.** A key focus of software development, and a top concern of potential users, is scalability. Markov chain Monte Carlo (MCMC) methods have proved their versatility, and various advances have greatly improved their speed, but ensuring and assessing convergence remains difficult, and it is still unclear whether these methods will be reliable enough for non-experts to use with confidence. Variational methods, where they have been applied, have brought great speedups and reliable convergence, often at little cost in accuracy, but designing them largely remains an art; moreover, without a better understanding of the loss in accuracy these approximations incur, it is possible that the cost will grow in more complex models. This meeting will help us summarize what works, what doesn't, and why, and discuss how to assess performance and build benchmark datasets. (A minimal example of an MCMC sampler for a nonparametric model appears after this list.)
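
To make the MCMC discussion concrete, below is a minimal sketch of collapsed Gibbs sampling for a Dirichlet process mixture of one-dimensional Gaussians, the classic Chinese-restaurant-process sampler. It assumes a conjugate Normal prior on cluster means and a known observation variance; all parameter values and the toy data are illustrative only, not taken from any workshop material.

```python
import numpy as np

def crp_gibbs(x, alpha=1.0, mu0=0.0, tau2=10.0, sigma2=1.0,
              n_iters=100, seed=0):
    """Collapsed Gibbs sampler for a DP mixture of 1-D Gaussians with
    known variance sigma2 and a conjugate N(mu0, tau2) prior on means."""
    rng = np.random.default_rng(seed)
    n = len(x)
    z = np.zeros(n, dtype=int)        # start with everyone in one cluster
    counts = {0: n}                   # cluster sizes
    sums = {0: float(x.sum())}        # per-cluster sums (sufficient stats)

    def pred_loglik(xi, cnt, s):
        # Posterior predictive N(m_post, v_post + sigma2) for a cluster
        # with `cnt` members summing to `s`; cnt = 0 gives the prior
        # predictive used for opening a new cluster.
        v_post = 1.0 / (1.0 / tau2 + cnt / sigma2)
        m_post = v_post * (mu0 / tau2 + s / sigma2)
        v = v_post + sigma2
        return -0.5 * (np.log(2 * np.pi * v) + (xi - m_post) ** 2 / v)

    for _ in range(n_iters):
        for i in range(n):
            # Remove point i from its current cluster.
            k = z[i]
            counts[k] -= 1
            sums[k] -= x[i]
            if counts[k] == 0:
                del counts[k], sums[k]
            # CRP prior times predictive likelihood, for each existing
            # cluster plus one new "table".
            ks = list(counts)
            logp = [np.log(counts[k]) + pred_loglik(x[i], counts[k], sums[k])
                    for k in ks]
            logp.append(np.log(alpha) + pred_loglik(x[i], 0, 0.0))
            logp = np.asarray(logp)
            p = np.exp(logp - logp.max())
            p /= p.sum()
            choice = rng.choice(len(ks) + 1, p=p)
            k_new = ks[choice] if choice < len(ks) else max(counts, default=-1) + 1
            z[i] = k_new
            counts[k_new] = counts.get(k_new, 0) + 1
            sums[k_new] = sums.get(k_new, 0.0) + x[i]
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-4, 1, 50), rng.normal(4, 1, 50)])
    print("clusters found:", len(set(crp_gibbs(data))))
```

Each sweep reassigns every point to an existing cluster with probability proportional to its size times the posterior predictive likelihood, or to a new cluster with probability proportional to the concentration parameter α. This is exactly the kind of model-specific sampler that general-purpose software, as discussed above, would aim to subsume.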

This workshop is supported by the Gatsby Charitable Foundation and by the PASCAL2 European Network of Excellence, as part of the thematic programme on Leveraging Complex Prior Knowledge for Learning.


Original workshop proposal
