Research Interests

Learning in biological systems, measured by performance as a function of the number of training samples, is strikingly efficient when compared to artificial systems. These observations apply equally to individuals (children learn to recognize tens of thousands of categories in their first eight years) and to species (evolution outpaces our best models of selection and fitness). A prototype problem is computer vision: humans outperform computers despite computer-vision training sets with far more examples than any human being will see in a lifetime. My hypothesis is that the dual principles of re-usability and hierarchy, or what cognitive scientists call compositionality, form the foundation for efficient learning in biological systems. Re-usability and hierarchy are prominent architectural themes of the world around us, and it is logical that they would form the basis for our internal generative representations ("the mind's eye") as well. Using the tools of probability modeling and statistical inference, I study the implications of these ideas for representation and computation in the micro-circuitry of the brain, as well as their applications to artificial vision systems.

Primary Section

Section 32: Applied Mathematical Sciences