The basic goal of our research is to investigate how humans learn and reason, and how intelligent machines might emulate them. In tasks that arise both in childhood (e.g., perceptual learning and language acquisition) and in adulthood (e.g., action understanding and analogical inference), humans routinely succeed in drawing inferences from seemingly inadequate data. The data available are often sparse (very few examples), ambiguous (multiple possible interpretations), and noisy (low signal-to-noise ratio). How can an intelligent system cope?
We approach this basic question as it arises in both perception and higher cognition. Our research is highly interdisciplinary, integrating theories and methods from psychology, statistics, computer vision, machine learning, and computational neuroscience. The unified picture emerging from our work is that the power of human inference rests on two basic principles. First, people exploit generic priors: tacit general assumptions about how the world works, which guide learning and inference from observed data. Second, people can generate and manipulate structured representations: representations organized around distinct roles, such as the multiple joints of a body represented in motion relative to one another in action perception. Our current areas of active study include action understanding, motion perception, object recognition, causal learning, and analogical reasoning. See the list of ongoing research projects on the research page.
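To make the first principle concrete, here is a minimal sketch (in Python) of how a generic prior over hypotheses, combined with Bayesian updating, can guide inference from a handful of examples. The hypotheses, prior values, and observations are purely illustrative and are not drawn from any of our models.

```python
# Illustrative sketch: Bayesian concept learning from sparse examples,
# where a generic prior over hypotheses helps resolve ambiguity that the
# data alone cannot. All hypotheses and numbers are hypothetical.

# Candidate concepts over the numbers 1..100, each paired with an
# illustrative prior probability reflecting a generic bias toward
# simpler, more natural concepts.
hypotheses = {
    "even numbers":   (set(range(2, 101, 2)), 0.5),
    "powers of two":  ({2, 4, 8, 16, 32, 64}, 0.3),
    "multiples of 4": (set(range(4, 101, 4)), 0.2),
}

def posterior(data):
    """Return P(hypothesis | data) under a uniform likelihood over each
    hypothesis's extension (the 'size principle')."""
    scores = {}
    for name, (extension, prior) in hypotheses.items():
        if all(x in extension for x in data):
            # Each consistent hypothesis assigns each example
            # probability 1 / |extension|.
            likelihood = (1.0 / len(extension)) ** len(data)
            scores[name] = prior * likelihood
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Three sparse examples consistent with all three hypotheses; the prior
# and the likelihood together concentrate belief on "powers of two".
print(posterior([4, 8, 16]))
```

The point of the sketch is only that a learner equipped with prior assumptions can commit to a strong conclusion from very few observations, which is the pattern of human inference our work aims to explain.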