Donald J. Geman

Johns Hopkins University


Primary Section: 32, Applied Mathematical Sciences
Membership Type: Member (elected 2015)

Biosketch

Donald Geman is an applied mathematician recognized for his work in stochastic processes, image analysis, machine learning, and computational medicine. He is best known for his work on occupation densities for random functions and on Markov random fields for image processing, and for introducing the Gibbs sampler for Bayesian computation and randomized decision trees for classification. Geman was born in Chicago in 1943, received a BA in English Literature from the University of Illinois in 1965, and earned a PhD in Mathematics from Northwestern University in 1970. He was Distinguished Professor at the University of Massachusetts until 2001, when he joined the Department of Applied Mathematics and Statistics at Johns Hopkins University, where he is a member of the Center for Imaging Science and the Institute for Computational Medicine. He also has a longstanding association with the École Normale Supérieure de Cachan in France. Geman is a Fellow of both the Institute of Mathematical Statistics and the Society for Industrial and Applied Mathematics.

Research Interests

Geman's main interest is designing methodology and algorithms in the computational sciences, specifically involving statistical learning, modeling, and prediction in computer vision and computational medicine. The driving problem in computer vision is to build a machine that produces a rich semantic description of an underlying scene from image data. His group has focused on a "twenty questions" or "active testing" paradigm in which the order of the questions is determined online, during scene parsing, driven by removing as much uncertainty as possible about the overall scene interpretation given the evidence to date. One example is a sequential Bayesian approach in which the prior distribution encodes contextual constraints and evidence is acquired by sequentially and adaptively executing high-level classifiers.

In computational medicine, his group focuses on applying statistical learning to large-scale biomolecular data in cancer systems biology and biomarker discovery; the driving problem is to tailor cancer treatment to an individual's molecular profile. Their work is motivated by the hypothesis that a key obstacle to clinical application is that the decision rules emerging from off-the-shelf machine learning methods are too complex, impeding biological understanding. As a result, his group is attempting to embed phenotype-dependent mechanisms specific to cancer pathogenesis and progression directly into the learning algorithms.
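
As a toy illustration of the greedy uncertainty-reduction step behind the active-testing paradigm described above, the sketch below selects, at each step, the binary test whose answer is expected to remove the most entropy from a posterior over candidate scene interpretations. The interpretation names, the noiseless-answer assumption, and the Python framing are illustrative assumptions, not Geman's actual formulation or code.

```python
import math
from typing import Dict

Posterior = Dict[str, float]   # interpretation -> probability
Test = Dict[str, bool]         # interpretation -> answer the test would give

def entropy(posterior: Posterior) -> float:
    """Shannon entropy (in bits) of a distribution over interpretations."""
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

def expected_entropy_after(posterior: Posterior, test: Test) -> float:
    """Expected posterior entropy once the (assumed noiseless) answer to `test` is seen."""
    total = 0.0
    for answer in (True, False):
        # Probability mass of interpretations consistent with this answer.
        mass = sum(p for h, p in posterior.items() if test[h] == answer)
        if mass == 0.0:
            continue
        conditional = {h: p / mass for h, p in posterior.items() if test[h] == answer}
        total += mass * entropy(conditional)
    return total

def select_next_test(posterior: Posterior, tests: Dict[str, Test]) -> str:
    """Greedy "twenty questions" step: pick the test with the largest expected information gain."""
    h0 = entropy(posterior)
    return max(tests, key=lambda name: h0 - expected_entropy_after(posterior, tests[name]))

if __name__ == "__main__":
    # Four equally likely interpretations and two hypothetical high-level tests.
    posterior = {"car": 0.25, "truck": 0.25, "bike": 0.25, "pedestrian": 0.25}
    tests = {
        "has_wheels": {"car": True, "truck": True, "bike": True, "pedestrian": False},
        "is_large":   {"car": True, "truck": True, "bike": False, "pedestrian": False},
    }
    print(select_next_test(posterior, tests))  # -> "is_large" (splits the probability mass evenly)
```

In the sequential Bayesian setting described above, one would update the posterior after each answer and repeat the selection until the remaining uncertainty about the scene interpretation is small enough.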
