Multiscale Bayesian segmentation using a trainable context model

IEEE Trans Image Process. 2001;10(4):511-25. doi: 10.1109/83.913586.

Abstract

Multiscale Bayesian approaches have attracted increasing attention for use in image segmentation. Generally, these methods tend to offer improved segmentation accuracy with reduced computational burden. Existing Bayesian segmentation methods use simple models of context designed to encourage large uniformly classified regions. Consequently, these context models have a limited ability to capture the complex contextual dependencies that are important in applications such as document segmentation. We propose a multiscale Bayesian segmentation algorithm that can effectively model complex aspects of both local and global contextual behavior. The model uses a Markov chain in scale to model the class labels that form the segmentation, but augments this Markov chain structure by incorporating tree-based classifiers to model the transition probabilities between adjacent scales. The tree-based classifier models complex transition rules with only a moderate number of parameters. One advantage of our segmentation algorithm is that it can be trained for specific segmentation applications by simply providing examples of images with their corresponding accurate segmentations. This makes the method flexible by allowing both the context and the image models to be adapted without modification of the basic algorithm. We illustrate the value of our approach with examples from document segmentation in which text, picture, and background classes must be separated.
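
The sketch below is a minimal illustration, not the authors' implementation, of the core idea described above: class labels at a finer scale are predicted from a context of coarse-scale labels by a trainable tree classifier, which supplies the scale-to-scale transition model. The choice of scikit-learn's DecisionTreeClassifier, the 4-neighbour context features, the dyadic (2x) scale relationship, and the three illustrative classes are all assumptions made for this example.

```python
"""Hedged sketch of coarse-to-fine label propagation with a trainable
tree-based transition model.  Feature design and library choices are
illustrative assumptions, not the paper's algorithm."""
import numpy as np
from sklearn.tree import DecisionTreeClassifier

N_CLASSES = 3  # e.g. text, picture, background (illustrative)


def context_features(coarse_labels):
    """Build a simple contextual feature vector for each coarse pixel:
    its own label plus the labels of its 4 neighbours (edge-padded)."""
    padded = np.pad(coarse_labels, 1, mode="edge")
    feats = np.stack([
        coarse_labels,        # parent label
        padded[:-2, 1:-1],    # north neighbour
        padded[2:, 1:-1],     # south neighbour
        padded[1:-1, :-2],    # west neighbour
        padded[1:-1, 2:],     # east neighbour
    ], axis=-1)
    return feats  # shape (H, W, 5)


def _upsample_features(coarse_labels):
    """Replicate each coarse pixel's context to its 2x2 block of children."""
    feats = context_features(coarse_labels)
    feats = np.repeat(np.repeat(feats, 2, axis=0), 2, axis=1)
    return feats.reshape(-1, 5)


def train_transition_model(coarse_labels, fine_labels, max_depth=8):
    """Fit a tree classifier mapping coarse-scale context -> fine-scale label.
    fine_labels is assumed to be exactly twice the resolution of coarse_labels."""
    X = _upsample_features(coarse_labels)
    y = fine_labels.reshape(-1)
    return DecisionTreeClassifier(max_depth=max_depth).fit(X, y)


def propagate(coarse_labels, model):
    """Assign fine-scale labels from the learned transition model."""
    X = _upsample_features(coarse_labels)
    h, w = coarse_labels.shape
    return model.predict(X).reshape(2 * h, 2 * w)
```

In the paper's framework the tree classifier supplies transition probabilities within a Bayesian estimation that also uses an image (data) model at each scale; the sketch above shows only the trainable context component, propagated in a single coarse-to-fine pass.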