J Vis. 2009 Nov 20;9(12):15.1-27. doi: 10.1167/9.12.15.

Static and space-time visual saliency detection by self-resemblance.

Author information

1. Electrical Engineering Department, University of California, Santa Cruz, Santa Cruz, CA, USA. rokaf@soe.ucsc.edu

Abstract

We present a novel unified framework for both static and space-time saliency detection. Our method is a bottom-up approach that computes so-called local regression kernels (i.e., local descriptors) from the given image (or video), which measure the likeness of a pixel (or voxel) to its surroundings. Visual saliency is then computed using this "self-resemblance" measure. The framework yields a saliency map in which each pixel (or voxel) indicates the statistical likelihood of saliency of a feature matrix given its surrounding feature matrices. As a similarity measure, matrix cosine similarity (a generalization of cosine similarity) is employed. State-of-the-art performance is demonstrated on commonly used human eye-fixation data (static scenes (N. Bruce & J. Tsotsos, 2006) and dynamic scenes (L. Itti & P. Baldi, 2006)) and on some psychological patterns.
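The core computation described above can be sketched briefly. Matrix cosine similarity is the Frobenius inner product of two feature matrices normalized by their Frobenius norms, and a location is salient when its feature matrix resembles few of its surrounding feature matrices. The sketch below is a simplified illustration, not the authors' implementation: the feature matrices, the bandwidth parameter `sigma`, and the use of all features as the "surroundings" (rather than a local neighborhood) are assumptions made for brevity.

```python
import numpy as np

def matrix_cosine_similarity(F_i, F_j):
    """Frobenius inner product of two feature matrices,
    normalized by their Frobenius norms (generalizes cosine
    similarity from vectors to matrices)."""
    num = np.sum(F_i * F_j)
    den = np.linalg.norm(F_i) * np.linalg.norm(F_j)
    return num / den

def self_resemblance_saliency(features, sigma=0.07):
    """Toy self-resemblance saliency over a list of feature
    matrices (e.g., local descriptors around each pixel).
    `sigma` is a hypothetical bandwidth, chosen for illustration.
    A feature matrix that resembles few of the others (low
    self-resemblance) receives high saliency."""
    n = len(features)
    saliency = np.empty(n)
    for i in range(n):
        total = 0.0
        for j in range(n):
            rho = matrix_cosine_similarity(features[i], features[j])
            # Similar pairs (rho near 1) contribute ~1; dissimilar
            # pairs contribute ~0, so `total` counts look-alikes.
            total += np.exp((rho - 1.0) / sigma**2)
        saliency[i] = 1.0 / total
    return saliency

# Usage: two identical flat patches plus one distinct patch.
# The odd one out resembles fewer neighbors, so it scores higher.
feats = [np.ones((3, 3)), np.ones((3, 3)),
         np.eye(3)]
scores = self_resemblance_saliency(feats)
```

In this toy example the identity-matrix feature is the outlier, so its saliency exceeds that of the two identical flat patches.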

PMID: 20053106
DOI: 10.1167/9.12.15
[Indexed for MEDLINE]
