We test the proposition that the appearance and detection of visual contours are based on an increase in the perceived contrast of contour elements. First, we show that contour detection remains possible even in the presence of very high levels of contrast variability. Second, we show that inclusion in a contour does not cause Gabor patches to appear higher in contrast than patches outside the contour. These results suggest that, contrary to a number of current models, contrast, or its assumed physiological correlate (the mean firing rate of early cortical neurons), is not the determining information for identifying a contour.
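For readers unfamiliar with the stimuli, a Gabor patch is a sinusoidal luminance grating windowed by a Gaussian envelope, and its contrast is the depth of the luminance modulation. The sketch below, written in NumPy, is purely illustrative (the function name and parameter values are assumptions, not the authors' stimulus code); it shows how the `contrast` parameter scales the modulation around the mean luminance.

```python
import numpy as np

def gabor_patch(size=128, sigma=0.15, freq=6.0, phase=0.0, contrast=0.5):
    """Luminance image of a Gabor patch, values in (0, 1).

    `contrast` scales the modulation depth around the mean
    luminance of 0.5 (approximately the Michelson contrast).
    Spatial units are normalized so the image spans [-0.5, 0.5].
    """
    coords = np.linspace(-0.5, 0.5, size)
    x, y = np.meshgrid(coords, coords)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.cos(2 * np.pi * freq * x + phase)      # sinusoidal grating
    return 0.5 * (1.0 + contrast * envelope * carrier)  # mean luminance 0.5

patch = gabor_patch(contrast=0.8)
```

Varying `contrast` while holding the other parameters fixed is the standard way to manipulate the salience of individual contour elements in displays of this kind.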