CINET: A Brain-Inspired Deep Learning Context-Integrating Neural Network Model for Resolving Ambiguous Stimuli

Brain Sci. 2020 Jan 24;10(2):64. doi: 10.3390/brainsci10020064.

Abstract

The brain uses contextual information to resolve ambiguous stimuli into a unique interpretation. This paper introduces a deep learning neural network classification model that emulates this ability by integrating weighted bidirectional context into the classification process. The model, referred to as the CINET, is implemented using a convolutional neural network (CNN), which is shown to be well suited to combining target and context stimuli and to extracting coupled target-context features. The CINET parameters can be adjusted to simulate congruent and incongruent context environments and to vary the relationship between target and context stimuli. The formulation of the CINET is quite general; consequently, it is restricted neither to stimuli in any particular sensory modality nor to stimuli of a particular dimensionality. A broad range of experiments is designed to demonstrate the effectiveness of the CINET in resolving ambiguous visual stimuli and in improving the classification of non-ambiguous visual stimuli in various contextual environments. The performance gains obtained by including context can be exploited to design robust brain-inspired machine learning algorithms. Notably, the CINET draws on two sources of inspiration: the brain's ability to integrate contextual information and the CNN, which is itself inspired by the hierarchical processing of information in the visual cortex.
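
To make the idea of weighted context integration concrete, the following is a minimal sketch, not the published CINET architecture: a small CNN that fuses a target image with a context image stacked along the channel dimension, where a scalar weight controls how strongly the context influences the classification. All names here (ContextIntegratingCNN, context_weight) are illustrative assumptions rather than identifiers from the paper.

```python
import torch
import torch.nn as nn

class ContextIntegratingCNN(nn.Module):
    """Sketch of a context-integrating classifier (illustrative, not the paper's CINET)."""

    def __init__(self, num_classes: int = 10, context_weight: float = 0.5):
        super().__init__()
        self.context_weight = context_weight
        # A shared convolutional stack extracts coupled target-context features
        # from the two stimuli stacked along the channel dimension.
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, target: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # target, context: (batch, 1, H, W) grayscale stimuli.
        # The context channel is scaled before fusion; a weight of 0 reduces the
        # model to a plain target-only classifier.
        x = torch.cat([target, self.context_weight * context], dim=1)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: an ambiguous target paired with a (congruent or incongruent) context image.
model = ContextIntegratingCNN(num_classes=10, context_weight=0.8)
target = torch.randn(8, 1, 28, 28)
context = torch.randn(8, 1, 28, 28)
logits = model(target, context)   # shape: (8, 10)
```

Varying the assumed context_weight is one simple way to simulate how strongly a congruent or incongruent context is allowed to bias the classification of an ambiguous target.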

Keywords: Context effect; ambiguous stimuli; convolutional neural networks; deep learning.