Reconstructing faces from fMRI patterns using deep generative neural networks

Commun Biol. 2019 May 21;2:193. doi: 10.1038/s42003-019-0438-y. eCollection 2019.

Abstract

Although distinct categories are reliably decoded from fMRI brain responses, it has proved more difficult to distinguish visually similar inputs, such as different faces. Here, we apply a recently developed deep learning system to reconstruct face images from human fMRI. We trained a variational auto-encoder (VAE) neural network using an unsupervised Generative Adversarial Network (GAN) procedure over a large dataset of celebrity faces. The auto-encoder latent space provides a meaningful, topologically organized 1024-dimensional description of each image. We then presented several thousand faces to human subjects and learned a simple linear mapping between the multi-voxel fMRI activation patterns and the 1024 latent dimensions. Finally, we applied this mapping to novel test images, translating fMRI patterns into VAE latent codes, and codes into face reconstructions. The system performed not only robust pairwise decoding (>95% correct) but also accurate gender classification, and even decoded which face was imagined rather than seen.
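The decoding pipeline described above reduces to a regularized linear regression from voxel patterns to the 1024-dimensional latent space, followed by reconstruction through the pretrained VAE-GAN decoder. The sketch below is a minimal illustration of that idea, not the authors' published code: array shapes, the `vae_decoder` callable, and the use of ridge regression are illustrative assumptions.

```python
# Minimal sketch of the fMRI-to-face decoding pipeline described in the abstract.
# Assumptions (not from the paper): ridge regression as the linear mapping,
# placeholder array shapes, and a hypothetical `vae_decoder` for reconstruction.
import numpy as np
from sklearn.linear_model import Ridge

LATENT_DIM = 1024

# Training data: fMRI patterns for several thousand face presentations,
# paired with the latent codes the pretrained VAE-GAN assigns to the same images.
X_train = np.random.randn(3000, 5000)         # (n_trials, n_voxels)  -- placeholder
Z_train = np.random.randn(3000, LATENT_DIM)   # (n_trials, 1024)      -- placeholder

# Simple regularized linear mapping from voxel space to latent space.
mapping = Ridge(alpha=1.0)
mapping.fit(X_train, Z_train)

# Translate held-out fMRI patterns into predicted latent codes ...
X_test = np.random.randn(20, 5000)            # novel test trials     -- placeholder
Z_pred = mapping.predict(X_test)              # (20, 1024)

# ... then into face reconstructions via the pretrained decoder.
# `vae_decoder` is a hypothetical callable mapping latent codes to images.
# reconstructions = vae_decoder(Z_pred)

# Pairwise decoding check: a predicted code should match the latent code of its
# own stimulus better than that of any other test stimulus.
def pairwise_accuracy(z_pred, z_true):
    n, correct, total = len(z_true), 0, 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            own = np.corrcoef(z_pred[i], z_true[i])[0, 1]
            other = np.corrcoef(z_pred[i], z_true[j])[0, 1]
            correct += own > other
            total += 1
    return correct / total
```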

Keywords: Machine learning; Perception.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Algorithms
  • Brain / diagnostic imaging*
  • Databases, Factual
  • Deep Learning*
  • Female
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Magnetic Resonance Imaging*
  • Male
  • Models, Theoretical
  • Neural Networks, Computer*
  • Pattern Recognition, Automated / methods
  • Pattern Recognition, Visual*
  • Principal Component Analysis
  • Probability
  • Young Adult