Kamitani Lab investigates how the brain represents and processes information, developing techniques to decode and visualize mental contents from brain activity. Our research bridges neuroscience and AI, spanning four interconnected areas.
Brain Decoding
Brain decoding uses machine learning to read out the contents of perception, imagery, and dreams from brain activity measured with fMRI. Kamitani Lab has been a pioneer of this field since the mid-2000s, progressively expanding what can be decoded from the brain.
In 2005, Kamitani and Tong demonstrated that fMRI signals from visual cortex could be decoded to identify the orientation of visual gratings, including subjectively perceived orientations during binocular rivalry, establishing that fine-grained perceptual information is accessible from population-level brain activity (Nature Neuroscience). This was followed by the first visual image reconstruction from brain activity: Miyawaki et al. (2008) showed that arbitrary visual images could be reconstructed by combining the outputs of local image decoders trained on multi-scale spatial patterns (Neuron).
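As a schematic of this modular decoding idea, the sketch below trains one linear decoder per local image patch at several scales and averages the overlapping predictions into a full image. The data, patch scales, and ridge decoders are illustrative assumptions, not the published pipeline.

```python
# Hedged sketch of multi-scale modular decoding (Miyawaki et al., 2008 style):
# local decoders each predict the mean contrast of a small patch, and
# overlapping predictions are averaged into a full reconstruction.
# All data are synthetic; shapes and hyperparameters are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_vox, size = 200, 400, 10           # trials, voxels, image side
images = rng.integers(0, 2, (n_train, size, size)).astype(float)
W = rng.normal(size=(size * size, n_vox))     # toy image-to-voxel mapping
fmri = images.reshape(n_train, -1) @ W + rng.normal(scale=0.5, size=(n_train, n_vox))

scales = [(1, 1), (1, 2), (2, 1), (2, 2)]     # local patch shapes

# Train one linear decoder per patch position and scale: voxels -> patch contrast.
decoders = {}
for h, w in scales:
    for i in range(size - h + 1):
        for j in range(size - w + 1):
            target = images[:, i:i + h, j:j + w].mean(axis=(1, 2))
            decoders[(h, w, i, j)] = Ridge(alpha=10.0).fit(fmri, target)

def reconstruct(activity):
    """Combine all local predictions by averaging over overlapping patches."""
    acc = np.zeros((size, size))
    cnt = np.zeros((size, size))
    for (h, w, i, j), dec in decoders.items():
        acc[i:i + h, j:j + w] += dec.predict(activity[None])[0]
        cnt[i:i + h, j:j + w] += 1
    return acc / cnt

print(reconstruct(fmri[0]).round(2))
```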
A landmark study by Horikawa et al. (2013) extended brain decoding to the contents of dreams, showing that visual imagery experienced during sleep could be decoded from brain activity measured during the transition to sleep (Science). This demonstrated that brain decoding can access private mental experiences that are otherwise only available through subjective report.
The integration of deep neural networks (DNNs) brought a major advance. Shen et al. (2019) developed deep image reconstruction, using hierarchical DNN features as an intermediate representation to generate high-quality reconstructions of both perceived and imagined images from brain activity (PLoS Computational Biology). This approach was further extended to reconstruct visual illusory experiences from brain activity (Cheng et al., 2023, Science Advances), and to reconstruct natural sounds from auditory brain activity (Park et al., 2025, PLOS Biology).
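The core idea can be caricatured in a few lines: decode the feature vector an image would evoke in a feature extractor, then optimize an image until its own features match the decoded ones. The sketch below uses a linear toy extractor and plain gradient descent; the published method uses hierarchical DNN features and a deep generator prior.

```python
# Toy illustration of the deep image reconstruction idea (Shen et al., 2019):
# an image is optimized by gradient descent so that its features match
# features decoded from brain activity. The linear "extractor" Phi and the
# decoded features are synthetic stand-ins for the real DNN pipeline.
import numpy as np

rng = np.random.default_rng(1)
size, n_feat = 8, 40
Phi = rng.normal(size=(size * size, n_feat))   # toy feature extractor

true_img = rng.random((size, size))
decoded_feat = true_img.ravel() @ Phi          # stands in for brain-decoded features

img = np.zeros(size * size)                    # start from a blank image
lr = 1e-3
for _ in range(5000):
    err = img @ Phi - decoded_feat             # feature mismatch
    img -= lr * (Phi @ err)                    # gradient step on 0.5 * ||err||^2

print("final feature error:", float(np.linalg.norm(img @ Phi - decoded_feat)))
```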
Our current framework treats brain decoding as a translation–generation pipeline: a “translator” maps brain activity into the latent representation space of a DNN or generative model, and a “generator” produces images, sounds, or other outputs from these latent representations. This framework is reviewed in Kamitani, Tanaka & Shirakawa (2025), Annual Review of Vision Science.
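A minimal sketch of this pipeline, with a linear ridge translator and a toy generator standing in for a DNN or generative model, might look as follows (all shapes and data are assumptions):

```python
# Sketch of the translation-generation pipeline: a linear "translator" maps
# brain activity into a latent space, and a fixed "generator" maps latents
# to images. The toy generator and synthetic data are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_trials, n_vox, n_latent, size = 300, 800, 32, 16

def generator(z):
    """Toy deterministic generator: latent vector -> grayscale image."""
    axis = np.linspace(0, np.pi, size)
    return np.outer(np.sin(axis * (1 + z[:8].sum())), np.cos(axis * (1 + z[8:16].sum())))

# Simulate paired (brain activity, latent) training data.
Z = rng.normal(size=(n_trials, n_latent))
X = Z @ rng.normal(size=(n_latent, n_vox)) + 0.3 * rng.normal(size=(n_trials, n_vox))

translator = Ridge(alpha=1.0).fit(X, Z)        # translator: brain -> latent

z_hat = translator.predict(X[:1])[0]           # decode one trial
recon = generator(z_hat)                       # generate from the latent
print(recon.shape)                             # (16, 16)
```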
Key Publications
- Kamitani & Tong (2005) Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8(5), 679–685
- Miyawaki et al. (2008) Visual image reconstruction from human brain activity. Neuron, 60(5), 915–929
- Horikawa et al. (2013) Neural decoding of visual imagery during sleep. Science, 340(6132), 639–642
- Shen et al. (2019) Deep image reconstruction from human brain activity. PLoS Computational Biology, 15(1), e1006633
- Cheng et al. (2023) Reconstructing visual illusory experiences from human brain activity. Science Advances, 9, eadj3906
- Park et al. (2025) Natural sounds can be reconstructed from human neuroimaging data. PLOS Biology, 23(7), e3003293
- Kamitani, Tanaka & Shirakawa (2025) Visual image reconstruction from brain activity via latent representation. Annual Review of Vision Science, 11, 611–634
NeuroAI
NeuroAI is an emerging interdisciplinary field that investigates the relationship between biological and artificial neural systems. A central finding is that deep neural networks (DNNs), trained purely on engineering objectives, develop internal representations that align with brain activity patterns, even though they were never designed to model the brain.
Kamitani Lab has contributed to this field through a series of studies connecting DNN representations to human brain activity. Horikawa and Kamitani (2017) demonstrated that hierarchical visual features of a DNN can be decoded from human brain activity, and that the decodable features shift from lower to higher visual areas as DNN layer depth increases, establishing a systematic correspondence between the brain’s visual hierarchy and DNN layer structure (Nature Communications). A follow-up study showed that the set of DNN features decodable from the brain is consistent across individuals, suggesting that general-purpose DNNs capture universal aspects of human visual representation (Horikawa et al., 2019, Scientific Data).
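The logic of such a layer-to-area analysis can be sketched as follows: train a linear decoder from each area's voxels to each DNN layer's features, score decodability as held-out prediction correlation, and see which layer each area decodes best. The synthetic data below are constructed so that each area carries one layer's features; everything here is an illustrative assumption.

```python
# Hedged sketch of hierarchical feature decoding (Horikawa & Kamitani, 2017
# style): per visual area and DNN layer, a ridge decoder predicts layer
# features from voxels, and decodability is the mean feature-wise correlation
# on held-out trials. Data are synthetic and constructed to be hierarchical.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_train, n_test, n_vox, n_feat, n_layers = 150, 50, 200, 30, 4
areas = ["V1", "V2", "V4", "IT"]

feats = [rng.normal(size=(n_train + n_test, n_feat)) for _ in range(n_layers)]
voxels = {a: feats[k] @ rng.normal(size=(n_feat, n_vox))
             + rng.normal(size=(n_train + n_test, n_vox))
          for k, a in enumerate(areas)}

def decodability(X, Y):
    """Mean correlation between predicted and true features on held-out trials."""
    dec = Ridge(alpha=10.0).fit(X[:n_train], Y[:n_train])
    pred, true = dec.predict(X[n_train:]), Y[n_train:]
    return float(np.mean([np.corrcoef(pred[:, j], true[:, j])[0, 1]
                          for j in range(n_feat)]))

for a in areas:
    scores = [decodability(voxels[a], feats[k]) for k in range(n_layers)]
    print(a, "best layer:", int(np.argmax(scores)) + 1)
```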
Nonaka et al. (2021) introduced the brain hierarchy score, a metric for evaluating how well a DNN’s layer structure corresponds to the brain’s hierarchical organization. This work revealed that higher task performance does not always mean better alignment with the brain’s hierarchy, a finding that challenges the assumption that better engineering performance equals greater biological plausibility (iScience).
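In spirit, the score asks how monotonically the best-decoding layer increases along the brain's hierarchy. The toy below reduces that to a Spearman correlation between area rank and top layer; the published metric is more elaborate, so treat this as a simplified stand-in.

```python
# Toy stand-in for the brain hierarchy score idea (Nonaka et al., 2021):
# correlate each area's hierarchical rank with the DNN layer that decodes
# it best. A value near 1 means a monotonic, brain-like layer ordering.
import numpy as np
from scipy.stats import spearmanr

area_rank = np.array([1, 2, 3, 4])   # V1 < V2 < V4 < IT (toy ordering)
top_layer = np.array([1, 2, 2, 4])   # best-decoding layer per area (toy values)

rho, _ = spearmanr(area_rank, top_layer)
print(round(float(rho), 2))
```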
More recently, the lab has developed neural code conversion technology that translates brain representations across different individuals and measurement sites without requiring shared stimuli (Wang et al., 2025, Nature Computational Science), enabling broader applicability of decoding models. Shirakawa et al. (2025) critically examined current reconstruction methods, showing that some high-profile results may reflect “spurious reconstruction”: category-level classification combined with generative-model hallucination rather than genuine visual reconstruction (Neural Networks).
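One simplified way to convert without shared stimuli is to route through a common feature space: each subject's own stimuli define a decoder (source voxels to features) and an encoder (features to target voxels), and chaining the two converts activity across subjects. The sketch below assumes linear maps and synthetic data; the published method is more sophisticated.

```python
# Hedged sketch of feature-mediated neural code conversion (in the spirit of
# Wang et al., 2025): no shared stimuli are needed because both subjects are
# linked to a common DNN feature space. All data and maps are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_feat, n_src, n_tgt = 50, 300, 400

# Each subject viewed *different* stimuli, summarized by DNN features.
F_src = rng.normal(size=(200, n_feat))
F_tgt = rng.normal(size=(250, n_feat))
X_src = F_src @ rng.normal(size=(n_feat, n_src)) + 0.3 * rng.normal(size=(200, n_src))
X_tgt = F_tgt @ rng.normal(size=(n_feat, n_tgt)) + 0.3 * rng.normal(size=(250, n_tgt))

decoder_src = Ridge(alpha=1.0).fit(X_src, F_src)   # source voxels -> features
encoder_tgt = Ridge(alpha=1.0).fit(F_tgt, X_tgt)   # features -> target voxels

def convert(x_src):
    """Map source-subject activity into the target subject's voxel space."""
    return encoder_tgt.predict(decoder_src.predict(x_src))

print(convert(X_src[:5]).shape)                    # (5, 400)
```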
At the theoretical level, Onoo et al. (2025) proposed the concept of readout representation, which redefines neural codes not by the causal origin of neural activity but by the information that can be recovered (read out) from latent representations β providing a unified framework for understanding representation in both brains and AI systems.
These themes are discussed in Kamitani’s essay “Is the Brain Similar to AI? The Challenge of NeuroAI” (2026), which traces the intellectual arc from AI’s “bitter lesson” to the emerging science of latent representations shared between brains and machines.
Key Publications
- Horikawa & Kamitani (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications, 8, 15037
- Horikawa et al. (2019) Characterization of deep neural network features by decodability from human brain activity. Scientific Data, 6, 190012
- Nonaka et al. (2021) Brain hierarchy score: Which deep neural networks are hierarchically brain-like? iScience, 24(9), 103013
- Macpherson et al. (2021) Natural and artificial intelligence: A brief introduction to the interplay between AI and neuroscience research. Neural Networks, 144, 603–613
- Shirakawa et al. (2025) Spurious reconstruction from brain activity. Neural Networks, 190, 107515
- Wang et al. (2025) Inter-individual and inter-site neural code conversion without shared stimuli. Nature Computational Science, 5(7), 534–546
- Onoo et al. (2025) Readout representation: Redefining neural codes by input recovery. arXiv:2510.12228
BMI
Brain-machine interfaces (BMIs) translate brain signals into control commands for external devices, aiming to restore motor and communication functions for patients with neurological conditions. Kamitani Lab’s BMI research grew directly from the brain decoding methods developed for basic neuroscience.
In 2006, in collaboration with Honda Research Institute, the fMRI decoding technique from the 2005 Nature Neuroscience study was applied to demonstrate that hand shapes (rock, paper, scissors) could be decoded from brain activity in real time and used to control a robot hand, showing that brain decoding could serve as the basis for a brain-machine interface.
Subsequently, the lab began a collaboration with the neurosurgery group at Osaka University (led by Toshiki Yoshimine and later Haruhiko Kishima), shifting focus to electrocorticography (ECoG), in which electrodes are placed directly on the brain surface during neurosurgical procedures. From 2008, the research concentrated on ECoG-based decoding, which offers higher spatial and temporal resolution than fMRI. Yanagisawa et al. (2009) demonstrated neural decoding using gyral and intrasulcal electrocorticograms (NeuroImage), and Yanagisawa et al. (2011) achieved real-time control of a prosthetic hand using human ECoG signals (Journal of Neurosurgery).
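A typical ECoG movement decoder of this kind extracts band-limited power (for example, high-gamma) per electrode and feeds it to a linear classifier. The sketch below runs on synthetic signals; the filter band, window length, and classifier choice are illustrative assumptions, not the published pipeline.

```python
# Hedged sketch of an ECoG movement classifier: log high-gamma band power
# per channel -> multinomial logistic regression. Synthetic data only; the
# label-dependent channel boost just makes the toy problem solvable.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
fs, n_trials, n_chan, n_samp = 1000, 120, 32, 1000   # 1 s windows at 1 kHz
ecog = rng.normal(size=(n_trials, n_chan, n_samp))
labels = rng.integers(0, 3, n_trials)                # e.g. rock/paper/scissors

for t in range(n_trials):                            # inject a toy class signal
    ecog[t, labels[t]] += 2.0 * rng.normal(size=n_samp)

b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")

def high_gamma_power(trial):
    """Log band power per channel in the 70-150 Hz band."""
    filt = filtfilt(b, a, trial, axis=-1)
    return np.log(np.mean(filt ** 2, axis=-1))

features = np.array([high_gamma_power(trial) for trial in ecog])
clf = LogisticRegression(max_iter=1000).fit(features[:100], labels[:100])
print("held-out accuracy:", clf.score(features[100:], labels[100:]))
```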
A major milestone was reported in Yanagisawa et al. (2012), which demonstrated that paralyzed patients could control a prosthetic arm using ECoG signals decoded in real time (Annals of Neurology). This established the clinical viability of ECoG-based BMI for motor restoration.
The research then extended to phantom limb pain. Yanagisawa et al. (2016) showed that BMI-driven neurofeedback could induce sensorimotor brain plasticity and control pain in phantom limb patients (Nature Communications), and a subsequent randomized crossover trial confirmed that BCI training to move a virtual hand reduces phantom limb pain (Yanagisawa et al., 2020, Neurology).
Beyond motor decoding, the lab has also pursued vision-based brain-machine interfaces, where visual semantic information is decoded from brain activity and used for image retrieval and communication. Fukuma et al. (2018) decoded visual stimulus semantics from ECoG signals, Fukuma et al. (2022) demonstrated voluntary control of semantic neural representations (Communications Biology), and Fukuma et al. (2024) developed a closed-loop image retrieval system based on visual-semantic neural decoding.
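The retrieval step in such a system can be sketched simply: decode a semantic feature vector from brain activity, then rank a database of candidate images by cosine similarity to the decoded vector. The feature space, decoder, and database below are synthetic stand-ins.

```python
# Hedged sketch of decoding-based image retrieval (in the spirit of Fukuma
# et al., 2024): brain activity -> decoded semantic vector -> nearest images
# by cosine similarity. All representations here are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n_train, n_vox, n_sem, n_db = 200, 300, 64, 1000

S = rng.normal(size=(n_train, n_sem))              # semantic features of stimuli
X = S @ rng.normal(size=(n_sem, n_vox)) + 0.3 * rng.normal(size=(n_train, n_vox))
decoder = Ridge(alpha=1.0).fit(X, S)               # voxels -> semantic features

db = rng.normal(size=(n_db, n_sem))                # candidate image features

def retrieve(activity, k=5):
    """Return indices of the k database images closest to the decoded vector."""
    s_hat = decoder.predict(activity[None])[0]
    sims = db @ s_hat / (np.linalg.norm(db, axis=1) * np.linalg.norm(s_hat))
    return np.argsort(sims)[::-1][:k]

print(retrieve(X[0]))
```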
A recent review in Trends in Cognitive Sciences (Beste et al., 2026) discusses the broader challenge of moving intentions from brains to machines, addressing fundamental questions about intention, agency, and neural coding that BMI research raises for cognitive science.
Key Publications
- Yanagisawa et al. (2009) Neural decoding using gyral and intrasulcal electrocorticograms. NeuroImage, 45(4), 1099–1106
- Yanagisawa et al. (2011) Real-time control of a prosthetic hand using human electrocorticography signals. Journal of Neurosurgery, 114(6), 1715–1722
- Yanagisawa et al. (2012) Electrocorticographic control of a prosthetic arm in paralyzed patients. Annals of Neurology, 71(3), 353–361
- Yanagisawa et al. (2016) Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications, 7, 13209
- Fukuma et al. (2022) Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. Communications Biology, 5(1), 1–15
- Fukuma et al. (2024) Image retrieval based on closed-loop visual-semantic neural decoding. bioRxiv
- Beste et al. (2026) Moving intentions from brains to machines. Trends in Cognitive Sciences
Art
Brain decoding technology has become a medium for contemporary art, enabling new forms of creative expression that visualize the hidden contents of the mind. Kamitani Lab has collaborated with internationally recognized artists to create installations, sculptures, music videos, and album artwork.
The most sustained collaboration has been with French contemporary artist Pierre Huyghe. In 2018, Huyghe’s UUmwelt at the Serpentine Galleries in London displayed neural images generated by deep image reconstruction from fMRI data on large LED screens, set within a gallery space inhabited by flies and other organisms, creating an environment housing different forms of cognition and emerging intelligence. The New York Times described it as “a new art form.” This was followed by Liminal (Punta della Dogana, Venice, 2024), a major solo exhibition exploring worlds without human presence, where brain-generated images were integrated into an AI-driven ecosystem.
Other collaborations include work with Daito Manabe / Rhizomatiks on Dissonant Imaginary, an audio-visual installation that reconstructed images from brain activity recorded while listening to music; the dream visualization music video for Maison book girl; and album artwork using brain scan images for the post-punk band Squid on Warp Records, which was named one of the 50 Best Album Covers of 2021.
These activities are discussed in Kamitani’s essays “Art That Tickles the Brain” (2022) and “Pierre Huyghe – Liminal: Representing a World Without Humans” (2024).