The brain can make music

  • Writer: MMpsychotic
  • Aug 6, 2025
  • 2 min read

Professor Robert Knight, a neurologist at the University of California, Berkeley, together with postdoctoral fellow Ludovic Bellier, has conducted a groundbreaking study that takes brain-computer interfaces into an entirely new domain: music reconstruction from neural signals. Their research successfully reproduced a song using only data derived from the brain's electrical activity, specifically from intracranial electroencephalography (iEEG).

The experiment involved 29 volunteer patients undergoing neurosurgical monitoring, all of whom had intracranial electrodes implanted for medical reasons. These participants listened to a roughly three-minute segment of Pink Floyd’s “Another Brick in the Wall, Part 1,” during which researchers recorded the corresponding neural activity across various auditory and association brain regions.

Unlike previous speech decoding efforts, which focused mainly on reconstructing spoken words or phonemes, this study embraced the complexity of music. As Professor Knight explained, music is by its very nature highly emotional and multi-dimensional. It includes not just rhythm and pitch, but also stress patterns, accents, and intonation—elements that surpass the phonetic limitations of language. These additional layers make music a compelling target for neural decoding, potentially offering a richer and more intuitive interface for brain-machine communication.

Using advanced computational models and machine learning algorithms, the research team was able to transform the raw neural signals into an audio output that contained rhythmically and melodically recognizable features of the original Pink Floyd song. The reconstructed version, though not a perfect reproduction, retained the timing and contour of the lyrics, allowing listeners to recognize both the song and the identity of the band.
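The kind of model typically used for this sort of stimulus reconstruction is a regression that maps neural features at each time bin to the bins of an audio spectrogram. The following is a minimal sketch of that general idea with ridge regression on synthetic data; it is not the authors' actual pipeline, and all sizes and data here are invented for illustration.

```python
# Hedged sketch: decoding an audio spectrogram from neural features with
# ridge regression, the general family of model used in stimulus-
# reconstruction work. All data below is synthetic; this is an
# illustration of the idea, not the study's real analysis.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 2000     # time bins (hypothetical)
n_electrodes = 64    # hypothetical iEEG channels
n_freq_bins = 32     # spectrogram frequency bins (hypothetical)

# Invent a "true" linear mapping so the synthetic data is decodable.
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_samples, n_electrodes))          # iEEG features
spectrogram = neural @ true_weights + 0.1 * rng.normal(
    size=(n_samples, n_freq_bins))                           # target audio

# Split along time into train and test segments.
split = int(0.8 * n_samples)
X_train, X_test = neural[:split], neural[split:]
Y_train, Y_test = spectrogram[:split], spectrogram[split:]

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_electrodes),
    X_train.T @ Y_train)

Y_pred = X_test @ W

# Correlation between predicted and actual spectrogram, a common
# decoding-accuracy summary.
corr = np.corrcoef(Y_pred.ravel(), Y_test.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.3f}")
```

In the real study the predicted spectrogram would then be inverted back into a waveform, which is how listeners could recognize the song from the reconstruction.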

This is the first time that a complete song—rather than individual words or notes—has been reconstructed from brain signals with such clarity. The breakthrough opens a new frontier for neuroprosthetics, especially for patients who have lost their ability to speak due to conditions such as ALS (amyotrophic lateral sclerosis) or stroke. It offers the promise of restoring not just functional speech, but the expressive nuances of voice—tone, rhythm, and emotion.

Professor Knight emphasized that this advancement could eventually lead to implantable devices capable of decoding internal speech and emotional prosody, giving voice back to those who can no longer communicate verbally. Music, in this context, isn't just art—it becomes a medium through which the emotional and cognitive states of the human mind can be interpreted, decoded, and expressed.

In sum, this study demonstrates not only the extraordinary plasticity of the brain, but also the potential for technology to bridge neurological silence through the universal language of music.
