Scientists reconstruct Pink Floyd song by listening to people’s brainwaves

Breakthrough raises hopes that musicality of natural speech can be restored in patients with disabling neurological conditions

Scientists have reconstructed Pink Floyd’s Another Brick in the Wall by eavesdropping on people’s brainwaves – the first time a recognizable song has been decoded from recordings of electrical brain activity.

The hope is that doing so could ultimately help to restore the musicality of natural speech in patients who struggle to communicate because of disabling neurological conditions such as stroke or amyotrophic lateral sclerosis – the neurodegenerative disease that Stephen Hawking was diagnosed with.

Although members of the same laboratory had previously managed to decipher speech – and even silently imagined words – from brain recordings, “in general, all of these reconstruction attempts have had a robotic quality”, said Prof Robert Knight, a neurologist at the University of California in Berkeley, US, who conducted the study with the postdoctoral fellow Ludovic Bellier.

“Music, by its very nature, is emotional and prosodic – it has rhythm, stress, accent and intonation. It contains a much bigger spectrum of things than limited phonemes in whatever language, that could add another dimension to an implantable speech decoder.”

Whereas previous work has decoded electrical activity from the brain’s speech motor cortex – an area that controls the tiny muscle movements of the lips, jaw, tongue and larynx that form words – the current study used recordings from the brain’s auditory regions, where all aspects of sound are processed.


The team analysed brain recordings from 29 patients as they were played an approximately three-minute segment of the Pink Floyd song, taken from their 1979 album The Wall. The volunteers’ brain activity was detected by placing electrodes directly on the surface of their brains as they underwent surgery for epilepsy.

Artificial intelligence was then used to decode the recordings and reconstruct the sounds and words. Though very muffled, the phrase “All in all, it’s just another brick in the wall” comes through recognizably in the reconstructed song – with its rhythms and melody intact.

“It sounds a bit like they’re speaking underwater, but it’s our first shot at this,” said Knight.

He believes that using a higher density of electrodes might improve the quality of their reconstructions: “The average separation of the electrodes was about 5mm, but we had a couple of patients with 3mm [separations] and they were the best performers in terms of reconstruction,” Knight said.

“Now that we know how to do this, I think if we had electrodes that were like a millimeter and a half apart, the sound quality would be much better.”

As brain recording techniques improve, it may also become possible to make such recordings without the need for surgery – perhaps using sensitive electrodes attached to the scalp.

This year, researchers led by Dr. Alexander Huth at the University of Texas in Austin announced that they had managed to translate brain activity into a continuous stream of text using non-invasive MRI scan data. The system was not accurate enough to decode the exact words but could detect the gist of sentences.

“This [new study] is a really nice demonstration that a lot of the same techniques that have been developed for speech decoding can also be applied to music – an under-appreciated domain in our field, given how important musical experience is in our lives,” Huth said.

“While they didn’t record brain responses while subjects were imagining music, this could be one of the things brain machine interfaces are used for in the future: translating imagined music into the real thing. It’s an exciting time.”

The research, published in PLoS Biology, also pinpointed new areas of the brain involved in detecting rhythm, and confirmed the right side of the brain was more attuned to music than the left.

A better understanding of how music and language are processed could also have practical applications, such as helping to shed light on the mystery of why people with Broca’s aphasia, who struggle to find and say the right words, can often sing words with no difficulty.
