“Music is the universal language of mankind.” ― Henry Wadsworth Longfellow
There has long been debate about what the human auditory system evolved to do. Because the brain's auditory capacity extends far beyond what speech requires, some researchers wonder whether speech is a byproduct of an auditory brain that was shaped to hear music first.
Researchers Have Mapped the Brains of Musicians
In one study, a jazz musician played a keyboard inside a functional MRI machine. First he performed a memorized piece of music. Then he improvised, trading spontaneous musical phrases with another musician in the control room.
The researchers found that when musicians engage in spontaneous improvisation with one another, their brains show robust activation in the same areas traditionally associated with spoken language and syntax.
But we can’t really use music as a language and speak freely with it.
There is one key difference between a jazz conversation and a spoken one. During speech, the brain processes both the structure and syntax of language and the semantics, or meaning, of the words. During improvisational jazz interactions, however, the brain areas linked to meaning shut down. The music is syntactic, but it is not semantic.
Musical communication may mean something to the listener, but that meaning resists description because music lacks the specificity of meaning that words carry.
Many researchers agree that the human brain is wired to process acoustic systems far more complex than speech. (Source: Charles Limb, an otolaryngological surgeon at Johns Hopkins.)