Lately I've been listening a lot to Kate Bush's album Aerial - beautiful, wonderful stuff. The album cover is interesting too - the 'islands' that are reflected in the water are actually the amplitude envelope of a recording of some birds singing.
This idea of 'looking at sound' in different ways has been something I've really enjoyed exploring over the last several years. To help visualize the harmonics in a piece of music, I wrote a program a while back that analyses the frequency content of a sound waveform and creates a spectrogram (spectrum over time) of it, colour coding the intensity levels of each frequency.
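(My old program isn't anything special; for the curious, a rough sketch of the same idea in Python, using scipy and matplotlib, might look like the following. The file name 'birdsong.wav' and the window sizes are just placeholders, not what I actually used.)

# A minimal spectrogram sketch (not my original program) using scipy/matplotlib.
# 'birdsong.wav' is a placeholder file name.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

rate, samples = wavfile.read('birdsong.wav')
if samples.ndim > 1:                      # mix stereo down to mono
    samples = samples.mean(axis=1)

# Short-time Fourier transform: frequency content of each time slice
f, t, Sxx = spectrogram(samples, fs=rate, nperseg=2048, noverlap=1536)

# Colour-code the intensity of each frequency on a dB scale
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading='gouraud', cmap='magma')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.colorbar(label='Intensity (dB)')
plt.show()

That's really all a spectrogram is: slice the waveform into overlapping windows, take the spectrum of each slice, and plot the magnitudes over time with intensity mapped to colour.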
I think I've found the bird song shown on the cover - it's 2:25 from the start of the song 'Aerial'. Here's what its spectrogram looks like:
The parallel contour lines stacked one on top of the other are the harmonics of the bird song. (A synthesizer's been added to the recording, which has changed the amplitude envelope somewhat and contributed the 'white noise' vertical smears and horizontal tones seen in this spectrogram.)
Here's what a bird singing solo looks like (from the song 'Aerial Tal' at the 7-second mark):
Once you've learned what to look for, you can look at sound in the frequency domain and recognize individual 'voices' by their harmonic patterns. You can pick out harmonic 'signatures' like this even when there's background noise or other sound sources. It's much tougher to figure out what's going on from a spectrogram than it is simply to listen to the sound, however. There must be some pretty awesome signal processing going on in the ear+brain combo...
When I first started writing this, I thought I had a pretty good grasp of how hearing works - you know, vibrations in the air moving the ear drum and getting picked up by little hairs in the inner ear. But this only goes so far... How does the movement of these tiny hairs get turned into something the brain can make sense of? (Especially since there are only around 16,000 of these hair cells in the human cochlea.) This is where it gets totally fascinating. I stumbled across this awesome MIT website that delves into the micromechanics of the inner ear, and has some cool photos and videos of how these tiny hair cells convert sound energy into a form of chemical energy that the brain understands. From the website:
The inner ear performs some very remarkable signal processing. For example, the inner ear can detect motions of the eardrum on the order of a PICOMETER -- i.e., much smaller than the diameter of a hydrogen atom. ... Hair cells are small. But hair cells are themselves complex micromechanical systems whose function relies on an array of even smaller mechanical parts. Displacements of hair bundles generate electrical responses in hair cells via mechanically sensitive ion channels in the cell membrane.
video (165K)
The tip links are tiny filaments only 2nm in diameter. In the video you can see them pulling open little 'trap doors' - opening the 'mechanically sensitive ion channels in the cell membrane' mentioned earlier. (Aside: It's pretty easy to visualize these filaments getting snapped when listening to music at high volume. No more 'turning the volume up to 11' for me...)
These ion channels are basically pores in the cell membrane that allow positively charged potassium (K+) ions to flow into the cell, depolarizing it. "In order to be able to process sounds at the highest frequency range of human hearing, hair cells must be able to turn current on and off 20,000 times per second. They are capable of even more astonishing speeds in bats and whales, which can distinguish sounds at frequencies as high as 200,000 cycles per second"(ref.)
From The Neurobiology of Harmony by David Benner:
Once frequency and amplitude are converted into action potentials, the biochemical pathway leads sounds from the inner ear along the auditory nerve which is part of cranial nerve VIII through parts of the medulla, pons, midbrain, thalamus, and finally to the auditory cortex of the temporal lobe. The parts of the brain involved in the perception of sound locate its origin and involve the limbic system in the recognition of a given input.
From Music in Your Head by Eckart O. Altenmüller:
After sound is registered in the ear, the auditory nerve transmits the data to the brain stem. There the information passes through at least four switching stations, which filter the signals, recognize patterns and help to calculate the differences in the sound’s duration between the ears to determine the location from which the noise originates. For example, in the first switching area, called the cochlear nucleus, the nerve cells in the ventral, or more forward, section react mainly to individual sounds and generally pass on incoming signals unchanged; the dorsal, or rear, section processes acoustic patterns, such as the beginning and ending points of a stimulus or changes in frequency. After the switching stations, the thalamus—a structure in the brain that is often referred to as the gateway to the cerebral cortex—either directs information on to the cortex or suppresses it. This gating effect enables us to control our attention selectively so that we can, for instance, pick out one particular instrument from among all the sounds being produced by an orchestra. The auditory nerve pathway terminates at the primary auditory cortex, or Heschl’s gyrus, on the top of the temporal lobe. The auditory cortex is split on both sides of the brain. It seems that the way the music is handled in the brain from this point on differs greatly between non-musicians and musicians, and in fact even between individuals. In imaging studies the same music is represented in multiple ways in the brain of a professional musician: as a sound, as movement (for example, on a piano keyboard), as a symbol (notes on a score) and so on. Not so in the brain of an unpracticed listener. Generally, however, rhythm is handled by the left side of the brain and pitch and melody are handled by the right side of the brain.
Harmonics are a set of frequencies that are integer multiples of a common 'fundamental' root frequency. My guess is that, when enough pulses from the frequency detectors for a particular harmonic series fire at around the same time, a 'harmonic detector' neuron is pushed over a trigger threshold. And then, the outputs of these harmonic detector neurons and frequency detector neurons somehow get compared to harmonic profiles stored in memory (e.g. the sound of a voice or of a musical instrument).
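(To make that guess concrete in code rather than neurons - and this is only a toy sketch, not a model of the auditory system - a 'harmonic detector' could sum the spectral energy at integer multiples of a candidate fundamental and 'fire' when the total crosses a threshold. The threshold and number of harmonics below are arbitrary placeholders.)

# Toy 'harmonic detector': does a spectrum contain a harmonic series on f0?
# Loosely analogous to the threshold idea above; parameters are arbitrary.
import numpy as np

def harmonic_energy(spectrum, freqs, f0, n_harmonics=8):
    """Sum the spectral magnitude found at integer multiples of f0."""
    total = 0.0
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))   # nearest frequency bin
        total += spectrum[idx]
    return total

def detect_harmonic_series(spectrum, freqs, f0, threshold):
    """'Fire' (return True) when the summed harmonic energy crosses the threshold."""
    return harmonic_energy(spectrum, freqs, f0) > threshold

# Example: a synthetic tone at 440 Hz with a few harmonics
rate = 44100
t = np.arange(rate) / rate
signal = sum(np.sin(2 * np.pi * 440 * k * t) / k for k in range(1, 5))
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

print(detect_harmonic_series(spectrum, freqs, 440, threshold=1000))   # True
print(detect_harmonic_series(spectrum, freqs, 500, threshold=1000))   # False

Real pitch perception is far subtler than this, of course, but the thresholding idea is roughly what the guess above describes.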
By focusing on the harmonic structure present in a sound, we are able to pick out one voice in a crowd or one instrument in a band, isolate signals from noise, and spatially locate a sound in a 3D sound field - things our present-day technology still has difficulty doing. Harmonic structure provides key information to the brain, allowing it to recognize voices, pick out rhymes and rhythms, melodies and harmonies, and associate everything with feelings and meaning.
U2's lead singer Bono has noted that "songs are not like movies where you can see them once, twice, three times - they become part of your life. They're more like smells." Music doesn't seem to get registered into memory the same way that visual images do. What you remember is the way the music makes you feel, and the stuff that is repeated several times (chorus, guitar riff, killer bass line). It takes a while to learn the rest, to hang onto it long enough for you to anticipate it fully. Perhaps it's because the brain needs a certain amount of repetition to convert a short term memory into a long term memory (see "Making Memories Stick" by R. Douglas Fields for more info). For whatever reason, music is very much 'in and of the moment'. And it's deeply rooted - it can trigger an emotional and/or physical response, make you want to dance and sing - such a joyous thing. The essence of now, of life.
See also:
Music and the Brain (Scientific American)
Getting a Leg Up on Land - the evolution of four-limbed animals from fish (includes info on how hearing evolved)
Eaton-Peabody Lab (one of the world's largest basic research facilities dedicated to the study of hearing and deafness)
Research into regeneration of damaged inner ear hair cells
Gene therapy stimulates new hair growth in the cochlea
Comments
Utterly fascinating!
There may be some grounding in the old psychedelic canard:
"Wow! These colors taste like music!"