How Do We Hear Sounds?
Properties of Sounds
Sounds, or sound waves, are air vibrations processed by the auditory system. Sometimes these vibrations can even be felt, as with bass instruments, whose mechanical pulsations make both the air and the floor vibrate. Just like light, sounds may be classified in terms of frequency, amplitude and complexity. Frequency denotes the number of sound waves that occur in a given period of time, and is perceived as pitch. Low-frequency sound waves have a low pitch, while high-frequency sound waves have a high pitch. Amplitude refers to the height of the sound waves, and is perceived as loudness. High-amplitude sounds are loud, while low-amplitude sounds are soft. Lastly, complexity, or timbre, differentiates between sounds of the same pitch. For example, if your mother and sister sing the same pitch, you can still tell whose voice is whose because their voices have different timbres.
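These three properties can be made concrete with a short sketch in plain Python. The sample rate, duration and harmonic mixes below are arbitrary illustrative choices, not values from the text: frequency sets the pitch, amplitude scales the loudness, and the mix of overtones shapes the timbre.

```python
import math

SAMPLE_RATE = 8000  # samples per second (an arbitrary choice for this sketch)

def tone(freq_hz, amplitude, harmonics=(1.0,), duration_s=0.01):
    """Generate a sampled sound wave.

    freq_hz   -> pitch (higher frequency = higher pitch)
    amplitude -> loudness (larger amplitude = louder)
    harmonics -> relative strengths of the overtones, which shape timbre
    """
    n = int(SAMPLE_RATE * duration_s)
    wave = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Sum the fundamental and its overtones; the mix determines timbre.
        sample = sum(w * math.sin(2 * math.pi * freq_hz * (k + 1) * t)
                     for k, w in enumerate(harmonics))
        wave.append(amplitude * sample)
    return wave

# Same pitch (440 Hz) and same amplitude, but different timbres,
# because the harmonic mixes differ -- like two voices singing one note:
voice_a = tone(440, 0.5, harmonics=(1.0, 0.1))
voice_b = tone(440, 0.5, harmonics=(1.0, 0.7, 0.5))
```

The two waves carry the same pitch and loudness yet have different shapes, which is exactly what lets us tell two singers apart.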
Divisions and Parts of the Ear
The ear is divided into three parts, the outer, middle and inner ear, and each division has its own role.
The function of the outer ear is to collect and direct sound waves toward the middle and inner divisions of the ear. It is composed of the pinna and the auditory canal. In some animals, such as cats and dogs, the pinnae are movable, which helps them locate sounds. Pinnae also vary in size across the animal kingdom; elephants, for example, have very large pinnae relative to their heads compared to humans.
The function of the middle ear is to amplify sounds. It is composed of the eardrum and three of the smallest bones in the body, the hammer, anvil and stirrup.
The function of the inner ear is to convert sound waves, or mechanical energy, into electrochemical energy. It is composed of the oval window, the cochlea, the auditory nerve and the semicircular canals. Inside the cochlea are fluid-filled canals, the basilar membrane, hair cells and the tectorial membrane. Hair cells act as sensory receptors, triggering action potentials in the auditory nerve, which, in turn, transmits auditory sensory information to the brain.
Two important theories explain how the ear registers sound frequency. According to the Place Theory, high-frequency sounds are registered near the oval window, while low-frequency sounds are registered toward the tip of the cochlea. The place theory was supported by Georg von Bekesy's (1960) experiments with cochleas taken from human cadavers, work that won him the Nobel Prize in 1961. The Frequency Theory, on the other hand, notes that low-frequency sounds displace the basilar membrane broadly rather than being precisely registered at the tip of the cochlea. It proposes instead that the frequency of a sound matches, and can therefore be measured by, the rate of neuronal firing triggered by the hair cells. The problem with this theory, however, is that neurons can fire at most about 1,000 times per second, which makes it hard to account for high-frequency sounds. Together, the place theory and the frequency theory shed light on where and how sound frequencies are registered in the ear.
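The frequency theory's limitation can be shown with a toy model. This is only an illustration, assuming the rough 1,000-impulses-per-second ceiling mentioned above: a pure rate code tracks the sound's frequency faithfully up to the ceiling and then saturates.

```python
MAX_FIRING_RATE_HZ = 1000  # approximate ceiling on how fast a neuron can fire

def rate_coded_frequency(sound_freq_hz):
    """Toy rate code: the firing rate matches the sound's frequency,
    but can never exceed the neuronal firing ceiling."""
    return min(sound_freq_hz, MAX_FIRING_RATE_HZ)

for f in (400, 900, 4000):
    print(f, "Hz sound ->", rate_coded_frequency(f), "impulses/s")
```

A 400 Hz or 900 Hz tone is represented exactly, but a 4,000 Hz tone is indistinguishable from any other tone above 1,000 Hz, which is why the frequency theory alone cannot explain high-pitched hearing.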
Auditory Processing in the Brain
The brain compares auditory sensory information from the two ears to locate where sounds are coming from. A sound naturally reaches one ear slightly before the other, and the head acts as a barrier that casts a sound shadow, so the sound also arrives at the far ear slightly softer and with an altered timbre. Bats, nocturnal hunters that rely primarily on echolocation to navigate at night, show how adept a species can become at using both ears not only to locate sounds but also to avoid obstacles and catch prey. Humans, by contrast, evolved as daytime hunters and depend more heavily on visual information, making them less adept at sound localization than much of the animal kingdom.
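The timing cue can be estimated with a back-of-envelope calculation. The head width and the simple path-difference model below are assumptions for illustration, not figures from the text:

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly room temperature
EAR_SEPARATION_M = 0.20     # rough distance between human ears (an assumption)

def interaural_time_difference(angle_deg):
    """Extra travel time to the far ear for a sound source at angle_deg
    from straight ahead (0 = directly in front, 90 = directly to one side).
    Uses the simple path-difference model: d * sin(angle) / speed of sound."""
    path_difference = EAR_SEPARATION_M * math.sin(math.radians(angle_deg))
    return path_difference / SPEED_OF_SOUND_M_S

# A sound directly to one side arrives at the near ear a fraction of a
# millisecond earlier -- a tiny difference the brain can still exploit:
print(f"{interaural_time_difference(90) * 1000:.2f} ms")
```

Even this sub-millisecond difference is enough for the auditory system to judge direction, and a sound coming from straight ahead (angle 0) produces no difference at all, which is why front-back confusions are common.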
Just as in the visual system, auditory nerves from the left and right ears cross over, so that most auditory information from the left ear is registered in the right auditory cortex of the temporal lobe, and vice versa. Similarly, specialized neurons in the auditory cortex process different sound features, and parallel processing and binding also occur along both the "what" and "where" pathways, the pathways for identifying and locating sounds.
Noise and Decibels
Sound amplitude is measured in decibels (dB). The scale is logarithmic: an increase of about 10 dB is perceived as roughly a doubling of loudness. A quiet library measures about 30 dB on average, a typical conversation 63 dB, heavy city traffic 90 dB, a car horn 100 dB, a jackhammer 120 dB, a rock band at close range about 130 dB (140 dB if measured directly in front of the speakers), and a rocket launch produces sound waves of around 180 dB. Sounds reaching about 120 dB or more approach the threshold of pain and are generally experienced as "noise".
The problem with auditory noise is how little control we have over it: we cannot simply fold our ears shut to keep sounds out. Furthermore, psychological research shows that noisy environments negatively affect both our physical and mental health. For instance, New York children living on the lower, noisier floors of an apartment building scored lower on reading tests than those living on the upper floors (Cohen, Glass & Singer, 1973), and children residing near Los Angeles International Airport, where more than 300 jets take off every day, have higher blood pressure and are more easily distracted than children who live in quieter environments (Cohen et al., 1981). High-amplitude sounds can also damage the ear, causing blistered and burst hair cells, softened and swollen hair tissues, and scarred, degenerating auditory nerves. Symptoms of ear damage include ringing, buzzing or muffled hearing, difficulty perceiving words, and trouble following conversations against background noise.