The human ear is a complex organ that plays a crucial role in our perception of sound. Understanding its anatomy and function is essential for architects and acousticians designing spaces that optimize hearing experiences. From the outer ear to the inner ear, each part contributes to our ability to detect and process sound waves.

The hearing process involves converting sound waves into electrical signals that the brain can interpret. This intricate process, along with the ear's sensitivity to different frequencies and intensities, shapes how we perceive sound in various environments. Architects must consider these factors when designing spaces for optimal acoustics and speech intelligibility.

Anatomy of the ear

  • Understanding the anatomy of the ear is crucial for architectural acoustics as it helps in designing spaces that optimize sound perception and minimize hearing damage
  • The ear is divided into three main parts: the outer ear, middle ear, and inner ear, each playing a specific role in the hearing process

Outer ear

  • Consists of the pinna (visible part of the ear) and the ear canal
  • Pinna collects and funnels sound waves into the ear canal (meatus)
  • Ear canal amplifies sound waves in the frequency range of human speech (2-5 kHz)
  • Acts as a resonator (roughly a quarter-wave tube closed at the eardrum), boosting frequencies around 3 kHz by up to 10 dB; a simple estimate is sketched below
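
The roughly 3 kHz boost can be approximated by treating the ear canal as a tube open at the pinna and closed at the eardrum, i.e. a quarter-wave resonator. The sketch below assumes a canal length of about 2.5 cm and a speed of sound of 343 m/s; both values vary between individuals, so treat the result as an estimate.

```python
# Quarter-wave resonance estimate for the ear canal (idealized closed-tube model).
# Assumed values: canal length ~2.5 cm, speed of sound 343 m/s.
speed_of_sound = 343.0   # m/s
canal_length = 0.025     # m (typical adult ear canal, an assumption)

resonant_frequency = speed_of_sound / (4 * canal_length)  # f = c / (4L)
print(f"Estimated ear canal resonance: {resonant_frequency:.0f} Hz")  # ~3430 Hz
```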

Middle ear

  • Contains the eardrum (tympanic membrane) and three tiny bones called the ossicles (malleus, incus, and stapes)
  • Eardrum vibrates in response to sound waves, converting them into mechanical vibrations
  • Ossicles amplify and transmit these vibrations to the inner ear
  • Eustachian tube equalizes pressure between the middle ear and the environment

Inner ear

  • Houses the cochlea, a fluid-filled, snail-shaped structure that converts mechanical vibrations into electrical signals
  • Contains the semicircular canals, which are responsible for balance and spatial orientation
  • Cochlea is lined with hair cells that respond to specific frequencies, allowing for frequency discrimination
  • Basilar membrane in the cochlea vibrates at different locations depending on the frequency of the sound

Hearing process

  • The hearing process involves the conversion of sound waves into electrical signals that the brain can interpret
  • It is essential to understand this process when designing acoustically optimized spaces for speech intelligibility and music perception

Sound waves to the eardrum

  • Sound waves enter the outer ear and travel through the ear canal
  • Ear canal acts as a resonator, amplifying frequencies around 3 kHz
  • Sound waves strike the eardrum, causing it to vibrate

Eardrum to ossicles

  • Eardrum vibrations are transmitted to the ossicles (malleus, incus, and stapes)
  • Ossicles act as a mechanical amplifier, increasing the force of the vibrations
  • Stapes (the smallest bone in the human body) transfers vibrations to the oval window of the cochlea; a rough estimate of the middle ear's pressure gain is sketched below
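
As an illustration of why this stage amplifies sound, the sketch below combines two commonly cited textbook factors: an eardrum-to-oval-window area ratio of roughly 17:1 and an ossicular lever advantage of roughly 1.3:1. The exact numbers vary by source and individual, so this is an order-of-magnitude estimate rather than a precise figure.

```python
import math

# Rough estimate of middle-ear pressure gain from two textbook factors (assumed values).
area_ratio = 17.0    # eardrum area / oval window area (commonly quoted ~17:1)
lever_ratio = 1.3    # ossicular lever advantage (commonly quoted ~1.3:1)

pressure_gain = area_ratio * lever_ratio          # ~22x pressure increase
gain_db = 20 * math.log10(pressure_gain)          # expressed in decibels
print(f"Pressure gain: {pressure_gain:.0f}x (~{gain_db:.0f} dB)")  # ~22x, ~27 dB
```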

Ossicles to cochlea

  • Vibrations from the stapes cause the fluid in the cochlea to move
  • Basilar membrane in the cochlea vibrates at different locations depending on the frequency of the sound
  • High frequencies cause vibrations near the base of the cochlea, while low frequencies cause vibrations near the apex; the place-frequency map sketched below quantifies this relationship
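
One way to quantify the base-to-apex frequency map is the Greenwood function, an empirical fit for the human cochlea. The sketch below uses the commonly quoted human parameters (A = 165.4, a = 2.1, k = 0.88), with position expressed as a fraction of basilar-membrane length measured from the apex; these constants come from the standard published fit, not from this text.

```python
# Greenwood place-frequency map for the human cochlea (empirical fit, assumed constants).
# x is the position along the basilar membrane as a fraction of its length,
# measured from the apex (0.0) to the base (1.0).
def greenwood_frequency(x: float) -> float:
    A, a, k = 165.4, 2.1, 0.88  # commonly quoted human parameters
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} (apex -> base): ~{greenwood_frequency(x):,.0f} Hz")
# Low frequencies map near the apex, high frequencies near the base (~20 Hz to ~20 kHz).
```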

Hair cells in cochlea

  • Vibrations of the basilar membrane cause the hair cells to bend
  • Inner hair cells convert the mechanical vibrations into electrical signals
  • Outer hair cells amplify and fine-tune the vibrations, improving frequency selectivity

Auditory nerve to brain

  • Electrical signals from the hair cells are transmitted via the auditory nerve to the brainstem
  • Signals are processed in the auditory cortex of the brain, allowing for the perception of sound
  • Different regions of the auditory cortex respond to specific features of sound (pitch, timbre, location)

Hearing range

  • The human hearing range is the range of frequencies and intensities that the human ear can detect
  • Understanding the hearing range is crucial for designing spaces that cater to the limitations and capabilities of human hearing

Frequency range

  • Human ear can typically detect frequencies between 20 Hz and 20 kHz
  • Sensitivity to frequencies varies across the range, with the ear being most sensitive between 2-5 kHz
  • Age-related hearing loss (presbycusis) can reduce the upper frequency limit

Intensity range

  • Human ear can detect a wide range of sound intensities, from the threshold of hearing to the threshold of pain
  • Sound intensity level is measured in decibels (dB), a logarithmic scale (see the calculation sketched below)
  • Threshold of hearing is around 0 dB, while the threshold of pain is around 120-140 dB
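
The decibel scale compresses this enormous range into manageable numbers. The sketch below uses the standard reference values (10⁻¹² W/m² for intensity, 20 µPa for sound pressure) to show how intensity level and sound pressure level are computed.

```python
import math

I0 = 1e-12   # reference intensity, W/m^2 (approximate threshold of hearing)
P0 = 20e-6   # reference sound pressure, Pa

def intensity_level_db(intensity: float) -> float:
    """Sound intensity level: L = 10 * log10(I / I0)."""
    return 10 * math.log10(intensity / I0)

def sound_pressure_level_db(pressure: float) -> float:
    """Sound pressure level: L = 20 * log10(p / p0)."""
    return 20 * math.log10(pressure / P0)

print(intensity_level_db(1e-12))      # 0 dB   -> threshold of hearing
print(intensity_level_db(1.0))        # 120 dB -> approaching the threshold of pain
print(sound_pressure_level_db(2e-5))  # 0 dB   -> same threshold, expressed as pressure
```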

Loudness vs intensity

  • Loudness is the subjective perception of sound intensity
  • Doubling the intensity of a sound produces only a 3 dB increase, but roughly a 10 dB increase is needed for the sound to be perceived as twice as loud (see the sketch below)
  • Equal-loudness contours (Fletcher-Munson curves) show how the perceived loudness varies with frequency at different intensity levels
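
The 3 dB versus 10 dB distinction can be made concrete with the conventional sone scale, where 1 sone corresponds to 40 phons and each additional 10 phons roughly doubles perceived loudness. The sketch below applies that rule of thumb; it is a simplification of Stevens' loudness law, not an exact model of individual perception.

```python
import math

def doubled_intensity_increase_db() -> float:
    """Doubling the physical intensity adds 10 * log10(2) ~= 3 dB."""
    return 10 * math.log10(2)

def loudness_sones(phons: float) -> float:
    """Rule-of-thumb sone scale: 1 sone at 40 phons, doubling every +10 phons."""
    return 2 ** ((phons - 40) / 10)

print(f"Doubling intensity: +{doubled_intensity_increase_db():.1f} dB")
print(f"50 phons -> {loudness_sones(50):.1f} sones (perceived twice as loud as 40 phons)")
print(f"60 phons -> {loudness_sones(60):.1f} sones (perceived four times as loud)")
```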

Pitch vs frequency

  • Pitch is the subjective perception of frequency
  • Higher frequencies are generally perceived as higher pitches, while lower frequencies are perceived as lower pitches
  • The relationship between pitch and frequency is not linear, with the ear being more sensitive to changes in pitch at lower frequencies

Hearing sensitivity

  • Hearing sensitivity refers to the ear's ability to detect and discriminate between different sounds
  • It is affected by factors such as frequency, intensity, age, and exposure to noise

Threshold of hearing

  • The lowest intensity level at which a sound can be detected 50% of the time
  • Varies with frequency, with the ear being most sensitive around 2-5 kHz
  • Can be measured using audiometry

Threshold of pain

  • The intensity level at which sound becomes painfully loud
  • Typically around 120-140 dB, but can vary between individuals
  • Exposure to sounds above this level can cause immediate and permanent hearing damage

Equal-loudness contours

  • Also known as Fletcher-Munson curves
  • Show how the perceived loudness of a sound varies with frequency at different intensity levels
  • Demonstrates that the ear's sensitivity to different frequencies changes with intensity
  • Used in the design of audio systems and hearing aids

Presbycusis

  • Presbycusis is the gradual loss of hearing sensitivity that occurs with age
  • Typically affects high frequencies first, leading to difficulty understanding speech in noisy environments
  • Can be exacerbated by exposure to loud noises throughout life
  • Highlights the importance of designing spaces with good acoustic properties for an aging population

Binaural hearing

  • Binaural hearing refers to the ability to use both ears to localize sounds and understand speech in noisy environments
  • It is crucial for designing spaces that promote effective communication and spatial awareness

Localization of sound

  • The brain uses differences in the signals received by the two ears to determine the location of a sound source
  • Interaural time differences (ITDs) and interaural level differences (ILDs) are the main cues used for localization
  • Localization accuracy is best for sounds in front of the listener and decreases for sounds to the sides or behind

Interaural time differences

  • ITDs occur when a sound reaches one ear before the other
  • The brain uses the timing difference to estimate the location of the sound source in the horizontal plane
  • ITDs are most effective for low-frequency sounds (below about 1.5 kHz); a simple geometric estimate of ITD is sketched below
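
A classic way to estimate ITDs is the Woodworth spherical-head approximation, ITD ≈ (r/c)(θ + sin θ), where r is the head radius, c the speed of sound, and θ the source azimuth. The values below (r ≈ 8.75 cm, c = 343 m/s) are typical assumptions, not measurements from this text.

```python
import math

HEAD_RADIUS = 0.0875     # m, typical adult head radius (assumption)
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth approximation: ITD = (r / c) * (theta + sin(theta)), theta in radians."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"Azimuth {az:2d} deg: ITD ~ {itd_seconds(az) * 1e6:.0f} microseconds")
# At 90 degrees the ITD is roughly 650-700 microseconds for an average head.
```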

Interaural level differences

  • ILDs occur when a sound is louder in one ear than the other
  • The brain uses the level difference to estimate the location of the sound source in the horizontal plane
  • ILDs are most effective for high-frequency sounds (above 1.5 kHz)
  • The head casts an acoustic shadow, attenuating high frequencies at the far ear

Precedence effect

  • Also known as the Haas effect or the law of the first wavefront
  • Occurs when a sound is followed by its reflection within a short time delay (up to 50 ms)
  • The brain suppresses the perception of the reflection, giving precedence to the first-arriving sound
  • Helps to maintain the localization of a sound source in reverberant environments; the corresponding reflection path lengths are sketched below
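
In room-design terms, the 50 ms window corresponds to a maximum extra path length for a reflection, computed below with c = 343 m/s. Reflections delayed beyond that range risk being heard as discrete echoes rather than fusing with the direct sound.

```python
SPEED_OF_SOUND = 343.0  # m/s

def reflection_path_difference(delay_ms: float) -> float:
    """Extra travel distance of a reflection arriving delay_ms after the direct sound."""
    return SPEED_OF_SOUND * (delay_ms / 1000.0)

print(f"{reflection_path_difference(20):.1f} m extra path at 20 ms delay")   # ~6.9 m
print(f"{reflection_path_difference(50):.1f} m extra path at 50 ms delay")   # ~17.2 m
```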

Hearing protection

  • Hearing protection is essential for preventing noise-induced hearing loss and maintaining hearing health
  • It is particularly important in architectural acoustics when designing spaces with high noise levels (industrial facilities, music venues)

Noise-induced hearing loss

  • Caused by exposure to loud sounds over time or a single exposure to an extremely loud sound
  • Can be temporary (temporary threshold shift) or permanent (permanent threshold shift)
  • Affects high frequencies first, leading to difficulty understanding speech in noisy environments
  • Tinnitus (ringing in the ears) is a common symptom of noise-induced hearing loss

Occupational exposure limits

  • Many countries have regulations that specify the maximum permissible exposure levels to noise in the workplace
  • In the US, the Occupational Safety and Health Administration (OSHA) sets the permissible exposure limit at 90 dBA for an 8-hour workday, using a 5 dB exchange rate
  • Limits based on the equal-energy principle (such as the NIOSH recommendation of 85 dBA) use a 3 dB exchange rate: halving the exposure time allows a 3 dB increase in noise level (see the calculation sketched below)
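
The exchange-rate idea translates directly into permissible exposure times. The sketch below computes allowable durations under both an OSHA-style criterion (90 dBA, 5 dB exchange rate) and an equal-energy criterion (85 dBA, 3 dB exchange rate); the exact regulatory details vary and should be checked against the current standards.

```python
def permissible_hours(level_dba: float, criterion_dba: float, exchange_rate_db: float,
                      reference_hours: float = 8.0) -> float:
    """Allowed exposure time: T = reference_hours / 2 ** ((L - criterion) / exchange_rate)."""
    return reference_hours / 2 ** ((level_dba - criterion_dba) / exchange_rate_db)

for level in (85, 90, 95, 100):
    osha_style = permissible_hours(level, criterion_dba=90, exchange_rate_db=5)
    equal_energy = permissible_hours(level, criterion_dba=85, exchange_rate_db=3)
    print(f"{level} dBA: ~{osha_style:.1f} h (5 dB rule), ~{equal_energy:.2f} h (3 dB rule)")
```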

Hearing protection devices

  • Earplugs and earmuffs are the most common types of hearing protection devices
  • Earplugs are inserted into the ear canal, while earmuffs cover the entire ear
  • Hearing protection devices attenuate sound by reducing the amount of energy that reaches the inner ear
  • The effectiveness of a hearing protection device is measured by its noise reduction rating (NRR)

Noise reduction rating (NRR)

  • NRR is a measure of the amount of sound attenuation provided by a hearing protection device
  • Measured in decibels, with higher values indicating greater sound attenuation
  • NRR values are determined through laboratory testing and may not reflect real-world performance
  • To estimate the effective noise level, subtract 7 from the NRR and then subtract the result from the measured A-weighted noise level (a worked example is sketched below)
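
The derating rule is simple arithmetic, sketched below. The 7 dB subtraction follows the common guidance for A-weighted measurements; some agencies apply an additional derating to account for real-world fit, so treat the result as an estimate rather than a guaranteed exposure level.

```python
def estimated_exposure_dba(noise_level_dba: float, nrr_db: float) -> float:
    """Estimated protected exposure: noise level minus the derated NRR (NRR - 7)."""
    return noise_level_dba - (nrr_db - 7)

# Example: 100 dBA environment, earmuffs with an NRR of 29 dB (hypothetical values).
print(estimated_exposure_dba(100, 29))  # -> 78 dBA estimated at the ear
```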

Psychoacoustics

  • Psychoacoustics is the study of the psychological and physiological responses to sound
  • It is essential for understanding how humans perceive and interpret sound in different environments

Masking of sounds

  • Masking occurs when the presence of one sound makes it difficult to hear another sound
  • Can be simultaneous (sounds occur at the same time) or non-simultaneous (sounds occur at different times)
  • The masking threshold is the level at which a sound becomes inaudible due to the presence of another sound
  • Masking is frequency-dependent, with sounds closer in frequency being more easily masked

Critical bands

  • The basilar membrane in the cochlea can be divided into critical bands, each responding to a specific range of frequencies
  • Sounds within the same critical band are more likely to mask each other than sounds in different critical bands
  • The width of a critical band varies with frequency, being narrower at low frequencies and wider at high frequencies
  • The concept of critical bands is used in audio compression algorithms (MP3) and hearing aid design

Auditory filters

  • The auditory system can be modeled as a series of bandpass filters, each tuned to a specific frequency range
  • These filters are known as auditory filters or auditory filter banks
  • The shape and bandwidth of auditory filters determine the frequency selectivity of the ear
  • Auditory filters are broader at high frequencies, leading to reduced frequency resolution; an ERB bandwidth estimate is sketched below
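
The widening of auditory filters with frequency is often described with the equivalent rectangular bandwidth (ERB) formula of Glasberg and Moore, ERB(f) ≈ 24.7 · (4.37 · f/1000 + 1) Hz. The sketch below applies that published fit; it is one common model of critical bandwidth, not the only one.

```python
def erb_hz(frequency_hz: float) -> float:
    """Equivalent rectangular bandwidth (Glasberg & Moore fit): 24.7 * (4.37 * f_kHz + 1)."""
    return 24.7 * (4.37 * frequency_hz / 1000 + 1)

for f in (250, 1000, 4000, 8000):
    print(f"{f:5d} Hz: ERB ~ {erb_hz(f):.0f} Hz")
# Filters are narrow at low frequencies (~50 Hz) and much broader at high frequencies (~900 Hz).
```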

Temporal integration

  • Temporal integration refers to the ear's ability to sum sound energy over time
  • The ear integrates sound energy over a period of about 200 ms, known as the temporal window
  • Sounds shorter than the temporal window are perceived as less loud than longer sounds of the same intensity
  • Temporal integration is used in the measurement of sound exposure levels and in the design of warning signals (a simple equal-energy model is sketched below)
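
A common first-order model assumes the ear trades duration for level on an equal-energy basis below roughly 200 ms, so a tone half as long must be about 3 dB more intense to sound equally loud or to be detected. The sketch below implements that simple rule; real integration behavior is more complex and listener-dependent.

```python
import math

INTEGRATION_WINDOW_MS = 200.0  # approximate temporal window of the ear

def threshold_shift_db(duration_ms: float) -> float:
    """Extra level needed for a tone shorter than the temporal window
    (equal-energy assumption): 10 * log10(window / duration)."""
    if duration_ms >= INTEGRATION_WINDOW_MS:
        return 0.0
    return 10 * math.log10(INTEGRATION_WINDOW_MS / duration_ms)

for d in (10, 50, 100, 200, 500):
    print(f"{d:3d} ms tone: level raised by ~{threshold_shift_db(d):.1f} dB")
```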

Measurement of hearing

  • Measuring hearing is essential for diagnosing hearing loss, evaluating the effectiveness of hearing protection devices, and assessing the impact of noise on hearing health

Audiometry

  • Audiometry is the measurement of hearing sensitivity using standardized procedures and equipment
  • Involves presenting sounds at different frequencies and intensities and measuring the listener's response
  • Can be performed using pure tones (pure-tone audiometry) or speech (speech audiometry)
  • Results are plotted on an audiogram, which shows hearing thresholds as a function of frequency

Pure-tone audiometry

  • Measures hearing sensitivity using pure tones (sinusoidal sounds) at specific frequencies
  • Typically tests frequencies between 250 Hz and 8 kHz, which are important for speech understanding
  • Listener indicates when they hear a sound, and the lowest intensity level at which they respond is recorded as their hearing threshold
  • Can be conducted using air conduction (through headphones) or bone conduction (through a vibrator on the mastoid bone)

Speech audiometry

  • Measures the ability to understand speech at different intensity levels
  • Involves presenting standardized word lists or sentences and measuring the percentage of words correctly identified
  • Can be performed in quiet or in the presence of background noise
  • Speech reception threshold (SRT) is the lowest intensity level at which 50% of words are correctly identified

Audiograms

  • An audiogram is a graphical representation of hearing thresholds as a function of frequency
  • Frequency is plotted on the horizontal axis (logarithmic scale), and hearing threshold level is plotted on the vertical axis (linear scale)
  • Normal hearing is typically defined as thresholds between -10 and 25 dB HL across all frequencies
  • Different types of hearing loss (conductive, sensorineural, mixed) have characteristic patterns on the audiogram
  • Audiograms are used to prescribe hearing aids, monitor changes in hearing over time, and assess the effectiveness of hearing conservation programs

Key Terms to Review (41)

Audiograms: Audiograms are graphical representations of an individual's hearing ability across different frequencies, typically measured in decibels. They provide crucial information about a person's hearing sensitivity and are essential for diagnosing hearing impairments and determining appropriate interventions. By plotting results from hearing tests, audiograms help visualize the softest sounds a person can hear at various pitches, which is vital for understanding the overall health of the auditory system.
Audiometry: Audiometry is the measurement of hearing ability through various tests that assess an individual's ability to perceive sound. It plays a crucial role in diagnosing hearing impairments and understanding how well the auditory system functions. By using different frequencies and intensities of sounds, audiometry helps to evaluate the integrity of the ear, including the outer, middle, and inner ear components.
Auditory attention: Auditory attention refers to the cognitive process that enables individuals to focus on specific sounds or auditory stimuli in their environment while filtering out others. This ability is crucial for effective communication and interaction, allowing a person to discern important information from background noise. It involves selective listening, which is essential for understanding speech in various acoustic settings.
Auditory filters: Auditory filters are mechanisms in the auditory system that help isolate specific frequency components of sound while reducing the influence of other frequencies. This allows the auditory system to focus on particular sounds, like speech, amidst background noise. The way these filters work is crucial for understanding how we perceive sound and how our ear processes complex auditory information.
Auditory masking: Auditory masking occurs when the perception of one sound is affected by the presence of another sound, making it harder to hear the first sound. This phenomenon plays a crucial role in how sounds propagate outdoors, influencing what we can hear in different environments. Additionally, understanding auditory masking helps in comprehending how our ears and brain process sounds, specifically regarding how certain frequencies can obscure others within critical bands.
Basilar membrane: The basilar membrane is a flexible structure located within the cochlea of the inner ear that plays a crucial role in the auditory process by responding to sound vibrations. This membrane runs along the length of the cochlea and supports the organ of Corti, where hair cells convert sound waves into neural signals. The unique properties of the basilar membrane, such as its varying width and stiffness, enable it to differentiate between various sound frequencies.
Binaural hearing: Binaural hearing refers to the ability of humans to perceive sound using both ears, which helps in determining the direction and distance of sounds in our environment. This dual ear input enhances our auditory experience, allowing us to localize sound sources more accurately and distinguish between different pitches and frequencies more effectively, significantly impacting our overall hearing capabilities.
Cochlea: The cochlea is a spiral-shaped, fluid-filled structure in the inner ear that plays a critical role in the process of hearing. It converts sound vibrations into neural signals, which are then transmitted to the brain for interpretation. The cochlea's unique structure allows it to separate different frequencies of sound, making it essential for distinguishing between various pitches.
Cochlear Implants: Cochlear implants are electronic medical devices that bypass damaged portions of the inner ear and directly stimulate the auditory nerve to provide sound perception to individuals with severe to profound hearing loss. They consist of both external components, which include a microphone and a processor, and internal components that are surgically implanted. This technology enables users to perceive sounds, thus facilitating communication and improving quality of life for those who may not benefit from traditional hearing aids.
Conductive hearing loss: Conductive hearing loss is a type of hearing impairment that occurs when sound waves cannot efficiently travel through the outer ear canal to the eardrum and the tiny bones of the middle ear. This condition can be caused by various factors, such as ear infections, fluid in the middle ear, earwax buildup, or abnormalities in the ear structure. Understanding this type of hearing loss is crucial as it highlights the mechanical aspects of hearing and how disruptions in these processes can affect overall auditory perception.
Critical Bands: Critical bands are frequency ranges in which the human ear processes sounds. Within these bands, sounds can interfere with each other, affecting our perception of loudness and pitch. This concept is vital to understanding how we hear, particularly how sounds are blended or masked when they occur simultaneously, influencing our overall auditory experience.
Decibel: A decibel is a logarithmic unit used to measure the intensity or level of sound, providing a way to compare different sound pressures and intensities. This unit allows for a convenient representation of the wide range of sound levels that humans can hear, connecting various aspects of sound waves, their properties, and how they propagate through different materials. It also relates to how we perceive sound, the impact of noise on communities, and how our hearing mechanisms respond to changes in sound pressure.
Eardrum: The eardrum, also known as the tympanic membrane, is a thin, cone-shaped membrane that separates the outer ear from the middle ear and plays a critical role in the process of hearing. When sound waves hit the eardrum, it vibrates and converts these sound waves into mechanical energy, which is then transmitted to the ossicles in the middle ear. The proper functioning of the eardrum is essential for effective hearing, as it amplifies and relays sound vibrations to the inner ear structures.
Equal-loudness contours: Equal-loudness contours are graphical representations that show the sound pressure level at which different frequencies are perceived to have the same loudness by the average human ear. These contours demonstrate that our ears do not perceive all frequencies equally, meaning that certain sounds must be louder than others to be perceived as equally loud. This concept is essential in understanding how humans experience sound, especially in relation to the ear's sensitivity across various frequencies and its impact on loudness perception.
Frequency theory: Frequency theory is a concept in auditory perception that suggests that the frequency of a sound wave determines the pitch that we perceive. This theory posits that the rate at which neurons in the auditory system fire corresponds to the frequency of the incoming sound, meaning higher frequency sounds result in faster firing rates and lower frequency sounds result in slower firing rates.
Hearing aids: Hearing aids are small electronic devices designed to improve hearing for individuals with hearing loss. They amplify sound, making it easier for users to hear conversations and environmental sounds, enhancing their overall auditory experience. These devices consist of a microphone, amplifier, and speaker, and can be customized to meet the specific needs of the user.
Hearing protection devices: Hearing protection devices are tools designed to reduce the risk of hearing damage caused by exposure to loud sounds. They are essential in environments where noise levels exceed safe limits, helping to prevent noise-induced hearing loss and other auditory issues. Understanding how these devices work and their proper use is crucial for anyone who may be exposed to hazardous noise levels.
Hertz: Hertz (Hz) is the unit of frequency that measures the number of cycles of a periodic wave occurring in one second. This term is crucial for understanding sound waves, as it directly relates to their frequency, which determines how we perceive sound in terms of pitch and tone. Hertz connects the properties of sound waves to how they propagate through various media and how our ears perceive them, influencing our experience of music and speech.
Inner hair cells: Inner hair cells are specialized sensory cells located in the cochlea of the inner ear that play a crucial role in the process of hearing. They convert sound vibrations into electrical signals that are sent to the brain, allowing us to perceive sound. These cells are essential for normal auditory function and are responsible for transmitting most of the auditory information to the auditory nerve.
Interaural level differences: Interaural level differences refer to the differences in the sound pressure level reaching each ear when a sound source is located off to one side. This acoustic phenomenon is crucial for spatial hearing, helping individuals determine the direction of a sound based on which ear receives it with greater intensity. Understanding these differences enhances our perception of sound directionality and is vital for effective auditory processing.
Interaural time differences: Interaural time differences (ITD) refer to the small differences in the time it takes for a sound to reach each ear, which help us locate the direction of sounds in our environment. This phenomenon arises due to the physical separation of the ears, leading to a delay in sound waves reaching the ear that is farther from the sound source. ITD is a crucial element in spatial hearing, allowing for the perception of sound location in three-dimensional space.
Loudness Perception: Loudness perception refers to the subjective experience of the intensity of sound, influenced by various factors including sound frequency, duration, and the listener's hearing abilities. It connects to how we perceive music and speech, as well as the physiological aspects of hearing, the impact of background noise, and individual evaluations of sound environments.
Masking: Masking is the process by which the perception of one sound is affected by the presence of another sound, often making the first sound less audible. This phenomenon is important in various areas such as room acoustics, noise control, and audio engineering, as it can influence how sounds are experienced in a space, how noise levels are rated, and how sounds are processed by the human ear.
Noise Reduction Rating: Noise Reduction Rating (NRR) is a measure used to indicate the effectiveness of hearing protection devices in reducing noise exposure. It quantifies how much noise is reduced in decibels (dB) when the device is properly worn, providing a standardized method for comparing different types of hearing protection. Understanding NRR is crucial for ensuring adequate protection against harmful noise levels, which can lead to hearing loss and other auditory health issues.
Noise-induced hearing loss: Noise-induced hearing loss (NIHL) is a permanent or temporary reduction in hearing ability caused by exposure to loud sounds, often resulting from prolonged noise exposure in various environments. This condition can occur due to industrial environments where heavy machinery operates, community settings with traffic and loud events, or even the physiological aspects of hearing where excessive noise damages sensitive structures in the ear. Understanding NIHL is essential for developing strategies to protect hearing and mitigate its impacts across different contexts.
Occupational Exposure Limits: Occupational exposure limits (OELs) are regulatory guidelines that establish the maximum allowable concentration of hazardous substances that workers can be exposed to in the workplace over a specific time period. These limits are designed to protect workers' health by preventing excessive exposure to harmful agents, including chemicals, noise, and biological factors, which can lead to adverse health effects, particularly in relation to hearing and the ear.
Ossicles: Ossicles are three tiny bones located in the middle ear known as the malleus, incus, and stapes. They play a crucial role in the process of hearing by transmitting sound vibrations from the eardrum to the inner ear. These bones amplify sound waves, enabling us to hear a wide range of frequencies and intensities. The ossicles form a lever system that enhances the efficiency of sound transmission to the cochlea.
Outer hair cells: Outer hair cells are specialized sensory cells located in the cochlea of the inner ear that play a crucial role in hearing by amplifying sound vibrations and enhancing auditory sensitivity. These cells are key players in the process of mechanotransduction, where sound waves are converted into electrical signals, allowing us to perceive different pitches and volumes.
Perceptual organization: Perceptual organization is the process by which the brain organizes sensory input into meaningful patterns and forms, allowing us to interpret and understand our environment. This process plays a crucial role in how we perceive sounds, particularly in distinguishing different sources and understanding complex auditory scenes. It involves mechanisms like grouping and segregation that help us make sense of the sounds we hear.
Place Theory: Place theory is a fundamental concept in auditory perception that explains how we perceive different pitches based on the specific location where sound waves stimulate the cochlea's hair cells. This theory suggests that different frequencies of sound activate different areas along the basilar membrane in the cochlea, allowing the brain to identify and interpret various pitches. It connects to how we hear, how we perceive pitch and frequency, and how certain sounds can mask others.
Precedence effect: The precedence effect is a perceptual phenomenon in auditory processing where the brain prioritizes the first sound it hears in a sequence, allowing for better localization of sound sources. This effect occurs because our auditory system uses timing and intensity differences between sounds arriving at each ear to determine their origin, helping us to focus on specific sounds in complex acoustic environments. It plays a crucial role in how we perceive and localize sounds, especially in environments with multiple sound sources.
Presbycusis: Presbycusis is the gradual loss of hearing that occurs as people age, often affecting the ability to hear higher frequencies. This condition is linked to changes in the inner ear structures, auditory nerve, and brain processes that influence hearing. It typically manifests in both ears and can significantly impact communication and social interactions, leading to feelings of isolation or frustration.
Psychoacoustics: Psychoacoustics is the study of how humans perceive sound, including the psychological and physiological effects that sound can have on listeners. It involves understanding how sound waves are processed by the ear and interpreted by the brain, influencing our emotional responses, perception of loudness, pitch, and timbre. This understanding is crucial in designing spaces such as concert halls and opera houses, where the listener's experience is deeply tied to how sound is perceived and enjoyed.
Pure-tone audiometry: Pure-tone audiometry is a hearing test that measures an individual's ability to hear specific frequencies of sound, using pure tones at various pitches and volumes. This test helps determine the type and degree of hearing loss, making it essential for diagnosing auditory issues and understanding how the ear functions in relation to sound perception.
Sensorineural hearing loss: Sensorineural hearing loss is a type of hearing impairment caused by damage to the inner ear or the auditory nerve pathways leading to the brain. This condition affects the ability to hear faint sounds and can impact sound clarity, often making it difficult to understand speech, especially in noisy environments. It can result from various factors, including aging, exposure to loud noises, genetic predisposition, or illnesses affecting the inner ear.
Sound Localization: Sound localization is the process by which humans and animals can identify the origin of a sound in their environment. This capability is essential for navigating the auditory world, allowing individuals to determine where sounds are coming from, which can be crucial for communication, awareness of surroundings, and survival. Understanding sound localization involves exploring how sound interacts with physical spaces and how our auditory system processes these cues.
Speech audiometry: Speech audiometry is a clinical assessment used to evaluate an individual's ability to hear and understand speech. This process involves presenting spoken words or sentences at various intensity levels to determine the listener's threshold for speech recognition and comprehension, helping to identify hearing impairments and their impact on communication abilities.
Temporal Integration: Temporal integration is the process by which the auditory system combines sounds occurring over time to form a cohesive perception of a sound. This phenomenon allows us to perceive sounds as a continuous flow rather than as discrete events, helping in recognizing patterns in speech, music, and environmental sounds. It plays a crucial role in our ability to discern information and engage in communication effectively.
Threshold of Hearing: The threshold of hearing is the minimum sound pressure level of a sound that can be perceived by the average human ear, typically measured at a frequency of 1 kHz. This concept is crucial for understanding how we perceive sound and is influenced by various factors including the physiology of the ear and the characteristics of sound waves. It serves as a reference point in acoustics for defining sound intensity levels and helps establish a baseline for measuring auditory sensitivity.
Threshold of pain: The threshold of pain refers to the level of sound intensity at which auditory sensations become unpleasant or painful for humans. This is typically around 120 to 130 decibels, and sounds above this level can cause immediate discomfort and potential hearing damage. Understanding this threshold is crucial for evaluating noise exposure in environments, as it relates directly to hearing safety and the limits of human auditory perception.
Tinnitus: Tinnitus is a condition characterized by the perception of noise or ringing in the ears when no external sound is present. This phenomenon can arise from various causes, such as exposure to loud noises, ear infections, or age-related hearing loss. Understanding tinnitus is important because it can impact an individual's quality of life and signal underlying auditory system issues.