Module 2.4 · Foundation Track · How You Hear
Critical Bands & Masking
Why two great-sounding instruments can fight in a mix — and how to give each one a frequency pocket where it can be heard.
You record a vocal — sounds great in solo. You record a bass — also sounds great. You bring them up together in the mix… and somehow the vocal feels muddy, indistinct, buried. The bass feels vague, like it can't quite hold its place. Both tracks are clean, both are well-recorded, and yet together they're worse than apart. What just happened?
Welcome to auditory masking. It's not a recording problem; it's a perception problem — and it's hardwired into how your inner ear works. Your cochlea (the spiral-shaped organ that converts sound into nerve signals) has limited frequency resolution: it groups the audible spectrum into about 24 critical bands. Within each band, two competing sounds don't cleanly add — the louder one masks the quieter one, hiding it from your perception entirely. Same band = same fight. Different bands = both audible. The widget below lets you hear this for yourself with two pure tones; the lesson generalizes immediately to instruments, vocals, and full mixes.
This is the foundation behind every "EQ for clarity" move in mixing. When you cut 250 Hz on a bass to make room for a vocal, you're moving the bass out of the vocal's critical band. When you scoop the muddy 400 Hz on an acoustic guitar in a band mix, you're carving a pocket where the kick can live. Every working engineer is, consciously or not, an architect of frequency space — making sure each important element has its own band to breathe in. Stereo placement (Module 2.3) gave instruments a place in the 3D auditory space; masking awareness gives them a place in the frequency spectrum.
This widget plays two pure tones — a louder masker (A) and a quieter probe (B). Move B around and listen for it to disappear when it's in A's critical band, then reappear when it escapes. Headphones recommended.
Hear it for yourself
Press Play both. You'll hear mostly Tone A — the louder masker. Tone B is there at −25 dB, but in the same critical band as A, so your ear treats them as one signal. Press Solo Tone B to verify that B really is playing. Now move B's frequency slider far away (toward 5k or 10k) and press Play both again — Tone B will reappear because it has its own critical band.
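If you'd rather reproduce the experiment offline, the sketch below synthesizes the same two-tone setup with nothing but the Python standard library. The levels, durations, and file names are illustrative choices, not the widget's actual internals:

```python
import math, struct, wave

SR = 48000   # sample rate (Hz)
DUR = 3.0    # seconds

def tone(freq_hz, level_db, n):
    """One sample of a sine at the given frequency and dBFS level."""
    amp = 10 ** (level_db / 20.0)
    return amp * math.sin(2 * math.pi * freq_hz * n / SR)

def render(probe_hz, path):
    # Masker A: 1 kHz at -6 dBFS. Probe B: 25 dB quieter, like the widget.
    frames = bytearray()
    for n in range(int(SR * DUR)):
        x = tone(1000, -6.0, n) + tone(probe_hz, -31.0, n)
        frames += struct.pack('<h', int(max(-1.0, min(1.0, x)) * 32767))
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit
        w.setframerate(SR)
        w.writeframes(bytes(frames))

render(1100, 'masked.wav')    # B inside A's critical band: B disappears
render(5000, 'unmasked.wav')  # B far from A: B clearly audible
```

Play the two files back to back (headphones, moderate level) and you should hear the probe vanish in the first and pop out in the second.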
Going deeper
The cochlea — your inner ear's spectrum analyzer
Inside your inner ear, the cochlea is a spiraled tube about the size of a pea, lined with thousands of tiny hair cells. Each hair cell is mechanically tuned to a specific frequency — the ones near the entrance of the spiral respond to high frequencies, the ones deep inside respond to low frequencies. When sound enters your ear, different parts of the cochlea vibrate, and the corresponding hair cells fire nerve impulses that your brain interprets as pitch and timbre.
But the cochlea doesn't have perfect resolution. Each hair cell responds to a range of frequencies, not just one — and adjacent hair cells overlap in their sensitivity. The result is that two frequencies close together physically excite some of the same hair cells, and the brain can't fully separate the signals. This isn't a defect; it's how mammalian hearing evolved. But it has consequences for mixing.
Critical bands — the Bark scale
In 1961, the German psychoacoustician Eberhard Zwicker formalized this with the concept of critical bands, building on Harvey Fletcher's earlier critical-band experiments. He measured how the cochlea groups frequencies into 24 perceptual zones across the audible spectrum, named the resulting scale the Bark scale (after Heinrich Barkhausen, who pioneered the subjective measurement of loudness), and showed that masking effects are far stronger within a single band than across bands.
Critical band widths are not equal. They get wider at higher frequencies because the cochlea's resolution is logarithmic, similar to musical pitch:
| Frequency range | Approximate band width | Mixing implication |
|---|---|---|
| 20–500 Hz | ~100 Hz | Bass region — narrow bands. Kick and bass fundamentals can mask each other within tight frequency ranges. Carve carefully. |
| 500 Hz–1.5 kHz | ~150–250 Hz | Vocal fundamentals and instrument body live here. Critical bands are tight; small EQ moves matter a lot. |
| 1.5–5 kHz | ~250–700 Hz | Presence range. Bands are wider. Vocal consonants and snare crack share critical bands with cymbal clarity — very common conflict zone. |
| 5–15 kHz | ~700–2500 Hz | Air and brilliance. Wide bands mean more separation possible. Less commonly a masking battleground. |
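The band widths in the table can be approximated in closed form. Here is a minimal sketch using Traunmüller's Bark-scale conversion and Zwicker's bandwidth formula, both standard psychoacoustics approximations; the one-Bark "same band" test is a rough rule of thumb, not an exact perceptual boundary:

```python
import math

def hz_to_bark(f):
    """Traunmüller's approximation of the Bark scale (valid ~20 Hz to 15 kHz)."""
    return 26.81 * f / (1960.0 + f) - 0.53

def critical_bandwidth(f):
    """Zwicker's approximate critical bandwidth (Hz) at center frequency f."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f / 1000.0) ** 2) ** 0.69

def same_critical_band(f1, f2):
    """Rough test: tones within ~1 Bark of each other mask each other strongly."""
    return abs(hz_to_bark(f1) - hz_to_bark(f2)) < 1.0

print(same_critical_band(200, 250))    # bass region, close together: True
print(same_critical_band(1000, 5000))  # far apart: False
print(round(critical_bandwidth(100)))  # ~100 Hz, matching the table
```

Note how `critical_bandwidth` stays near 100 Hz in the bass and grows rapidly above 1 kHz, exactly the trend the table describes.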
The widget you just used shows these bands as tick marks across the rainbow spectrum. When two tones fall in the same band, they fight; when they're in different bands, they don't.
Upward spread of masking
One important asymmetry: masking is not symmetrical across frequency. A loud low-frequency masker spreads its masking effect upward (toward higher frequencies) much more than the reverse. A 200 Hz masker can mask probes up to 800 Hz; a 2 kHz masker masks probes between roughly 1 kHz and 4 kHz, but barely affects 200 Hz at all.
Practical consequence for mixing: low-end energy masks midrange and high-mid more than vice versa. This is why a muddy bass or kick will obscure vocal clarity, but a too-bright cymbal won't typically hide the bass. It's also why high-pass filters on every non-bass instrument are such a common move — removing low-end energy that wasn't doing anything useful but was masking everything above it.
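The asymmetry can be captured with a simple triangular "spreading function" on the Bark scale. The slopes used here (about 27 dB/Bark below the masker, about 10 dB/Bark above it) are textbook approximations; real slopes vary with masker level, so treat this as a sketch, not a calibrated model:

```python
import math

def hz_to_bark(f):
    # Traunmüller's Bark-scale approximation.
    return 26.81 * f / (1960.0 + f) - 0.53

def masking_threshold_db(masker_hz, masker_db, probe_hz):
    """Approximate level (dB) a probe must exceed to be heard near a masker.

    Triangular spreading on the Bark scale: steep slope toward lower
    frequencies, shallow slope toward higher ones (upward spread).
    """
    dz = hz_to_bark(probe_hz) - hz_to_bark(masker_hz)
    slope = 10.0 if dz > 0 else 27.0   # dB per Bark
    return masker_db - slope * abs(dz)

# An 80 dB masker at 200 Hz reaches much further up than down:
print(round(masking_threshold_db(200, 80, 400)))  # threshold stays high above
print(round(masking_threshold_db(200, 80, 100)))  # falls off quickly below
```

Even in this crude model, a probe an octave above the masker needs roughly 10 dB more level to be heard than one an octave below it.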
Pre-masking and post-masking (temporal masking)
Frequency-domain masking isn't the only kind. The cochlea also has limited time resolution, which gives rise to temporal masking:
- Pre-masking — a loud sound can mask a quieter sound that occurs just before it (up to ~20 ms earlier). Counterintuitive, but the brain's processing window is finite.
- Post-masking — a loud sound can mask a quieter sound that occurs just after it (up to ~200 ms later). This is much stronger than pre-masking.
Practical consequence: a snare hit can mask quieter sounds for tens of milliseconds after it. This is why "transient design" is a real category of plugin (Sonnox Oxford TransMod, SPL Transient Designer, iZotope Neutron Transient Shaper) — adjusting the attack and sustain of percussive elements changes how much they mask other content. We'll cover this further in Track 2.
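A transient shaper of the kind named above can be sketched with two envelope followers, one fast and one slow, whose difference isolates the attack portion of the signal. The time constants and the gain law here are illustrative; this is not any of those plugins' actual algorithms:

```python
import math

def envelope(signal, sr, attack_ms, release_ms):
    """One-pole peak follower: rises with the attack time, falls with release."""
    a = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, []
    for x in signal:
        x = abs(x)
        coef = a if x > env else r
        env = coef * env + (1 - coef) * x
        out.append(env)
    return out

def transient_shape(signal, sr, attack_gain_db=6.0):
    """Boost (or cut) the attack of percussive material."""
    # Fast follower tracks the hit; slow follower tracks the body.
    fast = envelope(signal, sr, 1.0, 50.0)
    slow = envelope(signal, sr, 30.0, 50.0)
    g = 10 ** (attack_gain_db / 20.0)
    out = []
    for x, f, s in zip(signal, fast, slow):
        transient = max(f - s, 0.0) / (f + 1e-9)  # 0..1 "transient-ness"
        out.append(x * (1.0 + (g - 1.0) * transient))
    return out
```

With a negative `attack_gain_db`, the same code softens the hit instead, which is one way to reduce how much a snare masks what follows it.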
"If you can't hear it in the mix, it's probably not gone — it's masked. Cut what's hiding it before reaching for a boost." — FTM, paraphrasing the spirit of Mike Senior's Mixing Secrets
The classic masking conflicts in real mixes
Certain frequency conflicts come up in nearly every mixing session. Learn to recognize them and you'll spot masking far faster than you could as a beginner:
- Kick vs. bass — both fight for the 60–250 Hz pocket. Standard solution: kick takes 80–100 Hz fundamental, bass takes 100–200 Hz. Or sidechain bass to kick for rhythmic separation in time rather than frequency.
- Vocal vs. bass / kick — the vocal's chest body (200–400 Hz) is the same range as the bass's "warmth" and the kick's "boxiness." Cut the bass and kick at 250 Hz to make room for the vocal.
- Vocal vs. snare — the vocal's presence range (3–5 kHz) is the same range as the snare's crack. The snare often wins in pop mixes; the vocal needs careful EQ in this range to maintain consonant clarity.
- Acoustic guitar vs. piano vs. vocal — these three instruments all cluster in the 200 Hz–2 kHz range. In an arrangement with all three, every one needs surgical EQ. Often a producer will simplify the arrangement instead of trying to EQ all three apart.
- Cymbals vs. high-mid content — overheads spread broadband energy from 1 kHz up. They mask vocal "air" and string-bow noise. Often controlled by EQ shelving on the overheads.
- Pad vs. everything — synth pads are harmonically dense and cover a huge swath of the spectrum. Aggressive HPF (200 Hz+), wide cuts in the 200–600 Hz range, and careful midrange sculpting are essential to keep pads from drowning the mix.
Track 2 (Mixing) treats these conflicts in detail — module by module, instrument by instrument. For now, the key is recognizing that masking is the underlying mechanism: every conflict above is two instruments fighting for the same critical band(s).
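The "cut the bass at 250 Hz" moves above all boil down to the same tool: a peaking EQ band. Here is a minimal sketch using the standard RBJ "Audio EQ Cookbook" peaking-filter coefficients; the 250 Hz / −4 dB / Q=1 settings are example values, not a recipe:

```python
import math

def peaking_eq(signal, sr, f0, gain_db, q=1.0):
    """Apply one peaking EQ band (RBJ Audio EQ Cookbook biquad)."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a_lin
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a_lin
    a0 = 1 + alpha / a_lin
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a_lin
    # Direct Form I, normalized by a0.
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Carve -4 dB at 250 Hz out of a bass track to make room for the vocal:
sr = 48000
bass = [math.sin(2 * math.pi * 250 * n / sr) for n in range(sr)]
carved = peaking_eq(bass, sr, f0=250, gain_db=-4.0)
```

Every DAW's stock EQ implements some variant of this filter; the point is that a few dB of cut in the masker's band is usually all the separation you need.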
Why solo mixing is dangerous
The most common amateur mistake is mixing in solo. A track sounded great alone, so you decide it's "good." But the moment it sits in a full mix, masking changes everything. The track that sounded crisp and clear in solo might be entirely buried; or it might itself be masking a more important element.
Working engineers' rule: solo to identify problems, evaluate in context. Solo a track to find a specific issue (a click, a noise, a wrong note). But never decide whether it sounds "right" without listening to it in the mix. Masking is contextual — you can't predict it from solo listening.
The "every instrument has a pocket" principle
Module 2.3 introduced the idea of stereo placement — every instrument has a place in the 3D auditory space. Now we add the frequency dimension: every instrument also needs a place in the spectrum. The two together define the instrument's position in the mix:
- Stereo position — where it sits L to R
- Frequency position — which critical bands it primarily occupies
- Depth (covered in Module 6 — Reverb & Delay) — how forward or far back it feels
- Dynamics position (covered in Module 4 — Compression) — how prominent it is moment-to-moment
A great mix gives every important element its own pocket across all four of these dimensions. When something feels lost, ask which dimension it's losing in. Often it's masking, and the fix is in frequency, not volume.
In your DAW — visualizing masking with spectrum analyzers
The widget showed masking with two clean tones. In a real mix, masking happens between complex sources with hundreds of frequencies each. A spectrum analyzer reveals where two tracks overlap visually — making invisible conflicts visible. Workflow:
- Insert a spectrum analyzer on each of two suspected-conflicting tracks (e.g., vocal and bass).
- Use an analyzer that supports spectrum overlay — showing both tracks' frequency content on the same chart.
- Identify regions where both tracks have significant energy at the same frequencies.
- Cut the less-important track in those regions (carving).
- Listen — both should now have their own distinct presence.
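The overlay workflow above can also be approximated in code: measure each track's energy at a set of band centers and flag the bands where both tracks are loud. This sketch uses the Goertzel algorithm (a single-bin DFT); the band centers, threshold, and track contents are made-up illustrations:

```python
import math

def goertzel_power(samples, sr, freq):
    """Signal power at one frequency (single-bin DFT via Goertzel)."""
    w = 2 * math.pi * freq / sr
    coeff = 2 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    n = len(samples)
    mag2 = s1 * s1 + s2 * s2 - coeff * s1 * s2
    return mag2 / (n * n / 4.0)   # ~amplitude^2 for a pure tone at this bin

def masking_candidates(track_a, track_b, sr, centers, floor=0.01):
    """Band centers where BOTH tracks carry significant energy."""
    return [f for f in centers
            if goertzel_power(track_a, sr, f) > floor
            and goertzel_power(track_b, sr, f) > floor]

# Toy example: "vocal" has energy at 200 Hz and 3 kHz, "bass" only at 200 Hz.
sr, n = 48000, 4800
vocal = [0.5 * math.sin(2 * math.pi * 200 * i / sr)
         + 0.4 * math.sin(2 * math.pi * 3000 * i / sr) for i in range(n)]
bass = [0.8 * math.sin(2 * math.pi * 200 * i / sr) for i in range(n)]
print(masking_candidates(vocal, bass, sr, centers=[200, 1000, 3000]))
```

Only 200 Hz is flagged: that's the region where the two toy tracks genuinely overlap, and the first place you'd reach for an EQ cut.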
| DAW / Tool | Stock spectrum analyzer | Notes |
|---|---|---|
| GarageBand | (none stock — use free third-party) | Use Voxengo SPAN (free) — drop on each track, overlay views. |
| Logic Pro | Multimeter, Spectrum Analyzer (built into Channel EQ) | Channel EQ has a real-time spectrum analyzer built in — toggle the analyzer button. Multimeter for full-spectrum visualization. |
| Ableton Live | Spectrum, Spectrogram | Spectrum (Audio Effect) gives real-time analysis. Spectrogram (in Live 11+) shows frequency content over time. Both excellent. |
| Pro Tools | (none truly stock — see Voxengo SPAN below) | Pro Tools' built-in metering is loudness-focused, not spectral. Add a free third-party. |
| Reaper | JS: Spectrum Analyzer Meter | Stock JS plugin. Functional. ReaEQ also has spectrum analysis built in. |
| FL Studio | Wave Candy | Stock plugin with spectrum analyzer mode. Functional. |
Free cross-DAW reference: Voxengo SPAN is universally available, free, and supports group/overlay analysis (analyze multiple tracks simultaneously to see masking). FabFilter Pro-Q 3 (paid) has the most advanced masking-detection features — its Spectrum Grab and EQ Match tools surface masking conflicts directly on the EQ interface. iZotope Neutron 4 (paid) has a Mix Assistant that automatically identifies masking conflicts across multiple tracks and suggests EQ moves. None of these are required to do good mixing — but if you're working on dense arrangements, a good analyzer pays for itself.
Common mistakes
- Boosting in the mix when masking is the issue. A buried vocal isn't usually fixed with +3 dB at 4 kHz. It's fixed by cutting whatever's masking it (bass, kick, busy guitars). Always try cuts on the masker before boosts on the masked.
- Trusting solo decisions. Every track sounds great alone. The mix is where masking lives. Make decisions in context, not in solo.
- Trying to fix everything with EQ on one track. Sometimes the OTHER track needs to move out of the way. If two tracks fight, cut the less-important one, not the more-important one.
- Not high-pass filtering. Low-end energy spreads upward in masking. A vocal track with low-frequency rumble isn't just adding rumble — it's masking everything from 200 Hz up. HPF aggressively (100 Hz+ on most non-bass sources).
- Ignoring spectrum analyzers. Some engineers pride themselves on "mixing by ear only." That's fine for experienced mixers — but for everyone else, an analyzer reveals masking conflicts faster than ear training can.
- Trying to make everything bright/present. When every instrument is competing for the 3–6 kHz range, the mix sounds harsh and confused. Not every element needs presence; some elements should sit BACK to leave room for the leads.
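The high-pass advice above is easy to demonstrate in code. This sketch uses a simple one-pole high-pass, which at 6 dB/octave is gentler than the 12–24 dB/octave filters you'd typically reach for in a DAW; the 100 Hz cutoff is the illustrative setting from the bullet list:

```python
import math

def one_pole_highpass(signal, sr, cutoff_hz):
    """First-order (6 dB/octave) high-pass filter."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sr
    a = rc / (rc + dt)
    out, y, x_prev = [], 0.0, 0.0
    for x in signal:
        y = a * (y + x - x_prev)  # passes changes, blocks the slow-moving DC/rumble
        x_prev = x
        out.append(y)
    return out

# 50 Hz rumble is attenuated; 1 kHz vocal content passes almost untouched:
sr = 48000
rumble = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]
voice = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(sr)]
hp_rumble = one_pole_highpass(rumble, sr, 100.0)
hp_voice = one_pole_highpass(voice, sr, 100.0)
```

The rumble loses roughly half its level while the 1 kHz content is essentially unchanged, which is exactly the trade a mix HPF is making: remove masking energy below the cutoff without touching the content you care about.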
Next up · Module 2.5
Phase & Polarity