Mastering Music Explained: Loudness, Format, and Final Polish
Mastering is the final technical and aesthetic stage before a recording reaches listeners — the step that transforms a finished mix into a release-ready master. It covers loudness optimization, format preparation, and the subtle sonic adjustments that help a track translate across streaming platforms, vinyl cutting lathes, and broadcast systems alike. Understanding how mastering works, and where it can go wrong, matters whether the goal is a major-label release or a self-distributed single on Bandcamp.
- Definition and scope
- Core mechanics or structure
- Causal relationships or drivers
- Classification boundaries
- Tradeoffs and tensions
- Common misconceptions
- Checklist or steps (non-advisory)
- Reference table or matrix
Definition and scope
Mastering sits at the intersection of engineering and quality control. A mastered audio file is not simply a louder version of the mix — it is a version optimized for a specific delivery context, verified against technical specifications, and formatted to meet playback system requirements.
The Audio Engineering Society defines mastering as the preparation and transfer of recorded audio from a source containing the final mix to a data storage device, the master, from which all copies are made (AES). In practice, the scope has expanded considerably. A modern mastering session addresses loudness normalization for streaming, format conversion (24-bit WAV to 16-bit/44.1 kHz for CD Red Book, or DDP for replication), ISRC code embedding, sequencing for album releases, and metadata tagging.
The discipline sits downstream of music mixing fundamentals and upstream of streaming and distribution for producers. Everything a mixing engineer left unresolved — a mid-range buildup that sounds fine on studio monitors but harsh on earbuds, a stereo image that collapses in mono — arrives at the mastering stage for final correction or acceptance.
Core mechanics or structure
A typical mastering chain processes audio through a sequence of stages, each addressing a specific dimension of sound.
Equalization corrects tonal balance across the full frequency spectrum. Where mix EQ operates on individual tracks, mastering EQ operates on the sum — small moves, typically ±1 to 3 dB, with broad curves rather than surgical cuts. The goal is correcting coloration introduced by the mix room, not redesigning the sound.
Stereo imaging and mid-side processing adjust the width and mono compatibility of the mix. Mid-side EQ, which processes the center (mid) channel and the sides independently, allows a mastering engineer to add air to the high end of the room ambience without brightening the lead vocal, or to tighten low-frequency content in the mono center where bass instruments live.
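The mid-side transform itself is nothing more than a sum-and-difference operation. A minimal sketch in Python (the function names are illustrative, not from any particular library):

```python
def ms_encode(left, right):
    """Convert an L/R sample pair to mid (sum) and side (difference)."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    """Convert mid/side back to left/right; the transform is lossless."""
    return mid + side, mid - side
```

Any EQ or gain applied to `side` alone changes width without touching centered content, which is why a mono lead vocal survives aggressive side-channel brightening.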
Dynamic range processing encompasses limiting, compression, and multiband compression. A brickwall limiter is the final stage in virtually every mastering chain, setting the output ceiling (a true-peak limiter additionally catches intersample peaks that would otherwise exceed it) and, through its threshold, the integrated loudness of the master. The ITU-R BS.1770-4 standard, adopted by streaming platforms including Spotify (target: −14 LUFS integrated) and Apple Music (target: −16 LUFS integrated), defines how loudness is measured (ITU-R BS.1770-4).
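At its core, BS.1770 loudness is a channel-weighted mean-square measurement expressed in dB. The sketch below omits the standard's K-weighting pre-filter and gating stages, so it only approximates LUFS; it is meant to show the shape of the calculation, not to replace a compliant meter:

```python
import math

def simplified_loudness(channels):
    """Approximate integrated loudness (LUFS) from per-channel sample
    lists. Omits the K-weighting and gating required by ITU-R
    BS.1770-4; a teaching sketch, not a compliant meter."""
    total = 0.0
    for samples in channels:
        mean_square = sum(s * s for s in samples) / len(samples)
        total += mean_square  # unity channel weighting for L/R
    return -0.691 + 10.0 * math.log10(total)
```

A full-scale stereo sine measures close to −0.69 LUFS with this formula, which illustrates why commercial masters, sitting many dB below full scale on average, land in the −14 to −8 LUFS range.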
Dithering is applied when reducing bit depth from a 32-bit or 24-bit working file to the 16-bit required for CD. Dither adds a low-level noise signal that decorrelates quantization error from the program material, preventing distortion artifacts. Done correctly it is inaudible, but its absence produces audible degradation on quiet passages such as fades and reverb tails.
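A TPDF (triangular probability density function) dither stage can be sketched in a few lines. The helper name is illustrative, and the scale factor assumes signed 16-bit output:

```python
import random

def dither_to_16bit(sample, rng=random.random):
    """Quantize a float sample in [-1.0, 1.0] to a signed 16-bit
    integer, adding TPDF dither (two uniform noise sources summed,
    roughly +/-1 LSB peak) before rounding so the quantization error
    is decorrelated from the signal."""
    scaled = sample * 32767.0
    tpdf = rng() - rng()  # triangular-PDF noise in (-1, 1) LSB
    quantized = int(round(scaled + tpdf))
    return max(-32768, min(32767, quantized))  # clamp to 16-bit range
```

Noise-shaped dithers push this noise into less audible frequency regions, but the basic mechanism is the same: trade a tiny amount of broadband noise for freedom from correlated distortion.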
Causal relationships or drivers
Platform loudness normalization changed the economics of the loudness war. Before streaming normalization became standard, commercial masters were routinely pushed to −6 LUFS or louder, a practice that crushed dynamic range in the race for perceived loudness on radio and in retail stores. When Spotify rolled out loudness normalization in the mid-2010s (today measured per ITU-R BS.1770), tracks mastered louder than the target were simply turned down to match. The competitive loudness advantage evaporated.
The causal logic now runs in reverse: an overly compressed master gets turned down to −14 LUFS, and the listener hears a flat, dynamically lifeless track at the same volume as everything else. A track mastered at −14 LUFS with genuine dynamic range retains its transient impact and punch at exactly the same playback level.
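The playback-level arithmetic behind this is simple: a normalizing platform applies a gain equal to the target minus the measured integrated loudness. A minimal sketch (the default target follows Spotify's published −14 LUFS figure):

```python
def normalization_gain_db(measured_lufs, target_lufs=-14.0):
    """Gain in dB a normalizing platform applies so a track's
    integrated loudness lands at the target (negative = turned down)."""
    return target_lufs - measured_lufs
```

A −7 LUFS loudness-war master is turned down 7 dB, ending up at exactly the same playback level as a dynamic −14 LUFS master, but with its transients already flattened by the limiting it took to get there.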
Format requirements drive technical decisions in predictable ways. Vinyl cutting introduces a hard physical constraint: bass frequencies below roughly 300 Hz must be in mono to prevent the cutting stylus from jumping the groove. A stereo mix with wide bass requires mid-side processing specifically for the vinyl master — a separate deliverable from the streaming master. DDP (Disc Description Protocol) image files remain the format required by most CD replication plants, while broadcast delivery often requires −23 LUFS integrated per EBU R128 (EBU R128).
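The mid-side processing used for vinyl prep, often called an elliptical EQ in cutting rooms, amounts to high-passing the side channel so that low frequencies collapse to mono. A crude one-pole sketch, assuming mid/side input; the class name and cutoff are illustrative, and real cutting chains use much steeper filters:

```python
import math

class SideHighpass:
    """One-pole highpass applied to the side channel only, so content
    below the cutoff becomes mono (mid-only). Teaching sketch; real
    elliptical EQs for vinyl use steeper slopes."""
    def __init__(self, cutoff_hz, sample_rate=44100.0):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        dt = 1.0 / sample_rate
        self.alpha = rc / (rc + dt)
        self.prev_in = 0.0
        self.prev_out = 0.0

    def process(self, side_sample):
        # standard one-pole highpass difference equation
        out = self.alpha * (self.prev_out + side_sample - self.prev_in)
        self.prev_in = side_sample
        self.prev_out = out
        return out
```

Because only the side channel is filtered, the mid channel (where the bass already lives in a well-prepared mix) passes through untouched.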
Classification boundaries
Mastering splits into distinct categories based on source material and delivery target.
Stereo mastering processes a single stereo mix file — the most common scenario for independent releases.
Stem mastering processes grouped elements (drums, bass, melodic instruments, vocals) as separate files, giving the mastering engineer more corrective flexibility. This is covered in depth at stem mastering vs full mix mastering.
Vinyl mastering involves RIAA equalization, mono bass summation, level adjustments to accommodate side length (longer sides require lower levels), and close collaboration with the cutting engineer.
Broadcast mastering targets −23 LUFS integrated (EBU R128) or −24 LUFS (ATSC A/85 for US broadcast, per the Advanced Television Systems Committee) — significantly quieter than streaming targets, with strict true peak ceilings of −1 dBTP.
Restoration mastering — applied to archival or live recordings — prioritizes noise reduction, click removal, and spectral repair alongside conventional processing.
Tradeoffs and tensions
The central tension in mastering is between loudness and dynamics. Higher integrated loudness requires more limiting, which reduces dynamic range — the difference in level between the quietest and loudest passages. A track with a dynamic range (DR) value of 6, as measured by the Pleasurize Music Foundation's DR Meter tool, has been compressed significantly harder than one with a DR of 12. Neither is objectively correct, but the choice has audible consequences.
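Dynamic range in this sense is closely related to crest factor, the gap between peak and average (RMS) level in dB. A quick sketch; this is not the DR Meter's exact algorithm, which uses windowed measurements over the loudest sections of a track:

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB. Heavily limited masters have low
    crest factors; dynamic masters have high ones. Not the
    Pleasurize Music Foundation's DR Meter algorithm."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(peak / rms)
```

A pure sine measures about 3 dB; a hard-limited square wave approaches 0 dB, the mathematical floor where peaks and average are identical.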
A second tension exists between mono compatibility and stereo width. Aggressive stereo widening — achieved through M/S processing or harmonic exciters — improves the listening experience on headphones but can cause phase cancellation when the track is summed to mono, which still occurs in some club PA configurations, phone speakers, and smart home devices.
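Phase cancellation on mono fold-down is easy to demonstrate numerically: summing left and right collapses any fully out-of-phase (pure side) content to silence. A minimal sketch:

```python
def mono_sum(left, right):
    """Fold a stereo pair down to mono by averaging the channels."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]
```

This is the worst case; partial phase offsets from wideners cause partial, frequency-dependent cancellation, which is why mastering engineers audition in mono rather than trusting the stereo image alone.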
A third friction point: mastering cannot fix a bad mix. Low-end muddiness, a buried lead vocal, or a snare with no body are mix problems. Mastering can modestly address the edges of these issues, but the music mixing fundamentals stage is where those problems belong. A mastering engineer working on an unfixable mix faces the uncomfortable choice of doing less or overdoing it — neither outcome serves the music.
Common misconceptions
"Mastering makes everything louder." Streaming normalization means a louder master is turned down to match the platform target. Loudness is managed, not simply maximized.
"Mastering fixes the mix." A mastering engineer processes a stereo file — two channels. Separating a muddy kick drum from a boomy bass guitar is not possible without stem files. The audio editing fundamentals and mix stages are the correction windows.
"Any limiter setting produces a quality master." True peak limiting to −1 dBTP is standard for streaming delivery because intersample peaks — peaks that occur between digital samples and are only revealed during D/A conversion — can exceed 0 dBFS even when the waveform shows no clipping. The AES and EBU both recommend a −1 dBTP ceiling for this reason.
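Intersample peaks can be demonstrated numerically: reconstruct the waveform between samples and compare the result to the raw sample peak. The sketch below is a naive windowed-sinc oversampling estimator, not a compliant dBTP meter (BS.1770 specifies at least 4x oversampling with a defined filter):

```python
import math

def true_peak_estimate(samples, oversample=4, half_taps=16):
    """Estimate the true (intersample) peak of a signal by evaluating
    a Hann-windowed sinc reconstruction at `oversample` points per
    sample. Naive teaching sketch, not an ITU-compliant meter."""
    peak = 0.0
    n = len(samples)
    for i in range(n * oversample):
        t = i / oversample  # continuous-time position, in samples
        acc = 0.0
        for k in range(max(0, int(t) - half_taps),
                       min(n, int(t) + half_taps + 1)):
            x = t - k
            if x == 0.0:
                acc += samples[k]
            else:
                sinc = math.sin(math.pi * x) / (math.pi * x)
                window = 0.5 + 0.5 * math.cos(math.pi * x / half_taps)
                acc += samples[k] * sinc * window
        peak = max(peak, abs(acc))
    return peak
```

The classic pathological case is a full-scale sine at a quarter of the sample rate, sampled at a 45-degree phase offset: every stored sample sits at or below 1.0, yet the reconstructed waveform peaks near 1.414, about +3 dBTP.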
"Mastering is only about level." Tonal correction, stereo imaging, format conversion, metadata, and sequencing are all within mastering's scope. Level is one dimension.
Checklist or steps (non-advisory)
A standard mastering workflow proceeds through the following stages:
- Source file verification — confirm 24-bit or 32-bit WAV, appropriate headroom (typically −6 dBFS or more), no clipping, no normalization applied to the mix file.
- Reference listening — full playback on calibrated monitors and headphones before any processing begins.
- Tonal correction via EQ — identify and address frequency imbalances using broadband curves.
- Stereo imaging assessment — check mono compatibility; apply M/S processing if needed.
- Dynamic range processing — apply compression sparingly if the mix requires glue; set limiting threshold for the target integrated loudness.
- Loudness measurement — verify integrated LUFS, true peak ceiling, and short-term/momentary peaks against delivery specification.
- Dithering — apply appropriate dither algorithm (TPDF or noise-shaped) when reducing bit depth to 16-bit.
- Format rendering — export delivery files: 16-bit/44.1 kHz WAV for CD or streaming, 24-bit/96 kHz for archive, DDP image for CD replication as required.
- Metadata embedding — ISRC codes, artist, title, album, copyright year embedded in the BWF bext chunk or ID3 tags depending on format.
- Final QA playback — listen to the rendered master, not the session, from start to finish.
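Several of the verification steps above reduce to comparing measured values against a delivery specification. A sketch of such a QA gate follows; the function and field names are illustrative, and the spec values are taken from the platforms' published targets:

```python
def check_delivery(measured_lufs, measured_dbtp, spec):
    """Compare a rendered master against a delivery spec. The
    true-peak ceiling is treated as a hard failure; loudness above a
    streaming platform's target is only a warning, since
    normalization will turn the track down anyway."""
    report = []
    if measured_dbtp > spec["max_dbtp"]:
        report.append(("FAIL", f"true peak {measured_dbtp} dBTP "
                               f"exceeds ceiling {spec['max_dbtp']} dBTP"))
    if measured_lufs > spec["target_lufs"]:
        report.append(("WARN", f"{measured_lufs} LUFS is above the "
                               f"{spec['target_lufs']} LUFS target; "
                               f"the platform will turn it down"))
    return report

# illustrative spec dict built from Spotify's published figures
SPOTIFY_SPEC = {"target_lufs": -14.0, "max_dbtp": -1.0}
```

Broadcast specs invert the logic: EBU R128 and ATSC A/85 treat the loudness target itself as a hard delivery requirement, not merely a normalization reference.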
Reference table or matrix
Loudness and format targets by delivery platform (per published platform specifications)
| Platform / Format | Integrated Loudness Target | True Peak Ceiling | Bit Depth / Sample Rate | Standard |
|---|---|---|---|---|
| Spotify | −14 LUFS | −1 dBTP | 16-bit / 44.1 kHz | ITU-R BS.1770-4 |
| Apple Music | −16 LUFS | −1 dBTP | 24-bit / 44.1 kHz (lossless) | ITU-R BS.1770-4 |
| YouTube | −14 LUFS | −1 dBTP | 16-bit / 44.1 kHz | ITU-R BS.1770-4 |
| CD (Red Book) | No normalization | 0 dBFS | 16-bit / 44.1 kHz | IEC 60908 |
| Broadcast (US) | −24 LUFS | −2 dBTP | Per facility spec | ATSC A/85 |
| Broadcast (EU) | −23 LUFS | −1 dBTP | Per facility spec | EBU R128 |
| Vinyl | No fixed standard | Cutting-engineer dependent | Analog | RIAA equalization curve |
Platform specifications are subject to revision; the AES streaming loudness recommendations and individual platform help centers are the authoritative sources for current requirements.
The full landscape of production stages — from initial tracking through mix to master — is mapped at the musicproductionauthority.com home, where each discipline connects to its upstream and downstream context.