Mixing Stereo for Broadcast: What’s Fake, What’s Real, What Matters, and What Doesn’t


In addition to my work in professional audio, I’m a bit of a Hi-Fi enthusiast. I have a modest listening room at home and enjoy exploring different styles and eras of music and studio production. The Hi-Fi world, often referred to as the “audiophile” community, is full of people who actually hate the term. Just like I don’t really call myself an “engineer.” I don’t even have the cool hat. I prefer the term “mixer.” In my writing, I only use “audio engineer” as a convenient way to distinguish me (a mixer, one who mixes) from the console (a mixer, the thing I mix on).

Here’s the thing: many audiophiles have little to no understanding of how recordings are actually made. A common misconception in that world is that the highest goal is to reproduce the “live” experience in your living room. And some of these folks have spent as much as a medium-sized church does on their PA to try to get there. I attended a demo at a Hi-Fi show where a guy commented that he could “hear the guitarist step forward to take his solo.” Odds are, if you’re reading this, you already know, “bro, that’s not how it works.”

Most stereo imaging, especially when it comes to things like “depth of field” or left/right placement, is simply a result of mixing decisions. Microphone placement plays a role, but even that is less of a factor in modern recordings. Since the invention of the microphone, amplification, consoles, and multi-track recording, music reproduction has steadily become more “fake.” I don’t mean that as a criticism. It’s just a reality. And we like what we hear.

There’s some debate about mixing mono vs stereo in auditoriums (see my article on Four Church PA Controversies). But today we’re talking about stereo mixing for broadcast. While people can argue about mono in live spaces, I don’t know anyone who advocates for mono in a broadcast mix. As one well-respected FOH mixer put it, “God gave us two ears and I like to use them both.” Immersive formats like Dolby Atmos are certainly interesting, but for now, stereo is the standard. So let’s talk about how we approach stereo for broadcast and what really matters.

What’s Fake and What’s Real.

Spoiler: most of it is fake. Let’s start with a couple of examples.

Back in the early days of stereo, EMI famously re-mixed and released “stereo” versions of Beatles albums that were originally mixed for mono. The result was sometimes fascinating, sometimes confusing: vocals in one ear, drums in the other, bass floating off to the side. There was often no attempt to complement the music or preserve realism. Who am I to criticize the Beatles? But most serious musicians (and audiophiles) still prefer the mono versions. Because as it turns out, random panning is just ear candy.

Over time, engineers and producers began to adopt what we now recognize as studio standards—a kind of “factory default” for stereo mixing. It’s like the digital speedometer on your car. Is it perfectly accurate? I don’t know. But Honda tells me it is, and it seems to work for most of us. Our generally accepted stereo principles are kind of like that.

Take the grand piano. The typical mic setup for recording a grand is a pair of condensers, usually in an XY or AB pattern or some variation, placed over the harp. That signal is then hard-panned left and right on the console. The result is a stereo image of low-end on the left, high-end on the right, with mids through the center. But if you’ve ever stood near an unamplified grand piano, you know that’s not how it actually sounds. To hear what the mics are capturing, you’d have to stick your head into the piano itself, your nose just above middle C, and hope the lid doesn’t slam shut. Even sampled keyboards replicate this sound now, because it’s what we’ve come to expect. Nearly every piano recording uses this approach, including artists who consider themselves purists. It’s not how pianos really sound. But it does sound very good. So we keep doing it.

Now for drums. Nobody in the room hears a drum kit like a spaced pair of overheads panned hard left and right. If the hi-hat is all the way in your right ear and the floor tom all the way in your left, congratulations, you are either the world’s shortest left-handed drummer or you’ve been mixed that way. It’s fake. But it’s fun. We pan toms from right to left (high to low) because it gives motion and energy. Nobody thinks the toms are really that big and that far away from each other. It’s just the standard. And it works.

Electric guitar players often send stereo pairs from their pedalboards, packed with delay and reverb. Odds are, when they were developing their sounds, they were listening on headphones. Their stereo effects often rely on slightly different signals in the left and right channels, and some of those effects can disappear entirely when summed to mono.

That’s where broadcast mixers have to start making decisions.

Let’s accept the industry standard on piano and drums. But when I’ve got two electric guitarists, each sending stereo from their boards, I’ll sometimes narrow the spread on one or both of them for the sake of clarity. And I always check for phasing issues when collapsing stereo images tighter.
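
If you want to put a number on that phase check rather than just listening to the mono fold-down, a quick script helps. Here is a minimal Python sketch, assuming you already have the left and right channels as NumPy float arrays; the function name and the 8 ms test signal are only illustrative. A correlation near +1 and a small mono level drop suggest the pair will survive being narrowed or summed.

```python
import numpy as np

def mono_compatibility(left: np.ndarray, right: np.ndarray) -> dict:
    """Rough phase check for a stereo pair of equal-length float arrays."""
    def rms(x):
        return float(np.sqrt(np.mean(np.square(x))))

    # Correlation near +1 means the two sides mostly reinforce each other in mono;
    # values near zero (or negative) warn of cancellation when summed.
    correlation = float(np.corrcoef(left, right)[0, 1])

    # Compare the mono fold-down level against the average level of the two sides.
    mono = 0.5 * (left + right)
    drop_db = 20 * np.log10((rms(mono) + 1e-12) / (0.5 * (rms(left) + rms(right)) + 1e-12))

    return {"correlation": round(correlation, 3), "mono_drop_db": round(float(drop_db), 1)}

# Quick demo: a "stereo" pair that is really just an 8 ms offset between sides
# (wrap-around from np.roll is fine for a demo signal).
sr = 48000
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 220.0 * t)
right = np.roll(left, int(0.008 * sr))

print(mono_compatibility(left, right))
```

With real program material the numbers move around, but the habit is the same: fold it to mono, look and listen, then decide how tight you can pull the image.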

Bass guitar? No debate. No matter where it is on stage, it’s always straight up the middle.

So what about vocals?

We’ve already accepted that most stereo imaging choices aren’t literal. Whether it’s pianos, drums, or electric guitars, most of it is shaped to sound good, not to match what’s visible on stage. So should vocals be the exception? Do we suddenly start panning each singer based on their camera shot or where they’re standing?

My answer: sometimes yes, but very often no.

If it helps create separation or improves clarity, I’m all for it. Regardless of the visuals, I tend to pan background vocals conservatively, rarely wider than 10 and 2 on the clock face. And remember this: if everything is wide, then nothing is. Here’s the spoiler: if you spread your vocals too far, you might actually be hurting the mix for most of your online audience. Ask yourself: how wide is the stereo image on an iPhone?

Now for the practical side.

In church, we rarely have a single lead vocalist. Different songs mean different leaders. If you’re like me and prefer the lead vocal in the center, with others spread slightly based on their stage position, then you need a way to adapt. I use scenes to manage this. I also use song-specific delay settings on the lead vocal only. (See my article When Church Audio Engineers Make a Scene.)

Let’s say there are seven singers on stage. The center-stage vocalist leads the first song, so I pan them dead center. The others are placed around them, roughly matching their physical locations. But then the next song features the person on far stage left. At that point, I pan them center and shift the previous lead back to their side position.

So yes, I try to honor the stage layout, until it no longer serves the mix. When it doesn’t, I do what’s best for clarity. I tend to pan background vocals around the lead to keep the vocal blend tight. Matching the stage layout can be fun, but sometimes that fourth wall has to be broken.

If I’ve got two male background vocalists, I’ll usually pan them to opposite sides for balance. Same with two female vocalists. I’m also thinking about who’s singing which part. If someone is doubling the lead, I might spread them a little wider. Harmony parts tend to sit a little closer to the center. Some mixers flip that approach, and that’s fine too.

The goal is never to show off the stereo field. It’s to support the lead, protect clarity, and make sure the vocals feel unified, no matter where they’re standing.

So What Can Be Panned Wide?

One often overlooked contributor to a wide mix is the acoustic guitar. A simple trick I like is to double-patch the acoustic into two channels, creating a fake stereo image. Treat both channels identically in terms of EQ and compression, but delay one of them by a few milliseconds, anywhere from 6 to 12 ms, depending on your system, then pan them hard left and right.

The result is a subtle doubling effect that adds shimmer and width to the mix. It lets the acoustic guitar sit comfortably without needing to be loud. While electric guitars often bring weight and depth, this kind of acoustic treatment helps frame the mix with width and adds a sense of space that reaches beyond the speakers.

A word of caution: test this in mono to make sure you’re not introducing phase cancellation. The goal is for both sides to carry the same program content, with just enough delay to create separation without causing artifacts. (See my article Whatever Happened to Acoustic Guitars in Worship?)
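
For anyone who wants to see the idea on paper before patching it, here is a minimal NumPy sketch of the double-patch trick under those same assumptions: a mono source, one copy delayed by roughly 8 ms (just one value inside the 6 to 12 ms range above), and a quick look at what the mono fold-down does to the level. The test tone and function names are placeholders, not a prescription for any particular console.

```python
import numpy as np

def fake_stereo(mono: np.ndarray, sample_rate: int = 48000, delay_ms: float = 8.0):
    """Double-patch a mono source into a wide pair by delaying one copy slightly."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    # Both copies would get identical EQ and compression upstream;
    # only the timing offset is modeled here.
    left = mono
    right = np.concatenate([np.zeros(delay_samples), mono])[: len(mono)]
    return left, right  # pan these hard left and hard right

def mono_fold(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """What a single phone speaker hears: both sides summed to one channel."""
    return 0.5 * (left + right)

# Placeholder test tone standing in for the acoustic DI track.
sr = 48000
t = np.arange(sr) / sr
acoustic = np.sin(2 * np.pi * 196.0 * t)  # G3

L, R = fake_stereo(acoustic, sr, delay_ms=8.0)
mono = mono_fold(L, R)

# The delay comb-filters the mono fold-down; check for deep notches in the
# range that matters to you before settling on a delay time.
def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

print("mono level vs. source:", round(20 * np.log10(rms(mono) / rms(acoustic)), 1), "dB")
```

The wide pair sounds great in stereo; the mono check is what tells you whether your online audience on a single speaker is paying for it.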

I also find that auxiliary percussion — like shakers and tambourines — works well when panned wide. These aren’t essential rhythm drivers, so you can get creative without muddying the core of your mix.

Another creative tool for width, depth, and space is a well-placed vocal delay, exclusively on the lead vocal. Try a quarter-note delay on one side and a dotted eighth on the other, matched to the tempo of the song. Done right, it doesn’t distract. It just adds dimension. It feels good, opens up the mix, and helps a strong vocal live in a fuller stereo environment without simply turning it up.
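
If your delay doesn’t sync to tempo automatically, the math is simple: a quarter note is 60,000 ms divided by the BPM, and a dotted eighth is three quarters of that. Here is a tiny sketch of the conversion; the function name is just for illustration.

```python
def delay_times_ms(bpm: float) -> dict:
    """Tempo-synced times for the left/right lead-vocal delay described above."""
    quarter = 60000.0 / bpm  # one beat, in milliseconds
    return {
        "quarter_note_ms": round(quarter, 1),           # one side
        "dotted_eighth_ms": round(quarter * 0.75, 1),   # the other side
    }

# Example: a song at 72 BPM
print(delay_times_ms(72))  # {'quarter_note_ms': 833.3, 'dotted_eighth_ms': 625.0}
```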

And if you’re lucky enough to have a choir or a string section (even if it’s in your tracks), that’s a perfect chance to stretch out and use the full stereo field. I love mixing choir and strings whenever I get the opportunity. Nice and wide.

Room mics bring natural space and realism that reverb plugins can’t replicate. A little real air from well-placed ambient mics can do more than a truckload of artificial space.

When audience or ambient mics are blended in subtly and panned wide, they add just enough natural depth, space, and energy to help the listener feel like they’re in the room without muddying the mix. I don’t push them forward, but I do let them sort of frame the mix. I try to land in that sweet spot where no one really notices them. Like the vocal delay, they’re not the main focus, but you’d miss them if they weren’t there.

What Works and Why We Do It.

Mix decisions should always serve the listener. But we’re still artists at heart. So yes, we notice the little things, even if no one else does. That attention to detail is how we stay sharp. It’s how we bring our best every time.

What Matters Most.

Vocals. Stereo imaging is fun, creative, and impactful, but too much width can hurt clarity, especially in the center where the lead vocal lives. That doesn’t mean collapse everything to mono. It just means protect the anchor. Create space with intention, not just because you can.

One Last Thing.

A while back, I wrote an article called Does Your Broadcast Mix Pass the iPhone Test? At the time, more than 80 percent of people were consuming church services on phones, tablets, laptops, or desktop speakers. That’s still true today.

It’s funny, and a little sad, that those Beatles “stereo” mixes didn’t exactly shine on the 3-inch speakers inside transistor radios in the 60s. Today, we’re streaming full mixes through phones with speakers ten times smaller. Different decade, same problem. Another generation, missing half the picture. 

So yes, enjoy stereo. Use it wisely. Just remember: you’re not mixing for the guy in the boutique headphones. You’re mixing for everyone. And most of them are in the digital equivalent of the cheap seats. And how do we do that? 

We don’t forget to listen.
