Grammy Winner Buckley Miller Talks About Designing A Soundstage From the Mix Engineer’s Perspective

 

Buckley Miller is a mix and recording engineer working out of Nashville. In addition to engineering Zach Williams’ 2018 Grammy Award-winning album Chain Breaker, Miller has worked with the likes of Josh Groban, Lady Antebellum, Mindy Smith, David Archuleta, Eagle-Eye Cherry, Ingrid Michaelson, Dierks Bentley, and Alternate Routes, among scores of other artists.

 

Originally from Atlanta, Buckley worked with Reid Shippen (Shania Twain, Uncle Kracker, CeCe Winans, Death Cab for Cutie, Robert Randolph, Kenny Chesney, Backstreet Boys, among others) before breaking out on his own in 2006. Miller lives in East Nashville with his wife Shannon, their three children, and a cat with no name.

 

KEF: When you prepare a soundstage during a mix, what do you want the listener to get out of it?

Miller: I think it really varies depending on what I’m doing. If it’s a big, dense pop mix that’s got a lot of stuff going on, where everything is pushed up front, sometimes that’s more about width than the sounds coming from the sides.

 

KEF: What’s the biggest challenge in a dense mix?

Miller: It’s really challenging to make everything fit together. Sometimes you can get bogged down in trying to make everything impactful and sounding really cool and you can sort of lose the song. For me, it’s about finding the stuff that’s really important [to the song] and featuring that.

 

KEF: Are you getting guidance from the producer or artist, or are you using your own ears, or is it a mixture of both?

Miller: A lot of times I have a pretty good idea what their balances were. I always ask for a rough mix so I can hear what they’ve been listening to. Sometimes I’ll dig into a track and I’ll have a certain part buried and I’ll flip over to the rough mix and realize they’ve got the same part really loud, and that it was a big driving force for them and so I’ve got to recalculate. 

 

If they’ve been in Pro Tools, when I get the tracks I’ll try to start from where they’ve left off. So much of the production is done with plug-ins after it’s been recorded that taking a session and just chucking all of their plug-ins just doesn’t work anymore.

 

And sometimes I get stuff that’s so well-done and so well-presented that it’s more about just seeing if I can take it up a notch from there. It’s sort of an undefinable thing, especially in those cases where it’s just a matter of taking it up another 10%; then it’s about depth and width and impact.

 

KEF: For KEF, a three-dimensional soundstage is an extremely important part of the listening experience. As a mix engineer, is making the soundstage impactful a conscious effort?

Miller: Absolutely. I think that’s a big part of what I do. A lot of what I focus on is just that: making [the mix] more expansive, deeper, more impactful.

 

KEF: So how do you take something that is ostensibly a two-dimensional thing – a pair of speakers – and make it so a listener can feel the height and depth as well as the width?

Miller: There are a number of ways to do that depending on genre. If there are live instruments, a lot of that depth is already in there; there’s an inherent depth just because of the natural timbre of a drum kit in a room or an acoustic guitar being captured by a microphone. When there are rooms and microphones involved you sort of get that naturally.

 

In a lot of productions nowadays there aren’t a lot of live tracks, and that’s where it can get more challenging and sometimes more fun to put stuff in a specific space. Sometimes that’s done with delay or reverb, and sometimes a mixture of reverbs, to put the sounds into a different place in the soundstage.

 

KEF: Does EQ play a role in that?

Miller: It does, but for me in terms of depth, if I’m thinking about trying to push something forward or back, before EQ I would usually use a lot of harmonic saturation, not distortion, but harmonics for colorization and saturation. That’s a big way that I can bring something forward. It sort of acts like EQ but with harmonics you can really push a certain frequency in that instrument forward. With time-based effects (delay and reverb) and harmonic saturation you can really affect the depth of a mix.
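For technically curious readers, the saturation-for-depth idea Miller describes can be sketched in a few lines of NumPy. This is a hypothetical illustration of how a waveshaper adds harmonics to a signal, not a reconstruction of his actual plug-in chain:

```python
import numpy as np

def saturate(signal, drive=2.0):
    # Soft-clip waveshaper: tanh saturation adds odd harmonics,
    # which can push a sound "forward" without conventional EQ.
    # Dividing by tanh(drive) keeps the peak level near unity.
    return np.tanh(drive * signal) / np.tanh(drive)

sr = 48_000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 440 * t)          # pure 440 Hz tone

dry = np.abs(np.fft.rfft(sine))
wet = np.abs(np.fft.rfft(saturate(sine)))

# With one second of audio the FFT bins are 1 Hz apart, so bin 1320
# is the 3rd harmonic: essentially absent dry, clearly present wet.
third_harmonic_appeared = wet[1320] > 1000 * dry[1320]
```

The point of the sketch is that saturation creates energy at new frequencies related to the source, which the ear reads as presence and forwardness, whereas EQ can only rebalance frequencies that are already there.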

 

KEF: When a listener sits down to seriously listen to a piece of music, what are some of the things they should listen for in a mix that they might otherwise overlook?

Miller: What comes with that question is how people are listening now: on their laptops or on their phones. In 2006, when I was first getting started, people were still mixing on consoles and people were still listening to music on stereos. Because people were listening on nice stereos – or at least two speakers that were separated from each other – you could hear width. That used to be a big thing, like ‘this guy’s mixes are super-wide.’ That’s still a thing, but it’s not nearly as much of a thing as it used to be. To me, it’s not something I worry about as much as I do depth because a lot of people are listening in essentially mono environments. Even if the speakers are very close, panning and width can still mask or unmask certain things – it’s still important – but I focus more on depth and height. So I would probably tell someone who’s sitting in front of a nice pair of speakers to listen front to back and top to bottom.

 

KEF: Are you able to turn your ears off from being a professional mixing guy to a regular music fan, or are there things you can’t ignore that drive you crazy?

Miller: It’s harder for me to do it now. I think, like anyone who does this all the time, you’re kind of obsessed so you’re always analyzing. A lot of times what I’ll listen to for fun is so different from what I’m working on, and that helps me turn my brain off. Lately I’ve been listening to a lot of Cuban music – which, speaking of depth, was recorded live in the ’50s and ’60s in these huge open studios, and you can hear the room in each recording.

 

KEF: Does it bother you on any level that there’s so much that goes into [producing and mixing a song] and someone’s listening on a laptop or a small mono Bluetooth speaker or is that just part of the equation now?

Miller: You have to mix for your market. Nothing is really distributed on CD anymore; it’s all pretty much streaming, and I don’t think you can ignore that.

 

It’s a little bit of a bummer at times. There are certain projects where you just want [everything in the mix] to be heard, but it still has to translate to an iPhone or whatever because that’s what people will be listening to it on. It’s changed the way I approach certain things – especially bass. If you mix bass the way people used to – like even ten years ago – people are not going to hear it [on their devices], so now your record no longer has bass. I spend a lot of time trying to generate upper harmonics or do things that will at least make you sense the presence [of bass] on a speaker that doesn’t go below 800 Hz or whatever. That’s where depth really translates. Even on a crappy smartphone speaker you can hear when something has dimensionality. [laughs] And even if it didn’t translate I would still obsess over it.
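Miller’s trick of generating upper harmonics relies on the psychoacoustic “missing fundamental” effect: a speaker that can’t reproduce 60 Hz can still reproduce 180 Hz and 300 Hz, and the ear infers the bass pitch from that harmonic series. A hypothetical NumPy sketch of the idea (the function name, cutoff, and mix amount are illustrative, not his settings):

```python
import numpy as np

def add_bass_harmonics(signal, sr, cutoff=150.0, amount=0.3):
    # Isolate the low end with a crude FFT brick-wall filter,
    # waveshape it to generate upper harmonics, then blend those
    # harmonics back into the full mix. The fundamental is untouched;
    # the added 180 Hz, 300 Hz, ... components carry the pitch cue
    # on speakers that roll off below the fundamental.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    low = np.fft.irfft(np.where(freqs < cutoff, spectrum, 0), len(signal))
    return signal + amount * np.tanh(4.0 * low)

sr = 48_000
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 60 * t)           # 60 Hz fundamental
out = add_bass_harmonics(bass, sr)
mags = np.abs(np.fft.rfft(out))             # 1 Hz bins: check 180 Hz, 300 Hz
```

Real bass-enhancer plug-ins use far more refined filtering and level tracking, but the principle is the same: put the bass’s harmonic fingerprint where small speakers can actually play it.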

 

I’m more annoyed that you can get a 1080p movie streamed to your TV or downloaded really quickly to your laptop, but with music we’re still at 16-bit, 44.1kHz. Even just 24-bit, 48kHz makes a huge difference. People have tried – Neil Young with Pono, and Tidal with Hi-Res – but it doesn’t seem to be catching on and that frustrates me. When something becomes the standard [like lo-res streaming] it seems to hang on for a long time even though there are no technological limitations [to hi-res streaming] anymore.
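Miller’s point about technological limitations holds up to back-of-the-envelope arithmetic. Uncompressed stereo PCM data rates are easy to compute (the 1080p comparison figure below is a rough assumption; actual video bitrates vary widely):

```python
def pcm_kbps(bits, sample_rate, channels=2):
    # Uncompressed PCM data rate in kilobits per second:
    # bits per sample x samples per second x channels.
    return bits * sample_rate * channels / 1000

cd_quality = pcm_kbps(16, 44_100)   # 1411.2 kbps (16-bit / 44.1 kHz)
hi_res = pcm_kbps(24, 48_000)       # 2304.0 kbps (24-bit / 48 kHz)
# A typical 1080p stream is often quoted around 5,000 kbps, so even
# uncompressed hi-res stereo audio needs less than half that bandwidth.
```

And lossless compression (FLAC, ALAC) typically cuts those PCM figures roughly in half again, which is why bandwidth is no longer a real obstacle to hi-res streaming.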