Less is more: 3 reasons why a simpler arrangement sounds bigger.

Photo from a session at the RENEWSOUND audio recording studio in Sofia

Everyone wants their recording to sound great. And in the desire to make that happen and achieve a “huge” sound, many of us follow the path of adding more and more layers. When we’re in the recording studio and things sound so good, we want to add a few more tracks. If two guitars sound big, four should make the sound even bigger, right?

While this logic makes sense, the results can be paradoxical. Often adding more tracks to a recording doesn’t make it “bigger” at all, but quite the opposite.

How does this work? Here are some typical situations where removing elements from a recording will get you better results, and with them the big, voluminous sound you are after.

  1. Phase

If you’ve spent any time in an audio recording studio, you’ve probably heard phase issues mentioned. The problem occurs when two or more similar sound sources reach the microphone, and ultimately the speakers, with a slight time difference. The delay is so small that it isn’t heard as an echo; instead it acts like a frequency filter, thinning out the sound.

For example, let’s say you have a kick drum that sounds tight and punchy on its own, but lacks high-frequency energy. One of the easiest ways to solve this is to layer a similar sampled kick that contains what’s missing. Each of the two sounds can be great on its own. But together they can sound thin and weak.

This happens when the two sounds are out of phase with each other. To eliminate the problem, you reverse the polarity of one of them or manually align their transients. Polarity is the direction in which the sound’s waveform moves: if one wave goes up while the other goes down, their polarities are opposed. When we flip the polarity so that both waves move in the same direction, the sound reaching the speakers gets louder. The speakers simply “perform” the amplified electrical oscillations we feed them. If we feed them two identical sounds with opposite polarities, the result equals zero: in practice we hear nothing, because one wave tells the speaker cone to move forward with a certain force and number of vibrations per second, while the other tells it to do exactly the same thing in the opposite direction. And the speaker obliges.
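To see why opposite polarities add up to silence, here is a minimal Python sketch. The sample rate and test frequency are arbitrary illustrative choices, not values from any real session:

```python
import math

SAMPLE_RATE = 44100  # samples per second (CD quality)
FREQ = 100.0         # arbitrary test frequency in Hz

# One second of a sine wave, and a copy with inverted polarity.
wave = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
inverted = [-s for s in wave]

# Summing the two, as a mixer bus would, cancels every sample exactly.
mix = [a + b for a, b in zip(wave, inverted)]
print(max(abs(s) for s in mix))  # 0.0: total silence
```

Flip the sign back (use `wave` twice instead) and the same sum doubles every sample, which is exactly the “louder” case described above.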

Now imagine a song recorded on 60 or 70 tracks. Every sound source tells the speakers something, and they have to reproduce it all, the way we want it: everything audible and sounding good. The chance of cancellation is much greater in the low frequencies, where each cycle is long. Consider that 30 hertz means just 30 oscillations per second. A delay of a few milliseconds, typical when layering samples or recording with multiple microphones, is enough to put low and low-mid frequencies half a cycle apart (at 100 hertz, half a cycle is a full 5 milliseconds), and the resulting cancellation notch is wide. At 500 hertz and above the notches are much narrower, so the damage is far less audible. In other words, the main problems sit exactly where we expect to achieve a voluminous, “thick” sound. So if two guitars don’t seem like enough, think again.
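That arithmetic can be sketched in a few lines of Python. The 1.715 m microphone offset is an invented example distance, and the speed of sound is the usual room-temperature approximation:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 °C

def delay_from_distance(metres: float) -> float:
    """Time sound needs to travel the extra distance between two mics."""
    return metres / SPEED_OF_SOUND

def first_cancellation_hz(delay_s: float) -> float:
    """A delay of t seconds shifts a wave of frequency 1/(2t) by exactly
    half a cycle, so the direct and delayed copies cancel there first."""
    return 1.0 / (2.0 * delay_s)

# A second mic roughly 1.7 m further from the source than the first:
delay = delay_from_distance(1.715)   # about 5 milliseconds
print(round(first_cancellation_hz(delay)))  # 100 Hz: right in the bass
```

The same 5 ms offset barely dents the highs, where the notches are narrow, which is why the low end takes the worst of it.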

And if you still want a “crushing” guitar sound, use a different amp or a different guitar: something that will change the waveform and reduce the chance of a phase problem.

And while phase can be corrected in the mix to some extent, the best way to solve these problems remains reducing the number of duplicated audio channels.

  2. Masking (frequency overlap)

While phase problems most often come from duplicated instruments, masking is caused by instruments and sound sources that occupy a similar frequency range. For example, a solo vocal can sound great on its own. But when you add the rest of the instruments, it loses clarity and presence.

The problem comes from too many sounds occupying a similar frequency range. And although you can carve out space for each with an equalizer, the best solution is to remove similar-sounding tracks that sit in the same range.

Rather than having a “murky” sound made up of lots of tracks that sound great on their own, it’s better to make a great sound out of the minimally sufficient tracks.

After all, no one cares how many tracks you’ve recorded. If it doesn’t sound good, it just doesn’t sound good.

  3. Dynamics

Whoever said, “There can be no light without darkness,” most likely wasn’t thinking of music production. But the thought fits well here.

Simply put, if every part of your song is “big” and loud, it will stop sounding big and loud, and will instead sound small, quiet and thin.

Why? Because for something to sound big, there has to be contrast between loud and soft. Sometimes dropping several instruments out of the verses does a great job here: it makes the next section sound “big” and loud. You might think this lowers the energy of the song, but when all the instruments come back in, you will actually feel how much more energetic the song sounds as a whole.

With the unlimited possibilities of digital technology, pretty much anything is possible. But sometimes it’s good to step back and ask whether all of it is necessary. Do we really need six layered samples on every drum when a single well-placed mic can capture a really good sound?

The next time you decide to add another guitar or synthesizer take, ask yourself whether it will improve the song or distract the listener from what has already been recorded. You might be surprised how often leaving a track out has a more positive effect on the song than keeping it in alongside everything else. We all hear music as a collective whole. If too many elements distract us, we miss the emotion of the song. And isn’t that emotion what the song is born from and lives on? If we fail to gather the essence of the idea and present it compactly and cleanly, we risk it passing the listener by.