
Sunday, May 29, 2016

Mixing : Reverb

Reverb
Reverb is hundreds and hundreds of delays. When a sound first occurs, it travels throughout the room at the snail's pace of around 740 miles per hour. It bounces off the walls, ceiling, and floor and comes back to us as hundreds of different delay times. All of these delay times wash together to make the sound we know as reverb.

Reverb is actually like placing hundreds of spheres of sound between the speakers. It takes up a tremendous amount of room in this limited space between the speakers. In a digital reverb, all of these delays are panned to virtually hundreds of different places between the speakers. This is why reverb masks other sounds so much in the mix.

Units that create reverb offer certain parameters of control. I will explain each setting below.

Room Types


Modern digital reverbs allow the user to change the "type of room." You can simply imagine different types of rooms between the speakers. There are no strict rules as to the type of room that is used in a mix. Some engineers prefer a plate reverb sound on the snare drum. Some use hall reverbs on saxophones.

It is best to always set the type of reverb while in the mix (with all the sounds on) to make sure it cuts through the mix like you want it to. Different types of sounds will mask the reverb in different ways.

Reverb time


You can also change reverb time: the duration or length of time it lasts.

A common rule is to set the reverb time on a snare drum so that it ends before the next kick lick; this way, the snare reverb does not obscure the attack on the next kick note, which will keep the kick drum sounding clean, punchy and tight. The faster the tempo of a piece, the shorter the reverb time. Again though, rules are made to be broken. (You won't go to jail for this one.)
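As a numerical sketch of this rule of thumb: given the tempo, let the snare reverb die out just before the next beat. The 80ms gap is an assumed illustrative value, not part of the rule; you would tune it by ear.

```python
def snare_reverb_time_ms(bpm, gap_ms=80):
    """Rough starting point for a snare reverb time: end the
    reverb just before the next beat, leaving a small gap so
    the kick's attack stays clean. gap_ms is illustrative."""
    beat_ms = 60_000 / bpm          # time between beats
    return beat_ms - gap_ms

# At 120 bpm, beats are 500ms apart, so start around 420ms.
print(snare_reverb_time_ms(120))    # → 420.0
```

Faster tempos give shorter beat intervals, so the reverb time shrinks with them, just as the text says.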

Pre delay time


When a sound occurs, it takes a while for the sound to reach the walls and come back. The time of silence before the reverb begins is called the predelay time. On many units it is just called delay.

Different-sized rooms will naturally have different pre delay times. A medium-sized auditorium has around 30ms of pre delay time, while a coliseum might have as much as 100ms of pre delay time. Therefore, it is important to have a bit of pre delay time if you are looking for a truly natural reverb sound. Most times, when you call up a preset in a reverb unit, someone has already programmed a pre delay time. You can adjust this as desired.

The cool thing about longer predelay times (over 60ms or so) is that they help to separate the reverb from the dry sound. With shorter predelay times, reverb will very quickly "mush up" the original dry sound, making it unclear. With longer predelay times, a vocal, for example, will remain clean and clear even with a good amount of reverb. When using extremely long predelay times, it is important to set the predelay time to the tempo of the song (as was covered when we discussed delays).

Diffusion


In most effects units, diffusion is the density of the echoes that make up the reverb. Low diffusion has fewer echoes.

You can actually hear the individual echoes in a low-diffusion setting. It sounds kind of like "Will, il, il, il, il, bur, bur, bur, bur, bur, bur." A hall reverb setting is preset with very low diffusion. High diffusion has more echoes - so many that they meld together into an extremely smooth wash of reverb. Plate reverbs often have a very high-diffusion preset.

There are no strict rules for the use of high- or low-diffusion settings. High-diffusion settings tend to be sweeter, smoother, and silkier; low-diffusion settings tend to be more intense. Some engineers prefer a low-diffusion setting on a snare drum to make it sound more raucous for rock and roll. High diffusion is often used to make vocals sound smoother.

EQ of Reverb


You can equalize reverb at various points in the signal path. First, you can EQ the reverb after the signal comes back into the board (if you are using channels with EQ on them for your reverb returns). It is usually better, though, to use the EQ in the reverb unit itself - not because it is necessarily a better EQ, but because some units let you place the EQ before or after the reverb. Ideally, it is best to EQ the signal going to the reverb. If your reverb unit does not have this capability, you can actually patch in an EQ after the master auxiliary send, on the way to the reverb unit. The truth is, I don't like to EQ reverb because it alters the natural sound of the reverb (which is just fine, if that is what you are after). Sometimes EQ might be used simply to roll off some low-frequency rumble. Normally, if your reverb sounds like it needs EQ, it is often better to go back and EQ the original sound that is going to the reverb.

High- and Low-Frequency Reverb Time


Even better than using EQ on your reverb is to set the duration of the highs and lows. Many reverb units have this setting these days. This is a bit different than EQ, which changes the volume of the frequencies. High- and low-frequency reverb time changes the time that each frequency range lasts. Using these settings will generally make the reverb sound more natural than any type of EQ.

Regardless of whether you EQ your reverb or set the duration, there is a huge difference in how much space it takes up in the mix - and in the resulting masking it creates. Remember that low-frequency sounds take up way more space than high-frequency sounds. And because reverb is also hundreds of sounds, reverb with more low frequencies will take up an enormous amount of space in a mix.

Reverb with more high frequencies still takes up a lot of space, but not nearly as much as when lows are present.

Reverb Envelope


Another setting of reverb is the "envelope" - that is, how the reverb changes its volume over time. Normal reverb has an envelope where the volume fades out smoothly over time.

Engineers (being the bored people they are) thought to put a noise gate on this natural reverb, which chops it off before the volume has a chance to fade out. Therefore, volume stays even, then stops abruptly.

You can put a noise gate on your reverb, but it's much simpler to use the gated reverb settings on your effects unit. If we were to reverse the envelope of normal reverb, the volume would rise and then stop abruptly.

If you take the tape, play it backward, add normal reverb, record it on open tracks on the multitrack, and turn the tape around to run forward, you'd get an effect commonly called preverb.

This effect is the most evil one that can be created in the studio; only the devil could put an effect on something before it happens. Furthermore, it has been used in every scary movie made, including The Exorcist and Poltergeist. And of course, it is one of Ozzy Osbourne's favorite effects.

One of reverb's main functions is to connect sounds in a mix and fill in the space between the speakers. 

Like any sound, reverb can be panned in various ways.

Reverb can be spread to any width by how far left and right you pan the reverb return channels on your mixing board. Depending on how you have your effects patched back into your console, or how your computer plug-ins work, you may not have this option.

Final notes:

Reverb can also be brought out front with volume...
....placed in the background by turning down the volume...
....or raised or lowered a bit with EQ.

You can read more about the most important things you need to know about mixing, and how to set up compressors properly depending on the task you want to accomplish, in David Gibson's book "The Art of Mixing, 2nd Edition," a visual guide to recording, engineering, and production.

Wednesday, May 25, 2016

Mixing : Delays - Part 2

You need to learn how each delay time feels and what feelings or emotions each delay time evokes. Then, when you hear a song that has a similar feeling or emotion you will know which delay time might work.

Different delay Times


Let's define specific delay time ranges so that you can get to know them and incorporate them into your memory time banks.

More than 100ms


Professional engineers refer to this length of delay as echo. However, the real world (and my mom) uses the term echo to refer to reverb. For our purposes, we will use echo to refer to a delay time greater than 100ms, not reverb.

When setting a delay time greater than 100ms, it is important that the delay time fits the tempo of the song; otherwise, it will throw off the timing of the song. The delay time should be in time with the tempo, or an exact multiple or fraction of it. If you know the beats per minute of the song, the following chart gives the relationship between tempos and delay times.


                               Beats Per Minute          Time Between Beats
                               60 bpm                    1000 ms
                               90 bpm                    666.6 ms
                               120 bpm                   500 ms
                               150 bpm                   400 ms
                               180 bpm                   333.3 ms
                               210 bpm                   285.7 ms
                               240 bpm                   250 ms

Tip: If you know the tempo of the song, you can figure out the delay time with the following formula:

Delay time = 60,000 / beats per minute.

Then any fraction or multiple of that delay time will also fit the tempo of the song. For example, if the tempo is 100 bpm, then 600ms would fit the tempo. But 150ms, 300ms, and 1200ms would also fit.
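The formula and its fractions and multiples can be sketched like this (the particular set of multiples shown is just illustrative):

```python
def delay_times_ms(bpm, multiples=(0.25, 0.5, 1, 2)):
    """Delay times that lock to the song's tempo.
    The quarter-note delay is 60,000 / bpm; any clean
    fraction or multiple of it also fits the groove."""
    beat_ms = 60_000 / bpm
    return {m: beat_ms * m for m in multiples}

# 100 bpm → 600ms per beat, plus the 150, 300 and 1200ms options.
print(delay_times_ms(100))
# → {0.25: 150.0, 0.5: 300.0, 1: 600.0, 2: 1200.0}
```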

If you don't know the beats per minute (bpm) of the song, use the snare drum (or some other instrument playing a continuous pattern) to set the delay time. Even if you are going to put the delay on the vocals, for example, put the delay on the snare to set the delay time to the tempo. Again, once you have found a delay time that works, any multiple or fraction of that time might also work.

A delay time over 100ms creates a dreamy effect and is most commonly placed in songs with slower tempos where there is room for the additional sound. Therefore, the more instruments and the more notes in a mix, the less often this delay time is used. This is pretty obvious - if you have no room in the mix don't add more sounds. This is especially true when there is feedback on a long delay time. The delays take up so much space in a mix that they are often only turned up at the ends of a line, where there is enough space to hear echoes by themselves.

Tip: Feedback is created by feeding the delayed signal back into the input, so the sound repeats, repeats, repeats.

60 to 100 ms


You can hear this delay time, commonly referred to as slap, on the vocals of Elvis Presley and in rockabilly music. In fact, there is about an 80ms delay between the syllables "rock" and "a" in the word "rockabilly."

This effect can be quite helpful in making a thin or irritating sound (especially a voice) sound fuller. It can help obscure bad vocal technique or pitch problems. In fact, a slap can be used to bury any bad sound. However, you never want to bury anything too deep. Add too much delay on a bad vocal and not only do you have a bad vocal, but you also have a bad mix. On the other hand, a slap can make a vocal seem less personal. If you have an incredible singer, you might forgo using any delays. Just put it out there with a little reverb and let it shine.

30 to 60ms


Put your lips together and blow a raspberry; this sound is technically called a "motorboat." The time between each flap of your lips is approximately 50ms. Delay time in this range is referred to as "doubling" because it makes a sound seem like it was played twice, or doubletracked. When a part is sung or played twice, there will naturally be a time delay ranging from 30 to 60ms (no one can ever sing or play a part twice with exactly the same timing). Therefore, adding a delay of this length makes it sound like the part has been played twice. The Beatles used this effect extensively to simulate more vocals and more instruments.

Just like a slap, doubling helps to obscure a bad sound or bad performance. So it can be used to help bury things in the mix.

Likewise, since it does obscure the purity and clarity of a sound, you should use it selectively, depending on the sound, song and style of music.

Tip: Although doubling makes a sound seem like it has been played twice, it is a different sound than if you actually doubletrack a sound. In fact, doubling often sounds so precise that it sounds somewhat electronic. This is especially true on vocals and simple sounds. However, if a sound is complex, especially if the sound is a combination of sounds (like a bunch of background vocals or a guitar sound picked up with multiple mics), then you don't notice the precision of the delay. Therefore, when you put doubling on 20 vocals, it sounds like 50 vocals, and it sounds incredibly natural.

1 to 30ms


An unusual thing happens with this type of delay, commonly known as fattening. At this delay time, our brains and ears are not quick enough to hear two separate sounds; we only hear one fatter sound.

The threshold between hearing one sound or two sounds actually varies depending on the duration of the sound being delayed. Also, the farther apart the sound and its delay are panned left and right, the shorter the delay time at which you begin to hear two sounds. For example, a guitar panned to the center with its delay also in the center might require at least 40 milliseconds before you hear two sounds; whereas, if the guitar and delay are panned left and right, you might hear two sounds beginning around 20 milliseconds. The following chart gives approximate thresholds for some instruments with different durations (actual thresholds will depend on the particular timbre and playing style of the instrument):


                        Approximate thresholds between hearing one sound versus two

                                                      Hi-hat                 10ms
                                                      Percussion             10ms
                                                      Snare                  15ms
                                                      Kick drums             15ms
                                                      Piano                  20ms
                                                      Horns                  20ms
                                                      Vocals                 30ms
                                                      Guitars                30ms
                                                      Bass guitars           40ms
                                                      Tubas                  40ms


Besides reverb, fattening is the most-used effect in the studio, mostly because it doesn't sound much like an effect. Fattening is the primary effect used to make a sound stereo, which has a certain magic to it. When you put the original "dry" instrument sound in one speaker and put a delay less than 30ms in the other speaker, it "stretches" the sound in stereo between the speakers.

Fattening can make an already beautiful acoustic guitar or piano sound incredible. Fattening is very effective in making a thin or irritating sound fatter and fuller. It also appears to make a sound more present simply because when a sound is in stereo, it takes up more space between the speakers. This is especially effective when you want to turn a sound down in the mix but still have it be discernible.

You have to be careful with fattening, though, because it uses up your space between the speakers. Fattening will make a mix fuller and denser, so you must make sure there is enough room between the speakers. Therefore, fattening is used most often when there are fewer notes and sounds in the mix. On the other hand, if you want to create a wall of sound, you can add fattening even if the mix is already busy, to make it more busy. (This blows people's minds.) This is commonly done in heavy metal, alternative rock, and some new age music.

0 to 1ms


This short of a delay time causes phase cancellation. I will address only the critical aspects of phase cancellation here. But keep in mind that phase cancellation is a very serious problem in recording, and I highly recommend that you do further research to gain a complete and clear understanding of the problems it causes.

Phase cancellation happens when two copies of the exact same sound, like those created with two mics or two speakers, are a little bit out of time. One example is when you switch the positive and negative wires on one of two speakers. Now, one speaker is pushing out while the other is pulling in. When a speaker pushes out, it creates denser air than normal. When a speaker pulls in, it creates more spaced-out air than normal (rarefied air). When the denser air from one speaker meets the spaced-out air from the other speaker, you end up with normal air - and normal air equals silence. This means you could have two speakers blasting away and theoretically hear nothing.

There are many companies now using phase cancellation to quiet the world. This technology is used in automobiles, on freeways (instead of cement walls on the sides of the freeways), in factories, and even in headphones to cancel out sounds around you. Marriage counselors are selling them by the dozens.

If you have two mics on one sound at two different distances, one mic might be picking up denser air while the other mic is picking up spaced-out air. Put the two mics together in the mix and they will tend to cancel each other out, though not completely.

Phase cancellation degrades the sound quality in the following ways.

  • Loss of volume. You lose volume when both mics are on, especially when you're in mono (which, by the way, is one of the best ways to detect phase cancellation - put the board in mono or pan both sounds to the center).
  • Loss of Bass. You lose bass frequencies, making the sounds thin.
  • Loss of image. Most importantly, you lose the clarity and precision of the perceived image of the sound between the speakers. The sound seems to be more "spacey." Though some people like this effect, most people are addicted to clarity these days. If the mix is even played back in mono (as on TV or AM radio), the sound will disappear completely.

There are many ways to curb phase cancellation. The primary way is to simply move one of the mics. If both mics are picking up the sound at the same excursion of the wave, there will be no phase cancellation.


2 Mics picking up the sound in phase

It takes 1ms for a complete wave of 1000Hz to pass by us. If you were to set a delay time of 0.5ms on such a sound, it would put it completely out of phase. By the same token, you can use a digital delay to put the sound back in time.
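You can see this numerically with two copies of a 1000Hz sine wave, one delayed by 0.5ms (half its period). This is only a rough demonstration, assuming a 48kHz sample clock:

```python
import math

def summed_level(freq_hz, delay_ms, n=1000):
    """Average absolute level of two identical sine waves,
    the second delayed by delay_ms, mixed together."""
    total = 0.0
    for i in range(n):
        t = i / 48_000                      # 48kHz sample clock
        a = math.sin(2 * math.pi * freq_hz * t)
        b = math.sin(2 * math.pi * freq_hz * (t - delay_ms / 1000))
        total += abs(a + b)
    return total / n

# A 0.5ms delay flips the polarity of a 1000Hz wave, so the
# two copies cancel; a full 1ms delay puts them back in phase.
print(summed_level(1000, 0.5))   # ≈ 0 (near-total cancellation)
print(summed_level(1000, 1.0))   # ≈ 1.27 (in phase, reinforcing)
```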

Finally, you can remove a large amount of phase cancellation through isolation. Often, the bleed of a sound into a second mic will cause phase cancellation with the first mic. By using baffles or noise gates, you can reduce the bleed in the second mic, avoiding the phase cancellation.


Monday, May 23, 2016

Mixing : Delays part 1

Delays
Time-based effects include all processes where some form of time manipulation is applied to the signal. This includes things like delays and echoes (obvious time manipulation), chorusing and flanging (short delays with modulation), phasing (shifting signals by very small amounts of time), reverbs (essentially numerous delays and echoes), pitch transposers and Harmonizers (slowing or speeding the signal while adding or removing slices to keep it in time with the music), etc. These effects all change the signal's timing in one way or another to produce the desired result; thus, they are time-based effects.

Delays


After many failed attempts to use outdoor racquetball courts to create delays, engineers realized they could get a delay from a tape player. You could hear a delay by recording a signal at the record head, then listening to the playback head two inches later. The delay time could be set by changing the tape speed. Engineers used this technique for years. There was also a popular unit called the Echoplex, which fed a piece of tape through a maze of tape heads at different distances, each giving a different delay time. Not bad, but the problem with tape is that every time you record over it, you get more tape hiss.

Then came analog delays, which would put a signal through a piece of electronics to delay the signal a bit. The more times you put the signal through the electronics, the longer the delay. It was a bucket-brigade type of system. The only problem was that when you put a signal through a piece of electronics over and over, it also got extremely noisy after a while.

Then came digital delays, which record the signal digitally onto a chip, then use a clock to tell the unit when to play the sound back. 

Delay times versus distance 


Before we explore different delay settings, it is helpful to understand the relationship between delay time and distance. Sound travels at approximately 1130 feet per second. That's around 740 miles per hour, which is extremely slow compared to the speed of a signal in wires - 186,000 miles per second, the speed of light (approximately 670 million miles per hour). Therefore, it is easy to hear a delay between the time a sound occurs and the time it takes for a sound to travel even a few feet to a wall and back. We can also easily hear a delay when we put two microphones at two different distances from one sound. In fact, changing the distance between two microphones is almost exactly like changing the delay time on a digital delay.

The following chart illustrates how different distances relate to delay time. Of course, if you are calculating a delay time based on the distance between a sound source and a wall, the distance must be doubled (to and from the wall). 


                                                               Feet     =     Delay (ms)

                                                               1130           1000
                                                               560            500
                                                               280            250
                                                               140            125
                                                               70             62.5
                                                               35             32.25
                                                               17.5           16.13
                                                               8.75           8.01
                                                               4.28           4
                                                               2.14           2
                                                               1.07           1

As distances become smaller and smaller, the distance in feet almost equals the milliseconds of delay. This correlation comes into play when using more than one mic on a sound (e.g., piano, guitar amps, acoustic guitars, horns, or background vocals) and is especially helpful when miking drums. For example, the distance you place overhead mics above the drum set will create a corresponding delay time between the overhead mics and the snare mic (or any of the rest of the mics, for that matter). It is also important to note the distance between instruments when miking an entire band live (or recording everyone in the same room at once), since mics may be more than 10 feet away from another instrument and still pick it up.
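The feet-to-milliseconds relationship above can be sketched as a quick calculation, using the same approximate 1130 ft/s speed of sound from earlier:

```python
SPEED_OF_SOUND_FT_PER_S = 1130

def mic_distance_to_delay_ms(feet, round_trip=False):
    """Delay created by a distance at ~1130 ft/s.
    For a sound bouncing off a wall, set round_trip=True
    to double the distance (there and back)."""
    if round_trip:
        feet *= 2
    return feet / SPEED_OF_SOUND_FT_PER_S * 1000

# An overhead mic 3 feet above the snare arrives ~2.7ms late,
# confirming that feet and milliseconds nearly match at short range.
print(round(mic_distance_to_delay_ms(3), 2))    # → 2.65
```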

Besides delay time, you must also consider phase cancellation, a problem that happens with extremely short delay times. We'll discuss more about it later.

If you pay attention to the way that something sounds when miked at different distances, you will eventually become aware of what different delay times sound like. Once you become familiar with the way that different delays affect different sounds, you can control their use in a way you deem most appropriate; that is, you can do whatever you want.

You need to learn how each delay time feels and what feelings or emotions each delay time evokes. Then, when you hear a song that has a similar feeling or emotion, you will know which delay time might work.


Sunday, May 22, 2016

Mixing : Panpots and Stereo Placement

Panpots
When mixing, you use panpots (balance knobs) to place each sound and effect left to right between the speakers. A panpot is actually two volume controls in one. When you pan to the left the signal going to the right is turned down. When you pan to the right, the volume of the signal going to the left is turned down.
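The "two volume controls in one" idea can be sketched as a pan function. This example assumes a constant-power (sine/cosine) pan law, which is one common choice; real consoles use a variety of center-attenuation laws:

```python
import math

def panpot(signal, pan):
    """A panpot is two volume controls in one: pan runs from
    -1 (hard left) to +1 (hard right). A constant-power law
    keeps the perceived loudness steady as the sound moves
    through the center."""
    angle = (pan + 1) * math.pi / 4        # 0 .. pi/2
    left = signal * math.cos(angle)
    right = signal * math.sin(angle)
    return left, right

# Centered, both sides sit at ~0.707 (-3dB) rather than full
# volume, so the sound doesn't jump in level at the center.
l, r = panpot(1.0, 0.0)
print(round(l, 3), round(r, 3))   # → 0.707 0.707
```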

Panning in a mix is mapped out visually as a function of left to right. Panning a sound to one side or the other also seems to make the instrument a bit more distant in the mix. If the sound is panned to the center, it will seem to be a bit closer, a little more up front.

If you think of the space between the speakers as a palette on which to place instruments left to right, the main objective might be to place each sound in a different place so you can hear each sound more clearly. However, certain styles of music have developed their own traditions for the particular placement of each instrument left to right in the stereo field. Normally, the placement of a sound is static; it stays in the same place throughout the mix. However, moving a pan during a mix creates an especially dramatic dynamic.


Monday, May 16, 2016

Mixing : Equalizers

Equalizers
EQ is probably one of the least understood aspects of recording and mixing

EQ is a change in the volume of a particular frequency of a sound, similar to the bass and treble tone controls on a stereo. It is one of the least understood aspects of recording and mixing, probably because there is such a large number of frequencies - from 20 to 20,000Hz. The real difficulty comes from the fact that boosting or cutting the volume of any one of these frequencies depends on the structure of the sound itself: Each one is different. But even more complex is the fact that different sounds are equalized differently depending on the type of music, the song, and even the people you are working with.

First, you must learn all the frequencies or pitches by name. Then, you will see how boosting or cutting a certain frequency affects different instruments in different ways.

Types of equalizers


There are three main types of equalizers found in the recording studio: graphics, parametrics, and rolloffs (highpass and lowpass filters).

Graphics


Each frequency can be turned up or down by using the volume sliders on a graphic equalizer. The volume controls on an equalizer are called bands. There are different kinds of graphic equalizers, dividing the spectrum into anywhere from five bands up to 31 bands. Five- to ten-band graphic equalizers are commonly found in car stereos. Thirty-one-band graphics (which will change the volume at 31 different frequencies) are common in recording studios and live sound reinforcement.


Band Graphic EQ

The primary advantage of a graphic equalizer is that you can make volume changes at a number of different frequencies. Graphic EQs got their name from the visual display, which is easy to read for reference. (However, these days you get a much nicer display on a digital parametric EQ.) Also, since the frequencies are mapped out visually from left to right, it is easy to find and manipulate the volume of any particular frequency.

Many people don't realize that when you turn up a particular frequency on a graphic, you are actually turning up a range of frequencies preset by the manufacturer. For example, if you turn up 1000Hz, you might actually be turning up a frequency range from around 300 to 5000Hz.

Bandwidth on a graphic EQ

This range of frequencies is called the bandwidth and is preset by the manufacturer. You have no control over the bandwidth on a graphic. Generally, the more bands (or volume controls) there are, the thinner the bandwidth. Therefore, a 31-band graphic EQ will have a more precise frequency range for each slider than a 5-band graphic. If you turn up 1000Hz on a 5-band graphic, you could be turning up everything from 100 to 10,000Hz.

Frequency maps visually as a function of up and down: higher frequencies are reproduced by the tweeters in our speakers, which sit higher than the subwoofers, which usually go at the bottom.

Parametrics


Engineers want to be able to control the range of frequencies, or bandwidth, they are turning up or down. With a parametric, the bandwidth knob gives you control over the width of the frequency range being manipulated. The knob is usually called "Q" because the word "bandwidth" won't fit on the knob ("Q" stands for "quality", which is an electrical term for bandwidth). A thin bandwidth is normally labeled with a peak, whereas a wide bandwidth is often labeled with a hump. Sometimes ranges of musical octaves are used to define the bandwidth - for example from 0.3 octaves to 3 octaves wide. Sometimes a scale of 1-10 or 10-1 (it's not standardized on consoles) is used.

Wide and narrow Bandwidths on Parametric EQ
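When a parametric labels its bandwidth in musical octaves, you can translate that into a "Q" number using a standard conversion formula. This is a sketch; actual console scales vary and, as noted above, are not standardized:

```python
import math

def octaves_to_q(octaves):
    """Convert a bandwidth in musical octaves to the 'Q'
    value on a parametric EQ, using the standard relation
    Q = sqrt(2^N) / (2^N - 1) for a bandwidth of N octaves."""
    n = 2 ** octaves
    return math.sqrt(n) / (n - 1)

# A one-octave-wide bell is roughly Q = 1.41; a narrow
# 0.3-octave setting is roughly Q = 4.8.
print(round(octaves_to_q(1), 2))     # → 1.41
print(round(octaves_to_q(0.3), 2))   # → 4.8
```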

On a graphic equalizer, you select the frequency by moving your arm left to right to place your hand on the correct volume slider. On a parametric EQ, you select the frequency by turning the "frequency sweep" knob with two fingers. A separate volume knob is then used to turn the chosen frequency up or down.

Paragraphics


Some less expensive consoles have equalizers with frequency sweep knobs but do not have bandwidth knobs. This type of equalizer is commonly referred to as semi-parametric, or paragraphic. Be careful, though; these days some manufacturers and certain salespeople are now using the term "parametric" to refer to a paragraphic or semi-parametric, even though it has no bandwidth control.


Rolloffs


A rolloff EQ turns down the volume of low or high frequencies. Rolloffs are commonly found on consoles as highpass and lowpass filters. Larger consoles often have sweepable or variable rolloff knobs, so you can control how much of the lows or highs is rolled off. Smaller consoles often have only a button that rolls off a preset amount of lows or highs. A highpass filter rolls off the low frequencies but does nothing to the highs; it passes them.

Highpass filters are especially helpful in getting rid of low-frequency sounds, such as trains, planes, trucks, air conditioners, earthquakes, bleed from bass guitars or kick drums, and serious foot stomping.

Highpass filters can be found on microphones and smaller mixing consoles as switches that simply roll off the lows when the switch is engaged.

A lowpass filter rolls off the high frequencies and is especially helpful in getting rid of hiss on sounds that don't have a lot of highs in them, such as bass guitar. Lowpass filters are also used to roll off the high-frequency attack (click) of a kick drum in order to make it sound more like the classic rap kick drum sound.
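To see what a rolloff actually does, here is a minimal one-pole filter sketch in Python (a simplified illustration, not how any particular console implements its filters). It rolls off at a gentle 6 dB per octave; real console filters are usually steeper, such as 12 or 18 dB per octave:

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    """Smoothing coefficient for a one-pole filter at the given cutoff."""
    return math.exp(-2 * math.pi * cutoff_hz / sample_rate)

def lowpass(samples, cutoff_hz, sample_rate=44100):
    """One-pole lowpass: keeps the lows, rolls off highs at ~6 dB/octave."""
    a = one_pole_coeff(cutoff_hz, sample_rate)
    out, y = [], 0.0
    for x in samples:
        y = (1 - a) * x + a * y  # smooth the signal toward each new sample
        out.append(y)
    return out

def highpass(samples, cutoff_hz, sample_rate=44100):
    """One-pole highpass: the input minus its own lowpassed copy."""
    low = lowpass(samples, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(samples, low)]
```

A highpass at around 80-100 Hz is the code equivalent of the low-cut switch described above: rumble and bleed below the cutoff are attenuated while the rest of the signal passes through.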

You can read more about the most important things you need to know about mixing, and how to set up compressors properly for the task at hand, in Dave Gibson's book "The Art of Mixing, 2nd Edition," a visual guide to recording, engineering, and production.
  

Sunday, May 15, 2016

Mixing : Volume controls part 3 - Noise gates

Noise gate
Operating similarly to compressors / limiters, noise gates are often packaged together with them in one box, but it's important to know the differences.

Like compressors / limiters, a noise gate turns the volume down. The difference is that a compressor / limiter turns the volume down above the threshold, while a noise gate drops the volume when it falls below the threshold. However, since the volume is being turned down on a sound that is already low in volume, a noise gate will normally turn the sound off completely.

Noise gates have three main functions: to get rid of noise, to get rid of bleed, and to shorten the duration of a sound.

Noise Eradication


The first function of a noise gate is to get rid of noise, hiss or anything annoying that is low in volume. Noise gates only get rid of background noises when the sound is not playing; they do nothing while the main signal is present. However, you normally can't hear the noise when the sound is playing anyway.

One function of a noise gate is to get rid of amp noise when a guitar is not playing. Say you have a guitar amp set on 11 with lots of distortion. When not playing, the amp makes a huge "cushhhhhh" sound. When the guitar is playing, you don't hear the amp noise; when it stops, the gate cuts the signal off. Whenever the guitar player is not playing, you now hear silence.

It is important not to chop off any of the guitar sound. All it takes is for the musician to play a soft note, and the noise gate will chop the sound right off. Noise gates can also be used to get rid of noise from tape hiss, cheap effects units, dogs, kids, and crickets. 
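For the curious, the logic of a gate can be sketched in a few lines of Python. This is a deliberately simplified illustration (real gates add attack, hold and release ballistics, and the parameter values here are arbitrary):

```python
def noise_gate(samples, threshold=0.05, release=0.999):
    """Hypothetical minimal gate: mute the output whenever the tracked
    signal level falls below the threshold."""
    out, env = [], 0.0
    for x in samples:
        # Peak-tracking envelope: jumps up instantly, decays slowly
        env = max(abs(x), env * release)
        out.append(x if env >= threshold else 0.0)
    return out

# A quiet hiss-only signal is muted entirely
print(noise_gate([0.01] * 3))  # [0.0, 0.0, 0.0]
```

Note that this sketch also shows the danger described above: a note whose level never pushes the envelope over the threshold gets chopped off entirely, which is why the threshold has to sit carefully between the softest real notes and the noise floor.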

Bleed eradication


Another common use of a noise gate is to remove the bleed from other instruments in the room. When a mic is on an instrument, the sound of that instrument will be loudest in the microphone. Therefore, it is easy to set the threshold of a noise gate between the sound and the bleed, so that the bleed gets turned off.

The obvious advantage of isolating a sound like this is that you have more individual control over volume, equalization, panning and effects. Once a sound is isolated with a noise gate, any changes you make with a sound manipulator will only change the one sound you are working on. Gates can be especially effective on drums to isolate each drum. This is especially important on a snare if you are going to put a lot of reverb on it; without the gate, you end up with reverb on the bleed as well. Another advantage of isolation is that it helps to eliminate phase cancellation (we'll discuss this more later).

But most importantly, by removing the bleed, you will then hear the sound in only one microphone. This has the effect of putting the instrument in one precise spot between the speakers, instead of being spread in stereo. For example, consider the miking of a hi-hat cymbal. Besides being picked up by the hi-hat mic, the hi-hat is also being picked up by the snare drum mic. If the hi-hat mic is panned to one side and the snare mic (with the hi-hat bleed) is panned to the center, the hi-hat appears to be spread in stereo between the speakers. It is no longer clear and distinct at a single spot in the mix. Putting a noise gate on the snare mic turns off the hi-hat bleed when the snare is not playing. The isolated image of the hi-hat in its own mic will now appear crystal clear and precisely defined wherever the hi-hat mic is placed in the mix.

Shortening the Duration


You can also use the noise gate to shorten the duration of a sound. The noise gate will cut off both the attack and release of a sound because these are commonly the softest parts of the sound. This can be quite an unusual effect.

A noise gate can also be put on reverb to chop off the release resulting in the well-known effect referred to as gated reverb.

Visually, when volume is shown as front to back, any sound whose volume is less than the threshold setting will disappear. If the low-volume sound is noise, bleed, or the attack and release of a sound, it gets cut off.

You can read more about the most important things you need to know about mixing, and how to set up compressors properly for the task at hand, in Dave Gibson's book "The Art of Mixing, 2nd Edition," a visual guide to recording, engineering, and production.


Thursday, May 12, 2016

Mixing : Volume controls part 2 - Compressors / limiters

Compressors
Compressors / limiters were originally introduced into the studio to stop loud volume peaks from distorting or saturating.

Compression and limiting are volume functions; their main purpose is to turn the volume down. They turn down the volume when it gets too loud - that is, when it goes above a certain volume threshold.

When the volume is below the threshold, the compressor / limiter does nothing (unless broken or cheap). The difference between compressors and limiters is explained later in the section on "Ratio Settings."

Compressor / limiter Functions


Compressors / limiters have two main functions (and three other minor ones). The first function is to get a better signal-to-noise ratio, which means less tape hiss. The second function is to stabilize the image of the sound between the speakers, which means more presence.

Better Signal-to-noise Ratio : Less Hiss


Recording extremely dynamic sounds, with a wide variation from soft to loud, requires turning the volume down so that the loud sounds don't overload and cause distortion. Distortion is against the law; get distortion, go to jail. But when you turn the volume down, the soft portions of the sound barely move the needles on the tape machine. And if the needles are hardly moving on the multitrack, you hear as much tape hiss as you do signal. This condition is known as a bad signal-to-noise ratio and sounds very similar to an ocean: "shhhhhhhhhhhhhh."

By using a compressor to turn down the volume when the signal gets too loud, you can then raise the overall volume above the tape noise. By turning down the peaks, you can record the signal hotter on tape. Then the softer portions are loud enough so that you don't hear the tape noise.

When recording digitally, there isn't much noise to worry about. However, if you record too softly, the quality of the recording is not as good: you are effectively using fewer of the available bits, which lowers the resolution.
Therefore, compressors / limiters are also good to use when recording digitally. In addition, limiters can be used to keep very quick sound peaks that you might not even hear from going into the red and distorting.

Stabilizing the image of Sounds : More presence

 

After years of using compression to get rid of hiss, people realized that sounds often appeared more present when compressed. By evening out the volume peaks on a sound, a compressor / limiter stabilizes the image of the sound between the speakers. A sound naturally bounces up and down in volume, as shown by a bouncing VU meter. When a large number of sounds fluctuate naturally, their bouncing up and down can become extremely chaotic - similar to trying to watch 24 VU meters at once. A compressor / limiter stabilizes, or smooths out, the movements of sounds that result from these moment-to-moment fluctuations in volume. Once compressed, the sound no longer bounces around much, so the mind can focus on it better. Therefore, the sound seems clearer and more present in a mix.

The busier the mix (the more instruments and the more notes per instrument), the more sounds in the mix are normally compressed. This is because the more sounds and notes, the more chaos. It is difficult to keep track of a number of instruments in a busy mix in the first place. By stabilizing the sounds, the entire mix becomes clearer. Most "acoustic" sounds are compressed, although different engineers disagree about whether to compress the drums or not. Often the kick drum is compressed, and then the entire drum set is compressed with a stereo compressor.

Once the sound has been stabilized, you can then turn up the overall volume and put the whole sound right in your face. This is commonly done in radio and TV commercials to make them sound louder, so that they jump out and grab your attention. This might be annoying in radio and TV commercials, but it's great for a lead guitar or any other instrument you want extremely present in the mix.

This also works when putting sounds in the background. The problem with low volume sounds is that they can easily be lost (masked by other sounds) in the mix, especially if the volume of the sound fluctuates very much. Therefore, it is common to seriously stabilize sounds that are going to be placed low in the mix with compression. They can then be placed extremely low in a mix without fear of losing them.

A better signal-to-noise ratio is obtained by compressing the signal on its way to the multitrack. However, many engineers will also compress the signal on its way back from the multitrack during mixdown to stabilize the sound even more.

Sharper or Slower Attack


Besides less hiss and more presence, a compressor / limiter also makes the attack of a sound sharper. Once you turn down the louder part of a signal, the sound reaches its maximum volume much more quickly.

With a shorter and sharper attack, sounds are much tighter, punchier, more distinct, and more precise, which makes them easier to dance to. On the other hand, a higher quality, fast compressor will actually help to remove sharp "spikes" on the attack of a sound - softening the sound. A good compressor can mellow out the sound of a guitar with a sharp edge on the attack.

More Sustain


A compressor / limiter is also used to create more "sustain." This is commonly done on guitar sounds. Just as a compressor is used to turn down the volume peaks to raise a sound above the tape noise, it can also be used to turn down the louder parts of a guitar sound so the guitar can be raised above the rest of the mix. Sustain is also especially helpful for obtaining the desired screaming feedback (when the guitar is held directly in front of a guitar amp).

Compressors are sometimes used in the same way to create more sustain on tom and cymbal sounds. The sounds seem to last longer before they fade out or are absorbed into the mix. The tradeoff is that compressing toms and cymbals will bring their level down, so that you actually hear the bleed more. However, depending on your musical values and the project you're working on, you may want to give this a try.

Less resonance


A final function of a compressor / limiter is that it evens out resonances in a sound. Resonances occur in two places in instruments: hollow spaces and materials. When a hollow space (like the body of an acoustic guitar) has two parallel walls, it will boost the volume of particular resonant frequencies. Tap on the body of an acoustic guitar in different places, and you can hear the resonant frequencies. Materials (like the neck of a bass guitar) will also resonate at certain frequencies, boosting the volume of those frequencies. Play any guitar and you will notice that certain notes are louder and more resonant than others because they are activating the resonances in the body of the instrument.

A compressor / limiter evens out the volume of these resonances by turning down the loudest parts of a sound, which often happen to be the resonances.

You can read more about the most important things you need to know about mixing, and how to set up compressors properly for the task at hand, in Dave Gibson's book "The Art of Mixing, 2nd Edition," a visual guide to recording, engineering, and production.


Monday, May 9, 2016

Mixing : Volume controls part 1 - Faders

volume controls
Volume faders control the volume of each sound in the mix, including effects.

David Gibson gives a very useful and detailed description of what all the studio equipment does and how it affects our sounds in his book "The Art of Mixing, Second Edition."

Faders



The level set for each sound is based on its relationship to the rest of the tracks in the mix. When volume is mapped out as a function of front to back, you can place any sound or effect up front, in the background or anywhere in between by using the faders.

However, the level that you set for a sound in the mix is not based solely on the fader. If the level of the faders were the only thing that affected the volume of a sound in a mix, you could mix without even listening. You could simply look at where the faders are set on the console. There is more to it than that.

When you set volume relationships in a mix, you use apparent volumes to decide on the relative balance - not just the voltage of the signal going through the fader. The apparent volume of a sound in a mix is based on two main things, fader levels and waveform, and one minor one, the Fletcher/Munson Curve (see the "Fletcher/Munson Curve" section below). First, the level of the fader does affect the volume of the sound. Change the level of the fader and the sound gets louder or softer.


Fader Level


When you raise a fader on a mixing board, you are raising the voltage of the signal being sent to the amp, which sends more power to the speakers, which increases the sound pressure level (SPL) in the air that your ears hear. Therefore, when you turn up a fader, of course the sound does get louder.

The decibel (dB) is used to measure the amplitude of the signal at each stage of this circuit. In fact, there are very specific relationships between voltage, wattage, and sound pressure level. Decibels are the main variables that affect the apparent volume of a sound. However, there is another important factor: The waveform of the sound.
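The voltage-to-decibel and power-to-decibel relationships mentioned here are easy to verify yourself. A quick Python sketch:

```python
import math

def db_from_voltage(v_out, v_in):
    """Voltage (amplitude) ratio expressed in decibels: 20 * log10."""
    return 20 * math.log10(v_out / v_in)

def db_from_power(p_out, p_in):
    """Power (wattage) ratio expressed in decibels: 10 * log10."""
    return 10 * math.log10(p_out / p_in)

# Doubling the voltage is about +6 dB; doubling the power is about +3 dB
print(round(db_from_voltage(2.0, 1.0), 1))  # 6.0
print(round(db_from_power(2.0, 1.0), 1))    # 3.0
```

This is why pushing a fader up by 6 dB roughly doubles the signal voltage headed to the amp, even though the perceived loudness change is smaller.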

Waveform (or Harmonic Structure)


The waveform, or harmonic structure, of a sound can make a big difference as to how loud we perceive a sound to be. For example, a chainsaw will sound louder than a flute, even if they are at exactly the same level on the VU meters. This is because the chainsaw has harmonics in the sound that are irritating - or exciting, depending on your perspective. These odd harmonics are dissonant to our psyche, which makes them seem louder to us. Therefore, a sound with irritating harmonics will seem louder than a smoother sound, even if they are at the exact same volume in the mix. A minor factor contributing to the apparent volume of a sound is the Fletcher/Munson Curve.

The Fletcher / Munson Curve


The biggest problem with the human hearing process is that we don't hear all frequencies at the same volume - especially at low listening levels. (Fletcher and Munson did a study that shows just how screwed up our ears are.) This is why there are loudness buttons on stereos - to boost the lows and highs. You are supposed to turn them on while listening at low volumes. However, most people like extra lows and highs, so they leave the switch on all the time. The main point here is that you should check your mixes at all volumes, because at low volumes you won't be hearing bass and treble as much as you should. Also, whenever you do a fade at the end of a song, the bass and treble will drop out first. Technically, your ears give you the flattest frequency response at around 85 decibels.

Apparent volume is, therefore, a combination of decibel level, waveform, and the Fletcher/Munson Curve. But relax - your brain has it all figured out. Most people have no trouble telling whether one sound is louder than another (although most of us need to learn to hear smaller and smaller decibel differences). Your brain quickly calculates all of the parameters and comes up with the apparent volume. All you have to do is listen to the overall apparent energy coming from each sound in the mix. It is apparent volume that you use to set volume relationships in the mix. You don't look at the faders; you listen for the relative volumes.

You can read more about the most important things you need to know about mixing in Dave Gibson's book "The Art of Mixing, 2nd Edition," a visual guide to recording, engineering, and production.

Saturday, May 7, 2016

Mixing : Functions of Studio equipment according to David Gibson

functions of studio equipment
Get the most out of your studio equipment by knowing the functions of all the different devices

Dave Gibson provides a comprehensive guide to understanding the functions of studio equipment in his book "The Art of Mixing." According to him, studio equipment can be broken down into the following categories.

To simplify the huge variety of studio equipment, all of its functions can be broken down into categories based on the role each piece plays in the recording studio.

      1. Sound Creators : all instruments, acoustic to electric, voice to synths
      2. Sound Routers : mixing boards, patch bays, splitters
      3. Storage Devices : recorders, tape players, sequencers, samplers
      4. Sound transducers : mics, pickups, headphones, speakers
      5. Sound manipulators : processing and effects.

Effects rack


Sound creators range from acoustic to electric instruments, from voice to synthesizers.


Sound creators


Sound routers route sound from one place to another. Mixing boards route the signal to four places: the multitrack, the monitor speakers, the cue headphones (for the band out in the studio), and the effects (so we can have a good time). Patch bays are just the backs of everything - the back of the mic panels, the back of the multitrack (inputs / outputs), the back of the console (ins / outs) and the back of the effects (ins / outs) - located next to each other so we can use short cables to connect them.



Sound routers



Storage devices store sound or MIDI information and play it back. Tape players store digital or analog sound; sequencers store MIDI information. Some storage devices can be used to edit the sound while it is stored.



Storage devices


Sound transducers take one form of energy and change it into another. Microphones take mechanical energy, or sound waves, and change it into electrical energy. Speakers take electrical energy and change it into mechanical energy, or sound waves. Likewise, pickups on guitars take the movement of the strings and change it into electrical energy.


Sound transducers


You can read more about the functions of studio equipment, and the most important things you need to know about mixing, in Dave Gibson's book "The Art of Mixing, 2nd Edition," a visual guide to recording, engineering, and production.

Thursday, April 28, 2016

The six elements of a great Mix according to Bobby Owsinski

Six elements of a great mix
Go from OK mixes to great mixes with these pro tips from Bobby Owsinski

Although most engineers ultimately rely on their intuition when doing a mix, they do consciously or unconsciously follow certain mixing procedures.

According to Bobby Owsinski's 'The Mixing Engineer's Handbook,' every mix, in order to be great, must include six main elements.

By and large, most mixers can hear some version of the final product in their heads before they even begin to mix. Sometimes this is a result of countless rough mixes during the course of a project that gradually become polished, thanks to console or digital workstation automation and computer recall, if an engineer is mixing a project that he's tracked. Even if an engineer is brought in specifically to mix, he might not even begin until he has an idea of where he's going.

Engineers who can hear the finished product before they start normally begin a mix the same way. They become familiar with the song either through a previous rough mix or by putting up all the faders (when using a console) and listening to a few passes. Sometimes this is harder than it seems, though. In the case of a complex mix with a lot of tracks (in the old analog days, this could mean elements spread across synced multitracks), the mix engineer might have to spend some time writing mutes (a cut pass) before the song begins to pare down and make sense.

So let's get started with the 6 elements to produce a great mix.


Six elements of a mix
The 6 most important elements of a mix



Element 1: Balance - The mixing part of mixing.


The most basic element of a mix is balance. A great mix must start here, for without balance, the other mix elements pale in importance. There's more to balance than just moving some faders though.

Good balance starts with good arrangement. It's important to understand arrangements because so much of mixing is subtractive by nature. This means that the arrangement, and therefore the balance, is changed by the simple act of muting an instrument whose part doesn't fit well with another. If the instruments fit well together arrangement-wise and don't fight one another, the mixer's life becomes immensely easier. But what exactly does "fighting one another" mean?

When two instruments that have essentially the same frequency band play at the same volume, the result is a fight for attention. Think of it this way: you don't usually hear a lead vocal and a guitar solo at the same time, do you? That's because the human ear can't decide which to listen to, and it becomes confused and fatigued as a result.

So how do you get around instruments "fighting"? First and foremost is a well-written arrangement that keeps instruments out of each other's way right from the beginning. The best writers and arrangers have an innate feel for what will work, and the result is an arrangement that automatically fits together without much help.

But it's not uncommon to work with an artist or band that isn't sure of the arrangement or is into experimenting and just allows an instrument to play throughout the entire song, thereby creating numerous conflicts. This is where a mixer gets a chance to rearrange the track by keeping what works and muting the conflicting instrument or instruments. Not only can the mixer influence the arrangement this way, but he can also influence the dynamics and general development of the song.

To understand how arrangement influences balance, we have to understand the mechanics of a well written arrangement.

Most well conceived arrangements are limited in the number of elements that occur at the same time. An element can be a single instrument like a lead guitar or a vocal, or it can be a group of instruments like the bass and drums, a doubled guitar line, a group of backing vocals and so on. Generally, a group of instruments playing the same rhythm is considered an element. For example, a doubled lead guitar or doubled vocal is a single element, as is a lead vocal with two additional harmonies. Two lead guitars playing different parts are two elements, however. A lead and a rhythm guitar are also two separate elements.


Element 2: Panorama - Placing the sound in the sound field


One of the most overlooked or taken-for-granted elements in mixing is panorama, or the placement of a sound element in the sound field. To understand panorama, we must first understand that the stereo sound system (two channels, for our purposes) represents sound spatially. Panning lets us select where in that space we place each sound.

In fact, panning does more than just that. Panning can create excitement by adding movement to the track, and it can add clarity to an instrument by moving it out of the way of other sounds that might be clashing with it. Correct panning can also make a track sound bigger, wider and deeper.

So what is the proper way to pan? Are there rules? Well, just like so many other things in mixing, although panning decisions might sometimes seem arbitrary, there's a method to follow and a reason behind the method.

Imagine that you're at the movies watching a Western. The scene is a panorama of the Arizona desert, and right in the middle of the screen is a cowboy sitting on his horse in a medium shot from his boots up. A pack of Indians (we'll say six) is attacking him, but if we can't see them, their impact as a suspense builder is limited, and they cost the production money that just went to waste. Wouldn't it be better if the director moved the Indians to the left, out of the shadow of the cowboy, so we could see them? Or maybe spread them out across the screen so the attack seems larger and more intimidating?

Of course, that's what we do with the pan control (sometimes called pan pot, which is short for potentiometer, the name of the electronic component used to pan the signal). It allows the engineer (the director) to move the background vocals (Indians) out of the way of the lead vocal (cowboy) so that in this case we can hear (see) each much more distinctly.
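Under the hood, a pan pot is just splitting one signal between two channels. A common approach is the constant-power (sin/cos) pan law, sketched below in Python (a simplified illustration; real consoles differ in the exact law and in the amount of center attenuation):

```python
import math

def constant_power_pan(sample, pan):
    """
    pan = -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right).
    The sin/cos law keeps perceived level steady as you sweep across.
    """
    angle = (pan + 1) * math.pi / 4   # map pan to 0 .. pi/2
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# At center, both channels sit at ~0.707 (-3 dB),
# so the total power (left^2 + right^2) stays constant
left, right = constant_power_pan(1.0, 0.0)
```

The point of the -3 dB dip at center is that a sound panned across the stereo field doesn't appear to get louder or softer as it moves.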

Element 3: Frequency range - Equalizing


Even though an engineer has every intention of making his tracks sound as big and as clear as possible during tracking and overdubs, the frequency range of some or all of the tracks is often still somewhat limited when it comes to the mix. This could be because the tracks were recorded in a different studio using different monitors, used a different signal path, or were highly influenced by the producer and the musician. As a result, the mixing engineer must extend the frequency range of those tracks.

In the quest to make things sound bigger, fatter, brighter and clearer, the equalizer is the chief tool that most mixers use. But perhaps more than any other tool, the use of the equalizer requires a skill that separates the average engineer from the master.

There are three primary goals when equalizing:

  • Make an instrument sound clearer and more defined.
  • Make the instrument or mix bigger and larger than life.
  • Make all the elements of a mix fit together better by juggling frequencies so that each instrument has its own predominant frequency range.

Element 4: Dimension - Adding effects

The fourth element of a mix is dimension, which is the ambient field where the track or tracks sit. Dimension can be captured while recording but usually has to be created or enhanced when mixing by adding effects such as reverb, delay, or any of the modulated delays such as chorusing or flanging. Dimension might be something as simple as re-creating an acoustic environment, but it could also be the process of adding width or depth to a track or trying to spruce up a boring sound.

Actually, there are four reasons why a mixer would add dimension to a track:

  • To create an aural space
  • To add excitement
  • To make a track sound bigger, wider and deeper
  • To move a track back in the mix (give the impression it's farther away)


Element 5: Dynamics - Compression and gating


In years past, the control of the volume envelope of a sound (dynamics) would not have been included as a necessary element of a great mix; in fact, dynamics control is still not a major part of classical and jazz mixing. But in today's modern music, the manipulation of dynamics plays a major role in the sound. In fact, just about nothing else can affect your mix as much, and in so many ways, as compression.

Compression is an automated level control using the input signal to determine the output level. You set compression by using the Threshold and Ratio controls.

Compressors work on the principle of gain ratio, which is measured on the basis of input level to output level. A 4:1 ratio means that for every 4 dB that goes into the compressor, 1 dB will come out. If a gain ratio of 8:1 is set, then for every 8 dB that goes into the unit, only 1 dB will come out of the output. Although this could apply to the entire signal regardless of level, a compressor is usually not set up that way. A threshold control determines at what signal level the compressor will begin to operate. Therefore, threshold and ratio are interrelated, and one affects the way the other works. Some compressors (like the LA-2A and UREI LA-3A) have a fixed ratio, but on most units the control is variable.

Most compressors also have attack and release parameters. These controls determine how fast or slow the compressor reacts to the beginning (attack) and end (release) of the signal. Many compressors have an Auto mode that sets the attack and release in relation to the dynamics of the signal. Although Auto works relatively well, it still doesn't allow for the precise settings required by certain source material. Some compressors (like the dbx 160 series) have a fixed attack and release, which gives them a particular sound.
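The threshold/ratio math above, and the idea of attack and release ballistics, can be sketched in a few lines of Python. This is an illustration of the concepts, not any particular unit's design (the parameter values are arbitrary):

```python
import math

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: above the threshold, every `ratio` dB of
    input yields 1 dB of output; below the threshold, nothing happens."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def smoothing_coeff(time_s, sample_rate=44100):
    """One-pole coefficient for attack/release ballistics: how quickly
    the gain computer reacts to the start or end of a loud passage."""
    return math.exp(-1.0 / (time_s * sample_rate))

# A -12 dB peak through a 4:1 compressor with a -20 dB threshold:
# 8 dB over the threshold comes out as 2 dB over, i.e. -18 dB
print(compress_db(-12.0))  # -18.0
```

Note that the peak came out 6 dB quieter; that 6 dB of gain reduction is exactly what the make-up gain control discussed below is there to restore.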

When a compressor operates, it decreases the gain of the signal, so there is another control, called Make-up Gain or Output, that allows the signal to be boosted back up to its original level or beyond.

Most compressors also have an additional input and output called a side chain, which is an input and output back into the compressor for connecting other signal processors to it. The connected processor only gets the signal when the compressor exceeds the threshold and begins to compress. Side chains are often connected to EQs to make a de-esser, which softens the loud "s" and "p" sounds from a vocalist when they exceed the compressor's threshold. But you can connect delays, reverbs, or anything you want to a side chain for unusual, program-level-dependent effects. Side chains are not needed for typical compressor operation, so many manufacturers don't include side chain connectors.


Element 6: Interest - The key to Great (As Opposed to Merely Good) Mixes



To close this post, I'll quote Bobby Owsinski: "Although having control of the previous five elements might be sufficient for many types of audio jobs and might be just fine to get a decent mix, most popular music requires a mix that can take it to another level. Although it's always easier with great tracks, solid arrangements, and spectacular playing, a great mix can take simply okay tracks and transform them into hit material so compelling that people can't get enough of them. That's been done on some of your all-time favorite songs."

Most neophyte mixers have only four or five of these elements in place when doing a mix, but all six MUST be present for a GREAT mix, as they are all equally important. You can read about these six elements in more detail in "The Mixing Engineer's Handbook."