In this blog post, I want to explain what phase errors are and how you can avoid them in your recordings.
Sound consists of wave peaks (high pressure) and troughs (low pressure). The same goes for voltage: peaks (high voltage) and troughs (low voltage). The idea is that when a pressure peak reaches the mic, it becomes a voltage peak in the signal. Now suppose you have two mics next to each other, say for a stereo recording, and one of them has its polarity flipped, perhaps because a cable is wired backwards. Where one signal has a trough, the other has a peak. Our ears can learn to become almost allergic to faults like this. If you mix the two channels down to mono, the sound turns thin and hollow. You can hear the same thing if you reverse the cable to one of the speakers in a stereo pair: turn the speakers towards each other and notice how the sound, especially the bass, cancels out. The easy way to fix such a 180-degree phase error is to invert one of the signals.
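The cancellation described above is easy to demonstrate. The sketch below (pure Python, no audio libraries; the tone and sample rate are arbitrary choices) generates a tone, inverts one channel as a miswired cable would, and shows that the mono sum collapses to silence, while inverting one channel back restores the signal:

```python
import math

def sine(freq, fs, n):
    """Generate n samples of a sine tone at freq Hz, sample rate fs."""
    return [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

fs = 48000
left = sine(440, fs, fs // 10)      # 100 ms of a 440 Hz tone
right = [-s for s in left]          # polarity-inverted copy (the "turned" mic)

# Summing to mono: peak and trough meet, and the channels cancel completely.
mono = [(l + r) / 2 for l, r in zip(left, right)]
peak = max(abs(s) for s in mono)    # essentially zero

# The fix: invert one channel again before summing.
fixed = [(l - r) / 2 for l, r in zip(left, right)]
fixed_peak = max(abs(s) for s in fixed)   # the tone is back
```

A real recording never cancels this perfectly, since the two mics pick up slightly different signals, but the low end in particular disappears in just this way.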
In its article “Stereo Microphone Techniques”, the microphone manufacturer Røde illustrates phase errors in a good way.
Phase errors can also be caused by “bad” amplifiers and the like. A poor amplifier may delay some frequencies more than others, giving the signal a phase error that differs from frequency to frequency. This is not audible on pure sine tones, but virtually no music consists of pure sine tones. Virtually all analog EQs and tone controls introduce a phase shift of this kind, but that does not necessarily sound bad.
We recognize a time-varying phase error as a flanger effect, for example on guitar.
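A flanger is nothing more than the signal mixed with a copy of itself whose delay sweeps slowly up and down, so the comb-filter cancellations move across the spectrum. A minimal sketch, assuming a few-millisecond maximum delay and a slow sine LFO (both values are illustrative, not from the original post):

```python
import math

def flanger(x, fs, max_delay_ms=3.0, rate_hz=0.5):
    """Mix x with a copy whose delay sweeps between 0 and max_delay_ms."""
    max_d = int(fs * max_delay_ms / 1000)
    out = []
    for n, s in enumerate(x):
        # LFO sweeps the delay, so the phase error varies over time.
        d = int((max_d / 2) * (1 + math.sin(2 * math.pi * rate_hz * n / fs)))
        delayed = x[n - d] if n - d >= 0 else 0.0
        out.append(0.5 * (s + delayed))
    return out
```

At any instant the delayed copy cancels some frequencies and reinforces others; because the delay keeps moving, you hear the characteristic “whooshing” sweep instead of a static tonal coloration.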
If the mics are some distance from the sound source, say a few meters, another phenomenon occurs that is not a phase error but can still play a role. Our ears hear “stereo” both through differences in volume and through differences in arrival time between the sounds. If the signal to one ear is delayed by a few milliseconds, the brain interprets it as a hint of where the sound is physically located. When I make stereo recordings, I usually try to keep exactly this in mind. I usually have a main stereo pair and a few spot mics. To keep the stereo image “stable”, I delay the spot mics so they arrive a little later than the main mics. It is also important to take the distance between the mics into account; I usually reckon that 1 meter corresponds to about 3 milliseconds. In recording programs, it is often quite easy to shift the channels a few milliseconds relative to each other. As always, the ears are the only tool we can trust. Theory helps to some extent, but if it sounds good, do it.
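The 1-meter-per-3-milliseconds rule of thumb comes straight from the speed of sound, about 343 m/s at room temperature. A small helper (the function names and the 48 kHz sample rate are my own choices for illustration) converts a mic distance into the delay to dial in:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_to_delay_ms(meters):
    """Time for sound to travel the given distance, in milliseconds."""
    return meters / SPEED_OF_SOUND * 1000.0

def delay_in_samples(meters, fs=48000):
    """The same delay expressed in samples at sample rate fs."""
    return round(distance_to_delay_ms(meters) * fs / 1000.0)

# A spot mic 2 m in front of the main pair:
print(distance_to_delay_ms(2.0))   # about 5.8 ms
print(delay_in_samples(2.0))       # about 280 samples at 48 kHz
```

So 1 meter is actually about 2.9 ms, close enough to the 3 ms rule of thumb for studio work; a spot mic 2 m ahead of the main pair would be delayed by roughly 6 ms.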