Careful with that bass, Eugene

I met Steve Albini in person once. It was January 3rd, 2019. He was booked to record the band Uzeda here in Italy at Sotto il Mare Recording Studios, not far from where I live. Being friends with studio owner Luca, I was able to sneak into the back of the control room and silently witness Albini's activity. The tape edit he performed on the fly to create an ending for a song was pretty amazing… After work we all went to dinner and I was able to briefly introduce myself to Albini and put in his hands a piece of paper with www.tunedmiking.net written on it, asking if he was willing to leave blessed, golden feedback.
A few days ago YouTube highlighted for me a video that was uploaded to the Electrical Audio recording studio channel six months after that meeting: an educational video in which studio owner Albini discusses time aligning and phase "issues". I guess I will never know whether or not he visited tunedmiking.net and was inspired by what he read to give his own version of the matter… Either way, as a recording engineer I have spent many hours over the last five years recording, listening, comparing, listening again, reading and working on phase: I felt obliged to go over the video and highlight and discuss some of its contents.
Capturing multiple signals from a single audio source is a well-known procedure in audio engineering and music recording. But a distinction is due: one situation is setting up two (or more) microphones in front of an electric guitar amp; a VERY DIFFERENT situation is capturing a directly injected version of that guitar plus the microphone(s) at the amp. The former setup is an all-acoustic one, meaning the guitar signal gets transduced into moving air by the amp and the microphone picks that movement up. ALL the microphones, when more than one is set up at the amp, pick up the same moving air, the same rendition of the guitar signal given by the amp. The latter setup, though, entails duplicating the guitar signal before the amp and recording a version of it that NEVER gets transduced into moving air. If in the first case the phase correlation between signals can be managed quite easily, not so in the second case, where things become less intuitive and more difficult to manage. The second setup, as discussed by Albini up to minute 6'30", is the topic of the present post. I respectfully disagree with 90% of that section.
From my point of view, time alignment has little to do with phase alignment. Time is about… time, whereas phase is about the time of frequencies. Between 20 Hz and 20 kHz there are 19,980 of them, each with its own time, properly called its "period": which time is "time", then? About the arrival time difference between the directly injected signal ("D.I." from now on) and the microphone signal: my current setup these days for recording electric bass is a BSS AR-133 active D.I. linked to a Gallien-Krueger MB110 combo amp, with a "made in Austria" AKG D112 one inch from the grill cloth and one inch off center. I replaced the instrument with a sine wave from my cell phone's tone generator app as the signal source. I armed two tracks and made sure recording levels were even at around reference level: -20 dBFS on both tracks, levels matched with preamp gains.
I had already determined that with this specific setup, at a frequency of 85 Hz, D.I. and Amp+Mic were 99.99% in phase (this may sound surprising and/or almost meaningless right now but hold on… we'll get to that part later). Then I pressed record and started the oscillator @ 85 Hz. I stopped recording after 5 seconds, zoomed in and looked for the difference… I could not see anything that could be described as an "arrival difference".
Signal was present on both tracks virtually at the same time, a matter of a few samples: 26 samples @ 48 kHz (see image below). That's about 0.54 milliseconds.
[Screenshot 2021-02-02 at 09.45.06]
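For the record, here is the arithmetic behind those numbers, as a back-of-the-envelope sketch: a 26-sample offset at 48 kHz is about half a millisecond, which at 85 Hz amounts to a rotation of under 17°, far from anything dramatic.

```python
fs = 48000        # sample rate (Hz)
offset = 26       # measured offset in samples
f = 85.0          # test tone frequency (Hz)

t = offset / fs              # offset in seconds
phase = 360.0 * f * t        # phase rotation this offset causes at f

print(f"offset: {t * 1000:.3f} ms")            # -> offset: 0.542 ms
print(f"phase @ {f:.0f} Hz: {phase:.1f} deg")  # -> phase @ 85 Hz: 16.6 deg
```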
One might say: "That is an actual difference". Sure it is, but not one that can cause significant phase issues. Most importantly, as you can see below, such a difference is temporary. On the Amp+Mic track (named GK_112) I could see that the speaker took its time to activate but, within three complete cycles, its waveform almost matched the D.I.'s, and after the fourth cycle the two were definitely in phase (see image below).
[Screenshot 2021-02-02 at 09.47.27]
With the two waveforms in phase, it was easy to compare the two signals. No time difference, just 47.05 ms before the speaker fully overcame its inertia and oscillated properly.
Not only could I not detect any significant delay, but the two waveforms were also in phase… HOW IS THAT POSSIBLE? Well, that's how it actually is. Try it yourself. I ran the test with two different microphones and got very similar results: first with the AKG D112, then with an Electro-Voice RE20 in the same position. I simulated a frequency sweep with my cell-phone-turned-oscillator and below is what I found connecting the D.I.'s mic preamp output to channel #1 and the Amp+Mic's preamp output to channel #2 of my oscilloscope.

D.I. + AKG D112:
  • in phase at 85 Hz (E2 sharp, see picture below)
  • 180° out of phase at 168 Hz (E3 sharp)
  • in phase at 1030 Hz.
The inconsistency of phase is likely attributable to what is called "group delay".

[Image: IMG_2899]

D.I. + EV PL20:
  • in phase at 104 Hz
  • 180° out of phase at 238 Hz
  • in phase at 1134 Hz
Nothing relevant about time difference. Nothing relevant about acoustic delay. Specific frequencies were actually in perfect phase. Comb filtering? I’ll discuss that in the next post about “group delay”.
Let's go to the section where the time gap on the oscilloscope is shown to us, supposedly indicated by the fact that the waveforms don't line up. At 3'50", upon playing the first note, which is a G, I can see that the two waveforms actually sort of line up! The phase between the third and fourth notes played, a B and a C, is almost perfect: the deepest valleys and the highest peaks match. The source signal is rather complex and it's not easy to decipher on the scope. He should have used a sine wave tone and thought in terms of fundamentals and harmonics à la Fourier: that's why, while E and F look out of phase, it's hard to believe they actually are when you notice that the phase of the adjacent notes is almost perfect.
At 4'08" comes a confusing concept: the discrepancy to be corrected is NOT between the two signals, but between SOME frequencies contained in those two signals. Phase is always related to frequency!!!
And finally Mr. Albini tells us that a mere 774-microsecond delay was implemented on the D.I. signal to correct the discrepancy. Now, having said that the waveforms could be almost completely out of phase with each other, a delay of 774 µs would ONLY have solved a 180° phase issue at 646 Hz. Did he correct the discrepancies just with that? I don't know man… Are the waveforms in very good alignment?
"Low frequencies are better supported". Mmmhhh… Microseconds can align medium-high and high frequencies, but to manage medium-low and low fundamentals you definitely go with milliseconds. As in my example, 85 Hz is almost an E2, i.e. the lowest note on a guitar. E2 (82.4 Hz) has a period of about 12.13 milliseconds, and 180° of that amounts to about 6.07 milliseconds, or 6070 microseconds if you like: a huge difference!
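The two claims above boil down to one formula each; a quick sketch:

```python
# Which frequency does a given delay flip by 180 degrees, and how much
# delay does a 180-degree shift at a low fundamental actually require?
def freq_flipped_180(delay_s):
    """Frequency whose half period equals the given delay."""
    return 1.0 / (2.0 * delay_s)

def delay_for_180(freq_hz):
    """Delay equal to half the period of the given frequency."""
    return 1.0 / (2.0 * freq_hz)

print(freq_flipped_180(774e-6))      # -> ~646 Hz
print(delay_for_180(82.41) * 1000)   # E2 -> ~6.07 ms: milliseconds, not microseconds
```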
In case you wonder, I never heard back from Steve Albini.
More about a general DI + microphone combination and “group delay” in the next post.


Eventide Precision Time Align

Eventide Precision Time Delay / Precision Time Align plugin

This is badass!

[Screenshot 2019-09-02 at 19.27.48]

Having passed the point of no return into D.A.W./I.T.B. mode, it's been great to discover what this utility tool can do. Where in the past I would use a hybrid in-the-box/out-of-the-box approach and adopt my trusted Little Labs I.B.P. for phase alignment, today I'm pretty much all in the box and the Eventide Precision Time/Phase Align is a fantastic step forward. The reason for this personal excitement is that this plugin works as a delay (as the I.B.P. would) in its "Time Delay" version and, in its "Time Align" version, can shift recorded tracks not only back (delay) but also forward (advance!!!).

This means at least two very important things: I can think about phase alignment, what I call tuning, after the fact, i.e. after the take has been recorded. It also means I don't have to identify in advance (before recording) which source is "late" in phase (= time) at the specific target frequency I am interested in.

Example: two microphones on a combo guitar amp, one moving coil close to the grill cloth, one condenser three feet away. I'm looking for perfect tuning @ 280 Hz. In the past I'd send the frequency tone to the amp, watch the two mic preamp outputs on my oscilloscope, look at the delay between the two and, if any, apply a combination of polarity reversal (if necessary) plus the least amount of phase shifting with my trusty Little Labs I.B.P. to null the delay. Because back in the day I could only apply delay.

Today is different. I don't need to do the preliminary job; I can do without the scope: send the tone to the amp and record the two mics/tracks. Instantiate the plug-in on both tracks and alternately bypass one of the two while moving the alignment fader on the other to find the setting imparting the least movement from the original position. Keep in mind this could again entail reversing polarity on one of the two tracks… If you followed along and practiced a little with the matter, as laid out in the two posts dedicated to the I.B.P., everything should be pretty familiar. The polarity reverse button is included in the Eventide P.T.A., top left.

I could be very accurate, as I was in the past, and hook up the oscilloscope to the separate outputs of the two tracks where the frequency tone has been recorded, then EITHER slide the late one earlier or slide the earlier one later to tune them perfectly. But the great aspect of using the Eventide Precision Time Align is that I can do it later, with the actual musical material, using only my ears. Time saved and more fun!
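If you'd rather put a number on it without patching in a scope, here is a minimal sketch (plain numpy; the track data is a hypothetical stand-in for your two recordings) that estimates the sample offset between the two tone tracks by cross-correlation, which is essentially what the scope comparison does:

```python
import numpy as np

def estimate_offset(track_a, track_b):
    """Lag of track_b relative to track_a, in samples, by full
    cross-correlation. Positive means track_b is late. For a pure
    tone the answer is only meaningful modulo the tone's period."""
    corr = np.correlate(track_b, track_a, mode="full")
    return int(np.argmax(corr)) - (len(track_a) - 1)

# Synthetic check: a 280 Hz tone and a copy made 40 samples late.
fs = 48000
t = np.arange(fs) / fs
close_mic = np.sin(2 * np.pi * 280 * t)
far_mic = np.roll(close_mic, 40)

print(estimate_offset(close_mic, far_mic))   # -> 40
```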

From the Eventide manual: “If you place two microphones near the same sound source, but at different distances from it, the resulting waveforms in your DAW will be similar, but delayed with respect to one another. This can cause comb filtering whereby some frequency bands cancel each other when the two signals are mixed together. This often makes the combination of the two signals sound worse than either track soloed. This problem can be partially addressed by shifting one of the recorded tracks relative to the other in your DAWs arrange view”.

I'd like to bring the term "partially" to your attention: this is not because the tool is only "partially" able to perform its task. It's because, as you should know by now, while some frequencies get aligned in phase, other frequencies simultaneously get more misaligned. It's up to you to decide which combination of the two mics works best, both in phase and in volume.

The presentation page at Eventide says “[…] completely eliminating timing anomalies and phase issues from your mix […]”… I doubt it.

Shaping Guitar Tones

The No More Source Of Confusion series goes on…

While checking out some videos online, I noticed a recent contribution by guitar great Tim Pierce. I've watched videos from his "Tim Pierce Guitar" channel before and it's always cool, top-notch, very inspiring material. This is to say that my post is not meant in any way to diminish the content of Mr. Pierce's video. He's great, his gear is awesome and his tone is superb. My post is just about a couple of spots which, from my personal point of view as a "phase issues" devotee, stand out as relevant. Because the subtitle of the video goes "on shaping guitar tones with an R-122V and an SM57", I thought some aspects should be magnified to the eyes and ears of an audio engineer to improve his/her skills when working to shape a guitar tone with two microphones. Hopefully Mr. Pierce will appreciate this too.

The graphics introduced at the beginning make clear how Mr. Pierce's signal is engineered from speaker cabinet to recorder: SM57 into a BAE 1028 preamp, R-122V into a Skibbe Electronics 736-5, both going to the Pro Tools DAW. What is not mentioned is where Pierce has hooked up the Chandler Limited Mini Rack Mixer (a line-only mixer, not shown in the graphics) used to create a balance between the two paths. Presumably before the Pro Tools inputs? That would allow him to create the desired blend and commit to it. It looks like the two mixer channels he works with are both panned left, actually suggesting a two-channels-into-one-track configuration. This doesn't matter much but I thought I'd highlight it for anyone wondering about the little red knobs he keeps rotating.

Mr. Pierce starts with a guitar equipped with P90 pickups, looking for a "clean" tone. He declares he is using only the Royer R-122V at first: he likes the settings but is still curious to investigate the contribution of the SM57. At 4'02" he dials in the 57 and the sound gets, in his own words, "nosey". Well… here's the first relevant spot I was talking about: the nose is clearly a comb filtering effect. It's not that the 57 is a nosey microphone or that it adds "nose" by itself. It is the combination of the two, the 57 and the R-122V, that sounds nosey, because portions of the frequency spectrum are canceled while other portions are boosted at the same time. The resulting frequency response is contaminated by inevitable comb filtering. The 57 shouldn't be blamed.

This is confirmed at the second spot, around 6'15", when, "coming from the other direction", Pierce goes for a different part with a different, brighter guitar (a G&L ASAT) and starts off with the SM57 only. Not "nosey" at all on its own, don't you think? He's actually very surprised when he adds the Royer, not expecting all that midrange (lower midrange, I would say) build-up… which is exactly what happened before: the "nose" again, confirming that his R-122V and SM57, in that specific position in front of the speaker (check out 1'18" and note the two are not perfectly aligned in height), just combine in that specific way, creating that specific filtering. What is most evident is a build-up around 120 Hz, roughly, suggesting that the two microphones are possibly in tune at one fundamental frequency around that area and "out of tune" in the higher range of the spectrum.

Speaking of guitar tone shaping, I know from experience that there's only so much you can obtain by leaving things like that: you start with one mic, add the second one, mix the two in different proportions and hope you like the result. If you like it and it works for you, that's cool. But the shaping can definitely benefit from some refinements.

Refinement #01: move one of the two microphones away from the speaker… there is a very good chance that the comb filtering will shift to a higher range of the spectrum, opening up the tone of the mic combination. Don't wanna do that 'cause you like the tone of each single microphone in its position? And most of all, you don't want to move a mic every time you are searching for the right tone? Understandable…

Refinement #02: leave the mics where they are and apply a few ms of delay (start with 1 ms) to one of them: a coarse resolution, but it gives you options…

Refinement #03: leave the mics where they are and get a phase alignment tool, hardware or software; apply it to one of the two microphones and, by gradually working the tool, you'll move the mic without actually touching it! You will be able to tune the microphones at different fundamentals and finally get the right proportion for the right tone. (A sketch of what #02 and #03 do to the comb follows below.)
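To see refinements #02 and #03 at work on paper, here is a toy two-path model (equal gains; c = 340 m/s as elsewhere on this blog; the distances and delays are made up for illustration): full cancellations sit where the net time offset between the two paths equals an odd number of half periods, so changing that offset, electronically or by moving a mic, slides the whole comb.

```python
def null_freqs(offset_s, fmax=2000.0):
    """Canceled frequencies for a given net time offset between
    two equal-gain paths that get summed."""
    nulls, n = [], 0
    while (f := (n + 0.5) / offset_s) <= fmax:
        nulls.append(round(f, 1))
        n += 1
    return nulls

c = 340.0
acoustic = 0.9 / c                   # say the far mic is 0.9 m farther: ~2.65 ms
print(null_freqs(acoustic))          # first null ~189 Hz, then every ~378 Hz
print(null_freqs(acoustic - 0.001))  # delay the CLOSE mic by 1 ms: nulls shift up
```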

P.S. If you have read the previous posts, you already know that a dynamic (moving coil) mic plus a ribbon mic is quite a different situation from a dynamic plus a condenser mic.

Thank you very much. And thanks to Mr. Tim Pierce!

“Being social”

Some days ago Tape Op magazine published on its Facebook page the article formerly published on their website in the tutorials section. It is the article you find by opening the "Tape Op #106" page here at the blog. Comments followed the posting; unfortunately some of them were inappropriate when not rude or offensive. The article was described as "full of inaccuracies, misstatements and gross over-simplifications"; "not up to Tape Op standards"; "a dud"; "filled with half-truths and falsehoods".

I guess these are just different ways of "being social" over at Facebook, ways I am not interested in. Again, the reason why I thought it a good idea to share my experience as an audio engineer was that I felt I had come to interesting conclusions that could possibly be helpful to somebody else.

I believe that one should at least try to go along with, understand and put into practice the proposed methods and solutions, and only then reply. After, not before. I totally understand the skepticism that might arise upon reading the article. In fact it actually contains a few early "inaccuracies" about the series of harmonics influenced by the combination of microphones. Those "inaccuracies" have been revised and improved here at the blog.

This article never wanted to be "scientific". It originated as a shot in the dark at phase-related "issues" when working with more than one microphone. I am not a scientist. I try to be as accurate as possible. I don't jump to conclusions. I am not new-age.

You can't even imagine how skeptical and at the same time embarrassed I was when I first started to form hypotheses about the phase displacement between condensers on one side and moving coils plus ribbons on the other, and about what happens when their outputs are combined.

While researching, I tried to gather as much information as I could but… what I was hoping for wasn't around! I read a very interesting article in Sound On Sound magazine which was extremely in accordance with my results (down to using the same words) but totally missed the point when saying "If you use more than one mic to record a single instrument, the simplest way to minimise the effects of phase cancellation is to get the mic capsules physically as close together as possible". Aaarghhh… Damn it!

I felt almost guilty before the worldwide audio engineering community. "Who am I to introduce these obscure facts?" Because to me they were facts. "Why isn't anyone talking about this?" I was desperate for support, until…

One day I finally read the “Handbook for Sound Engineers”, chapter 16, page 505: “The electrical waveform output from the moving-coil microphone does not follow the phase of the acoustic waveform because at maximum pressure the diaphragm is at rest (no velocity). Further, the diaphragm and its attached coil reach maximum velocity, hence maximum electrical amplitude at point c on the acoustic waveform (i.e. when pressure is at null). This is of no consequence unless another microphone is being used along with the moving-coil microphone where the other microphone does not see the same 90° displacement. Due to this phase displacement, condenser microphones should not be mixed with moving-coil or ribbon microphones when micing the same source at the same distance”.

Then, at page 508: “Capacitor microphones generate an output electrical waveform in step or phase with the acoustical waveform […]”

Those were exactly the conclusions I had come to on my own: I finally felt relieved, and legitimized to share.

tunedmiking.net actually tells you that one can mix condensers with moving-coils or ribbons. It is quite accurate in telling you the shape of the comb filtering which occurs when you do so and also how to manage the filtering for the benefit of your recordings.

You might find it interesting and hopefully useful. Or you might think it’s not for you.

I strongly encourage you to try yourself and post honest comments here, whatever you think about it. If you have supporting documentation, please share.

P.S. I have replied very politely to the various comments, but for some reason Facebook classifies the majority of my replies as spam and doesn't allow them; they must first be approved by Tape Op.

Andrew Scheps on recording drums

Continuing the "no more source of confusion" series, here's something I recently watched and found very interesting, and very confusing, about phase: Andrew Scheps, a big name in the audio engineering world, talks with recordproduction.com about his approach to recording drums.

At 2'46" he goes into detail about specific choices for specific sources:

"Standard kick inside and outside, and if I had to pick something I suppose AKG D-112 inside and 47 FET outside".

Is that just inside?” asks the moderator.

"Just barely inside, yeah, because… Two reasons: I'm usually not recording things like Pantera where you need to have a very clicky kick drum, so I like to have more low end in that inside mic; and also that keeps the inside and outside mic lined up, so phase-wise you're only an inch or so apart which at the low frequencies doesn't matter at all as opposed to being maybe a foot apart if you really stick that mic far inside".

I am perfectly OK with the first reason but the second one I find confusing.

Because:

  1. it is absolutely not a given fact that two almost-lined-up microphones are in phase, especially when one is a dynamic and the other a condenser;
  2. the lower the frequency, the lighter the comb filtering that results from changing the distance between two non-coincident microphones: 80 Hz has a wavelength of 4.25 meters, or 13.94 feet, so a foot or so doesn't change much proportionally; whereas 200 Hz (a low-mid frequency) has a wavelength of 1.7 meters, or 5.58 feet, so a foot or so has a much heavier impact on the phase of that frequency (see the sketch below).

Also when talking about phase, a reference to the target frequency should be mandatory.
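To put numbers on point 2 above, a back-of-the-envelope sketch (assuming the 340 m/s speed of sound used throughout these posts) of the phase rotation an extra foot causes at a given frequency:

```python
# Phase rotation caused by an extra path length at one frequency.
def phase_deg(distance_m, freq_hz, c=340.0):
    wavelength = c / freq_hz
    return 360.0 * distance_m / wavelength

foot = 0.3048  # one foot in meters
print(phase_deg(foot, 80))    # -> ~25.8 deg: barely matters
print(phase_deg(foot, 200))   # -> ~64.5 deg: a much heavier impact
```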

(No more) source of confusion

With the present post it is my intention to highlight and discuss what I find around as potential sources of confusion: articles, videos, interviews, manuals and all words that do not help audio engineers understand the phase of audio signals. As attempted in previous posts, I will try my best to provide you with the right, definitive words to guide you to the most perfect, i.e. most useful, comprehension of the matter.

A few days ago I bumped into this article. Engineer Alan Branch tells us about his work and his techniques. When speaking of electric guitars and amps, he reports using a Shure SM57 in combination with a Neumann U67: "Depending on your room, mic distances will vary. But start with a few feet and listen. Bear in mind any distance between the mics can cause phase problems by cancellation of certain frequencies. It's easy to hear the tone change with these. If it is missing bass end or brightness, move the mics or check your mic balance."

Here are my considerations:

1) any distance between microphones does cause phase problems, always: the arithmetical sum of the signals gives a negative result, i.e. cancellation of some frequencies. Not only that.

2) any distance causes phase benefits as well, always: the arithmetical sum of the signals gives a positive result, i.e. augmentation of some frequencies. Not only that.

3) with an electromagnetic microphone and a condenser, even no distance causes phase problems (read here if you haven’t already).

4) Let’s not call them problems.

5) Let’s call them alterations of the combined frequency response or, more appropriately:

COMB FILTERING effects.

To sum it all up: any distance, even no distance, depending on the transduction topology of the mics, causes an alteration of the combined frequency response. This alteration can be bad or it can actually be good.

And then let's not just listen to this alteration: we are audio engineers, let's measure it.

Try this: send an 80 Hz tone to the guitar amp and stick an SM57 and a coincident condenser (lucky you if you have a U67 available) in front of it. Provided preamp gains are OK, check phase with a phase correlation tool; better, with an oscilloscope; or best, by simply recording the two tracks and checking with your own eyes on the zoomed-in timeline. What do you see? How many samples apart are they, if any? Is it true that simply flipping the polarity on the condenser improves the situation? Now change frequency, say to 100 Hz, and check again. Try 125 Hz, 150 Hz, 180 Hz, 200 Hz, 250 Hz and take notes on the changing situation. Now redo the same for each varied distance of the condenser from the amp (and from the SM57, of course).

At the end of this lengthy work you will have very useful data about the combination of an SM57 and a condenser used to mic a guitar amp. You will find out that a perfect combination at 80 Hz corresponds to a very bad one at around 240 Hz. And most of all: how does it sound? You will find out a lot more, and possibly your preferred tuning distance.

What about a perfect combination @ 200 Hz (corresponding to a 180° phase inversion @ 600 Hz)? Again: how does it sound? What is a nice balance between the two mics? Better with a prominent 57 and a touch of condenser, or the other way around?
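If you want to put numbers on the whole exercise without a scope, here is a minimal sketch (plain numpy; the synthetic signals are hypothetical stand-ins for your two recorded tracks) that reads the phase of the test tone on each track and takes the difference:

```python
import numpy as np

def phase_at(track, freq_hz, fs=48000):
    """Phase (degrees) of one frequency in a recorded track, via a
    plain DFT bin. Use a whole number of cycles (or a window) on
    real recordings to keep the estimate clean."""
    t = np.arange(len(track)) / fs
    bin_ = np.sum(track * np.exp(-2j * np.pi * freq_hz * t))
    return np.degrees(np.angle(bin_))

# Synthetic stand-ins for the two recorded tracks:
fs = 48000
t = np.arange(fs) / fs
sm57 = np.sin(2 * np.pi * 80 * t)
condenser = np.sin(2 * np.pi * 80 * t + np.pi / 2)   # pretend it reads 90 deg ahead

diff = phase_at(condenser, 80, fs) - phase_at(sm57, 80, fs)
print(f"phase difference @ 80 Hz: {diff:.1f} deg")   # -> 90.0
```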

Using Little Labs I.B.P., part 2 of 2

I haven't purchased a Little Labs I.B.P. hardware box yet… but recently I was lucky to find out that an audio engineer friend of mine had one (thanks a lot, Marco Posocco!).

I immediately borrowed it and put it on the test bench!!!

What is the best way to describe the I.B.P.? I guess it can be thought of as an analog mono delay, capable of continuously shifting the signal in time from 0 up to almost 4 ms. Therefore it can re-align frequencies that are out of phase but, as you might already foresee if you've read the previous post, it can only provide you with perfect phase for ONE FREQUENCY at a time.

I tested the unit really in depth and was able to understand (and see on the oscilloscope) that it is not consistent in performing its task. At first I imagined the delay was one and the same for all frequencies. No, it doesn't work like that.

I set the knob at "full shift", the CW position, no buttons pushed, and then sent different frequencies to a pair of channels of my digital recorder, one with the I.B.P. inserted. Here's what I found: at 40 Hz, the delay is 190 samples (@ 48 kHz), or 3.958 ms if you like. At 50 Hz the delay is 182 samples; at 63 Hz, 173 samples; at 80 Hz, 162. As the frequency rises, the delay decreases.

Now, the first quarter wavelength of 40 Hz, expressed in time, is 300 samples (@ 48 kHz); of 50 Hz, 240; of 63 Hz, 190; of 80 Hz, 150 samples…

Frequency (Hz)    1/4 wl (samples @ 48 kHz)    I.B.P. max shift (samples @ 48 kHz)
40                300                          190
50                240                          182
63                190                          173
70                171                          169
72                167                          167
75                160                          165
80                150                          162
100               120                          147
125               96                           n/a
160               75                           n/a
200               60                           102
250               48                           n/a
320               37.5                         n/a
400               30                           91

(The 1/4 wl values for 125, 160, 250 and 320 Hz are simple arithmetic; the I.B.P. shift was not noted at those frequencies.)

What does this mean? It means that the I.B.P. won't be able to completely re-align frequencies below 72 Hz. For those frequencies the I.B.P. doesn't cover the full span of the first 1/4 wavelength: if the misalignment occurs within the first 1/4 wavelength and its size is beyond the capability of the I.B.P., there is no way to re-align.
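The quarter-period column of the table is pure arithmetic; a two-line sketch reproduces it and shows where the unit runs out of range:

```python
def quarter_period_samples(freq_hz, fs=48000):
    """First quarter of a cycle of freq_hz, in samples."""
    return fs / (4.0 * freq_hz)

for f in (40, 50, 63, 72, 80, 200, 400):
    print(f, round(quarter_period_samples(f)))
# -> 40 300, 50 240, 63 190, 72 167, 80 150, 200 60, 400 30
# At 72 Hz the measured maximum I.B.P. shift (167 samples) equals the
# quarter period; below that frequency the unit runs out of range.
```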

Let's better understand this with an example: you want to tune (i.e. align) @ 50 Hz a "Bass D.I." track and a "Bass Cab Mic" track (same take, of course!). On the D.A.W. timeline you measure a time shift of 215 samples between the two, which is less than the first 1/4 wavelength of 50 Hz. The "D.I." precedes the "Cab Mic". Here's the point: the I.B.P. inserted on the "D.I." won't be able to delay the track more than 182 samples and will therefore remain 33 samples short. Probably not a big deal for 50 Hz but still… phase is not perfect.

Let's see another example: you want to tune @ 80 Hz a "Kick In" mic and a "Kick Out" mic. Adopting the procedure explained in the previous post, you measure a difference of 124 samples between the two (still within the first 1/4 wavelength). In this case, the I.B.P. applied to the preceding "Kick In" track will provide perfect phase, because it can delay 80 Hz by up to 162 samples and the difference is only 124.

Let's now consider a different scenario: you have a third, "FrontOfKit" mic which you want to tune @ 80 Hz as well. By the way: is it a condenser? A ribbon? Consider that! Whatever the type, the time shift between the two tracks is 173 samples, which falls within the second 1/4 wavelength of 80 Hz and beyond the capabilities of the I.B.P.

BUT… if you insert the I.B.P. on the "FrontOfKit" track and reverse polarity by pressing the "phase invert" button, then you'll be able to tune that mic as well.

Going back to the first example, bass alignment @ 50 Hz, you could do the same, i.e. realign the "Bass Cab Mic" track, if the difference fell within the 182 samples preceding the 1/2 wavelength of 480 samples (i.e. between 298 and 480 samples).
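Here is the whole reachability argument condensed into a sketch (the max-shift figures are the ones I measured above; this is my model of the unit's behavior, not anything from Little Labs):

```python
def alignable(offset, freq_hz, max_shift, fs=48000):
    """Can the I.B.P. null an offset (in samples) at freq_hz?
    Only offsets within the first half period are considered."""
    half_period = fs / (2.0 * freq_hz)
    direct = offset <= max_shift                                 # plain delay
    flipped = half_period - max_shift <= offset <= half_period   # delay + polarity invert
    return direct or flipped

print(alignable(215, 50, 182))   # -> False: the 50 Hz example, 33 samples short
print(alignable(124, 80, 162))   # -> True: the kick in/out example
print(alignable(173, 80, 162))   # -> True, but only with polarity inverted
```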

How does all this theory sound in real life out of a pair of speakers?

Well… It sounds good.

Real life test:

I sent a 50 Hz, -20 dBu tone to my BSS AR-133 active D.I. box linked to a bass head + 8×10" cab system. I placed an Electro-Voice RE20 and a SOLOMON MiCS LoFreQ in front of the cab (on different speakers) and recorded the three tracks.

Compared to the RE20, the D.I. was misaligned, almost 180° out of phase. The LoFreQ as well.

I flipped the polarity on both.

Result: the RE20 precedes the D.I. by 17 samples (Fs: 48 kHz); it also precedes the LoFreQ by 60 samples.

What did I do? 1) On the timeline I nudged the D.I. track forward by 17 samples to align it with the RE20. 2) I duplicated the RE20 track. 3) One copy of the RE20 I nudged back 60 samples, together with the D.I., to align both with the LoFreQ. 4) The other copy I left where it was and inserted the I.B.P. on that channel (remember I work on an analog console, therefore I don't deal with latency).

I listened to both versions.

Version 1:

  • D.I. track nudged back by 43 samples net (60 - 17)
  • RE20 track nudged back by 60 samples (I.B.P. on bypass!!!)
  • LoFreQ track

Version 2:

  • D.I. track nudged back by 43 samples net (60 - 17)
  • RE20 track NOT nudged and with I.B.P. active
  • LoFreQ track

Version 2 sounded better, without any doubt. More open and just… better!

This test was really important for me to understand whether the I.B.P. is worth having: my answer is yes.

In version 1, ALL frequencies of the RE20 were nudged back by 60 samples.

In version 2, by virtue of the I.B.P.'s non-linearity, no frequency was delayed by the same amount as any other, and this definitely sounded better. Period.

Since I had only one I.B.P., I set the RE20 aside for a moment, inserted the I.B.P. on a non-nudged copy of the D.I. track and mixed it with the LoFreQ. Again, with the oscilloscope I found perfect phase.

Version 1:

  • D.I. track nudged back by 43 samples net (60 - 17)
  • LoFreQ track

Version 2:

  • D.I. track NOT nudged, with the I.B.P. inserted and active
  • LoFreQ track

Again version 2 sounded better.


Addendum

The first thing I noticed is that when the circuit is inserted, a "phase issue" occurs by default. What does this mean? I'll explain what I did.

At the beginning of my test I ran a 440 Hz tone from the console’s oscillator into channel #21. I took the tone out of the insert send of that channel and reinjected it straight into the insert return of channel #22. Why 440 Hz? No particular reason: it was the last setting used.

I took the #21 & #22 direct outs and brought ‘em to the oscilloscope: the two sine waves were in perfect phase.

Then I patched the Little Labs I.B.P. between insert send #21 and insert return #22 and looked at the oscilloscope: the sine wave of channel #22 (the one with the I.B.P.) was slightly delayed. Ooops… by default NO PERFECT PHASE there!

OK: the I.B.P. circuit introduces a phase shift by default when inserted, even with the knob at minimum, fully CCW.

Could this be a problem? Maybe… I wondered: "which frequency is put 180° out of phase by this delay?" A higher frequency, for sure! I increased the frequency on the oscillator until I reached a 180° out-of-phase situation on the oscilloscope: it happened at 5228 Hz, which corresponds to a default delay of roughly 96 µs (half the period of 5228 Hz). But the knob was at minimum, i.e. "no delay", which is not the working condition of the I.B.P.

Control Over Phase? Little Labs I.B.P., part 1 of 2

I was seriously intrigued by the Little Labs I.B.P. hardware box when I first got to know about it many years ago. The fact that it could do something no other tool could got me excited in the first place: if I had one, I could do something very special to my recordings. That was the point for me, but to be honest I knew little about phase at that time.

These days I know more about phase and I'm thinking about it again: could it make recording easier for me? Maybe… Let's listen to and watch what Jonathan Little, the I.B.P.'s dad, shows about it (up to minute 2'58"):

So I'll consider the scenario proposed by Mr. Little: a direct, "pure", non-equalized signal coming from an electric instrument like a bass guitar goes straight to tape/D.A.W. through a D.I. box. By the way: the I.B.P. itself (not the Jr. version though) is also a D.I. box with a line level output that allows you to save a mic preamp for other tasks. Simultaneously, a second version of the same signal is linked from the D.I. box and injected into a bass amp like an Ampeg SVT (not SBT!), picked up by a microphone placed in front of the speaker and sent to tape/D.A.W. as well. According to Mr. Little, the EQ applied on the head and the speaker cabinet itself affect and modify the phase of the second signal. "Everything has a phase versus frequency response".

Mmmhhh… I think what really matters here is that the second path is so different that it actually causes a time offset between the two signals! The direct signal gets recorded before the amp+microphone signal and this situation creates phase issues. It is true that everything has a phase vs. frequency response, but the point here is the difference between the two paths: you would not get any issue if you were to use just the D.I. track or just the mic'ed cab track. That difference is the one generating issues, and it should be clear by now, after reading the previous posts, that issues manifest only at certain frequencies. Let's not forget that!!! Each out-of-phase combination has a different (more or less negative) phase result for specific frequencies, depending on their period. One specific out-of-phase combination is also more or less positive (additive) for specific frequencies: the most positive happens when the difference in time between the two signals corresponds to the period of that frequency.

Let's say we can measure the delay between the D.I. signal and the mic+amp signal: 4.31 ms. That's the period of 232 Hz: that frequency is perfectly tuned, actually reinforced, by the so-called "phase issue". Major problems though for 116 Hz: totally canceled out! If you've read the previous posts you should understand why. No wonder then that Mr. Little goes "more in depth" analyzing ONLY ONE frequency, the 400 Hz showing on his oscilloscope. With the I.B.P., as with any other tool or method, you can only tune one fundamental frequency at a time!!!
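To make the arithmetic explicit, a tiny sketch (the 4.31 ms figure is just our hypothetical measurement): the delay reinforces frequencies whose period fits a whole number of times into it, and cancels those for which it equals an odd number of half periods.

```python
delay = 4.31e-3   # measured delay between D.I. and amp+mic, in seconds

reinforced = [round(n / delay) for n in range(1, 5)]        # whole periods
canceled = [round((n + 0.5) / delay) for n in range(0, 4)]  # odd half periods

print(reinforced)  # -> [232, 464, 696, 928]
print(canceled)    # -> [116, 348, 580, 812]
```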

"When you have two microphones, like say a room mic and you have a mic on the kick drum or a tom-tom for instance, you're gonna get a degree of cancellation between the two. As you can see… (fiddling with the knob) this is just like moving the mic: you can see that it's gonna cancel the signal somewhat. Remember that we're looking at a sine wave (!!!), not a complex music wave. I'm using the simplest form to explain this (???)".

In fact Mr. Little is fiddling with 400 Hz ONLY. WHAT IS GOING ON WITH ALL THE OTHER FREQUENCIES? They are variably in/out of phase…

My personal conclusion is that the I.B.P. box could be helpful in trying to tune (i.e. get perfect phase between) two signals at one specific frequency, without my usual need to track to the recorder in advance. It doesn't provide any information about the frequency itself, though, so I'd still have to set up a more elaborate way to target it.

Oh such a perfect phase… 3 of 3

Following post 2 of 3, let's now analyze the second arrangement, when the capture involves microphones with different types of transduction (condenser + moving coil, or condenser + ribbon). As we've seen in post 1 of 3, their phase is already 90° apart by default. According to "phase" terminology, 360° corresponds to a full wavelength, 180° to half a wavelength and 90° to a 1/4 wavelength. So they are 1/4 wavelength apart.

The peak of the condenser mic (in red) occurs when the electromagnetic (in green) is at a null, and the next negative peak of the electromagnetic corresponds to a null of the condenser. And so on… peaks and nulls are 1/4 wavelength apart. This provides us with the correct solution when positioning such microphones and looking for perfect phase at one frequency: we must separate them by 1/4 wavelength of the target frequency. Not only that: since the phase of the electromagnetic (moving coil or ribbon) is "ahead", the condenser is the one to be positioned closer to the source to compensate for the difference.

If we refer to the same 1 meter separation we set up when checking out microphones of the same type (tuning frequency 340 Hz) and swap one of them so that the one closer to the source is a condenser and the farther one is an electromagnetic, we obtain radically different results.

1 meter is now to be regarded as 1/4 wavelength of 85 Hz: that's the tuned frequency (this means you can tune at lower frequencies using a lot less real estate!). The coincident positioning is to be avoided (at least until you get familiar with the matter and learn how to implement tools such as the I.B.P. by Little Labs or the Phazer by Radial); what was the coincident positioning is now a 1/4 wavelength-apart positioning, with the condenser closer to the source.

Let's look at this from a different starting point. Say you have positioned the different microphones as coincident: there is no chance to have perfect phase at any frequency. Period. As the sound wave propagates from the source towards the microphones, the condenser translates variations of sound pressure at any frequency with a 90°, or 1/4 wavelength, delay. Or, if you prefer: the electromagnetic translates 1/4 wavelength ahead. So… repositioning is a must. Move the condenser closer to the source, or move the electromagnetic farther away, keeping in mind that the distance must be 1/4 wavelength of the "tuned" frequency. The positive combination of energy will reinforce that frequency.
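As a quick sketch of the spacing rule (using c = 340 m/s, as elsewhere on this blog):

```python
# For a condenser + electromagnetic pair, tune frequency f by placing
# the condenser a quarter wavelength closer to the source.
def quarter_wavelength(freq_hz, c=340.0):
    return c / (4.0 * freq_hz)

print(quarter_wavelength(85))   # -> 1.0 m: the running example
# Compare: two same-type mics tuned to 85 Hz would need a full
# wavelength of separation, i.e. 4.0 m of real estate.
```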

On top of this, let's analyze further implications. First: because of the needed separation, as you tune (i.e. get 0° phase) at 85 Hz, you get total 180° phase cancellation at 255 Hz, i.e. the frequency three times higher (the 3rd harmonic).

[Figure: 3of3Fig2]

That's because 1 meter corresponds to 3/4 wavelength of that higher frequency, which gets canceled out because the condenser picks up the negative peak exactly when the electromagnetic picks up the positive peak.

[Figure: 3of3Fig3]

Second: the separation also corresponds to 5/4 wavelength of an even higher frequency, the 5th harmonic (five times higher), 425 Hz, which is perfectly in tune with the fundamental.

[Figure: 3of3Fig4]

For a number of reasons, like microphone sensitivity and the consequent preamp gain involved, or rather sonic results, it's common practice to have an electromagnetic closer to the source and a condenser placed farther away. Right? OK. No problem… Think back to what we learned about microphones of the same type: they can be either coincident, or 1/2 wavelength apart with polarity reversed, or else a full wavelength apart, both with the same phase.

Having learned this, and knowing now that with microphones of different types a 1/4 wavelength separation is mandatory, you'd have to adopt one of the following two options:

  • move the condenser back 1/2 wavelength and invert its polarity
  • move the electromagnetic back 1/2 wavelength and invert its polarity

The first option is often the preferred one, with the electromagnetic closer to the source and the condenser 1/4 wavelength behind it, with polarity inverted.

You can go ahead and build the positioning pattern with respect to the source:

CONDENSER

ELECTROMAGNETIC

CONDENSER, polarity reversed

ELECTROMAGNETIC, polarity reversed

CONDENSER   and so on…
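Before moving on, here is a numeric check of the whole pattern, under the working model used on this blog (the electromagnetic reading 90° ahead of the condenser, equal gains, c = 340 m/s, condenser closer to the source); it is a sketch of the model, not a measurement:

```python
import numpy as np

c = 340.0
d = 1.0                                   # meters: the running example

freqs = np.arange(20.0, 1000.0, 1.0)
path = 2 * np.pi * freqs * d / c          # phase the wave accumulates over d
net = path - np.pi / 2                    # minus the 90 deg transducer offset
gain = np.abs(1 + np.exp(1j * net))       # magnitude of the equal-gain sum

print(freqs[np.isclose(gain, 2.0, atol=1e-6)])  # tuned:    [ 85. 425. 765.]
print(freqs[np.isclose(gain, 0.0, atol=1e-6)])  # canceled: [255. 595. 935.]
```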


Oh, such a perfect phase… part 2 of 3

In the previous post it was briefly suggested that in a multi-microphone arrangement, the relative position of microphones determines which frequencies are being picked up in phase and which are being picked up out of phase.

From that post we learned about the congenital phase displacement of condenser microphones as opposed to moving coils and ribbons. We also re-categorized moving coils and ribbons as electromagnetic microphones.

Now let's analyze the simpler case, when the capture involves microphones sharing the same type of transduction: be they both condensers or both electromagnetics.

The basic arrangement is to place them coincident, as close as possible to the source: a situation for when you just want a different, alternative sound. Virtually no phase difference occurs and both capsules pick up the sound pressure variations at the same time.

But say you are spacing microphones 1 meter apart in the direction of the source, one closer and the other one placed farther away.

[Figure: 2of3Fig1]

There is one (and only one!) frequency which cycles with a wavelength of 1 meter: that's 340 Hz (assuming a speed of sound of 340 m/s). As the complex sound wave propagates from the source towards the first and then the second microphone, the capsules will pick up 340 Hz with no difference in phase, since they are placed at a distance corresponding to one exact cycle of 340 Hz from each other.

[Figure: 2of3Fig2]

The positive combination of energy will reinforce that frequency.

[Figure: 2of3Fig3]

The same arrangement though will suffer major phase issues at the frequency one octave lower, the one for which the distance of 1 meter corresponds to half wavelength.

[Figure: 2of3Fig4]

This is what goes on: exactly when that frequency is picked up by one of the two microphones, at the other microphone the phase of that same frequency is 180° reversed.

[Figure: 2of3Fig5]

A fully out-of-phase combination, i.e. cancellation of the frequency: that's 170 Hz (340 m/s ÷ 170 Hz = 2 meters wavelength).

To expand further: let's call D the distance between the microphones. D is half the wavelength of the canceled frequency one octave lower (170 Hz). D is also 3/2 the wavelength of the closest higher canceled frequency (510 Hz), and D is also twice the wavelength of the higher octave (680 Hz), perfectly in phase and therefore boosted. And so on: D is 5/2 the wavelength of 850 Hz (canceled) and 3 times (6/2) the wavelength of 1020 Hz (boosted); 7/2 of 1190 Hz (canceled); 4 times (8/2) of 1360 Hz (boosted). In fact such an arrangement works like a comb filter: draw the frequency boost/cut pattern on a piece of paper and you'll immediately understand where the name comes from.
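The same comb can be written down in two lines: with D = 1 m and c = 340 m/s, boosted frequencies sit where D is a whole number of wavelengths and canceled ones where it is an odd number of half wavelengths. A minimal sketch:

```python
import numpy as np

# Same-type pair, distance D apart along the propagation path:
# the sum of the two outputs is a textbook comb filter.
c, D = 340.0, 1.0
n = np.arange(1, 5)

boosted = n * c / D                  # D = n wavelengths
canceled = (n - 0.5) * c / D         # D = (n - 1/2) wavelengths

print(boosted)    # -> [ 340.  680. 1020. 1360.]
print(canceled)   # -> [ 170.  510.  850. 1190.]
```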

If you change the distance between microphones (of the same type), different frequencies will be canceled and boosted. The filter’s curve shifts to a lower region of the audio spectrum if distance is increased, to a higher region if distance is reduced.

One very interesting consideration: we've seen that distance D corresponds to half the wavelength of the canceled, lowest filtered frequency. Sure enough, you can invert the polarity of one of the two microphones and by doing so reverse the shape of the filter, i.e. boost that lower octave without changing D. In the previous case 170 Hz would be boosted, 340 Hz canceled, and so on.

Pay attention: this also means that given a target frequency, you can place the microphones at half D (i.e. half a wavelength) from each other and still get perfect alignment at that frequency by inverting the polarity of one of the two mics. Let's study the case with an example. Say you are looking for a solid in-phase alignment at 80 Hz. That's a 4.25 meter wavelength. That's your D, the distance between the microphones. With the microphones spaced like that, you'd get 0° correct phase at 80 Hz, 160 Hz, 240 Hz, 320 Hz (note: all the harmonics!). You'd also get 180° inverted phase at 40 Hz, 120 Hz, 200 Hz, 280 Hz. Consider this: if by pushing the polarity button you reverse the filter and get 0° correct phase at the lower octave, you can

1) place the microphones at half D (i.e. the full wavelength of the higher octave) and obtain 160 Hz in perfect phase and 80 Hz 180° out of phase…

2) push the button… et voilà: 0° phase at 80 Hz (the lower octave) and 180° at 160 Hz.

All this brings us to a very important second notion, which in fact should be regarded as supreme law: there is no such thing as a generic phase issue pertaining to the combination of two (or more) microphones. Phase issues are frequency dependent: given a certain distance between microphones, some frequencies will suffer, some will actually benefit!

Someone might tell you that pushing the polarity button on your preamp helps you solve the problem… Which problem? The phase problem? Mmmhhh… not that easy! Phase is frequency dependent: what the button does is shift the whole in-phase/out-of-phase pattern (the filter curve) an octave lower, in this specific case of microphones of the same type (we'll analyze the case of microphones of different types in the next post).

The problem isn't being solved, it's just being moved to a lower region of the audio spectrum, which can be very desirable at times. Note that frequencies with wavelengths longer than 2D never reach a 180° relationship across the distance D, so inverting polarity can't bring them into phase: their wavelength is just too long.

Conclusion: microphones with the same type of transduction can be positioned in phase at a specific frequency according to this pattern:

  • coincident

  • half D apart, one mic output with polarity inverted

  • full D apart, both mic outputs with the same polarity

Try it yourself!