Careful with that bass, Eugene

I met Steve Albini in person on January 3rd, 2019. He was booked to record the band Uzeda here in Italy at Sotto il Mare Recording Studios, not far from where I live. Being a friend of studio owner Luca, I was able to sneak into the back of the control room and silently witness Albini at work. The tape edit he performed on the fly to create an ending for a song was pretty amazing… After the session we all went to dinner, where I briefly introduced myself to Albini and put in his hands a piece of paper with www.tunedmiking.net written on it, asking if he was willing to leave some blessed, golden feedback.
A few days ago YouTube surfaced a video for me that had been uploaded to the Electrical Audio recording studio channel six months after that meeting: an educational piece in which studio owner Albini discusses time aligning and phase “issues”. I guess I will never know whether or not he visited tunedmiking.net and was inspired by what he read to give his own version of the matter… Either way, as a recording engineer I have spent many hours over the last five years recording, listening, comparing, listening again, reading, and working on phase: I felt obliged to go over the video and highlight and discuss some of its contents.
Capturing multiple signals from a single audio source is a well-known procedure in audio engineering and music recording. But a distinction is due: one situation is setting up two (or more) microphones in front of an electric guitar amp; a VERY DIFFERENT situation is capturing a directly injected version of that guitar plus the microphone(s) at the amp. The former setup is all-acoustic, meaning the guitar signal gets transduced into moving air by the amp and the microphone picks that movement up. ALL the microphones, when more than one is set up at the amp, pick up the same moving air, the same rendition of the guitar signal given by the amp. The latter setup, though, entails duplicating the guitar signal before the amp and recording a version of it that NEVER gets transduced into moving air. If in the first case the phase correlation between signals can be managed quite easily, not so in the second case, where things become less intuitive and harder to manage. The second setup, as discussed by Albini up to minute 6’30”, is the topic of the present post. I respectfully disagree with 90% of that section.
From my point of view, time alignment has little to do with phase alignment. Time is about… time, whereas phase is about the time of frequencies. Between 20 Hz and 20 kHz there are 19,980 whole-number frequencies, each with its own time, properly called its “period”: which time is “time”, then? About the arrival time difference between the directly injected signal (“D.I.” from now on) and the microphone signal: my current setup for recording electric bass is a BSS AR-133 active D.I. linked to a Gallien-Krueger MB110 combo amp, with a “made in Austria” AKG D112 one inch from the grille cloth and one inch off center. I replaced the instrument with a sine wave from my cell phone’s tone generator app as the signal source, armed two tracks, and made sure recording levels were even at around reference level: -20 dBFS on both tracks, levels matched with the preamp gains.
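To make that distinction concrete before looking at any recordings: a single fixed time offset produces a different phase rotation at every frequency. Here is a minimal sketch in plain Python (the 0.541 ms figure anticipates the measurement described below):

```python
# Phase shift produced by ONE fixed time delay at different frequencies:
# phase (degrees) = 360 * frequency * delay, wrapped to 0-360.

DELAY_S = 0.000541  # 0.541 ms, the 26-sample offset measured below

for freq_hz in (40, 85, 168, 646, 1030, 5000):
    phase_deg = (360.0 * freq_hz * DELAY_S) % 360.0
    print(f"{freq_hz:5d} Hz -> {phase_deg:6.1f} degrees")
```

One delay, six different phase relationships: that is why “time aligned” and “phase aligned” are not synonyms.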
I had already determined that with this specific setup, at a frequency of 85 Hz, D.I. and Amp+Mic were 99.99% in phase (this may sound surprising and/or almost meaningless right now, but hold on… we’ll get to that part later). Then I pressed record and started the oscillator @ 85 Hz. I stopped recording after 5 seconds, zoomed in, and looked for the difference… I could not see anything that could be described as an “arrival difference”.
Signal was present on both tracks virtually at the same time, a matter of a few samples: 26 samples @ 48 kHz (see image below). That’s 0.541 milliseconds.
[Screenshot 2021-02-02 09.45.06: the two tracks zoomed in, showing the 26-sample offset]
One might say: “That is an actual difference.” Sure it is, but not one that can cause significant phase issues. Most importantly, as you can see below, such a difference is temporary. On the Amp+Mic track (named GK_112) I could see that the speaker took its time to get going but, within three complete cycles, its waveform almost matched the D.I., and by the fourth cycle the two were definitely in phase (see image below).
[Screenshot 2021-02-02 09.47.27: the two waveforms falling into phase after four cycles]
With the two waveforms in phase, it was easy to compare the two signals: no time difference, but 47.05 ms before the speaker fully overcame its inertia and oscillated properly.
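Both figures are easy to sanity-check, since four complete cycles at 85 Hz last almost exactly that long. A back-of-the-envelope sketch:

```python
SAMPLE_RATE = 48_000  # Hz
offset_samples = 26
print(offset_samples / SAMPLE_RATE * 1000)  # ~0.542 ms arrival offset

TEST_FREQ = 85.0      # Hz, the sine tone used for the test
cycles_to_settle = 4
print(cycles_to_settle / TEST_FREQ * 1000)  # ~47.06 ms of speaker inertia
```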
Not only could I not detect any significant delay, but the two waveforms were also in phase… HOW IS THAT POSSIBLE? Well, that’s actually how it is. Try it yourself. I ran the test with two different microphones and got very similar results: first with the AKG D112, then with an Electro-Voice PL20 in the same position. I simulated a frequency sweep with my cell-phone-turned-oscillator; below is what I found connecting the D.I.’s mic preamp output to channel #1 and the Amp+Mic’s preamp output to channel #2 of my oscilloscope.

D.I. + AKG D112:
  • in phase at 85 Hz (E2 sharp, see picture below)
  • 180° out of phase at 168 Hz (E3 sharp)
  • in phase at 1030 Hz.
This phase inconsistency is likely attributable to what is called “group delay”.

[Photo IMG_2899: oscilloscope showing D.I. and Amp+Mic in phase at 85 Hz]

D.I. + EV PL20:
  • in phase at 104 Hz
  • 180° out of phase at 238 Hz
  • in phase at 1134 Hz
Nothing relevant in terms of time difference. Nothing relevant in terms of acoustic delay. Specific frequencies were actually in perfect phase. Comb filtering? I’ll discuss that in the next post, about “group delay”.
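One way to see why no single, constant delay can explain those oscilloscope readings is to brute-force the question. A sketch, where the 0 to 10 ms search range and the ±10° tolerance are my own choices:

```python
# Can ONE constant delay make the D112 readings (in phase at 85 Hz,
# 180 degrees out at 168 Hz, in phase again at 1030 Hz) all true at once?

def phase_deg(freq_hz, delay_s):
    return (360.0 * freq_hz * delay_s) % 360.0

def near(angle, target, tol=10.0):
    diff = abs(angle - target)
    return min(diff, 360.0 - diff) <= tol

hits = [n * 1e-7 for n in range(100_000)          # 0 to 10 ms, 100 ns steps
        if near(phase_deg(85, n * 1e-7), 0)
        and near(phase_deg(168, n * 1e-7), 180)
        and near(phase_deg(1030, n * 1e-7), 0)]

print(hits if hits else "no single delay fits all three readings")
```

No physically plausible constant delay satisfies the three readings at once, which is exactly the signature of a frequency-dependent delay.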
Let’s go to the section where the time gap is shown to us on the oscilloscope, supposedly indicated by the fact that the waveforms don’t line up. At 3’50”, upon playing the first note, a G, I can see that the two waveforms actually sort of line up! The phase between the third and fourth notes played, a B and a C, is almost perfect: the deepest valleys and the highest peaks match. The source signal is rather complex and not easy to decipher on the scope. He should have used a sine wave tone and thought in terms of fundamentals and harmonics à la Fourier: that’s why, while the E and F look out of phase, it’s hard to believe they actually are when you notice that the phase of the adjacent notes is almost perfect.
At 4’08” comes a confusing concept: the discrepancy to be corrected is NOT between the two signals, but between SOME frequencies contained in those two signals. Phase is always related to frequency!!!
And finally Mr. Albini tells us that a mere 774-microsecond delay was applied to the D.I. signal to correct the discrepancy. Now, having said that the waveforms could be almost completely out of phase with each other, a delay of 774 µs would ONLY have solved a 180° phase issue at 646 Hz. Did he correct the discrepancies with just that? I don’t know, man… Are the waveforms really in very good alignment?
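For the record, here is what a fixed 774 µs shift can and cannot do, phase-wise (a sketch of the arithmetic, not of what happened in Albini’s session):

```python
# A fixed 774 microsecond delay is one full cycle at 1/0.000774 ~ 1292 Hz.
# It aligns signals only at multiples of that frequency and inverts them
# (180 degrees) exactly halfway between those multiples.

TAU = 774e-6                # seconds
f0 = 1.0 / TAU              # first fully aligned frequency, ~1292 Hz

print("aligned (0 deg) at:   ", [round(k * f0) for k in (1, 2, 3)], "Hz")
print("inverted (180 deg) at:", [round((k - 0.5) * f0) for k in (1, 2, 3)], "Hz")
```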
“Low frequencies are better supported“. Mmmhhh… Microseconds can align medium-high and high frequencies, but to manage medium-low and low fundamentals you definitely need milliseconds. In my example, 85 Hz is almost an E2, i.e. the lowest note on a guitar. E2 (about 82.4 Hz) has a period of roughly 12.13 milliseconds, and 180° of that amounts to about 6.07 milliseconds, or 6070 microseconds if you like: a huge difference!
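The same arithmetic applied to a few low fundamentals makes the microseconds-versus-milliseconds point obvious (equal-temperament pitches, rounded):

```python
# Delay needed for a 180 degree flip at a given fundamental:
# half the period, i.e. 1 / (2 * f).

notes = (("E1 (bass low E)", 41.2), ("E2 (guitar low E)", 82.4),
         ("A2", 110.0), ("E3", 164.8))

for name, f in notes:
    print(f"{name:18s} {f:6.1f} Hz -> half period {1000 / (2 * f):5.2f} ms")
```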
In case you’re wondering, I never heard back from Steve Albini.
More about general D.I. + microphone combinations and “group delay” in the next post.


2 thoughts on “Careful with that bass, Eugene”

  1. Hi,
    This website has really got me thinking. Thank you for diving so deep into this subject!

    When I was starting out in recording I got to work with an engineer who had worked with Michael Beinhorn, and he showed me how they lined up multiple mics on a cabinet.

    They would record both tracks to Pro Tools and have the guitarist touch the tip of their guitar cable to a metal part of their guitar, such as the jack. This would create a big transient spike. Then in Pro Tools they would measure the difference in time between the two tracks and sync them up using the TimeAdjuster plug-in.

    What do you think of this technique? Is this viable? Thank you!

    1. Hi! Your post got me thinking as well… thanks for sharing! What you described is a situation where the microphones can be considered “coincident”, placed very close to the same source (the guitar speaker, in your case). I think having a “time 0” event like a sudden transient is an excellent reference for measuring the difference in time. I guess one can fairly assume that when sound strikes the capsules at the same time, phase correlation will be preserved throughout the entire spectrum of frequencies. This can prove true for microphones sharing the same transduction principle, like two (or more) electromagnetics, a.k.a. dynamics/moving coils, or two ribbons, or two condensers. It is possibly true when you have a dynamic and a ribbon: different principles, but still both “electromagnetic”. It’s a different story when you have a moving coil and a condenser: for reasons explained earlier on my pages, you should check what happens at different frequencies. Try it yourself: first check whether any delay shows up on the timeline with a transient spike. Once you have nulled that delay, check various frequencies in the lower region of the spectrum: 80 Hz, 100 Hz, 125 Hz, and multiples up to 400 Hz. The best checking method for me is running the signals through an oscilloscope. Can I ask which microphones you would use? I will try this myself in the near future. Thank you very much! Best regards, Gabe.
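For anyone who wants to automate the transient check described above, below is a rough NumPy sketch. The synthetic arrays and the use of cross-correlation instead of eyeballing the spike are my own illustration, not part of the technique described in the comment:

```python
import numpy as np

def estimate_offset_samples(track_a, track_b):
    """Lag (in samples) of track_b relative to track_a, taken at the
    peak of their cross-correlation; negative means track_b is late."""
    corr = np.correlate(track_a, track_b, mode="full")
    return int(np.argmax(corr)) - (len(track_b) - 1)

# Hypothetical example: the same transient spike landing on two tracks,
# the second one 26 samples later.
rng = np.random.default_rng(0)
spike = rng.standard_normal(64)
a = np.concatenate([np.zeros(100), spike, np.zeros(100)])
b = np.concatenate([np.zeros(126), spike, np.zeros(74)])

print(estimate_offset_samples(a, b))  # -> -26: b lags a by 26 samples
```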
