Re: [ARSCLIST] preservation best practice - to dolby decode or not
Hello, Eric,
I think it's confusing to apply digital terminology to an analog process.
Since the amplitude resolution of an analog chain is essentially
infinite between the noise floor and the clip point, there are no
quantization steps.
Dolby, dbx, and Telcom all use audio level modification: compression
on the record side (though not in the sense of data compression) and
expansion on the playback side; the whole process is called
companding. The magic is in how the hardware tracks between record
and play, how the frequency-response adjustments to the compander are
kept complementary between encode (record) and decode (play), and how
band splitting is achieved (in most Dolby processes and the Telcom
process).
It is perhaps easiest to envision how the dbx process works.
Centered approximately on VU meter 0, there is a 2:1 compression (in
decibels) on the encode side and a complementary 1:2 expansion on the
decode side.
So if we take a signal running from -60 to +20 (80 dB of dynamic
range) and feed it into a dbx encoder, the output signal runs from
-30 to +10 (and has some frequency skewing added). On decode, the
same signal has the frequency skewing removed and is expanded back
out to -60 to +20.
Since the tape is recording only a 40 dB dynamic range, any noise
that is added (say the noise floor of the tape is -40 dB; on a pro
machine it will be better) is expanded downward along with the
signal. If the tape noise is -40 dB within the dbx-encoded chain, on
reproduce it will be expanded down to -80 dB.
Since there are no quantization steps in an analog signal, nothing is
lost in the 40 dB compressed signal -- in fact it is made more
robust, as the weakest signals are substantially higher than they
were and thus remain separated from the noise floor. Also, the
loudest signals are reduced in level, so they can be fed over
channels with less headroom.
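The level arithmetic above can be sketched in a few lines. This is an
illustration only, working purely in the decibel domain: real dbx is
an RMS-detecting analog compander with frequency pre/de-emphasis
("skewing") that this deliberately ignores.

```python
# Sketch of dbx-style 2:1 companding in the level (dB) domain.
# Illustrative assumption: the compander is modeled as a simple
# linear mapping of dB levels centered on VU meter 0.

def dbx_encode(level_db):
    """2:1 compression centered on 0 dB (VU meter 0)."""
    return level_db / 2.0

def dbx_decode(level_db):
    """Complementary 1:2 expansion on playback."""
    return level_db * 2.0

# An 80 dB program range (-60 .. +20) becomes 40 dB on tape.
assert dbx_encode(-60.0) == -30.0
assert dbx_encode(+20.0) == +10.0

# Tape noise at -40 dB within the encoded chain is pushed down to
# -80 dB by the decoder, along with the signal.
print(dbx_decode(-40.0))   # -80.0

# A quiet -60 dB passage sits at -30 dB on tape: 10 dB clear of the
# -40 dB tape noise floor, rather than buried 20 dB below it.
```

The same two functions also show why the weakest signals are made
more robust: they ride well above the tape noise while on tape, and
both are expanded downward together on decode.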
If everything is working correctly, nothing is lost in the
encode-decode cycle. However, the system can introduce momentary
gain errors if the encode and decode processes do not track properly.
The Telcom system is reported to have perhaps the best performance of
the three. It has the benefit of the Dolby system in being
multi-band, so that loud signals in one band don't affect the noise
performance in another band, and it has the benefit of greater
compression that is multiplicative like dbx's, so that headroom
requirements are reduced and encoded-channel gain calibration is less
of an issue.
Encoded-channel gain calibration is one of the Achilles' heels of the
Dolby system, which is why there is all the fuss about lineup tones.
In any companding noise-reduction system, anomalies in the
frequency-amplitude response of the noisy transport channel (tape or
microwave) will be amplified by the amount of compression.
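To make the amplification concrete, here is a minimal sketch of how a
playback gain error grows through a 1:2 expander. (The 2:1/1:2 ratio
is dbx's; Dolby A and SR are level-dependent multiband systems, so
their error behavior is more complex, but the principle is the same.)

```python
# Hypothetical illustration: a level error in the encoded channel is
# doubled by a 1:2 (dB) expander on decode.

def expand_1_to_2(level_db):
    """1:2 expansion in the dB domain, centered on 0 dB."""
    return level_db * 2.0

intended = -10.0             # level the encoder actually produced
actual = intended + 1.0      # +1 dB gain-calibration error on playback

error_after_decode = expand_1_to_2(actual) - expand_1_to_2(intended)
print(error_after_decode)    # 2.0 -- the 1 dB error has doubled
```

This is why lineup tones matter: any miscalibration of the noisy
channel is paid back with interest on decode.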
Yes, there is more high-frequency energy in all of the companded
signals, but it bears absolutely no relationship to the highs in the
original performance. You are NOT losing those highs when you run the
signal through a Dolby decoder; rather, you are restoring the highs
to appropriate levels that match (within the system's error margin)
those recorded at the beginning.
While I am an advocate of storing the un-processed Dolby recording in
the digital domain along with the processed recording, it is only
because there can be errors and malfunctions in the Dolby decode
process (and even errors in the Dolby -- or other system -- encode
process), and storing the un-decoded recording will permit greater
opportunity to recover from that broken system later if it is not
noticed at the time of playback.
In other words, I think best practice for digitizing a two channel
Dolby tape is to archive four 96/24 channels, two encoded--raw off
the tape--and two decoded. You can do that easily with the mults in
your jackfield.
However, even in the rare (but not unheard of) event that the Dolby
metadata is incorrect (i.e., the sticker on the reel doesn't belong
to that tape), you still have the un-decoded version to work with.
I have seen multi-track tapes with not all channels processed (and in
one instance only one channel was processed--I think it was the lead vocal).
I wish we had taken a tape-track approach to noise reduction rather
than the Dolby/dbx/Telcom approach (and at least Sanyo, Nakamichi,
and Burwen also had incompatible noise-reduction systems). I think
the Europeans were on to a good thing with 1-inch 4-track.
However, Dolby really became necessary with NAB equalization and
especially with 24-track heads where the track width is the same as
quarter-track stereo, 43 mils. The other multi-track formats in North
America were all 70 mil tracks (4T 1/2", 8T 1", 16T 2").
We won't even talk about what happened when we went to 16T 1", 8T
1/2" and then to 16T 1/2" and 8T 1/4". The need for noise reduction
went up and up and up.
Of course, the other factor is that in multi-track recording the
random noise of all 24 (or more) channels adds together in the
mixdown, making the need for noise reduction even greater.
I think it would be a great disservice not to make a Dolby-decoded
version for the archive as part of the preservation suite, and it
should be marked "USE THIS COPY" because few people coming into this
field today fully understand the noise-compander process, and fewer
still will understand it in the future.
As to the energy above 20 kHz, I think it's somewhat undefined. Audio
passband was always considered to be 20 or 25 kHz and the rolloff
wasn't soft. It was a struggle to get mics flat to 20 kHz and many
good design requirements were "don't provide more bandwidth than you
need" and tape rolloff occurred somewhere above 20 kHz so you have
multiple poles all making sure that the signal nose-dived hard above 20 kHz.
I don't think you can accurately capture bias with a 192 kS/s
digitization, as that provides a 90 kHz (more or less) effective
passband, and that's below the bias frequency of most recorders.
We've discussed this in the past, and there is a blog article on my
website about this:
http://richardhess.com/notes/2008/02/02/tape-recorder-bias-frequencies/
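The arithmetic is quick to check. The bias frequencies below are
assumptions for illustration (they vary by machine; see the blog post
above for measured values), not quotes from the original message.

```python
# Back-of-envelope check: why a 192 kS/s capture cannot record bias.

sample_rate = 192_000          # samples per second
nyquist = sample_rate / 2      # 96 kHz theoretical limit
effective_passband = 90_000    # ~90 kHz after anti-alias filtering

# Assumed, machine-dependent example bias frequencies in Hz.
typical_bias = [100_000, 150_000, 240_000]

for f in typical_bias:
    # Every one of these falls outside the effective passband.
    print(f, f <= effective_passband)
```

Any energy at those frequencies is removed (or aliased) by the
converter's anti-alias filter, so the bias tone itself never makes it
into the file.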
If you keep both, then you can do whatever manipulation you wish,
including splitting the band and taking the material above 25 kHz or
so from the unprocessed copy (but using it for control purposes, not
for release).
IMHO, NOT decoding the Dolby is destroying the original signal--the
Dolby-encoded signal was NEVER intended to be listened to without
decoding. And, you NEVER mix Dolby-encoded signals from the
multi-track without decoding each individual channel. You could say
that the original signal was irreversibly compromised in the Dolby
encoding with the knowledge that it would be de-compromised in a
complementary way in the Dolby decoding process.
IF you have a recording made with a broken Dolby card, for example
one band isn't functioning, you will need to break a Dolby card in
the same way to recover the sound properly. This was not always
noticeable as the same Dolby Cat 22 card was used for encode and
decode in almost all of the Dolby frames.
NOT decoding the Dolby now would be a serious breach of trust for the
preservation master, as the availability of hardware and knowledge in
the future is not guaranteed. The decoded Dolby is what the original
producers heard and what they'd want you to hear.
You can look in your Dolby documentation and see if there are
low-pass filters, but whatever they are they would be in both the
record and play chains as it's essentially the same circuitry with
the action turned around.
I hope this helps.
Cheers,
Richard
Richard L. Hess email: richard@xxxxxxxxxxxxxxx
Aurora, Ontario, Canada (905) 713 6733 1-877-TAPE-FIX
Detailed contact information: http://www.richardhess.com/tape/contact.htm
Quality tape transfers -- even from hard-to-play tapes.