Re: [ARSCLIST] National Recording Preservation Board (NRPB) Study
Richard L. Hess wrote:
> We already have standards for audio files that provide a lot of
> benefit. I am seeing an attempt to use 24/96 as a standard for
> everything. While I agree that 24/96 (or I actually prefer 24/88.2)
> should be the norm for musical recordings, I see the uncritical
> application of this standard to voice recordings as a waste of money.
> I do not subscribe to the argument that disks are cheap - their
> management is not. If the difference in archiving the oral history
> archive is between 300 TB and 1 PB, there is a huge cost difference
> there, long-term.
Three comments:
1) A lot of older disk recordings were not recorded at a standardized
speed. At the transfer stage, it is best to play the disk at a
standardized speed (whether or not it is the speed of the original
recording -- it just needs to be close). It is during restoration
that the sound engineer can perform the requisite musical analysis
and determine the exact, actual recording speed.
What this means for digitizing, at least for pre-microgroove disk
recordings (I don't know that much about tape), is that there is no
inherent advantage in going with 88.2k sampling, since resampling
will very likely be necessary to adjust for the original speed
variation. 96k is more standardized, plus it gives about 9% more
headroom for resampling than 88.2k (96/88.2 is roughly 1.09).
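To put numbers on that, here is a minimal sketch in Python; the
76.6 rpm figure is an invented example, since the actual speed must
be determined per recording during restoration:

  # Back-of-the-envelope arithmetic for the headroom argument.
  NOMINAL_RATE = 96_000   # capture rate in Hz
  ALT_RATE = 88_200       # the 88.2k alternative

  headroom = (NOMINAL_RATE / ALT_RATE - 1) * 100
  print(f"96k over 88.2k headroom: {headroom:.1f}%")   # ~8.8%

  # Correcting for speed in the digital domain amounts to resampling
  # by the ratio of the transfer speed to the actual recording speed.
  transfer_rpm = 78.26    # standardized speed used for the transfer
  actual_rpm = 76.6       # hypothetical speed found by later analysis
  ratio = transfer_rpm / actual_rpm
  print(f"speed-correction resample ratio: {ratio:.4f}")  # ~1.0217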
2) Many feel that, for older recordings where there may be no audio
information past 12 kHz (other than noise), 24/96 is overkill.
However, for restoration it *may* be important (this is still to be
determined) to have a good representation of the *noise*. That is,
having an accurate fingerprint of the noise (especially the
impulse-type noise found on groove recordings) may aid restoration.
Thus, 24/96 is not necessarily overkill.
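(As an aside on the sizes involved in the storage question quoted
above -- the 300 TB versus 1 PB figure -- here is a quick
back-of-the-envelope calculation in Python; mono is assumed, and
stereo doubles the figures:)

  def gib_per_hour(rate, bits, channels=1):
      # Uncompressed PCM size in GiB for one hour of audio.
      return rate * (bits // 8) * channels * 3600 / 2**30

  for rate, bits in [(44_100, 16), (88_200, 24), (96_000, 24)]:
      print(f"{bits}/{rate/1000:g}k mono: "
            f"{gib_per_hour(rate, bits):.2f} GiB/hour")

  # 16/44.1 comes to roughly 0.30 GiB/hour, 24/88.2 to 0.89, and
  # 24/96 to 0.97 -- about a threefold difference, in line with the
  # 300 TB versus 1 PB comparison.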
3) Storage space is becoming less and less of an issue. A lesson
I've learned in the text digitization area (where it is easy to
collect 5 gigs of scan images for a single book) is that one must
push the envelope. Already there are tens of thousands of books
scanned in the last decade that have to be redone because the quality
(resolution) chosen was driven by disk space considerations -- as
well as by a lack of future-vision ("I only need it for the moment --
who cares about future needs?"). It now turns out the quality is
insufficient for future archival and direct presentation needs, and
some of the people who did the low-quality book scans now wish they
had put in the extra 10% effort to "do it right."
When it comes to digital preservation, being anal, doing things right,
and erring on the side of overkill are Good (tm) -- these are
virtues. There's no room for corner-cutters in the digital
preservation world.
> A note: Standards are useful for the new technology that we are
> moving towards or into. I think the term "recommended practices"
> applies more to how to address the reproduction of older recordings.
> For example, suggesting appropriate stylus widths for grooved media
> reproduction would be very useful, but I suspect the best transfers
> come after analysis, not rote following of a particular standard.
The issues of playback stylus size for grooved records, and of what
should be used for the pickup (e.g., moving magnet versus moving
coil), are definitely important. Eric Jacobs has been experimenting
with moving coil, and although the cartridges are much more expensive,
and their lower output levels require state-of-the-art preamps that
make one go "whoa", the results can be remarkable. Although the jury
is still out, in my opinion (and I may be totally off-base here), it
seems that moving coil is better able to track the groove and get a
more accurate "fingerprint" of both the wanted signal and the noise
(see above).
I'll let Eric clarify where I may be off in my assessment.
Then there's the laser pickup, and Eric has been experimenting with
that as well. I don't know where his research currently stands, but
laser has its own unique set of challenges, problems, and
opportunities.
> Metadata interchange is still a challenge as the typical metadata is
> larger than the usually supported space in a B-WAV file. On a recent
> project I delivered TXT files with the metadata in a structured
> format that had the same base file name as the WAV and the MP3 access copies.
>
> I would have preferred to use XML files, and this is an area where
> some standardization would be useful.
Having worked with XML for open-standard ebook formats (both for
content and metadata), I agree that XML offers a lot of interesting
advantages for structuring metadata, which most audio people refer to
as discographical information.
For a while I've been advocating that the ARSC fraternity, working
with other entities, develop an *open standard* XML schema for
discographical information. The advantages are that discographical
data in such a format is platform- and application-independent and
useful as an interchange format, and that there is a huge toolbase,
much of it open source, for authoring and processing XML data
documents. In addition, since XML is simply text with markup (the
"pointy brackets"), the XML data is readable with a simple text
editor (preferably a UTF-8/UTF-16 compliant text editor). This makes
XML documents eminently archivable and repurposable. If one is to
digitally preserve discographical information into the distant
future, it is important that the information be in the most readable
form, which is plain text -- proprietary binary (non-text) encodings
(especially of complex data structures) *must* be avoided at all
costs.
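(A minimal sketch of what such a record might look like, built with
Python's standard library; the element names and sample values are
invented for illustration, not a proposal for the actual schema:)

  import xml.etree.ElementTree as ET

  # Build a toy discographical record.  Every element name here is
  # hypothetical -- a real schema would be designed by the community.
  rec = ET.Element("discographicalRecord")
  ET.SubElement(rec, "title").text = "Example Title"
  ET.SubElement(rec, "performer").text = "Example Artist"
  ET.SubElement(rec, "label").text = "Example Label"
  ET.SubElement(rec, "matrixNumber").text = "EX-1234"
  ET.SubElement(rec, "speedRpm").text = "78.26"

  # Plain text with pointy brackets -- readable in any text editor.
  print(ET.tostring(rec, encoding="unicode"))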
Just my usual pontification. <smile/>
Jon Noring
(p.s., I wish I could attend the ARSC convention, but cannot since
I'll be in the other Washington, Washington DC, attending Book Expo to
promote the XML-based ebook standard several of us recently developed.)