Re: [ARSCLIST] .wav file content information
Hello, John,
Yes, I hope we're not boring the majority of the list!
I'm afraid I'm going to have to cut this dialogue short at this point as
I'm headed out the door to go to the ARSC conference -- and see some family
and friends as well as do some errands along the way both going and coming.
The most important one is Friday, seeing my 89-year-old Dad in
Pennsylvania. On the return, I pick up 24 channels of Dolby A, some logging
recorders, and a Sony DASH digital player. I'll be at the mercy of dial-up
hotel networks for the next two weeks. A few have wireless, which should be
better.
At 03:57 PM 3/23/2005, John Spencer wrote:
I agree that this is a useful (and hopefully not too boring!) dialogue.
Let me hurl a few softballs back, and please, do understand that I agree
fundamentally with what you are saying. As they say, "the devil is in the
details".
Oh yes! Definitely in the details. I was trying to provide a broad overview
rather than get into devilish details.
I truly believe this is a "crisis" for small archives, as the lack of
funding means that structured metadata gets pushed to the back of the bus
(or worse, OFF the bus).
That is definitely the case. The CD-R preservation route is the only thing
that they can afford. The minute I start talking to archives about managed
data storage, many (not all) archivists' eyes seem to glaze over. One of
the things I'm looking for in these cases is an IT department that the
archive can piggy-back onto. It's imperative that we get the mindset away
from CDs on the shelf or hard drives on the shelf. Overall, when you
include administrative (IT services) costs, it is far more cost-effective
to fold 2 TB of data into an IT department already managing 20 TB than to
try to run a separate 2 TB store (numbers are semi-random, but 2 TB of oral
history is a fair amount).
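To put a rough number on that last aside: assuming 96 kHz/24-bit stereo
preservation transfers (my assumption for the back-of-envelope math, not
anybody's actual spec), 2 TB works out to roughly 950-1,000 hours of
material:

    # Back-of-envelope only; assumes 96 kHz / 24-bit / stereo WAV essence.
    bytes_per_hour = 96000 * 3 * 2 * 3600   # ~2.07 GB per hour
    hours_in_2tb = 2e12 / bytes_per_hour    # ~965 hours
    print(round(hours_in_2tb))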
> At 12:40 PM 3/23/2005, John Spencer wrote:
>> Also, we've built these tools for our internal use, it's
>> certainly not that hard.
>
> Right, but I think Scott addressed that and what we're trying to do here.
> Mounting heads and aligning tape machines isn't that hard for me, but lots
> of people don't do it themselves. Writing the software would be harder
> for me.
Understood and agreed. We have a number of data projects underway where the
archive is doing the "real work" (the actual transfers) and we're helping
out with the IT issues.
This might be useful to learn more about--if you're coming to ARSC, we
should try to sit down and talk about your services in this regard.
> I think these tools are intended for smaller archives and people like me.
> Larger operations will require you to use the rigorous tools that they
> develop internally or purchase with rights management.
Here I must disagree. If I were to share my collection of files with
another institution (small or large), I would have a problem if all present
metadata were modifiable. DRM or not, the core information should not be
easily changeable.
This is all a matter of degree. The essence is modifiable unless we
completely lock the file, and if the essence is modifiable, then the
metadata will be as well. So it really comes down to degree. I do not see
modification as something that is done on a regular basis; I see these
tools used much more for creating and reading metadata than for modifying
it. The metadata I see embedded in files is not the type that should be
modified.
This is another area of concern for me. How can we assume that SANiP has
their metadata fields laid out in the same manner as ACHCN (Aboriginal
Cultural Heritage Centre of Nowhere in Particular)? Sounds like there might
be some re-keying (or re-mapping, or crosswalks) of data, which is not my
favorite scenario. The more times we re-type the same information, the
greater the chance for error. Are we talking about MARC records, DC metadata,
etc.? The use of XML should remove many of these obstacles, but the same
cannot be said for those using Excel 95 to collect metadata!
No, I was always assuming that there was a structure that would be mappable
either via field names (as used in Excel 95, etc.) or, to be more modern, via
XML.
I don't know which structured metadata system makes the most sense. I've
been specifically avoiding that area of study for the moment. Yes, in the
generic sense of MARC, that is what I had in mind, but the specific LoC MARC
fields leave something to be desired for audio -- at least from what I've seen.
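Just to make "mappable via field names" concrete, here is a toy sketch of
the sort of crosswalk I mean (the column headings below are invented for
illustration -- neither SANiP's nor ACHCN's real schema):

    # Each institution's spreadsheet headings map onto one shared vocabulary,
    # so the data itself never has to be re-keyed by hand.
    SANIP_MAP = {"Tape Title": "title", "Interviewer": "creator", "Rec. Date": "date"}
    ACHCN_MAP = {"item_name": "title", "recorded_by": "creator", "recording_date": "date"}

    def crosswalk(record, field_map):
        """Rename one institution's fields into the shared vocabulary."""
        return {field_map[k]: v for k, v in record.items() if k in field_map}

    # e.g. crosswalk({"Tape Title": "Elder interview 3", "Rec. Date": "1987-06-02"}, SANIP_MAP)
    # -> {"title": "Elder interview 3", "date": "1987-06-02"}

The same mapping could just as well be expressed in XML or XSLT; the point
is that the mapping is written once rather than re-typed per record.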
I do agree that some metadata should reside in the header, as you could
always open the file up in hexadecimal and read it. At our office, we call
this "catastrophic metadata" (or "CYA" metadata). However, I'm somewhat
unsure of your meaning of "tied together". Are you referring to
1) a wrapper that can be opened automatically (like MXF), or
2) the metadata and audio files reside on the same physical carrier, or
3) all of the metadata would be in the BWF header?
"Tied together" means that there is one entity that is passed from A to B
with essence and metadata.
(1) Yes, MXF is an approach, but so is BWF as I understand it. Other than
the semantic difference of wrapper vs. file, isn't what we're talking about
with BWF and MXF very similar? Actually, I've been a fan of AAF for a long
time--I wish it had gained more traction.
(2) is an invitation to trouble IMHO.
(3) Yes, that is what I'm talking about -- using BWF in a way similar to MXF.
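A footnote on your "catastrophic metadata" point: because the bext chunk
fields are fixed-width text defined by EBU Tech 3285, pulling the core
identification back out of a BWF header takes only a few lines. A rough
sketch (not any particular tool's code; the filename at the end is
hypothetical):

    # Walk the RIFF chunks of a BWF/WAV file, find 'bext', and decode the
    # fixed-width text fields at the start of the chunk (EBU Tech 3285 layout).
    import struct

    def _text(raw):
        """Strip NUL/space padding from a fixed-width ASCII field."""
        return raw.rstrip(b"\x00 ").decode("ascii", "replace")

    def read_bext(path):
        with open(path, "rb") as f:
            riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
            if riff != b"RIFF" or wave != b"WAVE":
                raise ValueError("not a RIFF/WAVE file")
            while True:
                header = f.read(8)
                if len(header) < 8:
                    return None                     # no bext chunk present
                ckid, cksize = struct.unpack("<4sI", header)
                if ckid == b"bext":
                    data = f.read(cksize)
                    return {
                        "Description":         _text(data[0:256]),
                        "Originator":          _text(data[256:288]),
                        "OriginatorReference": _text(data[288:320]),
                        "OriginationDate":     _text(data[320:330]),
                        "OriginationTime":     _text(data[330:338]),
                    }
                f.seek(cksize + (cksize & 1), 1)    # skip data; chunks are word-aligned

    # e.g. print(read_bext("oral_history_0001.wav"))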
Also, I was under the impression that many smaller archives don't have
"digital storage systems", hence the transitional migration to Gold CD-R (as
evidenced by various discussions on this list).
See above -- yes, but it has to change.
Note, some snippage happened. Presumably anyone interested has the earlier
posts as well.
Cheers,
Richard
Richard L. Hess email: richard@xxxxxxxxxxxxxxx
Vignettes Media web: http://www.richardhess.com/tape/
Aurora, Ontario, Canada (905) 713 6733 1-877-TAPE-FIX