Monday, September 24, 2007

It's almost like being there: SAA 2007 Conference Report

What a great trip! The SAA 2007 conference was my first national conference-- and it was exhausting and amazing. In addition to the general conference festivities, I found myself involved at many levels: I went to a workshop on user studies, took the Certified Archivist exam, spoke to a couple of roundtables, and presented a paper.
Rather than write it all out here, I'm going to refer you all to my notes and to some great blog posts by others.

My "Schedule at a Glance"
8/28
8/29
8/30
  • Opening Plenary: SAA president Elizabeth Adkins ponders diversity in the archival profession.
  • Session 205: Ships That Pass in the Night? Evaluating Archival Users' Tools with a User-Centric Perspective. This session was chaired by Jodi Allison-Bunnell and asked us to listen to the users and work to develop a personal relationship with them. My notes can be found here.
  • Session 307: Reference Service and Minimal Processing: Challenges and Opportunities. This was my big moment! My session was held in a big ballroom and there were roughly 500 people there. You can find the text in this blog post.

8/31
9/1
  • Finally, incoming president Mark Greene gave his closing remarks, which can be found in full text by clicking here.
I welcome questions about my trip, Chicago, and what it was like to stand in front of so many people and hope what you are saying makes sense to them!

Thank you to the library for supporting me; it made all the difference to feel like I was working for both myself and my institution.

Tuesday, September 11, 2007

Sharing and Snooping: How to Use Blogs and Social Bookmarking for Business and Pleasure

In the true spirit of blogging, I've decided to abandon my beloved PowerPoint and post my session for the 2007 OSU Libraries in-service here.

Why Are We Here?

I assume you are all curious about how you can use blogs and social bookmarking, so we've come together this afternoon to talk about the fundamentals of blogging and bookmarking. But I also hope to demystify these tools a bit and show you how easy it is to snoop into what others have to say and to see what others have shared.

What is a blog?

Short for "weblog," a blog is a "frequent, chronological publication of personal thoughts and web links." Remember, a blog is just another website, albeit one that is inherently social and frequently conversational in nature. It’s like your diary or journal, a letter to the editor, or a message on a listserv—but anyone online can read it and, more importantly, they can tell you what they think.

Never one to repeat research when I can repurpose someone else’s, I’d like to refer you all to a wonderful site that can tell you more about the history, structure, and general academic uses for blogs: Academic Blogging. This site was funded by the Educational Technology Collaborative at the University of Tennessee and is a great explanation of the fundamentals of a blog.

What about libraries?

Again, in the spirit of recycling, I point you all to these two sites to answer this question. The first, at Web Junction, is called Blogs for Libraries. This site addresses the eternal question: “why should I care about this?” More than that, it also provides links to notable library blogs, as well as blogging tools and resources. The second link is to Darlene Fichter’s article entitled “Why and How to Use Blogs to Promote Your Library's Services.” Darlene provides much of the same information, but variety is the spice, etc.

Examples from my own news reader:

Annoyed Librarian

Info Tangle

Information Wants to be Free

The Shifted Librarian

The Well Dressed Librarian

(Yes, I chose these, at least in part, because of the clever titles!)

What about the archives?

As you can all imagine, I watch blogs specific to archives more frequently than those that deal with libraries in general.

Here are some examples:

Historical Notes from OHSU

Reading Archives

ArchivesNext

Hanging Together

ArchivesBlogs

Archivists are using blogs as places to share their experiences as archivists, as educational tools, and as places to tell us about their stuff.

What about everything else?

Yes, it’s true that most of us have interests outside of our professions, and I wanted to take a moment to share some ways I use blogs outside of work. I’ve found myself a bit addicted to cooking and recipe blogs; in the past year, I've started to watch several, aided again by my handy Google news reader.

101 Cookbooks

Simply Recipes

Je Mange la Ville

Smitten Kitchen

This list keeps growing, but so does my recipe repertoire!
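A quick aside for the curious: under the hood, a news reader like the one mentioned above simply polls each blog's RSS or Atom feed and collects the newest posts. Here is a minimal sketch in Python using the third-party feedparser library; the subscription URLs are made up for illustration.

    # A minimal sketch of what a feed reader does: poll each subscribed
    # feed and list its newest entries.
    # Requires the third-party "feedparser" library.
    import feedparser

    # Hypothetical subscription list; substitute any real feed URLs.
    subscriptions = [
        "http://example.com/cooking/rss.xml",
        "http://example.com/archives/atom.xml",
    ]

    for url in subscriptions:
        feed = feedparser.parse(url)
        print(feed.feed.get("title", url))
        for entry in feed.entries[:3]:  # the three newest posts per feed
            print("  -", entry.get("title", "(untitled)"), entry.get("link", ""))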

Social Bookmarking

Remember the days of finding great sites, saving them in your “favorites” or bookmarking them, and creating an elaborate folder hierarchy to organize them? And remember either completely forgetting about the life-changing find or never being able to figure out your own classification? And remember finding something at home and not being able to find it again at work? And remember emailing links? The world has changed! Social bookmarking is a way to store, classify, share, and search your Internet bookmarks. Sorry in advance to those who will shudder at this reference, but let’s visit the famous wiki Wikipedia for a definition of Social Bookmarking.

The power of the people

When the power of classification moves into the hands of the masses, what they create is called a “folksonomy,” and it is part of a much larger discussion. According to Wikipedia, “folksonomy (also known as collaborative tagging, social classification, social indexing, social tagging, and other names) is the practice and method of collaboratively creating and managing tags to annotate and categorize content. In contrast to traditional subject indexing, metadata is not only generated by experts but also by creators and consumers of the content. Usually freely chosen keywords are used instead of a controlled vocabulary.”
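To make the idea concrete, here is a small, entirely hypothetical sketch in Python of what a social bookmarking service stores: each user attaches freely chosen tags to a URL, anyone can search by tag, and the aggregate tag counts are the folksonomy.

    # A toy model of social bookmarking: users attach freely chosen tags
    # to URLs; the aggregated tags across users form a "folksonomy."
    from collections import Counter

    # (user, url, tags) entries -- all hypothetical.
    bookmarks = [
        ("alice", "http://example.com/ead-guide", ["archives", "ead", "findingaids"]),
        ("bob",   "http://example.com/ead-guide", ["ead", "xml"]),
        ("carol", "http://example.com/recipes",   ["cooking", "recipes"]),
    ]

    # "Snooping": find everyone's bookmarks that share a tag.
    def tagged(tag):
        return [(user, url) for user, url, tags in bookmarks if tag in tags]

    print(tagged("ead"))  # both alice's and bob's copies turn up

    # The folksonomy: tag frequencies across all users, no controlled
    # vocabulary required.
    folksonomy = Counter(tag for _, _, tags in bookmarks for tag in tags)
    print(folksonomy.most_common(3))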

More tools of the social networking trade

Stumble Upon

Digg

del.icio.us

How do I use it?

edmunsot’s bookmarks

Ask the experts

I can’t claim to be an expert, or even a power user of either of these tools, but I have found some great blogs and useful ways to keep track of what I find.

The way to find your own gems is to explore. Trust yourself to find sites and to become your own expert—after all, you know what you want!

Tuesday, September 4, 2007

SAA 2007 Session #307

SAA 2007

Session #307: Reference Service and Minimal Processing: Challenges and Opportunities

What Should we do with this?

While the profession has spent the last couple of years debating the promises and faults of minimal level processing, most of the attention has been focused on the processing phase and little has been done to test researcher needs, desires, and expectations for accessing archival and manuscript materials.

NWDA: History of the Program

In my talk today, I will present findings from a series of usability tests I did last winter for the Northwest Digital Archives (NWDA). For those of you who are unfamiliar with the program, the NWDA was a project funded by grants from the NHPRC and NEH; it is now part of the Orbis Cascade Alliance. It provides enhanced access to archival and manuscript materials in Idaho, Montana, Oregon, and Washington through a union database of EAD finding aids. In many ways, each individual collection is a part of the larger collection that documents the political, cultural, and natural history of the region.
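For those who haven't looked inside one, an EAD finding aid is just an XML document, which is what makes a union database like this possible. Here is a minimal sketch in Python of pulling the collection-level fields out of one, assuming an unnamespaced EAD 2002 file; the file name is hypothetical.

    # A minimal sketch of reading collection-level description from an
    # EAD 2002 finding aid; assumes the elements carry no XML namespace.
    import xml.etree.ElementTree as ET

    tree = ET.parse("finding_aid.xml")  # hypothetical file name
    archdesc = tree.find(".//archdesc")

    # Collection-level <did>: title, dates, extent, and abstract.
    did = archdesc.find("did")
    for tag in ("unittitle", "unitdate", "physdesc", "abstract"):
        element = did.find(tag)
        if element is not None:
            print(tag + ":", "".join(element.itertext()).strip())

    # The container list lives in <dsc>; count its top-level components.
    dsc = archdesc.find("dsc")
    if dsc is not None:
        print("top-level components:", len(dsc.findall("c01")))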

An NHPRC grant funded the Northwest Archives Processing Initiative, or NWAPI. The overall purpose of the second phase of the NWAPI project was to improve access to primary sources for NW history by providing widespread collection-level access through minimal processing; Mark Greene acted as the consultant. However, because the NWDA as a whole has not adopted the MPLP suggestions, working with NWDA users meant it was possible to observe a single patron as they interacted with twenty-one repositories' finding aids, each providing a different level of detail, and it was possible to survey anyone who performed a search on the site, regardless of physical location. In this case, the location was a conference room, but we could have been searching from a computer in Oregon, New Hampshire, or London—our location did not impact the search. The NWDA site offered the variety of individual repositories combined with the guidance of a consortium.

Grad School optimism meets reality

I'd like to step back before beginning this talk to tell you a bit about the genesis of this project. I was hired by Lane Community College in Eugene, Oregon as one of the minimal processors for the NWAPI II grant. Armed with my MLIS, I had lots of theory, lots of rose-colored optimism, and plenty of mythical ideas of what it would actually be like to be an "archivist." My vision included hushed reading rooms, 200-year-old love letters, and intense, interested, patient researchers who wanted nothing more than to spend endless afternoons reflecting on the bounty of history.

The reality was, when I got my first job, I was working with institutional records, very few people visited, and we didn't really have a reading room. But I approached these meeting notes with the respectful, hushed awe I felt was appropriate, ready to read each individual memo about the establishment of this program or the elimination of that position; unfortunately, my position was being funded by a grant that was acting as the test bed for minimal level processing. More than that, we had a deadline!

Like many of you, as a processor I approached MPLP with heavy skepticism-- it flew in the face of my grand vision, but it also flew in the face of what I had heard from my grad school instructors. Frankly, though I had very little experience with actual processing, I had absolutely no idea what "processing" meant if you didn't at least open all the folders, riffle around a bit, rearrange and sort a bit, and then craft a detailed list so the next person could a) find something and b) tell that you had done something.

Now, as a reference archivist, I spend most of my time trying to find things!

Why test these users?

This was actually the third round of usability testing of the NWDA site. The first round of testing was conducted between July and October of 2004 and was a broad analysis of the usability of the site in its infancy.

The second round of testing took place last fall and winter at the University of Montana in Missoula and Western Washington University in Bellingham. The goal of the testing was to observe how undergraduate students interacted with the main NWDA page, as well as the basic search, results, and finding aid pages.

For the third round, in addition to more specific display-oriented issues, the goal was to examine some potential impacts that minimal level processing might have on reference services and user satisfaction; specifically, I wanted to measure how the level of detail in the description impacted the success and satisfaction of NWDA users. I want to repeat and stress the word “goal” here—because, frankly, saying I was going to analyze the impact was considerably easier than actually analyzing the impact.

The Test

I started the project with several assumptions and was certain they would all be confirmed. In fact, I included this sentence in my original abstract: “this project will address the difficulties that arise with reference services, with the assumption that minimal level processing suggestions will make reference services much more difficult and result in a decreased satisfaction rate for users.” Specifically, I thought that saving staff time in the processing phase meant spending more time on reference services; that general container lists are never helpful for off-site users and can be frustrating for users and reference staff alike; and that, if the profession is indeed moving in a general way toward series level description, good, standardized descriptive practices are essential for providing access to the items and information in our collections. My working hypothesis was that the future of reference services was doomed without robust finding aids full of meticulous details.

It quickly became obvious to me that thorough testing and analysis of the first and third assumptions was a much bigger project and a discussion that was not going to fit into a 20 to 25 minute conference presentation. I refocused and hoped instead to look more specifically at user reactions to detailed container lists versus a more detailed narrative descriptive practice.

The Work Began

As I mentioned earlier, the larger purpose of this study was to test users’ perceptions of and reactions to a variety of finding aids, with several different styles of presentation, levels of description, and types of collections. For the test, participants were asked to navigate the NWDA finding aid pages and answer questions; in several instances, they were also asked to compare the NWDA to the Online Archive of California and the A2A databases. Predictably and intentionally, the format and style of presentation for each database varied, as did user response.

Testing took place at OSU from February through April of 2007. I tested 11 people from the OSU and University of Oregon communities, with varying levels of experience with archival research, from a library patron with no experience to a professional archivist. Initially, I was concerned that I needed to test more people, but after 9 or 10 tests, most of the answers began to repeat.

Listening to Users

My concerns and questions found their voice in an "ask the users" rallying cry. Well, with my round of usability testing, I finally had my chance, and much to my chagrin, when I actually asked them, many of the responses were contradictory and confusing.

I opened the testing with this question: How much detail do you expect when searching for historical or archival information online? Predictably, I heard "as much as possible" across the board; however, when I probed deeper, I discovered a variety of definitions of what "detail" meant: expectations ranged from a brief description of administrative information to full digitization of materials. Some took it as a given that they would be provided with a file-level inventory, with dates and extent for each file, while others thought detail meant a lush narrative with plenty of information about how the collection related to everything and anything.

In order to gather more specific information about how much detail users expected from online finding aids, I asked them to compare the finding aids for the Linus Pauling Collection at OSU Special Collections and the Instruction Services Office Records at Lane Community College.

I’d like to give you a little background on both these collections.

The depth and breadth of the Linus Pauling collection is truly amazing. It comprises at least 500,000 items, covering 4437 linear feet, and is housed in 1800 boxes. The Special Collections staff has, quite literally, touched each piece of paper; not only have they touched it, they have preserved it, described it, and cataloged it. Last year, they finished the Pauling Catalog, which is a finding aid of epic proportions-- six volumes and 1,800 pages; they are currently working on an online calendar that will allow a user to click on any day of Pauling's life and see the items in the collection related to that day. If we clicked on August 30, 1943, we would find out that Pauling wrote a check to Maybelle Marbury for $3.50, filled out a form, and received 7 letters. The purpose of providing this detail is to show you all that this collection has been processed at a micro-level. However, while the NWDA finding aid provides a lot of narrative detail and series level description, it doesn't provide 1,800 pages of detail. Knowing that, I was interested in observing the assumptions those I tested would make about the collection and the level of processing, and their reaction to this form of detailed narrative description.
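As an aside, the day-by-day calendar described above is, at its heart, an index from dates to item-level descriptions. A toy sketch in Python, using the one example from the talk (everything else about the structure is my own assumption):

    # A toy model of a day-by-day calendar over an item-level collection:
    # an index from dates to item descriptions. The one entry shown is
    # the example from the talk; the structure itself is hypothetical.
    from datetime import date

    calendar_index = {
        date(1943, 8, 30): [
            "Check to Maybelle Marbury for $3.50",
            "A completed form",
            "7 letters received",
        ],
    }

    def items_on(day):
        return calendar_index.get(day, [])

    print(items_on(date(1943, 8, 30)))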

At OSU, the Special Collections and University Archives are quite separate, so I don’t have any insider knowledge on the Pauling Collection; but the Instruction Services Office collection is another story: I processed it and wrote the finding aid while I worked as a part of the NWAPI II project. My choice to include it was not based on an exaggerated sense of my own work, but on the facts that a) I knew it was minimally processed and how I had defined that phrase, and b) I knew what went into the decisions about the levels of description. The collection comprises the administrative and departmental records from Lane Community College's Instructional Services Office from 1974 to 1997—it’s the ideal MPLP candidate!

My Assumptions and the Reality of What I Found...

There were those who liked the general, narrative description of the Pauling finding aid and those who preferred the inventory-like container list of the Instructional Services finding aid.

Many expressed frustration with what they felt was an incomplete level of detail in the inventory list of the Instruction Services Office collection. To those users, it felt like a tantalizing amount of information: just enough to make them think there might be something there, but not enough to tell them if it would be useful. The general folder level description was called "elitist," "unattainable," "frustrating," and "too general." One user asked, “is there something else there that they just aren’t telling me?" Others were comforted by the list, assuming that it indicated a higher level of intellectual and physical organization than that found in the Pauling Collection.

The back-and-forth nature of my testing results left me wondering why I wasn’t getting any consensus or clear idea of what would satisfy our users. I went back to reexamine my original assumptions and found that some held up and some were unsettled.

My first assumption was that while users, projects, and needs vary from search to search, ultimately, all users want a detailed container list, with as much detail as we can give them (meaning item level). If an item level description isn't possible, then an extensive, well organized inventory would be an appropriate and satisfying substitution. When I was a processing archivist, I spent a lot of time carefully crafting these lists, sure that researchers would thank me one day. I also assumed that to effectively use this long list, users would simply search the page with commonly used keywords.

  • Here is what I heard:
    • Users don't search the page, even with long container lists; they skim. And once they found the "answer," they stopped skimming. Further, and more disturbing, people don't really want to read. One person, a university librarian, said that, in her experience, patrons wanted their search term highlighted and wouldn’t look for it if it wasn’t (highlighting is cheap to do; see the sketch after this list). Another, a recent college graduate, said “Are you supposed to read this stuff?” ... “The general public is lazy, no one will read this stuff.”
    • While the reasons for frustration varied, it seemed that long lists, without adequate context, simply frustrated people. These lists might be sufficient for keyword searching, if we all shared the same keyword lexicon, but most wanted either specific, item-level details or more focus on the narrative description. One person said the long container list was like a laundry list of information without any details about how to gain access to it; another said it seemed like elitist forbidden fruit; and yet another actually laughed out loud when he read the phrase "detailed description" for the Instructional Services finding aid, saying "you call this detailed?" At the same time, there was general confusion about what was actually being described in the narrative series description in the Pauling papers. Conceptually, many people didn't understand how large the collection was, how general the series descriptions were, and how unrealistic it would be to walk up to a reference desk and say "I'd like to see the correspondence please." Those who did understand had experience working either in or with archives. One person asked "How do you find anything? What is the archivist’s job here?” Another asked “so what do you do on the reference desk?" And another said "from a practical standpoint, you are on the desk and someone asks you to find something and you have 100 boxes? That is a nightmare in terms of the finding aid as a surrogate.” What was vital, nearly across the board, was not the subjective perception of "detail," but the contextual information about how these pieces fit into the whole, how they related to other items in the collection, other materials in the repository, and the general field of research.
  • My second assumption was that with a lower level of processing comes a greater need to emphasize sound descriptive practices and subject analysis. This means good biographical and historical notes, good summaries, and good series descriptions, which will give researchers enough detail to be useful and let them know if further research is necessary. This all appears to be true: the textual information we can provide is more meaningful to them than lists. But it seems that, in the Golden Age of Google, we return to my first assumption about the importance of keywords-- in this case, keywords aren't important for the user who is already there, but for the user who is typing away in the Google search box.
  • This brings me to my third assumption: our users drop into our finding aids with little to no introduction, instruction, context, or repository information. To them, our institutional and consortial boundaries are artificial constructs-- Google is like one big happy consortium, without the grant applications and pretty style sheets! We need standards, and we all need to follow them!
    • If MPLP does, in fact, become the standard for processing, we must all be even more mindful and vigilant about our presentation. It has been encouraging to me that so many people want to talk about these tests, but one of the main things I found was that users "learned" by their second visit to all three union database sites: they remembered their strategies, and they wanted to navigate within finding aids, within sites, and within archival materials with things staying relatively predictable. However, even with the NWDA site, where there is a general feeling of site uniformity, users commented on the lack of uniformity between the finding aids for the individual repositories. Subtle differences can be confusing to researchers who are new to archival research or even to this website.
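On the highlighting point raised in the list above: wrapping a user's search terms in visible markup takes very little code. A rough sketch in Python; the <mark> convention is just an illustration, not anything the NWDA site does.

    # A rough sketch of search-term highlighting in finding aid text.
    # Wraps each case-insensitive match in <mark> tags; the markup
    # choice is illustrative only.
    import re

    def highlight(text, terms):
        for term in terms:
            pattern = re.compile(re.escape(term), re.IGNORECASE)
            text = pattern.sub(lambda m: "<mark>" + m.group(0) + "</mark>", text)
        return text

    sample = "Correspondence, 1974-1997, Instruction Services Office."
    print(highlight(sample, ["correspondence", "1974"]))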

Conclusion

I never imagined, as I championed the user and the need to really listen to our users, that they would say anything other than "give me more, I want the details, anything less and I can tell you failed as an archivist." As I reread the American Archivist article last week, with real work experience as a Reference Archivist and after actually talking to the users, I had to admit things had changed for me. Greene and Meissner are right: MPLP effectively shifts our notion of what a finding aid is. But maybe we, and I definitely need to include myself here, need to remind ourselves what a finding aid is—a descriptive surrogate for the materials themselves—and remind ourselves that we can’t do it all and we’ll never be able to find it all.

As a profession, we need to determine what is important to us and whether minimal level processing is actually going to meet the needs of our users and our profession. All in all, I think it is too soon to tell what the real implications of this level of processing are, but by beginning this research now, and discussing the implications for reference services, we can both begin the dialog and have a base for discussion in the years to come as more institutions face the reality of their backlogs. There are lots of avenues of exploration.

The first is in my own shop: to keep pace with accessions and avoid adding to our backlog, we are preparing only collection level descriptions for all new collections at the OSU Archives. We've also picked unprocessed collections, mainly those with little to no online information, and are working to prepare collection level descriptions for those as well. It will be interesting to look at our reference statistics in 3-5 years, primarily the offsite requests, to see if the number of requests has increased with an increased web presence and if there is a corresponding change in the amount of time staff spend on reference requests. It should provide a somewhat statistically sound look at how the level of processing impacts reference success and/or how much more work it creates for the reference staff.
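To gesture at what that look might involve: once the yearly counts are in hand, even a simple correlation says something. A sketch in Python with entirely made-up numbers (statistics.correlation requires Python 3.10 or later):

    # A sketch of the planned comparison, with entirely made-up numbers:
    # do offsite requests rise as more collection-level descriptions go
    # online?
    from statistics import correlation  # available in Python 3.10+

    years = [2007, 2008, 2009, 2010, 2011]
    descriptions_online = [50, 120, 200, 310, 400]  # cumulative finding aids online
    offsite_requests = [80, 95, 140, 180, 230]      # offsite requests per year

    r = correlation(descriptions_online, offsite_requests)
    print(f"web presence vs. offsite requests: r = {r:.2f}")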

And let's not discount the promises of new, dynamic technologies. How will Web 2.0, the interactive & dynamic web, impact our ability to gather descriptive information about our collections from our users? Will this make up for not opening the folders?

As the NWDA moves into a sustainable program with ORBIS, rather than a grant funded project, I hope to work more to study how differing levels of processing and description detail within one database effects the user experience; at the same time, because the web allows the archival community to have one big database, I'd also like to explore further the implications of usability across regional, national, and international databases-- it's all the same in the Google world.