Reflections from ALA

So I'm back from a solid week of travels and meetings, and there is time to reflect a bit on our participation in the main ALA conference and trade show, which took place last weekend. As a technology vendor, we spend most of our time on the showroom floor or in meetings. I really enjoy ALA; there is a feeling of companionship among the vendors -- perhaps even more so in times of economic hardship. It is also fun to connect with old friends among the libraries and consortia that make direct use of our family of tools, or for which we provide ongoing hosting or support services, and with new customers and partners with whom we're mapping out exciting plans for the future.

We felt incredibly well received at ALA this year. We feel that our role as a provider of advanced information retrieval tools is really appreciated and welcomed, both by libraries who wish to build solutions that don't fit within existing products, and by vendors who're looking to add new capabilities or products without having to develop core technologies from scratch. Our model allows us to focus on what we truly love: to attack the most difficult problems in our space head-on and wrap the solutions into clean, flexible packages for others to build products and solutions from. The reception to our current generation of technologies was truly gratifying for our whole team.

If there was one message -- a common thread -- that I took back from my conversations with friends and colleagues in the field, it is a continued frustration that providers of technologies to libraries are not responding aggressively enough to the threats facing the entire profession. The library community must share in the responsibility for this.

I am speaking here of the lack of support for interoperability mechanisms between systems. I know it seems mundane to some, an often dull technical aspect of library technology, but to my mind it is absolutely essential.

Why? Well, I think it has been demonstrated really well, especially over the past couple of years, that libraries, as local entities and members of their communities, are truly essential. They serve a critical role as supporters of people who desperately need information but lack the skills or the resources to make use of the latest tricks of the Internet search engines. But they are also important focal points, gatherers, and organizers of local history and local culture. None of the global discovery providers and resellers of culture can represent these local needs with as thorough a reach, or with the passion and energy exerted by the local library or the research library of the small college.

But in order to match and surpass the easy satisfaction of the big-box web providers, in order to maximize the use of limited staff resources and budgets, it is absolutely critical that libraries be able to interoperate, to share information amongst each other; to collaborate on special projects or in support of regional or topical interests, and to support resource sharing as widely or as narrowly as need be.

Far too often, libraries find themselves unable to collaborate effectively because some vendors continue to give only lip service to mission-critical industry standards, because they charge prohibitive prices for such support, or perhaps because they wish to protect a marketplace for proprietary models of interoperability and hence their market share. Such behavior is foolish and short-sighted: If libraries are unable to provide comprehensive services because of technological limitations, the information will find other paths to the user. Eventually, libraries risk obsolescence, and with them the entire library automation industry that we love.

Fortunately, the majority of vendors do 'get it', but there is still a need -- and we observe this in every interoperability project we participate in -- to work harder at it. And library staff have a real responsibility to ask tough questions of prospective vendors during procurement processes: to think about what kind of interoperability they may be looking for and to ask their vendors how they can support it.



Interoperability is always important. I'm curious, though -- if you were to turn on your crystal ball, what specific areas of interop are the most important right now?

Galen Charlton


Hey Galen. I don't have much of a reliable crystal ball, and my predictions would tend to be colored by whatever I'm working on at the moment. Moving around simple bibliographic metadata is a big cost-driver these days, since more and more libraries are moving discovery out of the ILS and into third-party systems, whether of the OSS variety or the commercial, hosted offerings that spring up everywhere (some stand-alone, others offered by content aggregators). It's still a clunky process for many libraries. Another area we feel acutely here is circulation status/control and patron authentication/information -- stuff that's needed to drive all kinds of consortial resource sharing. This is all mundane stuff... things that should work today based on existing standards, but too often don't. I don't really care how it's done... as long as there's a way to get to the data and functionality, preferably without having to reinvent the wheel for every different system.

If I was going to guess about the near future, a conservative suggestion would be that social data is going to be big. Both the direct stuff, like tagging and reviews, and second-order stuff such as usage data that can be used to drive recommendation systems. This is legitimately tricky stuff because of privacy concerns, but also essential if we want to produce more interesting websites. I'd love to see this kind of data flow more freely between systems, though, to allow libraries to form larger communities and hence more easily get the kind of critical mass you need to drive social web functions.
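To make the second-order idea concrete, here is a minimal sketch of how anonymized usage data could drive a "borrowers also borrowed" feature. The item IDs and session lists are invented for illustration; the point is simply that co-occurrence counts over checkout histories, with no patron identifiers retained, are enough to bootstrap basic recommendations.

```python
from collections import Counter

# Hypothetical anonymized checkout histories: each inner list is the set
# of item IDs borrowed together, with no patron identifiers retained.
checkout_sessions = [
    ["dune", "foundation", "hyperion"],
    ["dune", "foundation"],
    ["dune", "hyperion", "neuromancer"],
]

def recommend(item, sessions, top_n=2):
    """Suggest the items most often borrowed alongside `item`."""
    co_counts = Counter()
    for session in sessions:
        if item in session:
            for other in session:
                if other != item:
                    co_counts[other] += 1
    return [other for other, _ in co_counts.most_common(top_n)]

print(recommend("dune", checkout_sessions))  # ['foundation', 'hyperion']
```

The privacy concern mentioned above shows up even in this toy: the aggregation step must happen before data leaves the library, so that only co-occurrence counts, never individual borrowing records, flow between systems.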


A Few Thoughts...

Hello Sebastian, I found your comments interesting, and I wanted to solicit your opinion on something. Is the need for interoperability going to be supplanted by the hard drive and RDBMS architecture of ILS servers? Let me give you an example: most libraries now have a large bibliographic database that holds their monograph holdings and assorted other metadata. They also subscribe to online indexes to provide patron access to journal abstracts. This is a very inefficient means of searching because it makes the patron complete two searches, one for books and another for journals. Vendors have attempted to federate these searches with mixed success. Would it not be better to download the journal abstracts directly into the ILS's RDBMS? This way patrons could complete one search for both monographs and serials. Remember that hard drive space has become very cheap, even compared to 5 years ago. Most ILS vendors are now offering a variety of high-powered RDBMS engines that could easily index millions more records and import a variety of formats: MARC, XML, CSV. Would this model not supplant the need for federating/inter-operating searches? It would in effect make all library ILS servers "mini-Google" proxy servers. Just a thought; I would like to hear your comments on this.

The Best of Both Worlds

Hi Calvin, It's an excellent question. I think you are absolutely right that because of improvements in network bandwidth, storage capacity, and software, it is becoming increasingly popular for libraries and others to build in-house indexes of all of their 'holdings', both physical and digital. In some cases this is done, as you suggest, using the ILS. At other times, people use 'next-generation OPACs' like Project Blacklight, VUFind, or commercial offerings like ExLibris Primo. In all of these cases, you get a lot of benefits from such aggregation of metadata, including speed, reliability, and quality of results. The good news is that the major database aggregators and providers are increasingly supportive of this kind of activity, and are often willing to share with a library copies of relevant metadata -- something that would have been unthinkable just a few years ago.

However, I can tell you from personal experience that such central indexes do not obviate the need for standards. Quite the opposite. Most libraries subscribe not just to one database, say, of full-text journal articles, but to many different ones. Maintaining an up-to-date index of these databases can be extremely expensive if you have to deal with different delivery methods and data representations. That is why standards like OAI-PMH, perhaps RSS/Atom, XML, and Dublin Core can be hugely important in making it practical for libraries to build indexes.
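To illustrate why these standards help: once every provider exposes its metadata as OAI-PMH responses carrying Dublin Core, one small piece of parsing code works against all of them. The sketch below parses a trimmed, hand-written ListRecords response; in practice the XML would come from an HTTP request to a repository's OAI endpoint, and the record shown is invented for illustration.

```python
import xml.etree.ElementTree as ET

# A trimmed OAI-PMH ListRecords response carrying Dublin Core metadata.
# In real harvesting this XML would be fetched from a repository's OAI
# endpoint (verb=ListRecords&metadataPrefix=oai_dc) and paged via
# resumption tokens; the record below is a made-up example.
SAMPLE_RESPONSE = """\
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Local History of Smallville</dc:title>
          <dc:creator>Doe, Jane</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {"dc": "http://purl.org/dc/elements/1.1/"}

def extract_titles(xml_text):
    """Pull all dc:title values out of an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.findall(".//dc:title", NS)]

print(extract_titles(SAMPLE_RESPONSE))  # ['Local History of Smallville']
```

The harvester never needs to know which vendor produced the response; that is exactly the economy of scale the standards buy you.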

And then, of course, there may be databases which you CAN'T index within your library because they are too big, too volatile, or because the owner of the data won't let you. For those databases, it may still be desirable to provide a single point of access -- one search box where you can search your library catalog, any subscription databases you may have added to it, and external databases. Those are the types of hybrid solutions that we find ourselves building more and more these days.
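A hybrid search of this kind can be sketched very simply: query the fast local index and the live remote targets, then merge the hit lists behind one search box. Everything here is a hypothetical stand-in -- the function names, result shapes, and canned results are invented, and a real system would search a local index and a remote Z39.50/SRU target instead of returning fixed lists.

```python
# Hypothetical result shape: (title, source).
def search_local(query):
    # Stand-in for a lookup against the library's own local index.
    return [("Dune (print)", "catalog"), ("Dune audiobook", "catalog")]

def search_remote(query):
    # Stand-in for a live search against an external subscription database.
    return [("Dune: an article", "journal-db")]

def hybrid_search(query):
    """One search box, many targets: interleave local and remote hits."""
    local, remote = search_local(query), search_remote(query)
    merged = []
    for i in range(max(len(local), len(remote))):
        if i < len(local):
            merged.append(local[i])
        if i < len(remote):
            merged.append(remote[i])
    return merged

print(hybrid_search("dune"))
```

The interesting engineering lives in the parts this sketch elides: deduplicating records that appear in both result sets, and deciding how long to wait for a slow remote target before showing the local hits.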