Hey, librarians, it's your own damn fault!

Peter Rukavina

Part of the problem with dipping your toe lightly in as many waters as I do (which is another way of saying “part of the problem with being so scatterbrained” or, as Steven would say it, “part of the problem with not really being for anything”) is that it’s hard to participate with any authority in the post-game analysis.

I’m not a librarian. I am the spawn of a librarian, some of my best friends are librarians, I admire librarians, sometimes librarians even invite me to come and speak to them; but as an aversion to calculus kept me out of the space program, an aversion to ontology kept me out of library school.

As such, I really shouldn’t wade into the “how come our OPAC vendors are such dorks” debate. But I will. Partly because it was this that apparently started it all. Partly because I’m a software vendor myself. And partly because I’ve watched the duelling between librarians and OPAC vendors, from the distant sidelines, ever since my mother became a technical services librarian back in the 1980s.

To summarize my thoughts: hey, librarians, it’s your own damn fault.

When you outsource the administration of your data to someone else (whether it’s an OPAC vendor or a university computing department or some guy down the street), you’re also outsourcing any chance you have at retaining ultimate control over that data.

When you buy a “one size fits all” technology solution — an OPAC that’s designed for, say, “any public library” — you’re buying a commodity, not a solution.

And you should expect to be treated as an insignificant cog by your vendor: that’s what you are. By absolving yourself of personal responsibility over your data management in the first place, you’ve already said “we don’t care enough about this to do it ourselves, so you take care of it for us.” Is it any wonder they treat you like they do?

Add to all of this the prevailing wisdom in the old-line software world that moves towards openness are best avoided lest users gain too much control, and is it any wonder that your vendors don’t let you export data as RSS? If they did, you might start doing interesting things with the data, and start to realize that you don’t need them as much as you thought you did.

You might say “but librarians shouldn’t have to become programmers!” And you might be wrong. In the olden days, being a computer programmer meant a much different thing than it does now, and you truly couldn’t be both a programmer and a [good] librarian because computers back then were more like coal-fired boilers that needed specialized practitioners to maintain.

These days we have the Internet, open source, scripting languages, UNIX everywhere: combined together these tools allow an informed person with an organized mind to create wonderful, powerful applications, customized to their own needs. Get those informed people with organized minds working together, and you’ve got a technology force to be reckoned with.

As librarians, you already have all the basic intellectual building blocks to take over the technology blast furnaces yourselves: you understand how information is organized, you understand the value of interoperability, you understand (intimately) the value of thrift and economy, and, what’s more, you’re already organized into associations that could become the sort of home base that collaborative technology efforts can profit from.

Jenny says “It’s crazy to see users writing code to compensate for a lack of services from library OPACs” and “It’s true libraries have limited resources, but they already have a vendor for their catalog, and that vendor should be the one leading the way.” I would suggest a different tack: take the little scripts that I created not as a call to berate your vendors, but as a demonstration that it’s really, really easy to take control yourself using free, open, public resources that already exist. Don’t berate your vendors, replace them.

If a scatterbrained non-librarian like me can string together 117 lines of Perl code to make an RSS feed of the books I have checked out of the library, just think of what an organized technology strikeforce of frustrated librarians could do! Vendors wouldn’t stand a chance.
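For the curious, here is a much-abbreviated sketch of the general approach (this is not my actual 117 lines; the OPAC URL and the HTML it parses are invented, since every catalogue renders its “items checked out” page differently):

    #!/usr/bin/perl
    # Rough sketch: fetch a hypothetical OPAC "items checked out" page
    # and republish it as an RSS feed. The URL and the HTML pattern below
    # are placeholders; a real script would match the markup of a real OPAC.
    use strict;
    use warnings;
    use LWP::UserAgent;
    use XML::RSS;

    my $url = 'http://opac.example.org/patroninfo/items';    # hypothetical
    my $ua  = LWP::UserAgent->new;
    my $response = $ua->get($url);
    die "Couldn't fetch checkout page: " . $response->status_line . "\n"
        unless $response->is_success;

    my $rss = XML::RSS->new(version => '1.0');
    $rss->channel(
        title       => 'Books I have checked out',
        link        => $url,
        description => 'My current library loans, scraped from the OPAC',
    );

    # Assume each loan shows up in the page as a pair of table cells like:
    #   <td class="title">Some Title</td><td class="duedate">2004-01-15</td>
    my $html = $response->content;
    while ($html =~ m{<td class="title">(.*?)</td>\s*<td class="duedate">(.*?)</td>}gs) {
        $rss->add_item(
            title       => $1,
            link        => $url,
            description => "Due back $2",
        );
    }

    print $rss->as_string;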

Comments

Submitted by Casey on

Righteous rant, Peter. As somebody who worked both for a major ILS vendor and as a sysadmin at a large library, my take is that library ILS software is extremely complex and the market just isn’t that big despite its high cost, and so things that seem really simple, like having RSS feeds, keep getting included in some mythical long-off radically new version that keeps getting pushed back because the software they sell needs to be easy to use, feature-rich and reliable. As far as ILS administrators go, for every one hardcore geek who can’t conceive of a catalog without RSS, there are probably a hundred who don’t even know what RSS is. So the ILS vendors always need to do that calculus — yeah, so and so is really cool and really simple to do, but it’s going to generate half a bajillion support calls, and it will push the product release back 3 months, and so on. A bunch of people wanting to get the technology is a good motivator for the ILS company, but it still doesn’t deal with that fundamental dilemma that they face about who their base is, and about how small the sector really is. I personally don’t think it’s crazy to be writing code to add cool new features to my ILS — but I love to hack, so… even if the software had every feature you could think of, I’d still need to be dreaming up new stuff to do with it, for my own enjoyment.

Submitted by Sara on

I think you made a good point about outsourcing’s limitations when it comes to OPACs (and other similar services) and that librarians could do it ourselves if we wanted to. I have seen tons of great open source scripts done by libraries (my favorite being an assignment calculator used by many academic libraries). I don’t think the problem is limited to fun tools like RSS feeds and other eventual bells and whistles. I think the problem has to do with the overall usability of the OPACs and that there is no standard. I have been to OPACs that don’t even allow you to search by journal title rather than overall title. I have been to some that are still using telnet for searching. With the old card catalogs, at least you knew that every one was going to be the same and contain the same type of information. Not that I want that again, but it would be nice to have a standard content and quality to our OPACs.

After reading the above post I had to wonder whether librarians could, as a group, find a way to create some sort of open source OPAC and, if so, how it could be organized. In my opinion it would take a nonprofit group with a nice grant and lots of librarian involvement.

Submitted by Heather on

In some ways, libraries and librarians are beginning to create tools that our vendors have not been willing or able to create. See the UCI “full-text online journal” page (http://www.lib.uci.edu/online/…) that allows our users to search or browse for ejournals by topic, keyword, etc. All of the data for this search is drawn from our OPAC. Many other libraries have similar tools. Others are working on more complex mapping projects to create similar subject-searchable databases (http://www.columbia.edu/cu/lib…). These are excellent projects. However, the ultimate goal must be to create standardized tools for all libraries, such as the OPAC.

The technological advancement on which the OPAC is based is Machine-Readable Cataloging (MARC) (http://lu.com/odlis/odlis_m.cf…). MARC standardized bibliographic description and allowed bibliographic data to be shared between libraries (e.g. interlibrary loan requests), reducing duplicated cataloging (leading to OCLC and other shared cataloging collaborations) and allowing data to be transferred from one system to another. If our OPACs were still as simple as they were back in the day, we might be able to create something from scratch. However, along the way, we integrated all sorts of other data into our OPACs and they have become, for many libraries, automated systems that handle everything from ordering, tracking, and cataloging to checkout, patron information and WEBPAC interfaces. We run acquisition reports, receive patron comments, have automated overdue notices, handle bindery maintenance, and manage our serials. I’m no expert on OPACs, but I doubt all of the data are easily transferred in a MARC format to an entirely new system.
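For what it’s worth, the bibliographic records themselves are probably the easy part to move. Here is a rough sketch using the freely available MARC::Record Perl modules (the export file name is made up) that reads a batch of exported records and prints the title and ISBN from each; it says nothing about the acquisitions, serials and patron data wrapped around those records, which live outside MARC entirely.

    #!/usr/bin/perl
    # Rough sketch: read a batch of USMARC records exported from an ILS
    # and print the title and ISBN of each. "catalog.mrc" is a hypothetical
    # export file; acquisitions, serials and patron data aren't in it at all.
    use strict;
    use warnings;
    use MARC::Batch;

    my $batch = MARC::Batch->new('USMARC', 'catalog.mrc');
    while (my $record = $batch->next) {
        my $title = $record->title;                   # from the 245 field
        my $isbn  = $record->subfield('020', 'a');    # may be absent
        print $title, ($isbn ? " (ISBN $isbn)" : ''), "\n";
    }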

Submitted by art on

I remember a developer for one major ILS vendor estimating that they put as little as 10% of their resources into the IR side of the system, and that the ledger and serials control prediction patterns made up the bulk of their table layouts. Modern ILS applications have many hundreds of tables and walls full of relationship diagrams, and the same plumbing that carries out tricky constructs like serials control and produces purchase orders is also largely responsible for the capabilities of the interface and the searching/retrieval possibilities supported by the system. For better or for worse, the public side of the ILS is usually the tip of the iceberg, and tends to be somewhat dysfunctional because the rest of the iceberg is so darn big and complicated (remember that ERP systems, which are comparable in terms of inventory management, also tend to have really lousy interfaces). But I think one of the keys to Peter’s approach is that he has bypassed the limitations of one part of the system to provide a new service. Forget depending on the vendors, it’s a niche market at best, and they are too busy working on transaction issues anyway.

Submitted by cj on

So many things to write - so tired. So I’ll take the easy road and just point to the as-yet-unnamed OSS ILS project in Georgia - http://www.open-ils.org/.

Their FAQs do a good job of laying out the changes in the landscape that make this feasible at this point. It looks like this could take care of the BIG systems and Koha might cover the little systems, so how do we make libraries aware of this and willing to pay for it? And do we just end up in the same boat eventually if we bring the care and feeding of the ILS back into the library? Has anyone set up a framework for evaluating such a question, so that libraries could decide whether they would participate in a group-funded development of an ILS?

Submitted by David Bigwood on

I think the GA project is called PINES. That is worth looking at. It is a big project for big libraries. On the other end of the scale are the thousands of school, small public and corporate libraries that don’t need all that power. Koha (http://www.sourceforge.net/pro…) is one option for them. There are others.

Hackfest is an interesting idea. Maybe it will be picked up at more places. http://library.acadiau.ca/acce…
