
Evergreen ILS in the Enterprise

We recently passed the one-year anniversary of our first library’s migration to Evergreen, and we’re now three months past the last of our libraries to migrate. Our operating environment is a mid-to-large size government department, with an above-average technical literacy amongst our clients. We have some serious requirements for interacting with the ILS from certain key internal clients, and I see this enterprise interaction as a bit of a litmus test for the future of the ILS, both in our department and elsewhere.

Some examples from NRCan Library:

  • Integration with our Autonomy full-text search – like most everyone else, one focus of our Enterprise search team is to expand the indexing of full-text web content to include other types of collections. So when our search team [1] came knocking with an opportunity to include the library catalogue, we were confident that our Evergreen ILS would prove itself well in this integration. Getting Autonomy to crawl the library catalogue turned out to be no problem, with little development investment on our part (just a page or two of documentation on the SuperCat API and that’s it!). We even had it crawl during business hours and saw little performance hit on the Evergreen server (the bottleneck was whether Autonomy could ingest records as fast as Evergreen could serve them up). Here’s the first e-mail I got from our outside developer – definitely the kind of feedback I like to see, and I know that with previous ILS systems this would not have been anywhere near as easy and pain-free. A rough client-side sketch of the crawl follows this list. Next step: automating the Autonomy engine with RSS feeds direct from Evergreen (for new vs. modified vs. deleted records, etc.).
  • PeopleSoft / Oracle Integration with Evergreen’s patron database – part of the weakness of the traditional way we maintained our patron database is that we captured only those who actually walked in to use our library services. Everyone else never made it into the ILS, so we lacked an opportunity to reach a wider audience in marketing “My Account” type services. By migrating / linking all NRCan employees from our PeopleSoft & MS Exchange directories into Evergreen ILS, we are now better positioned to accomplish two important goals: 1) market the heck out of “My Account” to expose some pretty cool features (like Evergreen’s Bookbags / patron RSS feeds); and 2) get us a step closer to single sign-on (Evergreen already has some sites authenticating against LDAP, and we want to move in that direction too for our internal clients). My first run at this integration involved a very manageable series of manual SQL interactions (sketched after this list), but automating the interaction would be the next step once I’ve done this a few more times and understand all the issues (e.g. I discovered that our internal staff directory actually had a couple dozen duplicate “keys” that don’t make sense to me – so I still need to explore a few issues & implications before fully automating, etc.).
  • Dealing with the Local Requirements (aka the Itchy-Scratchies) – every library situated in a larger enterprise has them, and you know how frustrating it can be to want a feature that may not be in wider demand at the moment. The beauty of the open source model is that we don’t have to care whether the community wants what we need. It’s better and more fun if they do, but it’s not necessary in order to move on your must-have “killer features”. In our context, geographic and geospatial search is very important to us, so with Evergreen’s extensible platform we can say to the world that we want FGDC or geographic search indexes, or GeoRSS support, NOW – and we’re going to get it NOW (note to map libraries: contact us if you’re interested in collaborating further on this). Our support for GeoRSS just needs a bit more testing, but it should be released within a couple of weeks, with other “geo-related” features coming later this year. A small sketch of GeoRSS-tagged output follows this list.
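
For the curious, here is roughly what that catalogue crawl looks like from the client side. This is a minimal sketch, not our production setup: the hostname and record-ID range are made up, while the /opac/extras/supercat/ retrieval URL follows Evergreen’s SuperCat pattern.

```python
# Minimal sketch of pulling bib records over SuperCat for an external
# indexer. The hostname and ID range below are hypothetical.
import time
import urllib.error
import urllib.request

BASE = "http://catalogue.example.gc.ca"  # hypothetical Evergreen host

def fetch_record(record_id, fmt="marcxml"):
    """Fetch one bib record via SuperCat; returns the raw XML string."""
    url = f"{BASE}/opac/extras/supercat/retrieve/{fmt}/record/{record_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    for rid in range(1, 101):            # sample ID range for the crawl
        try:
            xml = fetch_record(rid)
            print(f"record {rid}: {len(xml)} bytes")
        except urllib.error.HTTPError:   # missing/deleted IDs error out
            continue
        time.sleep(0.1)                  # throttle the crawl, even off-hours
```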
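And here is the rough shape of that manual SQL pass, sketched in Python. Heavy caveats: the hr_staging table, its columns, and the literal profile/org values are hypothetical placeholders; actor.usr is Evergreen’s patron table, but a real load has to handle more of its required fields (and policies) than shown here.

```python
# Hedged sketch of linking HR-directory staff into Evergreen's patron
# table. hr_staging and its columns are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=evergreen user=evergreen")  # hypothetical DSN
cur = conn.cursor()

# Step 1: surface duplicate "keys" in the source directory before
# trusting them as unique identifiers (this is how the couple dozen
# duplicates showed up).
cur.execute("""
    SELECT employee_id, count(*)
      FROM hr_staging
  GROUP BY employee_id
    HAVING count(*) > 1
""")
for emp_id, n in cur.fetchall():
    print(f"duplicate key {emp_id}: {n} rows")

# Step 2: link staff not already present, keyed on username.
# The passwd/profile/home_ou/ident_type literals are site-specific
# placeholders, not real values.
cur.execute("""
    INSERT INTO actor.usr (usrname, first_given_name, family_name, email,
                           passwd, profile, home_ou, ident_type)
    SELECT s.username, s.given_name, s.surname, s.email,
           md5(random()::text), 2, 1, 3
      FROM hr_staging s
     WHERE NOT EXISTS (SELECT 1 FROM actor.usr u
                        WHERE u.usrname = s.username)
""")
conn.commit()
cur.close()
conn.close()
```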
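Finally, a small sketch of what GeoRSS-tagged catalogue output could look like. The record title and coordinates are sample data; the georss:point element and namespace come from the GeoRSS-Simple convention.

```python
# Sketch of tagging an Atom feed entry with a GeoRSS-Simple point
# ("latitude longitude"). Title and coordinates are made-up samples.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
GEORSS = "http://www.georss.org/georss"
ET.register_namespace("", ATOM)
ET.register_namespace("georss", GEORSS)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Geology of the Ottawa area"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "tag:example.gc.ca,2010:record/1234"
ET.SubElement(entry, f"{{{GEORSS}}}point").text = "45.4215 -75.6972"

print(ET.tostring(entry, encoding="unicode"))
```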

Notably absent from the equation is the extreme pay-per-use / pay-per-product-module approach so dominant in the library automation industry. In some cases, it’s about not having to throw money at an out-of-date and, IMHO, overly segmented ‘product’ marketplace (we would have had to buy the “Patron API” from a previous vendor to accomplish the PeopleSoft integration), but in other cases it’s not so much about the money as it is about doing business smarter. We’re still making investments, only those investments are paying back more for us and our colleagues.

[1] Just to be clear, NRCan Library doesn’t manage Enterprise search in our department, but we do work with the search team more and more.

Spotted at PGCon 2010

Getting out of the office can be a liability sometimes – too much to keep up with, too many fires, and a ton of work backlog to complete. But with PGCon 2010 hosted in town at the University of Ottawa, and being very affordable, I almost got it together to make some sessions.

But not being satisfied with missing yet another PGCon [1], I joined up at the pub night, hoping to rub shoulders with one or another of the speakers / participants who might cover some of the “noob” material with me over beer, and to network with any other local gc.ca users of PostgreSQL (used at NRCan Library by our Evergreen ILS).

I was fortunate to grab a table with the good folks at PGExperts, and we were later joined by a crew from a small city whose name was very familiar to me. I know only one company from Emeryville, CA. My jaw dropped and my eyes nearly popped out when I realized that III had sent a couple of senior staff to scope things out at PGCon regarding a possible move towards having some PostgreSQL in the backend of III’s products. To be clear, this wasn’t a move towards any open source model surfacing to replace their same old business model; rather, III was looking at a possible strategic move towards PostgreSQL for some of its new products, leaving one big question still open: namely, what database will III use to replace its proprietary system, having successfully exploited it to near end-of-life status?

Very interesting indeed. I already knew that III is using Lucene (Hibernate?) in its Encore product, some MySQL for selected circulation and patron data, IndexData’s Z39.50 server, and finally the Apache web server, but none of this signaled much for me if users can’t touch the base tools. It might be there, but you can’t find it or use it your own way, and I suspect they’ll hold onto that model for some time yet, even if PostgreSQL gets some traction with any of III’s products. [2]

[1] Recent years’ PGCons were also held in Ottawa; I found out that Ottawa is a recurring meet-up place partly due to the visa hassles many international attendees have getting into the USA, and partly because it’s an affordable city for hosting events.

[2] Example: before Encore, III’s official policy was not to disclose what web server it was using. I even had a senior Linux expert come in and try to determine what web server was running our WebOPAC a few years ago, with no success. I eventually found out that it was apparently running a proprietary instance of Apache 0.98, and IMHO it’s probably still running that way with most versions of WebOPAC / WebOPAC Pro out there.

System upgrade: oopsie doopsie at major SD shop

It can happen to any vendor, but this kind of upgrade trouble can’t be much fun for either the library or SirsiDynix. The four-day downtime at the Ottawa Public Library even elicited talk from a city councillor about seeking compensation from SirsiDynix.

I know that in late 2009 the city approved upgrading from SirsiDynix Horizon to SirsiDynix Symphony, and I believe the official target was June 2010, so perhaps this week’s event was related to the big upgrade?

Definitely not good to get this kind of press – see today’s Ottawa Citizen article for full details.