Category Archives: Integrated Library System (ILS)

Evergreen ILS in the Enterprise

We recently passed the one-year anniversary of our first libraries’ migration to Evergreen, and we’re now three months past the last of our libraries’ migrations. Our operating environment is a mid-to-large government department with above-average technical literacy amongst our clients. We have some serious requirements for interacting with the ILS from certain key internal clients, and I see this enterprise interaction as a bit of a litmus test for the future of the ILS, both in our department and elsewhere.

Some examples from NRCan Library:

  • Integration with our Autonomy full-text search – like most everyone else, one focus of our Enterprise search team is to expand the indexing of full-text web content to include other types of collections. So when our search team [1] came knocking with an opportunity to include the library catalogue, we were confident that our Evergreen ILS would prove itself well in this integration. Getting Autonomy to crawl the library catalogue turned out to be no problem, with little development investment on our part (just a page or two of documentation on the SuperCat API and that’s it!). We even had it crawl during business hours and saw little performance hit on the Evergreen server (the bottleneck was whether Autonomy could ingest records as fast as Evergreen could serve them up). Here’s the first e-mail I got from our outside developer – definitely the kind of feedback I like to see, and I know that with previous ILS systems this would not have been anywhere near as easy and pain-free. Next step: automating the Autonomy engine with RSS feeds direct from Evergreen (for new vs. modified vs. deleted records, etc.).
  • PeopleSoft / Oracle Integration with Evergreen’s patron database – part of the weakness of the traditional way we maintained our patron database is that we captured only those who actually walked in to use our library services. Everyone else didn’t make it into the ILS, so we lacked an opportunity to reach a wider audience in marketing “My Account” type services. By migrating / linking all NRCan employees from our PeopleSoft & MS Exchange directories into the Evergreen ILS, we are now better positioned to accomplish two important goals: 1) market the heck out of “My Account” to expose some pretty cool features (like Evergreen’s Bookbags and patron RSS feeds); and 2) get a step closer to single sign-on (Evergreen already has some sites authenticating against LDAP, and we want to move in that direction too for our internal clients). My first run at this integration involved a very manageable series of manual SQL interactions, but automating the interaction is the next step once I’ve done this a few more times and understand all the issues (e.g. I discovered that our internal staff directory actually had a couple dozen duplicate “keys” that don’t make sense to me – so I still need to explore a few issues & implications before fully automating).
  • Dealing with the Local Requirements (aka the Itchy-Scratchies) – every library situated in a larger enterprise has them, and you know how frustrating it can be to want a feature that may not be in demand at the moment. The beauty of the open source model is that we don’t have to care whether the community wants what we need. It’s better and more fun if they do, but it’s not necessary in order to move on your must-have “killer features”. In our context, geographic and geospatial search is very important to us, so with Evergreen’s extensible platform we can say to the world that we want FGDC or geographic search indexes, or GeoRSS support, NOW – and we’re going to get it NOW. (Note to map libraries: contact us if you’re interested in collaborating further on this.) Our support for GeoRSS just needs a bit more testing and should be released within a couple of weeks, with other “geo-related” features coming later this year.
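The SuperCat crawl described above is easy to script against. Below is a minimal sketch, not our production setup: the host name is invented, and the `/opac/extras/supercat/retrieve/<format>/record/<id>` URL pattern should be verified against your own Evergreen instance’s SuperCat documentation before relying on it.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

# Hypothetical catalogue host - substitute your own Evergreen server.
BASE = "http://catalogue.example.org"

def supercat_url(record_id, fmt="marcxml"):
    """Build a SuperCat retrieval URL for a single bib record."""
    return urljoin(BASE, f"/opac/extras/supercat/retrieve/{fmt}/record/{record_id}")

# MARCXML ("MARC21 slim") namespace used by SuperCat's marcxml output.
MARC_NS = "{http://www.loc.gov/MARC21/slim}"

def title_from_marcxml(xml_text):
    """Pull the 245$a title out of a MARCXML record - enough for a crawler's index."""
    root = ET.fromstring(xml_text)
    for field in root.iter(f"{MARC_NS}datafield"):
        if field.get("tag") == "245":
            for sub in field.iter(f"{MARC_NS}subfield"):
                if sub.get("code") == "a":
                    return sub.text
    return None
```

A crawler like Autonomy’s connector really only needs these two pieces: a predictable URL per record, and a way to pull indexable fields out of the response.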
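Likewise, the duplicate directory “keys” mentioned in the PeopleSoft item above can be flagged with a few lines before any automated load. A minimal sketch, assuming the directory extract has been parsed into dicts; `employee_id` is a made-up column name standing in for whatever unique key the real extract uses.

```python
from collections import Counter

def find_duplicate_keys(rows, key_field="employee_id"):
    """Return, sorted, any key values that appear more than once in a directory extract.

    `rows` is a list of dicts (e.g. parsed from a PeopleSoft / Exchange dump);
    `key_field` names the column that is supposed to be unique.
    """
    counts = Counter(row[key_field] for row in rows)
    return sorted(key for key, n in counts.items() if n > 1)
```

Running a check like this against the full extract, and resolving what it flags, is the sort of sanity gate you want in place before the load is automated.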

Notably absent from the equation is the extreme “pay per use” / pay-per-product module approach so dominant in the library automation industry. In some cases it’s about not having to throw money at an out-of-date and, IMHO, overly segmented ‘product’ marketplace (we would have had to buy the “Patron API” from a previous vendor to accomplish the PeopleSoft integration), but in other cases it’s not so much about the money as it is about doing business smarter. We’re still making investments – only now those investments pay back more for us and our colleagues.

[1] Just to be clear, NRCan Library doesn’t manage Enterprise search in our department, but we do work with the search team more and more.

Spotted at PGCon 2010

Getting out of the office can be a liability sometimes – too much to keep up with, too many fires, and a ton of work backlog to complete. But with PGCon 2010 hosted in town at the University of Ottawa, and being very affordable, I almost got it together to make it to some sessions.

But not being satisfied with missing yet another PGCon [1], I joined in at the pub night, hoping to rub shoulders with one or another of the speakers / participants who might cover some of the “noob” material with me over a beer, and also hoping to network with any other local users of PostgreSQL (used at NRCan Library by our Evergreen ILS).

I was fortunate to grab a table with the good folks at PGExperts, and we were later joined by a crew from a small city whose name was very familiar to me. I know only one company from Emeryville, CA. My jaw dropped and my eyes nearly popped out when I realized that III had sent a couple of senior staff to scope things out at PGCon regarding a possible move towards having some PostgreSQL in the backend of III’s products. To be clear, this wasn’t a sign of any open source model surfacing to replace their same old business model; III was only looking at a possible strategic move towards PostgreSQL for some of its new products, leaving one big question still open: namely, what database will III use to replace its proprietary system, having successfully exploited it to near end-of-life status?

Very interesting indeed. I already knew that III is using Lucene (Hibernate?) in its Encore product, some MySQL for selected circulation and patron data, IndexData’s Z39.50 server and, finally, the Apache web server, but none of this signals much for me if users can’t touch the base tools. It might be there, but you can’t find it or use it in your own way, and I suspect they’ll hold onto that model for some time yet, even if PostgreSQL gets some traction with any of III’s products. [2]

[1] Recent years’ PGCons were also held in Ottawa, and I found out that Ottawa is a recurring meet-up place because of the visa hassles many international users have getting into the USA, because it’s an affordable city to host events, etc.

[2] Example: before Encore, III’s official policy was to not disclose what web server it was using. I even had a senior Linux expert come in and try to determine what web server was running our WebOPAC a few years back, with no success. I eventually found out that it was apparently running a proprietary instance of Apache 0.98, and IMHO it’s probably still running that way with most versions of WebOPAC / WebOPAC Pro out there.

System upgrade: oopsie doopsie at major SD shop

It can happen to any vendor, but this kind of upgrade trouble can’t be much fun for either the library or SirsiDynix. The 4-day downtime at the Ottawa Public Library even elicited talk from a city councillor of seeking compensation from SirsiDynix.

I know that in late 2009 the city approved upgrading from SirsiDynix Horizon to SirsiDynix Symphony, and I believe the official target was June 2010, so perhaps this week’s event was related to the big upgrade?

Definitely not good to get this kind of press – see today’s Ottawa Citizen article for full details.

This is what I’m talking about…(Evergreen ILS)

Whenever I’ve talked to colleagues about our move to the Evergreen ILS, one of the many “high level” criteria that gets discussed is our requirement for supporting vendor-neutral discovery alternatives. Moving to a new ILS is one thing, but betting the whole farm on a single discovery solution is a lesson we should all have learned: proprietary systems continue to significantly lock in customers, technically and/or via business-model traps (e.g. witness the plethora of discussions on next-gen catalogue listservs dealing with *basic* data access issues, or vendors hiding APIs behind costly add-on requirements, etc.).
Discovery interfaces

So when we discuss a slide like the one above, we make note of all the exciting work happening at the discovery level for augmenting your OPAC. There are some compelling proprietary solutions and lots of exciting open source projects now gaining visibility and marketplace traction. But where do you place your bet? One of our requirements was NOT having to place any single bet. We need to invest in a foundation that values keeping our options open and protecting our ongoing investment in the ILS.

Evergreen’s OPAC search is very good and getting better all the time. And there’s lots of exciting work in the pipe from the innovators at ESI as well as community implementors. But for those libraries (especially academics) looking to integrate other digital content, provide complementary feature sets, or just connect the ILS to a pre-existing search toolbox, Evergreen’s openness and flexible API provide a refreshing and solid foundation.

So This is What I’m Talking About: Evergreen 1.6 with Endeca at McMaster University. Kudos to Wiktor for getting that started.

But wait – maybe Endeca is not your flavour. My colleague Warren Layton began some proof-of-concept work on a connector to vuFind. But if vuFind is not your cup of tea (where’s that Andrew Nagy when you need him!), how about an Evergreen connector for Blacklight, available next spring?

Or how about this: we just say that things are only getting started. Evergreen is providing us with a solid and authoritative indexing engine for our OPAC, and now we also see how a diverse community is beginning to lay out the options, all-you-can-eat buffet style…

NRCan Library Update: our department uses Autonomy for full-text search of all website content. We’ve recently been asked about exposing our Evergreen content in the site-wide search, and we hope to have something completed next spring or early summer. Right now, our focus is the second and final phase of migrations into our new Evergreen system.

Koha manoeuvres

Very interesting developments in the Koha community, with lots of discussion brewing since Liblime announced its enterprise version last week. Lots of concerns about forks (read also atz’s comments), and a public response from Liblime’s CEO – it’s worth following that thread.

Not being a Koha user, I don’t have much of an immediate stake in the maneuvering going on, but I was struck by Marshall Breeding’s Open Letter to the Koha community where he writes:

There comes a point where an open source software project grows beyond what can be managed through informal channels…Recent events suggest that it’s time to take a closer look at the governance of the Koha project.

I suggest a shift from a community comprised of developers to that of a community focused on the libraries that use the software.

I appreciate Breeding’s industry reviews, but I have to say that he’s been a bit of a downer and confusion-monger on open source, IMHO: late to the train, and misreading the terrain. The observation about “informal channels” is both inaccurate and a bit of a red herring, and so is the suggestion that “a community comprised of developers” is what needs to be shifted to one “focused on the libraries that use the software.”

On “what can be managed through informal channels”: what is he talking about? Anybody with even minimal experience of these communities can quickly see how much blood, sweat and effort goes into “formal channels,” however you want to define them: commercial support options, community investment models (foundations, vendors, sponsorships, etc.), documentation and support, exploring business models, community growing, and so on. But does a small technology startup become Cisco Systems overnight? How many years did it take for some of the more successful FLOSS projects to ‘mature’? The fact is there is running software out there, successfully implanted in a fast-growing segment of libraries.

Second, many of these developers come straight from the library community, and the developer orientation – to the extent that you can call it a dominant community feature – is and was needed because of the limited leadership and vision coming out of library land for making sensible technology investment decisions. Without developers you can’t build something from nothing, and the idea that libraries are somehow divorced from this process is ludicrous. You just couldn’t have had the success that projects like Koha, Evergreen and others have achieved without the focus being on “libraries that use the software.”

In fact, it’s overwhelmingly the case that library involvement and control is one of the key business drivers for the selection of a FL/OSS system.

On the foundation proposal – a brash, opportunistic plug for the OLE approach – this is nothing new (the open letter was posted before any of the dust settled – LOL). Foundation support has been discussed in the community for years, but there’s effort and organization involved, and no shortage of other high-priority developments that need to be addressed. So recent events have Liblime re-examining foundation development, and other Koha community members are looking at options too. But not much interest has been expressed in the OLE model, and it would, further, be a very challenging thing to pull off any way you take it. Foundation support also won’t address the immediate concerns about Liblime’s direction, etc.

The periodic ‘spasms’, tweaks in vendors’ business models, blowout discussions about forks, and so on – all of that is important, expected, and part of the terrain to be negotiated.   There should be no surprises here, and I’m glad to see that at least it’s out in the open for all to see and assess…

NRCan Library selects Evergreen ILS

It’s official – Natural Resources Canada Library (NRCan Library) has selected the Evergreen ILS.

Evergreen is one of several open source systems on the marketplace, and was selected on the basis of functional requirements but with serious consideration given to several important marketplace trends:

  • one big library
  • vendor neutrality, especially in regards to discovery interfaces
  • strong support for a functional API (application programming interface)

The collaborative, open source development model used by the Evergreen ILS community is anticipated to give us better long-term options and to position our library to respond to these and other important library trends. The move also addresses duplicative ILS infrastructure resulting from the consolidation of departmental libraries.

ILS Migration: SirsiDynix and III on exit support and maintenance

Planning a migration? Eventually you’ll have to determine when you’re going to unplug your existing maintenance contract, an important factor given the ILS marketplace norm of restricting usage to paying maintenance customers.

Most proprietary software at least gives you some illusion of ownership and control by letting you run the software for years after you paid for it (why pay for support if you don’t need it?). For the growing number of SaaS-based network services (e.g. RefWorks or Ebsco AtoZ), this is not the case, but at least it’s clear up front that customers are entering into what is essentially a car-rental type of agreement for software use.

The wonky thing about most ILS EULAs is that you normally don’t think of the SaaS model when you’re running the software locally. You buy the software, but that “purchase” should really be understood as a down payment on a lease agreement. You didn’t buy anything you can keep or share!

When I look at the desktop & server-based applications we run (excluding the few SaaS providers we use), I can’t identify a single vendor that would unplug me upon termination of our annual support & maintenance (or prevent me from using the software without a paid support contract in place). Not a single one.

In any event, we’re exiting our Unicorn & Millennium systems and moving to a new open source ILS  (more on this later), and here’s how the two vendors responded to our request to go month-to-month or quarterly (your results may differ):

  • SirsiDynix: not allowed, we must purchase another full year’s annual support and maintenance
  • III:  accepted our request to go with 2 additional quarterly payments, rather than paying for an unwanted full year’s maintenance.

[BTW, we have two systems in place here as we’re in the process of amalgamating several libraries]

Personally, it’s frustrating that we can’t run the software on terms that even Microsoft would permit (i.e. without annual support & maintenance), but OK, I get the deal: we rented a car. I also didn’t expect III’s “flexibility” here, since they’re arguably the most proprietary of the bunch, so good on them.

And finally, it’s a bit of a shame that this situation significantly impedes many libraries’ ability to re-direct scarce funds towards any new ILS investment (as well as to manage a migration with more flexibility), since you’re not always able to optimize migration scheduling with the end of your maintenance contract.

Just signed up and logged onto the new site. Although I’m not a cataloguer, this is a very impressive first launch.

It has been clear for many years now that developments towards “One Big Library” were going to radically change workflows and strategies for library automation: the promise of the network, the insane ease and seduction that came with accessible and largely free (or freer) web 2.0 services, cheap and then even cheaper infrastructure, and so on. ‡biblios shows how it can be done, with a clean, simple interface and an impressive share-and-collaborate model that looks very compelling.

We’ve already seen great success and potential demonstrated by other ‘One Big Library’ services like LibraryThing and the OpenLibrary – still largely underappreciated and underexploited IMHO – and ‡biblios brings its own take on this kind of service. But unlike LibraryThing and OpenLibrary, they have the potential advantage of being associated with a growing open source ILS community, so they already have a target audience primed for introducing this kind of service and building a community. Cataloguing workflow is just ripe for this kind of disruption.

Also intriguing is yet another manifestation of the “mini-OCLC” model at play – only minus the Big Brother monopoly-control aspects. I say “mini” in the context of the number of contributing libraries (so far), but as we move forward these types of services are not going to be that disadvantaged by the number of shared records. According to Nicole Engard’s post, they are starting out with a “30 million strong shared database of catalog records” – pretty impressive for a start.

I would expect more vendor-specific community offerings to be announced for similar shared repositories and types of services, even from the proprietary folks. All of this is good for libraries: so long as the shared network opportunities keep getting larger and more accessible, they can only reduce the size of our silos.

It’s going to be fascinating to see how all of this plays out.

For more details on ‡biblios, there is an overview article published recently in The Code4Lib Journal – ‡biblios: An Open Source Cataloging Editor – and a few recent blog posts from Nicole Engard and from Jonathan Rochkind.

OSS: “come for the price, stay for the quality”

The financial meltdown impacting global economies shows every sign of being a longer term crisis. O’Reilly Radar’s Nat Torkington has an excellent overview on The Effect of the Depression on Technology. His suggestion that open source will likely benefit is expected IMHO, and I like his phrase: “come for the price, stay for the quality” as well as his thought that “many of the applications (CRM, finance, etc.) higher up the stack” may benefit too.

What does this mean for library automation (“ILS”) and related discovery tools and technologies?

Those of us who have been following open source in libraries carefully over the last few years will be cognizant of a very significant fact: the money hasn’t even hit the table yet. Think about how far Koha, Evergreen or vuFind have come in the last few years (or the last 6 months!) and frame their achievements in the context of their available (albeit growing) resources.

Most significantly, many of those adopting these new tools haven’t yet fully re-directed their “cold hard cash” and internal staff resources. For example, a common strategy is to maintain your legacy vendor’s contract during the planning, evaluation and implementation phases, with the intent to re-assign those resources more fully once you’ve made the transition. Remember, we are leasing our software, and so we have limited rights beyond the heavily restricted terms of the lease. Your vendor could legitimately unplug you if you try to run your system without a valid support contract, so those transitioning to an open source stack need to keep their lease agreements active until they go live.

So while some are able to re-deploy their staff and financial investments more fully from the get-go, many others will be phasing in investments over a longer period of time. We’re talking about a significant ramp-up that has yet to be realized. The BC Sitka move to Evergreen estimated a total expenditure of over $10 million on annual software maintenance and support with their legacy vendor (SirsiDynix) through to 2011. How much of this formally planned expenditure will be re-deployed towards open source development, maintenance and support is anybody’s guess, but I would suggest even half of it would go twice as far.

Ditto for personnel resources: OSS communities are growing, and growing fast. But staff expertise, code contribution, and so on will take some time to be fully realized. There’s training, getting familiar with the tools, building capacity with local contractors and developers – all part of the “opportunity costs” of making the move.

So I expect the investment to pay off, and pay off BIG, and the financial crisis will likely accelerate an already growing trend towards OSS in libraries.

Sydney Plus acquires Cuadra STAR product line

As users of Cuadra STAR, we were alerted late last week by Cuadra Associates of their acquisition by the SydneyPlus Group of Companies. It was a considerate call to make, reassuring us that all will remain intact (the management team, technical support and development staff are to remain with the company, etc.). The deal was not unexpected given the current marketplace for smaller players like Cuadra Associates. There’s some news on the deal here.

Having worked in special libraries for the bulk of my career, I have a special fondness and high respect for the class of products that Cuadra STAR fits into. If you don’t know Cuadra STAR, you may have heard of a similar product in the same marketplace from InMagic. Both products have their strengths, but I personally give my thumbs up to STAR despite InMagic’s bigger market share.

I’ve used both products, and I recall the sense of WTF I had in my first experience working with Innopac (and later with SirsiDynix’s Unicorn). With Cuadra STAR you can change indexes, work with non-MARC fields and databases, and use one of their turnkey solutions or build your own from scratch. You have accessible interfaces, very good API support (we’re big users of Cuadra’s XML and STARDB APIs), and, with a reasonable learning curve, STAR remains a strong candidate as a backend database for any special library. Affordable and cost-effective – I was always surprised how many smaller libraries still overlooked these tools in favour of the big clunkers. For reasons I won’t go into here, we use both Cuadra STAR and Millennium in our shop (the former due to bibliographic database requirements that just can’t be met with the big clunker).

Cuadra STAR was in many ways a model for what I thought big vendor systems should be like: cost-effective + turnkey options + general database functionality + powerful API support when the job requires it. Cuadra Associates did well by not exercising any of the usual vendor lock-in tricks, so what data goes in can come out, and their licensing model was reasonable. They’ve been around and stable for decades, so it looks as if this acquisition will be a good move for the privately held SydneyPlus.

As usual with these acquisitions, what will happen to the smaller company’s product? Clearly, SydneyPlus wants a foothold in those niche markets where most of Cuadra’s customers reside: special libraries, museums, archives and company libraries. In the short term, I don’t think Cuadra’s customers have much to worry about, as it’ll take some time just to integrate the businesses, identify and exploit the synergies, etc. And I can think of a lot worse players to be driving this acquisition. It’ll also be good for Cuadra Associates: the company has a stable and loyal customer base, but marketing and growing the company to the next level would have been a serious challenge given the competitive pressures coming from open source as well as other vendors reaching into their marketplace.

So for now it’ll be in SydneyPlus’s court to decide: the usual customer grab, or something better for all their customers?