Federal libraries taking a hit – ‘troubling trend’ with ‘bottom line impact’

Federal Government of Canada libraries are taking a hit. The Ottawa Citizen covered the closure of the HRSDC library earlier (here), and the HRSDC Library posted its own update earlier today on the FLC-CBF listserv (login required), copied here.

This is neither the beginning nor the end – inside reports confirm that a range of libraries have already experienced cutbacks in personnel, with some of those cuts going back several years. Health Canada’s library, for example, had about 50 staff only three years ago – now they’re down to about 8. Our own shop (Natural Resources Canada library) lost 7 library staff in December 2011 when a number of term, co-op and contract positions could not be renewed.

To be clear, libraries are not alone – the restructuring and related layoffs are widespread, crossing numerous occupational categories and programs, and many of the initial waves of layoffs have not been widely reported. The cuts have generally been made under the rubric of functional review or a separate process called strategic review, and the outcomes of the Strategic and Operating Review (SOR), anticipated to be more impactful still, will ultimately be confirmed in the upcoming federal 2012 budget (February/March).

Next steps / Reprieve?

This is hard to say, but I think things are going to get worse before – well – they get worse. If one major department can close its library, why not others? At the very least it points to a trend that doesn’t appear to have found its bottom yet.

A reprieve may come from positive outcomes of the preliminary discussions, at various levels, about moving towards either a more clustered approach to library services or some kind of “whole of government” library service model. This would see varying degrees of increased integration and sharing of technical and other infrastructure, including people and services.

Top 10 bonkers things about the universe (Marcus Chown)

I subscribe to TV Ontario’s Big Ideas podcast and found this presentation especially interesting.  I must have replayed this one 3 or 4 times already.  Bonkers is right!

Marcus Chown wrote about this in New Scientist magazine last year, but it recently became available as a video of a Big Ideas presentation (and as a separate podcast for audio subscribers).

As Chown says: “how much stranger science is than science fiction…”

Link to video.

Google Instant or Instant Commercial?

Andre Vellino blogs about the new Google Instant service. In a very smart and simple experiment, Vellino noted the first suggestion that Google Instant offers for each letter of the alphabet.
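For anyone who wants to repeat a similar experiment, here is a minimal sketch (mine, not Vellino’s) that asks Google’s unofficial suggest endpoint for the top completion of each letter. The endpoint and its response format are undocumented assumptions on my part, and the results won’t exactly match what Instant shows in a browser (personalization, location and the Instant interface itself all play a role):

    # Rough sketch: print the first suggestion Google returns for each letter a-z.
    # The suggest endpoint below is unofficial and undocumented; treat it as an assumption.
    import json
    import string
    import urllib.request

    SUGGEST_URL = "http://suggestqueries.google.com/complete/search?client=firefox&q="

    for letter in string.ascii_lowercase:
        with urllib.request.urlopen(SUGGEST_URL + letter) as response:
            # The reply looks like ["a", ["amazon", "aol", ...]]: the query echoed back,
            # then the list of suggestions.
            data = json.loads(response.read().decode("utf-8", errors="replace"))
        suggestions = data[1] if len(data) > 1 else []
        print(letter, "->", suggestions[0] if suggestions else "(no suggestion)")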

Very interesting results from his blog post (“Feedback Effects in Google Instant”) reproduced here:

Vellino asks:

does “Instant” degrade the quality of “Suggest”.  i.e. the more people use “Instant” the more the “top-N” suggested terms are reinforced, thus thinning out the “long tail” of queries.  Is “Instant” going to increasingly cater to the lowest common denominator?

There’s a notion shared amongst certain observers of the traditional print media that advertising is “the license to do business”  and that this “license” can be coercive and domineering in some very harmful ways (direct and indirect censorship and influence, corruption of content, and so on).  In the online world it can be much more subtle and insidious, but the threat remains the same: how far do you go to please your advertisers and in this case, how much will it corrupt search?

Comment of the day plus bonus ‘ouch’ of the day

Comment of the day

I found myself returning a couple of times to Ken’s comment in reply to Karen Coyle’s post on the III SkyRiver lawsuit against OCLC:

I see your point about the ‘zero sum game’–to a point. However if we take a broader, more inclusive view of the ‘library function’ we’d see it’s a *booming* market-and seems to be practically recession proof. Google’s mission statement defines it squarely as a ‘library’ company. Interviewed in Wired their CEO proudly talks about how they are ‘good now at cataloguing and indexing stuff’. And it’s not just commercial organisations like Google and Amazon that have ‘disrupted’ the conventional library market. Wikipedia and OpenLibrary are part of the growing ‘social economy’. So I think the action is a symptom of a much wider change. ‘Conventional’ libraries and library companies are scrambling around trying to get their slice of the ‘zero sum game’ because they can’t see beyond it and develop radically new services and products. They are leaving it to others. It’s the ‘Innovator’s dilemma’ if you like.

It summarizes nicely some thoughts I’ve been harbouring for some time as well. Apologies to Ken – I didn’t have the time to get a Blogger account to view his profile and cite his name more fully.

Ouch of the day — ‘We were thinking about librarians’

Tuesday’s Ottawa Citizen carried an article (original here) on the “wild ducks” at IBM’s Almaden labs, where the company gives its employees a significant amount of leeway to run amok with creative and innovative technological explorations – kind of a think lab on steroids.  Here’s an extract that struck me:

Some projects don’t pan out. In the early 1990s, Haas and IBM’s database management team wanted to figure out a way of searching the web, something that didn’t click.

“If we did it right, we might have invented search before Google,” Haas said. “But we had the wrong model, and we totally missed the boat. We were thinking about librarians.” [my emphasis]

Ouch!

I also like the quote that “We’re different. It takes a different kind of craziness here” — an appropriate statement on what I think we need a bit more of in the library community.

Evergreen ILS in the Enterprise

We recently passed the one-year anniversary of our first libraries’ migration to Evergreen, and we are now three months past the last of our libraries to migrate. Our operating environment is a mid-to-large-size government department with above-average technical literacy amongst our clients. We have some serious requirements for interacting with the ILS from certain key internal clients, and I see this enterprise interaction as a bit of a litmus test for the future of the ILS, both in our department and elsewhere.

Some examples from NRCan Library:

  • Integration with our Autonomy full-text search – like most everyone else, one focus of our Enterprise search team is to expand the indexing of full-text web content to include other types of collections. So when our search team [1] came knocking with an opportunity to include the library catalogue, we were confident that our Evergreen ILS would handle the integration well. Getting Autonomy to crawl the library catalogue proved to be no problem, with little development investment on our part (just a page or two of documentation on the SuperCat API and that’s it – a rough sketch of the kind of retrieval involved appears after this list). We even had it crawl during business hours and saw little performance hit on the Evergreen server (the bottleneck was whether Autonomy could ingest records as fast as Evergreen could serve them up). Here’s the first e-mail I got from our outside developer – definitely the kind of feedback I like to see, and I know that with previous ILS systems this would not have been anywhere near as easy or pain-free. Next step: automating the Autonomy engine with RSS feeds direct from Evergreen (for new vs. modified vs. deleted records, etc.).
  • PeopleSoft / Oracle integration with Evergreen’s patron database – part of the weakness of the traditional way we maintained our patron database is that we captured only those who actually walked in to use our library services. Everyone else didn’t make it into the ILS, so we lacked an opportunity to reach a wider audience in marketing “My Account” type services. By migrating / linking all NRCan employees from our PeopleSoft and MS Exchange directories into the Evergreen ILS, we are now better positioned to accomplish two important goals: 1) market the heck out of “My Account” to expose some pretty cool features (like Evergreen’s Bookbags / patron RSS feeds); and 2) get a step closer to single sign-on (Evergreen already has some sites authenticating against LDAP, and we want to move in that direction too for our internal clients). My first run at this integration involved a very manageable series of manual SQL interactions (see the second sketch after this list), but automating the interaction will be the next step once I’ve done this a few more times and understand all the issues (e.g. I discovered that our internal staff directory actually had a couple dozen duplicate “keys” that don’t make sense to me, so I still need to explore a few issues and implications before fully automating).
  • Dealing with the Local Requirements (aka the Itchy-Scratchies) – every library situated in a larger enterprise has them, and you know how frustrating it can be to want a feature that isn’t in wider demand at the moment. The beauty of the open source model is that we don’t have to care whether the rest of the community wants what we need. It’s better and more fun if they do, but it’s not necessary in order to move on your must-have “killer features”. In our context, geographic and geospatial search is very important, so with Evergreen’s extensible platform we can say to the world that we want FGDC or geographic search indexes, or GeoRSS support, NOW – and we’re going to get it NOW (note to map libraries: contact us if you’re interested in collaborating further on this). Our support for GeoRSS just needs a bit more testing and should be released within a couple of weeks, with other “geo-related” features coming later this year.
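To give a feel for the Autonomy integration in the first bullet, here is a minimal sketch of pulling a single record out of Evergreen via SuperCat – essentially what a crawler or fetch task repeats for each record ID it knows about. The host name and record ID are placeholders, and the URL pattern reflects the SuperCat layout as I understand it, so check it against your own Evergreen version before relying on it:

    # Rough sketch only: retrieve one bibliographic record from Evergreen's SuperCat interface.
    import urllib.request

    EVERGREEN_HOST = "http://catalogue.example.gc.ca"  # hypothetical host name

    def fetch_marcxml(record_id):
        """Return the MARCXML for a single bibliographic record, as served by SuperCat."""
        url = f"{EVERGREEN_HOST}/opac/extras/supercat/retrieve/marcxml/record/{record_id}"
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8")

    if __name__ == "__main__":
        # A crawler (or an Autonomy fetch task) would loop over known record IDs instead.
        print(fetch_marcxml(1)[:500])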
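And for the PeopleSoft / patron work in the second bullet, here is a rough sketch of the kind of manual SQL interaction involved, written against Evergreen’s PostgreSQL database with psycopg2. The staging table, its columns, and the abbreviated actor.usr column list are illustrative assumptions – verify them against your own schema (and your own policies for profiles, passwords and org units) before running anything like this:

    # Rough sketch of the manual steps: flag duplicate keys, then load new patrons.
    import psycopg2

    # Connection details are placeholders.
    conn = psycopg2.connect("dbname=evergreen user=evergreen host=db.example.gc.ca")
    cur = conn.cursor()

    # Step 1: look for duplicate "keys" in the staff-directory extract before loading it.
    cur.execute("""
        SELECT employee_key, COUNT(*)
          FROM staging.staff_directory        -- hypothetical staging table
         GROUP BY employee_key
        HAVING COUNT(*) > 1
    """)
    for key, n in cur.fetchall():
        print(f"duplicate key {key} appears {n} times -- resolve before loading")

    # Step 2: load the clean rows into Evergreen's patron table (actor.usr), skipping
    # anyone already present.  Column list abbreviated; home_ou, profile and ident_type
    # values are site-specific, and the throwaway password assumes logins really happen
    # via LDAP / single sign-on rather than the local password.
    cur.execute("""
        INSERT INTO actor.usr (usrname, first_given_name, family_name, email,
                               home_ou, profile, passwd, ident_type)
        SELECT s.employee_key, s.given_name, s.surname, s.email,
               1, 1, md5(random()::text), 1
          FROM staging.staff_directory s
         WHERE NOT EXISTS (SELECT 1 FROM actor.usr u WHERE u.usrname = s.employee_key)
    """)
    conn.commit()
    cur.close()
    conn.close()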

Notably absent from the equation is the extreme pay-per-use / pay-per-product-module approach so dominant in the library automation industry. In some cases it is about not having to throw money at an out-of-date and, IMHO, overly segmented ‘product’ marketplace (we would have had to buy the “Patron API” from a previous vendor to accomplish the PeopleSoft integration), but in other cases it’s not so much about the money as it is about doing business smarter. We’re still making investments, only those investments are paying back more for us and our colleagues.

[1] Just to be clear, NRCan Library doesn’t manage Enterprise search in our department, but we do work with the search team more and more.

Spotted at PGCon 2010

Getting out of the office can be a liability sometimes – too much to keep up with, too many fires, and a ton of work backlog to complete. But with PGCon 2010 hosted in town at the University of Ottawa and very affordable, I almost got it together to make some sessions.

But not being satisfied with missing yet another PGCon [1], I joined in at the pub night, hoping to rub shoulders with one or another of the speakers and participants who might cover some of the “noob” material with me over a beer, and also hoping to network with any other local gc.ca users of PostgreSQL (used at NRCan Library by our Evergreen ILS).

I was fortunate to grab a table with the good folks at PGExperts, and we were later joined by a crew from a small city whose name was very familiar to me. I know only one company from Emeryville, CA. My jaw dropped and my eyes nearly popped out when I realized that III had sent a couple of senior staff to scope things out at PGCon regarding a possible move towards having some PostgreSQL in the backend of III’s products. To be clear, this wasn’t a move towards any open source model surfacing to replace their same old business model; rather, III was looking at a possible strategic move towards PostgreSQL for some of its new products, leaving one big question still open: namely, what database will III use to replace its proprietary system, having successfully exploited it to near end-of-life status?

Very interesting indeed.  I already knew that III is using Lucene (Hibernate?) in its Encore product, some MySQL for selected circulation and patron data, IndexData’s Z39.50 server and, finally, the Apache web server, but none of this means much if users can’t touch the base tools. They might be there, but you can’t find them or use them your own way, and I suspect III will hold onto that model for some time yet even if PostgreSQL gets some traction with any of its products. [2]

[1] Recent years’ PGCons were also held in Ottawa; I found out that Ottawa is a recurring meet-up spot because of the visa hassles many international attendees face getting into the USA, because it’s an affordable city to host events, and so on.

[2] Example: before Encore, III’s official policy was to not disclose what web server it was using. I even had a senior Linux expert come in a few years back and try to determine what web server was running our WebOPAC, with no success. I eventually found out that it was apparently running a proprietary build of Apache 0.98, and IMHO it is probably still running that way with most versions of WebOPAC / WebOPAC Pro out there.
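For the curious, the obvious first step in that kind of detective work is simply to look at the HTTP headers the OPAC sends back – a tiny sketch below, with a placeholder URL. The catch, of course, is that the Server header can be suppressed or rewritten, which is presumably why the exercise got nowhere:

    # Tiny sketch: print the HTTP response headers from an OPAC to see what, if anything,
    # the server discloses about itself.  The URL below is a placeholder.
    import urllib.request

    request = urllib.request.Request("http://opac.example.org/", method="HEAD")
    with urllib.request.urlopen(request) as response:
        for name, value in response.getheaders():
            print(f"{name}: {value}")
    # A "Server:" line may identify the software, or it may be absent or deliberately masked.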

System upgrade: oopsie doopsie at major SD shop

It can happen to any vendor, but this kind of upgrade trouble can’t be much fun for either the library or SirsiDynix. The four-day downtime at the Ottawa Public Library even elicited talk from a city councillor of seeking compensation from SirsiDynix.

I know that in late 2009 the city approved upgrading from SirsiDynix Horizon to SirsiDynix Symphony, and I believe the official target was June 2010, so perhaps this week’s event was related to the big upgrade?

Definitely not good to get this kind of press – see today’s Ottawa Citizen article for full details.

This is what I’m talking about…(Evergreen ILS)

Whenever I’ve talked to colleagues about our move to the Evergreen ILS, one of the many “high level” criteria that gets discussed is our requirement for supporting vendor-neutral discovery alternatives. Moving to a new ILS is one thing, but betting the farm on a single discovery solution is a lesson we should all have learned by now: proprietary systems continue to significantly lock in customers, either technically and/or via business-model traps (e.g. witness the plethora of discussions on next-gen catalogue listservs dealing with *basic* data access issues, or vendors hiding APIs behind costly add-on requirements, etc.).
Discovery interfaces

So when we discuss a slide like the one above, we make note of all the exciting work happening at the discovery level for augmenting your OPAC. There are some compelling proprietary solutions and lots of exciting open source projects now gaining visibility and marketplace traction. But where do you place your bet? One of our requirements was NOT having to place any single bet. We need to invest in a foundation that values keeping our options open and protecting our ongoing investment in the ILS.

Evergreen’s OPAC search is very good and getting better all the time, and there’s lots of exciting work in the pipe from the innovators at ESI as well as from community implementors. But for those libraries (especially academics) looking to integrate other digital content, provide complementary feature sets, or just integrate the ILS with a pre-existing search toolbox, Evergreen’s openness and flexible API provide a refreshing and solid foundation.
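To make “openness and flexible API” a bit more concrete, here is a rough sketch of polling one of Evergreen’s “freshmeat” Atom feeds for recently added records – exactly the kind of hook an external discovery layer or site-wide search can build on. The host is a placeholder and the feed path is my recollection of the SuperCat feed URLs, so treat it as an assumption that may vary across Evergreen versions:

    # Rough sketch: list recently added bibliographic records from an Evergreen Atom feed.
    import urllib.request
    import xml.etree.ElementTree as ET

    EVERGREEN_HOST = "http://catalogue.example.gc.ca"                     # hypothetical host
    FEED_PATH = "/opac/extras/feed/freshmeat/atom-full/biblio/import/20"  # path may vary by version
    ATOM = "{http://www.w3.org/2005/Atom}"

    with urllib.request.urlopen(EVERGREEN_HOST + FEED_PATH) as response:
        feed = ET.parse(response).getroot()

    # Each entry carries enough (id, title, links) for a discovery layer to decide
    # which full records to fetch and index next.
    for entry in feed.findall(ATOM + "entry"):
        title = entry.findtext(ATOM + "title", default="(untitled)")
        record_id = entry.findtext(ATOM + "id", default="")
        print(record_id, title)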

So This is What I’m Talking About: Evergreen 1.6 with Endeca at McMaster University. Kudos to Wiktor for getting that started.

But wait, maybe Endeca is not your flavour. My colleague Warren Layton began some proof-of-concept work on a connector to VuFind. But VuFind not your cup of tea (where’s that Andrew Nagy when you need him!)? How about an Evergreen connector for Blacklight, available next spring?

Or how about… we just say that things are just getting started. Evergreen is providing us with a solid and authoritative indexing engine for our OPAC, and now we also see how a diverse community is beginning to lay out the options, all-you-can-eat buffet style…

NRCan Library update: our department uses Autonomy for full-text search of all website content. We’ve recently been asked about exposing our Evergreen content in the site-wide search, and hope to have something completed next spring or early summer. Right now, our focus is the second and last phase of migrations into our new Evergreen system.

Koha manoeuvres

Very interesting developments in the Koha community, with lots of discussion brewing since Liblime announced its enterprise version last week. Lots of concerns about forks (read also atz’s comments), and a public response from Liblime’s CEO – it’s worth following that thread.

Not being a Koha user, I don’t have much of an immediate stake in the maneuvering going on, but I was struck by Marshall Breeding’s Open Letter to the Koha community where he writes:

There comes a point where an open source software project grows beyond what can be managed through informal channels…Recent events suggest that it’s time to take a closer look at the governance of the Koha project.

I suggest a shift from a community comprised of developers to that of a community focused on the libraries that use the software.

I appreciate Breeding’s industry reviews, but I have to say that he’s been a bit of a downer and a confusion-monger on open source, IMHO: late to the train, and misreading the terrain. The observation about “informal channels” is both inaccurate and a bit of a red herring, and so is the suggestion that “a community comprised of developers” is what needs to be shifted to one “focused on the libraries that use the software.”

On “what can be managed through informal channels,” what is he talking about? Anybody with even minimal experience of these communities can quickly see how much blood, sweat and effort goes into “formal channels,” however you want to define them: commercial support options, community investment models (foundations, vendors, sponsorships, etc.), documentation and support, exploring business models, community building, and so on. But does a small technology startup become Cisco Systems overnight? How many years did it take for some of the more successful FLOSS projects to ‘mature’? The fact is there is running software out there, successfully implemented in a fast-growing segment of libraries.

Second, many of these developers come straight from the library community, and the developer orientation – to the extent that you can call it a dominant community feature – is and was needed because of the limited leadership and vision coming out of library land for making sensible technology investment decisions. Without them, you can’t build something from nothing, and the idea that libraries are somehow divorced from this process is ludicrous. You simply couldn’t have had the success that projects like Koha, Evergreen and others have achieved without the focus being on “libraries that use the software.”

In fact, it’s overwhelmingly the case that library involvement and control is one of the key business drivers for the selection of a FL/OSS system.

On the foundation proposal — a brash, opportunistic plug for the OLE approach — this is nothing new (the open letter was posted before any of the dust had settled – LOL). Foundation support has been discussed in the community for years, but there’s effort and organization involved, and no shortage of other high-priority developments that need to be addressed. So recent events have Liblime re-examining foundation development, and other Koha community members are looking at options too. But little interest has been expressed in the OLE model, which is, furthermore, a very challenging thing to pull off any way you take it. Foundation support also won’t address the immediate concerns about Liblime’s direction, etc.

The periodic ‘spasms’, tweaks in vendors’ business models, blowout discussions about forks, and so on – all of that is important, expected, and part of the terrain to be negotiated.   There should be no surprises here, and I’m glad to see that at least it’s out in the open for all to see and assess…