A network for new and aspiring library professionals
UX and usability have been buzzwords in the library world ever since I gained my first professional post. Being mindful of user experience has proven to be the most important part of my roles, both as a subject librarian and in digital support. I’m going to share some of the lessons learned from the two service-wide usability studies I’ve been involved in, as well as a few tips for those of you who are keen to run usability studies of your own. (It would make a great dissertation topic!)
The first thing I would say, in a work-based scenario, is not to be intimidated by the time-consuming methodologies or tech-heavy approaches used by very serious usability testers. It will almost certainly be the case that a usability study is something you have to carry out alongside all the usual parts of your day job. You may not have access to eye-tracking technology, and you may not have users queuing up to be part of a study. Time, money and resources will inevitably be a factor, but some testing is better than no testing. You will learn something about your users and your systems by carrying out even the most basic of tests.
I have worked at two institutions that have carried out usability studies on their most prominent systems – the catalogue and the respective discovery systems. As the focus of both studies was on users and usage, it was very important to go beyond surveying students and to gather more robust data. In both cases a three-fold approach was used: a survey, observed search tasks and focus groups.
A survey is the simplest and quickest way to capture key information about information-seeking behaviours and experiences of the systems. The observed search tasks allowed us to watch users interacting with the system, providing useful data on how usable it is in practice rather than in theory. A study of this kind can help identify patterns of use, or information-seeking behaviours, that affect the performance of systems. In both studies we observed which features users were prioritising, and we were able to identify “unknown unknowns”: instances in which users don’t know that they aren’t using the system effectively. These would not have been captured had the analysis been confined to surveys alone; observing what users do, rather than simply listening to what they say, can be far more telling.
In both studies, participants were asked to use the systems for fifteen to twenty minutes as though they were looking for information for an assignment. We didn’t want the tasks to be too prescriptive, so whilst there was some known-item searching to test how the systems responded, there was also the opportunity for students to research a topic of their own choosing, using their own terms.
In one of the studies, facilitating staff observed one or two students each and recorded their observations on a checklist, e.g. which filters the student used, whether they made use of any advanced settings, and so on. In the second, we relied on screen-capture technology and provided students with headsets so they could talk us through their searches. The second approach was more comprehensive (although we did have one technology blip, with a student not saving their recording!), but the checklist approach provided us with plenty of data when such equipment was not available to us.
Finally, usability participants were involved in a focus group, which provided in-depth, contextual information that could not be measured or quantified through the observation checklists or recordings. Perceptions of relevance and satisfaction are particularly subjective and would be difficult to capture without a focus group following on from the usability study.
Both studies provided useful information about search techniques, use of filters, and features which were never or very rarely used (almost all personalisation settings and advanced search). It was clear which elements were useful and straightforward, and which were confusing, unobvious or even useless. In the second study, we used this information to strip back and redesign the interface of the discovery tool, putting user experience front and centre. However, you may also learn about things you have no power to change – a feature that confuses every user who encounters it, but which you can’t tweak. This is still useful! You can feed it back to vendors to encourage them to improve their product – evidence-based change.
Another pragmatic use for the insight you gain from usability testing is to let it inform your teaching. It can direct your approach and some of your content. You can begin with an awareness of how students encounter the system, and flag up quick tips that will save users a great deal of time before problems arise.
A more tentative example of usability testing I have been involved in came when I was a new member of staff. I started a new role at the beginning of the year, and a very savvy, user-experience-focused Collections Manager seized the opportunity to record my first impressions of the institution’s catalogue and discovery service. Given my professional training and experience, I couldn’t exactly claim to be a typical new user – rather an experienced user of new systems. But before I became familiar with the systems, I recorded everything that made me stop or hesitate: instances in which I was uncertain of what I was looking at, or of whether I was navigating efficiently. I then presented this to library staff in a training session, where it generated a great deal of discussion. Many members of helpdesk staff commented that users essentially flag up the same things – though they don’t articulate them as design flaws in the way I had done. They remarked how easy it is to get used to the way a system works and forget that parts of it are genuinely problematic. So if you have a new starter in your team, or if you are starting a new role, this is a useful exercise to complete. It brings you a little closer to that first-time user.
Information professionals very quickly become familiar with, and expert in, the systems they use. Though we are often mindful of a system’s capabilities and limitations, this doesn’t necessarily correspond with a user’s experience. When observing students, there have been occasions when I have been truly baffled by the way a user navigated a system, only to find that several other users I went on to observe did exactly the same thing. It is eye-opening, and a fantastic reminder that putting up with and figuring out poor system design is simply not something many of our users are prepared to do. Whilst we may have got used to the way a system works and forgotten that it once caused us confusion, poor usability will drive many of our users away from our library-sponsored products. The most basic of tests can challenge your assumptions and give you invaluable insights. So, never assume. We all know why…