Nigel Bevan's second presentation was a report on two European projects, INUSE and RESPECT, both dealing with the evaluation of the usability of software, and with how to determine whether a given piece of software is suitable for a given task.
He started by suggesting that usability involves a great deal more than screen layouts. At a general level, its goals are to enable more efficient workflow and transactions, to remove distractions from customer engagement, and to enhance staff and customer satisfaction. Given these goals, it is clear that peripherals are as important as software. Organisations owe it to themselves to take usability seriously: good usability enables more efficient business processes and more efficient communication, improves the intuitiveness (the guessability) of the software and therefore the ease of learning to use it, and, once again, improves staff satisfaction.
If we focus on user and organisational needs, usability reduces development times, as well as the amount of training and documentation required. It improves productivity by providing simpler interfaces and thereby reducing the number of user errors. It improves the organisation's competitive edge, since expectations for ease of use are rising, competing products are increasingly usable, and usability can be given a high profile in advertising. And since usability also improves the quality of life by reducing stress and increasing user satisfaction, staff turnover is lower and greater stability is achieved.
Put more informally, attention to usability means faster service from, for example, a bank teller, faster learning of a new program or a new environment, and less troubleshooting at the help desk.
There is also a legal aspect to usability in the form of health and safety
legislation such as the European Directive on Display Screen Equipment.
To make the benefits of usability more concrete, Nigel described a case study based on the introduction of new software at Hewlett Packard, where user-centred design methods were used to redesign software for identifying network problems. The results were measured along several dimensions:
| Measure | Old | New |
|---|---|---|
| Time to finish task | 9.4 mins | 4.1 mins |
| Problems identified | 16% | 78% |
| Average length of call | 30 mins | 10 mins |
| Size of manual | 25 pages | 4 pages |
| People needing the manual | 53% | 3% |
| User satisfaction rating | 3.5 | 6.8 |
In addition to the obvious benefits to its customers, Hewlett Packard recovered its costs in eighteen months.
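The payback arithmetic behind such a claim is simple. A minimal sketch, with purely hypothetical figures (the actual Hewlett Packard costs and savings were not reported in the presentation):

```python
# Hypothetical figures for illustration only: the actual Hewlett Packard
# costs and monthly savings were not given in the presentation.
redesign_cost = 180_000   # one-off cost of the user-centred redesign
monthly_saving = 10_000   # support and task-time savings per month

# Payback period: months until cumulative savings cover the redesign cost.
payback_months = redesign_cost / monthly_saving
print(f"Costs recovered after {payback_months:.0f} months")  # 18 months
```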
ISO has recognised the importance of usability. ISO DIS 9241, Ergonomic requirements for office work with visual display terminals (VDTs), Part 11: Guidance on usability, offers the following definition of usability:
'The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.'
This suggests an operationalised view of usability as the quality of a product in use: the definition concerns the use of an interactive system by its intended users to achieve specific work goals in particular work environments.
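The definition lends itself directly to measurement. The sketch below follows the usual MUSiC-style operationalisation (effectiveness as percentage goal achievement, efficiency as effectiveness per unit time); the function names are illustrative, not part of the standard:

```python
def effectiveness(goals_achieved: int, goals_attempted: int) -> float:
    """Extent to which specified goals are achieved, as a percentage."""
    return 100.0 * goals_achieved / goals_attempted

def efficiency(effectiveness_pct: float, task_minutes: float) -> float:
    """Effectiveness per unit of resource expended (here, time)."""
    return effectiveness_pct / task_minutes

# A user who achieves 7 of 8 task goals in 4.1 minutes:
e = effectiveness(7, 8)    # 87.5
print(efficiency(e, 4.1))  # about 21.3 (% per minute)
# Satisfaction is measured separately, e.g. with a questionnaire such as SUMI.
```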
Usability is achieved by taking it into account from the earliest design stages. It is based on a sound understanding of the users and their tasks, and iterative development and evaluation of prototypes with the users will help to ensure it.
ISO 13407 sets out the human centred design process for interactive systems in the following steps:
1. Plan the human centred process
2. Specify the context of use
3. Specify user and organisational requirements
4. Produce design solutions
5. Evaluate designs against user requirements
The last four steps constitute a cycle: if evaluation reveals that user requirements are not met, the process returns to step 2, and steps 2 through 5 are repeated.
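The loop structure can be made explicit in a schematic sketch. Every helper below is a stub standing in for a whole design activity; the names and return values are invented for illustration:

```python
# Schematic sketch of the ISO 13407 cycle. The helpers are stubs,
# not real design activities; names and values are invented.

def specify_context_of_use() -> dict:
    return {"users": "data entry clerks", "tasks": ["enter record"]}

def specify_requirements(context: dict) -> dict:
    return {"accuracy_pct": 95, "max_minutes": 10}

def produce_design_solutions(requirements: dict) -> str:
    return "prototype"

def evaluate_against_requirements(design: str, requirements: dict) -> bool:
    return True  # in practice: user-based evaluation of the prototype

def human_centred_design(max_iterations: int = 10) -> str:
    # Step 1, planning the human centred process, happens once, up front.
    for _ in range(max_iterations):
        context = specify_context_of_use()               # step 2
        reqs = specify_requirements(context)             # step 3
        design = produce_design_solutions(reqs)          # step 4
        if evaluate_against_requirements(design, reqs):  # step 5
            return design                                # requirements met
    raise RuntimeError("user requirements still not met; keep iterating")

print(human_centred_design())
```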
Nigel then went through these steps one by one.
Planning the human centred process involves three elements. In this context, the INUSE project has identified a process of growth towards usability maturity, set out as a series of stages.
The usability of a product is affected not only by the features of the
product itself but also by its context of use. The second step, specifying
the context of use, involves specifying users, the tasks they carry out
and the technical, organisational and physical environment in which the
product will be used in sufficient detail to support design, making sure
that information comes from credible sources. The date and time when the
product is used is also part of the context. The specification should be
confirmed by the users and, of course, made available to the design team.
MUSiC, the product of another European project, offers tools for usability context analysis. It identifies characteristics of the intended context of use in terms of users, tasks, environments and time, and offers a structured method for considering and documenting key aspects of a system which may affect its usability. The context specification serves two purposes during the design process: it helps the designers to consider the intended contexts in which a product will actually be used, and it provides a checklist to ensure that all relevant design issues have been considered. For evaluation, the context specification also defines the context in which usability should be assessed.
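One convenient way to record such a specification is as a structured document that the design team can consult and the users can confirm. A minimal sketch, assuming a simple record with one field per contextual factor (the field names and example values are illustrative, not MUSiC's actual forms):

```python
from dataclasses import dataclass

@dataclass
class ContextOfUse:
    """A minimal context-of-use record in the spirit of MUSiC context analysis."""
    users: list[str]                 # who will use the product: skills, experience
    tasks: list[str]                 # the goals and tasks to be carried out
    technical_environment: str       # hardware, software, network
    organisational_environment: str  # work practices, organisational structure
    physical_environment: str        # workplace conditions
    time_of_use: str                 # when the product is used

# Invented example, loosely based on the bank teller case discussed later:
teller_context = ContextOfUse(
    users=["bank tellers with less than one week of system training"],
    tasks=["deposit cash", "withdraw cash", "deposit cheque", "withdraw cheque"],
    technical_environment="teller workstation connected to the branch server",
    organisational_environment="retail branch with a supervisor present",
    physical_environment="open counter with frequent interruptions",
    time_of_use="branch opening hours, with lunchtime peaks",
)
print(teller_context.tasks)
```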
Step 3 involves specifying the user and organisational requirements. This implies analysis of the user interface, of the jobs, of task performance, and of work design. Management considerations include the management of change and training. Both operational and financial objectives have to be taken into account.
ISO 14598-1, Software Product Evaluation - General Overview, relates usability to quality. Three levels are distinguished: requirements definition, specification, and design and development. (The overall scenario supposed is very closely related to evaluation for development purposes.) A chain passing down through each level and back again can be discerned.

Going down the chain, the real world determines what the needs are. At the specification level, the needs determine the external quality requirements, which relate to the specification of system behaviour. These in turn determine, at the design and development level, the internal quality requirements, which relate to software attributes.

The chain then turns back. Internal quality, measured by internal metrics, both verifies that the internal quality requirements have been met and serves as an indicator of external quality, which, at the specification level, relates to system integration and testing and is measured by external metrics. External quality in turn is an indicator of quality in use, which, back at the requirements level and at the end of the chain, relates to the system in operation and is also measured by external metrics. Quality in use relates very strongly to the original needs, with a feedback loop between the two.
In terms of the new draft of ISO 9126, quality in use is seen as a composite function of all the quality characteristics.
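On that reading, quality in use could be estimated as a weighted combination of scores for the individual characteristics. A purely illustrative sketch (the draft standard prescribes neither these weights nor this formula):

```python
# Illustrative only: the draft standard prescribes neither these weights
# nor this formula. Scores are on a notional 0-1 scale and are invented.
scores = {
    "functionality":   0.8,
    "reliability":     0.9,
    "usability":       0.7,
    "efficiency":      0.6,
    "maintainability": 0.5,
    "portability":     0.4,
}
weights = {name: 1.0 / len(scores) for name in scores}  # equal weights, for simplicity

quality_in_use = sum(weights[name] * score for name, score in scores.items())
print(f"Composite quality in use: {quality_in_use:.2f}")  # 0.65
```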
MUSiC, primarily based on ISO 9241-11, sets out requirements for effectiveness, efficiency and satisfaction. These requirements are expressed in very concrete terms. For example, a requirement for user performance might be 'all data entry clerks will be able to complete the task with at least 95% accuracy in under 10 minutes'. A requirement for user satisfaction might be 'the mean score on the SUMI scale will be greater than 50'. (Author's note: the SUMI scale is a validated scale used for the measurement of user satisfaction. It is publicly available.)
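Requirements stated this concretely can be checked mechanically against measured results. A minimal sketch of such checks, using the two example criteria just quoted (the test data are invented):

```python
def meets_performance_requirement(results: list[tuple[float, float]]) -> bool:
    """results holds (accuracy_pct, minutes) for each data entry clerk.
    Requirement: every clerk reaches at least 95% accuracy in under 10 minutes."""
    return all(acc >= 95.0 and mins < 10.0 for acc, mins in results)

def meets_satisfaction_requirement(sumi_scores: list[float]) -> bool:
    """Requirement: the mean SUMI score is greater than 50."""
    return sum(sumi_scores) / len(sumi_scores) > 50.0

# Invented test data:
print(meets_performance_requirement([(97.0, 8.2), (96.5, 9.1)]))  # True
print(meets_satisfaction_requirement([48, 55, 62, 51]))           # True (mean 54)
```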
Step 4 is to produce design solutions. Basic to this is understanding the users and their tasks; clearly, this understanding is based on the context of use. It will also take into account known problems with current systems. Producing solutions will make use of existing knowledge in the form of standards and guidelines, for example ISO 9241, which sets out ergonomic requirements for office work with visual display terminals; Parts 10 to 17 give guidance on software.
Finally, mock-ups and prototypes will be produced, either by using programming tools which allow rapid production of prototypes (Visual Basic, for example) or on paper.
Step 5 involves evaluating designs against user requirements, by obtaining user feedback and, if necessary, iterating the design.
User feedback is obtained through user-based evaluation, where MUSiC again has advice to offer. Evaluation is mainly carried out by observing the user performing his task. The observation may, for example, be via a video camera in order not to interfere too much with the user's behaviour. (See laboratory testing and scenario testing in the first EAGLES report).
Nigel offered another concrete example, this time of a bank teller's system, to show how emphasis on usability helped to ensure efficiency and therefore user and customer satisfaction. (Author's note: I'm not sure that this example made Nigel's case. Four operations are considered: depositing cash, withdrawing cash, depositing cheques and withdrawing cheques. Although the time taken for the first two (the cash operations) is reduced with the new system as compared to the old one, the time taken for the latter two increases, if only slightly. Can anyone help me to clarify this?)
On a user satisfaction measure which took into account six factors (global satisfaction, efficiency, affect, helpfulness, control and learnability), the new system scored well compared to the old on every factor other than efficiency, and here a learning effect may be involved.
INUSE and RESPECT are two projects under the Telematics Applications Programme, in the sub-programme on Information Engineering and Telematics Engineering. They run from January 1996 to March 1998, with manpower allocations of 12 person-years and 5 person-years respectively. The goal of the two projects is to establish a network of Usability Support Centres, which will provide guidance on user-centred design, usability validation, multimedia design and requirements engineering.
The Guide to Methods for User Centred Design developed by MAPI and extended by INUSE supports three life cycle phases, and for each phase INUSE proposes methods.
Usability, once seen as a craft, is moving towards being an engineering discipline. Making the move involves asking systematic questions about which methods to apply, when, and at what cost. Criteria for selecting methods include the applicable stage of development, the type of results provided, and the number of analysts, analyst days and users or developers required. Nigel offered the following tables summarising these characteristics for various methods.
Early methods

| Method category | Individual methods | Applicable stages of development | Type of results provided | Number of analysts required | Number of analyst days to apply | Number of users/developers required |
|---|---|---|---|---|---|---|
| Planning | Usability planning | Planning (+ early and late) | Plans | 1 | 4 | 1-4 |
| Planning | Usability context analysis | Planning (+ early and late) | Plans | 1-2 | 2-3 | 2-8 |
| Planning | Cost benefit analysis | Planning (+ early and late) | Plans | 1 | 5-20 | 1-2 |
| Guidance and standards | ISO 9241 applicability | Early | Plans/designs/QA info | 1 | 3-5 | 2-8 |
| Guidance and standards | ISO 9241 conformance | Early (+ late) | Feedback/QA info | 1 | 5-10 | 1+ |
| Early prototyping | Paper prototyping | Early | Design/feedback | 2 | 5-6 | 2-5 |
| Early prototyping | Video prototyping | Early | Design/feedback | 2 | 2-3 | 0 |
| Early prototyping | Computer-based prototyping | Early | Design/feedback | 1-2 | see user-based observation | see user-based observation |
| Early prototyping | Wizard of Oz prototyping | Early | Design/feedback | 2 | see user-based observation | see user-based observation |
Late methods

| Method category | Individual methods | Applicable stages of development | Type of results provided | Number of analysts required | Number of analyst days to apply | Number of users/developers required |
|---|---|---|---|---|---|---|
| Expert-based evaluation | Heuristic evaluation | Early (+ late) | Feedback | 2-3 | 3 | 0 |
| Expert-based evaluation | Usability walkthrough | Early (+ late) | Feedback | 1 | 2-3 | 4 |
| Expert-based evaluation | CELLO inspection | Early (+ late) | Feedback | 4-6 | 2 | 0 |
| User-based testing and performance measurement | User-based observation (for design feedback) | Late (+ early) | Feedback | 1-2 | 5-7 | 3-10 |
| User-based testing and performance measurement | User-based observation (for metrics) | Late (+ early) | Feedback/metrics | 1-2 | 8-13 | 8-30 |
| User-based testing and performance measurement | Co-operative evaluation | Late (+ early) | Feedback | 1-2 | 5-10 | 1-5 |
| User-based testing and performance measurement | Supportive evaluation | Late (+ early) | Feedback | 2 | 8-10 | 4 |
| Subjective assessment | SUMI | Late (+ early) | Feedback/metrics | 1 | 2-5 | 8-20 |
| Subjective assessment | Cognitive workload | Late (+ early) | Feedback/metrics | 1 | 2-5 | 8-20 |
| Subjective assessment | Focus groups | Late (+ early) | Feedback | 1 | 3-5 | 6-8 |
| Subjective assessment | Individual interviews | Late (+ early) | Feedback | 1 | 1-3 | 3-10 |
Nigel's presentation closed with a summary of user-centred requirements engineering, setting out its main stages and, for each stage, its outputs.
In all usability evaluation, however, the major problems remain the selection of typical users, the definition of typical tasks and the definition of the typical environment.
Much discussion concerned the point raised by Nigel that it is difficult to select the typical user. A point made with respect to other talks resurfaced: when a human is involved, it is difficult to disentangle assessment of the human himself from assessment of a tool he is using, computerised or not. Nigel accepted this point, emphasising that selection of users is indeed the hardest part of the task and that dumb users do indeed exist. He also pointed out, though, that if in a given work environment dumb users are the norm, then the system has to adapt to them: they are, in this context, the typical user.
Nigel also suggested that in a dumb user situation, it might be fruitful to compare two software products by assessing which one is easier or quicker when used to perform the user's normal task, rather than asking him to undertake some new task. This would minimise the effect of the user's relative competence whilst at the same time offering information about his needs.
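Such a comparison amounts to a simple within-subject design: each user performs his normal task on both systems, and the paired timings are compared, so each user acts as his own control. A minimal sketch, with invented figures:

```python
# Invented figures: minutes taken by the same four users to perform their
# normal task, first on system A, then on system B.
timings = [(9.5, 4.2), (8.8, 5.1), (10.2, 4.8), (9.0, 4.5)]

# Within-subject differences factor out each user's individual competence.
differences = [a - b for a, b in timings]
mean_saving = sum(differences) / len(differences)
print(f"System B is faster by {mean_saving:.1f} minutes per task on average")
```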