Bert Arnold reported on the MEGATAQ project, which is concerned with evaluation geared to the needs of designers of multi-media applications. The project sets out a general framework for evaluation, introduces a reference model and provides guidelines for applying specific methods and tools. Telematics applications are defined quite loosely as applications of information technology supported by telecommunication technology and infrastructure. The definition covers, for example, electronic data exchange systems, systems that give users remote access to databases, and systems that support groups working together (audio/video conferencing, computer conferencing, shared workflow systems).
A leading principle of the project is that evaluation should be concerned not only with the technical evaluation tools used, but also with the interaction between users, tools and tasks in the context of social interaction in a wider environment.
Evaluation design implies knowing or deciding:
This initial evaluation reference model is fleshed out by considering the types of interaction in which a new system can play a role; four types of interaction are distinguished:
The MEGATAQ reference model then relates inputs/context of use to processes/interactions, and both of these to first-order and second-order outcomes. In addition, the actual consequences of a system depend not only on the characteristics of input and interaction factors but also on the way the new configuration is introduced into the organisation. Evaluation must therefore take this into account as well.
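As a rough sketch only (the class and field names below are illustrative assumptions, not part of MEGATAQ itself), the chain the reference model describes could be captured in a simple data structure:

```python
from dataclasses import dataclass, field

@dataclass
class ReferenceModel:
    """Illustrative sketch of the MEGATAQ reference model chain.

    The model relates inputs/context of use to processes/interactions,
    and both of these to first- and second-order outcomes; it also
    records how the new configuration is introduced, since that
    affects actual consequences. All names here are assumptions.
    """
    inputs: list[str] = field(default_factory=list)        # context of use
    interactions: list[str] = field(default_factory=list)  # processes
    first_order_outcomes: list[str] = field(default_factory=list)
    second_order_outcomes: list[str] = field(default_factory=list)
    introduction_strategy: str = ""  # how the configuration is introduced
```

Such a structure would let an evaluation plan enumerate, per project, which factors feed into which outcomes before choosing instruments.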
The evaluation process converges to a product, but this product is often updated regularly, which implies an ongoing series of iterative design cycles. The cyclical process starts with a problem situation or an idea for potentially improving a situation. This initiates an analysis of the present situation through more or less in-depth and systematic approaches. In later stages, design and evaluation of prototypes take place, until the system in operational use can finally be assessed for its effects on the context in which it functions. In this cyclical process, evaluation is an ongoing affair. It is not necessarily very time-consuming or complex, but it does require a systematic approach and a careful choice of methods, based on the evaluation questions that have to be answered and on the resources available.
The implications of telematics systems, particularly of advanced multi-media systems, are often difficult to foresee. Systematic future usage scenario development and the analysis of those scenarios can provide a better view of the implications, and can therefore support the choice of the total configuration to be designed and of the success criteria to be used in evaluation.
MEGATAQ supplies a variety of tools to support the evaluation process.
The MEGATAQ Assessment Reference Checklist (MARC-A, MARC-B) helps to assess the actual impacts of the introduction of a new (technological and/or social) system. It consists of open interview questions concerning the impact on all aspects of the reference model. MARC-A refers to a situation where no comparison can be made; MARC-B compares the new situation with a previous one. MARC-C is comparable to MARC-A and MARC-B, but instead of open interview questions it consists of a set of small, standardised and calibrated questionnaire modules.
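The distinctions between the three MARC variants can be summarised as a small decision rule. This is an illustrative reading of the description above, not an official MEGATAQ procedure, and the function name and parameters are assumptions:

```python
def choose_marc_variant(baseline_available: bool, use_questionnaire: bool) -> str:
    """Hypothetical helper: pick a MARC checklist variant.

    Sketch based on the report's description: MARC-A when no comparison
    can be made, MARC-B when the new situation can be compared with a
    previous one, MARC-C when standardised questionnaire modules are
    preferred over open interview questions.
    """
    if use_questionnaire:
        # MARC-C: small standardised and calibrated questionnaire modules
        return "MARC-C"
    if baseline_available:
        # MARC-B: compares the new situation with a previous one
        return "MARC-B"
    # MARC-A: no comparison can be made
    return "MARC-A"
```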
MUSC, the MEGATAQ Usage Scenario Checklist, supports the formulation of usage scenarios.
MACC, the MEGATAQ Anticipated Consequence Checklist, supports the identification of potential impacts of a new system. These impacts can be translated into criteria for future evaluation.
MEGATAQ also provides a summary of a variety of tools developed elsewhere, giving for each a brief description, technical specifications and an assessment of reliability and validity. These tools range from questionnaire-type tests to tests that depend on physiological monitoring, such as heart-rate measures used to assess stress.
Discussion was mainly limited to clarification questions.