Re: applications vs. underlying level
I really agree on this one: you can't set about looking for a good way
to measure something unless you know what it is you're trying to
measure, and a lot of the time we don't.
That's partly why the EAGLES work places so much emphasis on trying
to break down the "quality" of a system into a concrete feature
structure: each node in the structure corresponds to an attribute
you're trying to find a value for.
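To make that concrete, here's a minimal sketch of what such a feature structure might look like as nested attributes with leaf-level measures. The node names and measures are invented for illustration; they are not taken from the actual EAGLES framework.

```python
# Hypothetical quality feature structure: inner nodes group attributes,
# leaf nodes name the attribute's measure and hold a slot for its value.
quality = {
    "usability": {
        "learnability": {"measure": "time to first successful task", "value": None},
        "documentation": {"measure": "coverage checklist score", "value": None},
    },
    "functionality": {
        "translation_accuracy": {"measure": "error rate on test suite", "value": None},
    },
}

def leaf_attributes(node, path=()):
    """Yield (path, measure) for every leaf attribute awaiting a value."""
    if "measure" in node:
        yield path, node["measure"]
    else:
        for name, child in node.items():
            yield from leaf_attributes(child, path + (name,))

for path, measure in leaf_attributes(quality):
    print("/".join(path), "->", measure)
```

Evaluating a system then amounts to walking the tree and filling in a value for each leaf, so disagreements about "quality" can be localized to specific nodes rather than argued in the abstract.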
Lin Chase said:
> Our real difficulty is that we don't know how
> to articulate what the "underlying problems" *are*. For example, in the
> area of spoken language dialog systems the question of how to measure
> whether or not a system is really "understanding" a user is still open. A
> general measure of "understanding" quality has never been developed.
> Everyone agrees that this is critical, but there's wide disagreement about
> what to measure and when. This is not because picking a measure is hard,
> but rather because we don't really know what we mean by "understanding".
> If we can agree on what we mean by "understanding" then we'll be able to
> agree on a measure for it. One good way to get clearer on what we mean by
> "understanding" is to make and test definitions in a variety of application
> contexts. This approach has the advantage of yielding both
> intermediate-term concrete results (as in ATIS) and an increment in
> long-term comprehension of the overall problem.
> Getting applications-level evaluation results is not necessarily at odds
> with making important progress on the underlying issues.
> Thanks for listening,
> Lin Chase
Please note my new e-mail address (old address was email@example.com)
Maghi King | E-mail: Margaret.King@issco.unige.ch
ISSCO, University of Geneva | WWW: http://issco-www.unige.ch/
54 route des Acacias | Tel: +41/22/705 71 14
CH-1227 GENEVA (Switzerland) | Fax: +41/22/300 10 86