
Re: EAGLES II workshop

As promised, here is a first batch of questions raised at the LREC
Conference in Granada.  More will follow over the next few days.

Please let us have your reactions to them through the list.

Maghi King.

Questions asked by Lin Chase (LIMSI).

1.  Has there been any discussion of confidence measures for human
annotated data?

2. By recording only categorial decisions (what class is this
word/phrase/document) we lose important information that we might retain
by allowing the human annotator to say if s/he was uncertain about that
judgement. Comments?
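One concrete way to approach questions 1 and 2 is through inter-annotator
agreement: Cohen's kappa corrects raw agreement between two human annotators
for chance, giving a confidence-style measure over annotated data. The sketch
below (not from the original discussion; the function name and example tags
are illustrative) shows a minimal computation:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's own label distribution
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two annotators tag the same six words
a = ["N", "V", "N", "ADJ", "N", "V"]
b = ["N", "V", "V", "ADJ", "N", "N"]
print(round(cohens_kappa(a, b), 3))  # 0.455
```

Allowing annotators to flag uncertain judgements (question 2) could then be
modelled by computing kappa separately over the "confident" and "uncertain"
subsets, making the lost information explicit.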

3. The Americans all seem to assume that the 'evaluator' of the
technology (or the 'organizer') is also the funding agency. This
alignment of performance demand with financial incentive seems to be a
major characteristic of the US programs. Are we in Europe going to be
successful if we don't choose a similar approach?

4. Raj Reddy, who runs the speech recognition research group at Carnegie
Mellon University, always says that the only real test of whether you've
succeeded in developing a system is that if you give it to users, when
you come to take it away, they fight you for it. What can we do (say with
spoken language information systems) to use this kind of "evaluation"?

Please note my new e-mail address (old address was king@divsun.unige.ch)
Maghi King                   | E-mail: Margaret.King@issco.unige.ch
ISSCO, University of Geneva  | WWW: http://issco-www.unige.ch/
54 route des Acacias         | Tel: +41/22/705 71 14
CH-1227 GENEVA (Switzerland) | Fax: +41/22/300 10 86