However, it is worth signalling that wider vocabulary acceptance does not necessarily make a better product: it may, for example, be desirable that a checker reject archaisms, since these can be misspellings of current words. This aspect is strongly user-dependent.
Suggestion adequacy was evaluated against the original list of words, i.e. by how exactly the suggested words matched the source list. For instance, if we insert an error into the word "cot" and obtain "*cit", ideally the first suggestion offered by the spelling checker should be "cot". Other correct words may be just as plausible, but we cannot take that into consideration. So, if the first suggestion is an equally plausible word (for instance "cut"), the checker is, in a certain sense, mis-evaluated.
This is a problem we could not envisage solving during the present project; accounting for equally plausible alternatives when computing the percentage of good suggestions would require considerably more work, especially if done automatically.
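The first-suggestion measure described above can be sketched as follows. This is a minimal illustration, not the project's actual evaluation code; the `suggest` interface is a hypothetical stand-in for a real spelling checker's suggestion API, and the toy suggestion table is invented to show the caveat about equally plausible words.

```python
def first_suggestion_accuracy(pairs, suggest):
    """Fraction of cases where the checker's first suggestion restores
    the original word.

    pairs:   iterable of (original_word, corrupted_word) tuples
    suggest: function mapping a misspelled word to a ranked list of
             suggestions (hypothetical checker interface)
    """
    pairs = list(pairs)
    if not pairs:
        return 0.0
    hits = 0
    for original, corrupted in pairs:
        suggestions = suggest(corrupted)
        # Only an exact match in first position counts as a success.
        if suggestions and suggestions[0] == original:
            hits += 1
    return hits / len(pairs)


# Toy checker illustrating the caveat from the text: for "*cit" it ranks
# the equally plausible "cut" ahead of the intended "cot", so the checker
# is scored as wrong even though "cut" is a reasonable correction.
toy_suggestions = {"cit": ["cut", "cot", "cat"]}

def toy_suggest(word):
    return toy_suggestions.get(word, [])

print(first_suggestion_accuracy([("cot", "cit")], toy_suggest))  # 0.0
```

Under this metric the toy checker scores zero on the single test pair, even though its first suggestion is a legitimate word, which is exactly the mis-evaluation discussed above.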