Today I read a paper titled “Forgetting Exceptions is Harmful in Language Learning”.
Its abstract is:
We show that in language learning, contrary to received wisdom, keeping exceptional training instances in memory can be beneficial for generalization accuracy.
We investigate this phenomenon empirically on a selection of benchmark natural language processing tasks: grapheme-to-phoneme conversion, part-of-speech tagging, prepositional-phrase attachment, and base noun phrase chunking.
In a first series of experiments we combine memory-based learning with training set editing techniques, in which instances are edited based on their typicality and class prediction strength.
Results show that editing exceptional instances (with low typicality or low class prediction strength) tends to harm generalization accuracy.
In a second series of experiments we compare memory-based learning and decision-tree learning methods on the same selection of tasks, and find that decision-tree learning often performs worse than memory-based learning.
Moreover, the decrease in performance can be linked to the degree of abstraction from exceptions (i.e., pruning or eagerness).
We provide explanations for both results in terms of the properties of the natural language processing tasks and the learning algorithms.