Today I read a paper titled “Neural network ensembles: Evaluation of aggregation algorithms”.
The abstract is:
Ensembles of artificial neural networks show improved generalization capabilities that outperform those of single networks.
However, for aggregation to be effective, the individual networks must be as accurate and diverse as possible.
An important problem, then, is how to tune the ensemble members in order to achieve an optimal compromise between these two conflicting conditions.
We present here an extensive evaluation of several algorithms for ensemble construction, including new proposals, and compare them with standard methods from the literature.
We also discuss a potential problem with sequential aggregation algorithms: the infrequent but damaging selection, through their heuristics, of particularly bad ensemble members.
We introduce modified algorithms that cope with this problem by allowing individual weighting of aggregate members.
Our algorithms and their weighted modifications compare favorably against other methods in the literature, yielding an appreciable improvement in performance on most of the standard statistical datasets used as benchmarks.
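The core idea of the weighted modifications, as I understand the abstract, can be sketched with a toy example: a plain average of member predictions is vulnerable to a single bad member, while per-member weights can down-weight it. This is only an illustrative sketch with made-up numbers and function names, not the paper's actual algorithm.

```python
import numpy as np

def aggregate(member_preds, weights=None):
    """Aggregate ensemble member predictions.

    With no weights this is a plain average; with weights, each member's
    contribution is scaled, so a bad member can be down-weighted.
    (Illustrative only; not the paper's selection heuristic.)
    """
    preds = np.asarray(member_preds, dtype=float)  # shape: (n_members, n_samples)
    if weights is None:
        return preds.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize weights to sum to 1
    return w @ preds         # weighted average over members

# Three members; the third is a "particularly bad" ensemble member.
preds = [[1.0, 2.0], [1.2, 1.8], [9.0, -5.0]]
print(aggregate(preds))                       # plain average, skewed by the bad member
print(aggregate(preds, weights=[1, 1, 0.1]))  # down-weighting limits its influence
```

The weighted result stays close to the two good members' predictions, which is the intuition behind letting individual weights soften the impact of a bad selection.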