Neural Network Benchmark


Benchmark of Growing Neural Gas (GNG), Growing Cell Structures (GCS) and Fuzzy Artmap (FAM): A Summary

The benchmark started from the question of which neural network best solves a pattern classification task: the well-known multi-layer perceptron (MLP), or one of the more recently developed incremental networks FAM, GCS and GNG?

This question was examined in the framework of four real-world datasets (a subset of the Proben1 collection) representing different levels of difficulty. The first dataset (cancer) is a relatively easy classification problem with complex boundaries between classes, little overlap between classes and a sufficient number of data points. The second dataset (diabetes) increases the degree of difficulty by adding overlapping classes to the complex boundaries. The third dataset (glass) exhibits, besides complex boundaries and overlapping classes, a lack of data points. The same is true for the fourth dataset (thyroid); in addition, however, thyroid has linear boundaries between the classes due to its boolean input variables.
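For illustration only, the sketch below shows the general shape of such a benchmark loop in Python with scikit-learn: repeated stratified splits of a cancer-like dataset and the mean test error of an MLP. This is an assumption-laden stand-in, not the original Proben1 splits or parameter settings, and scikit-learn provides no GNG, GCS or FAM implementations, so only the MLP part is sketched.

# Minimal benchmark sketch (not the original Proben1 protocol):
# repeated splits, one classifier, mean test error.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)  # stand-in for the "cancer" dataset

errors = []
for seed in range(5):  # repeated splits to estimate the mean test error
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=seed)
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,),
                                      max_iter=2000, random_state=seed))
    clf.fit(X_tr, y_tr)
    errors.append(1.0 - clf.score(X_te, y_te))  # classification error on the test split

print(f"mean test error: {np.mean(errors):.3f} +/- {np.std(errors):.3f}")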

The reference point for this benchmark was the extensive study of MLPs by L. Prechelt (1994). From a theoretical viewpoint, one could expect the MLP to perform better than the incremental networks, because the MLP performs a global adaptation to the training dataset whereas the incremental networks perform a local adaptation. The results of this benchmark show that this is clearly not the case; on the contrary, the MLP performs in the same range as the incremental networks. Thus, eliminating the number of hidden nodes as a free parameter through the incremental mechanism outweighs the disadvantage of local adaptation in the incremental networks.
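To make the global-versus-local contrast concrete, here is a small, simplified numpy sketch (not the exact update rules of the benchmarked networks): a batch gradient step adjusts every weight of a model at once, whereas a competitive, prototype-based step moves only the winning unit towards the current input.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # toy input data

# Global adaptation: a linear model, all weights updated from the whole batch.
W = rng.normal(size=(2, 2))
targets = X @ np.array([[1.0, 0.0], [0.0, -1.0]])
grad = X.T @ (X @ W - targets) / len(X)  # gradient computed over the full batch
W -= 0.1 * grad                          # every entry of W changes

# Local adaptation: a set of prototypes, only the winner is moved.
protos = rng.normal(size=(5, 2))
x = X[0]
winner = np.argmin(np.linalg.norm(protos - x, axis=1))
protos[winner] += 0.2 * (x - protos[winner])   # only this unit is adapted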

Originally, we aimed at a clear answer to the question of which network is best in terms of classification error. Since no network always performed significantly better than the others, there is no clear answer to this question. However, we found some rules that state how well a certain network performs given certain properties of a dataset. These rules are summarised here: except for the fourth dataset, MLP, GCS and GNG perform similarly, whereas FAM performs worse, and for the first two datasets this difference is significant according to a t-test. Hence, FAM tends to have problems not only with datasets with overlapping classes but also with datasets with complex boundaries. For the third dataset, despite its overlapping classes, the performance of FAM is not significantly worse, because its more geometrically oriented behaviour copes better with the few data points in this dataset than the statistically oriented GNG and GCS.
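The significance statements above refer to t-tests over repeated runs. The following sketch shows the kind of comparison meant, using scipy's two-sample t-test on hypothetical per-run error values; the numbers are illustrative, not results from the paper.

from scipy import stats

errors_mlp = [0.031, 0.028, 0.035, 0.030, 0.033]   # hypothetical per-run test errors
errors_fam = [0.052, 0.047, 0.055, 0.050, 0.049]

t_stat, p_value = stats.ttest_ind(errors_mlp, errors_fam)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate a significant difference in mean error.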

For the fourth dataset a different picture emerges: GNG and GCS perform significantly worse than MLP and FAM. This is mainly due to the linear boundaries between classes that follow from the boolean input variables. For such boundaries, the hyperrectangle-based regions of FAM and the polygon-based regions of the MLP are more suitable than the radial regions of GNG and GCS.
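The geometric argument can be illustrated with two toy membership tests (simplified, not the actual FAM or GNG/GCS equations): a hyperrectangle category accepts a point if it lies inside an axis-aligned box, while a radial unit accepts it if it lies within a given distance of a prototype. Axis-aligned boxes fit the boundaries induced by boolean inputs more naturally than spheres do.

import numpy as np

def in_hyperrectangle(x, low, high):
    """Axis-aligned box membership, as in hyperrectangle-based categories."""
    return bool(np.all(x >= low) and np.all(x <= high))

def radial_membership(x, centre, radius):
    """Distance-based membership, as in radial (prototype-based) regions."""
    return bool(np.linalg.norm(x - centre) <= radius)

x = np.array([0.9, 0.1])
print(in_hyperrectangle(x, low=np.array([0.5, 0.0]), high=np.array([1.0, 0.5])))  # True
print(radial_membership(x, centre=np.array([0.5, 0.5]), radius=0.4))              # False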

Apart from the classification error, other performance measures were examined in this work. For the number of inserted nodes, which is an important measure for the incremental networks, FAM performs best. However, the training of GNG and GCS can be tuned so that they insert fewer units and still perform better than FAM. With respect to the number of epochs, FAM shows the shortest training time; GNG and GCS also converge rapidly, whereas the MLP typically converges slowly. Finally, the variation of performance with the variation of parameters was examined. Here, GNG clearly outperforms the other networks. For the MLP, the time-consuming search for a good architecture and the best choice of parameters plays a crucial role. Only for the datasets with few data samples does FAM show less variation in behaviour than GNG.

In sum, considering the similar classification performance of MLP, GNG and GCS, the rapid convergence of GNG and GCS, and the small dependence of GNG on parameter variation, the overall ranking of the networks in descending order is: GNG, GCS, MLP and FAM. Only when a dataset shows linear boundaries between classes could FAM and MLP perform better than GNG and GCS.


F. Hamker and D. Heinke. Implementation and Comparison of Growing Neural Gas, Growing Cell Structures and Fuzzy Artmap (Report 1/97). Schriftenreihe des Fachgebietes Neuroinformatik der TU Ilmenau, ISSN 0945-7518. Fachgebiet Neuroinformatik, TU Ilmenau, 1997.

D. Heinke and F. Hamker. Comparing Neural Networks: A Benchmark on Growing Neural Gas, Growing Cell Structures, and Fuzzy ARTMAP. IEEE Transactions on Neural Networks.

 