Convex Hulls and the Size of the Hidden Layer in a MLP Based Classifier

Authors

  • Ricardo Majalca

Abstract

Designing a feedforward neural network has always posed many questions regarding the number of hidden layers and the number of neurons in each hidden layer. While there is no unique, global solution, it is known that the particularities of the problem to solve, especially in classification, offer some guidance on how to answer these questions. One heuristic approach, when only one hidden layer is involved, analyzes how the classes are separated from each other by a finite number of hyperplanes, thereby defining the size of the hidden layer of the network. In this article, using concepts from computational geometry, an automated and time-efficient method is presented and discussed for estimating the number of neurons in the hidden layer by computing the number of hyperplanes separating the classes, based on convex hulls and an approximation to alpha shapes. Examples of different situations that may arise and the results of applying the method are illustrated. The results show that the proposed method gives a very good estimate of the number of hidden neurons.
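The abstract only outlines the idea of counting separating hyperplanes to size the hidden layer; the paper's alpha-shape refinement is not reproduced on this page. The following is a minimal, hypothetical Python sketch of that general idea, not the author's exact algorithm: it uses the facet count of each class's convex hull as a crude stand-in for the number of separating hyperplanes and feeds that count to scikit-learn's MLPClassifier. The dataset (make_moons), the helper estimate_hidden_neurons, and the min-facet-count rule are all assumptions made here for illustration.

    # Hypothetical sketch: size the hidden layer from convex-hull geometry.
    # The facet count of each class's convex hull is used as a rough proxy
    # for the number of separating hyperplanes; the paper's alpha-shape
    # approximation is NOT reproduced here.
    import numpy as np
    from scipy.spatial import ConvexHull
    from sklearn.datasets import make_moons
    from sklearn.neural_network import MLPClassifier

    def estimate_hidden_neurons(X, y):
        """Return the smallest convex-hull facet count over all classes,
        treating each facet as one candidate separating hyperplane."""
        facet_counts = []
        for label in np.unique(y):
            pts = X[y == label]
            hull = ConvexHull(pts)              # hull facets of this class
            facet_counts.append(len(hull.simplices))
        return min(facet_counts)

    X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
    n_hidden = estimate_hidden_neurons(X, y)

    # One hidden layer whose width equals the estimated hyperplane count.
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                        random_state=0)
    clf.fit(X, y)
    print(f"estimated hidden neurons: {n_hidden}, "
          f"training accuracy: {clf.score(X, y):.3f}")

Taking the minimum facet count across classes is only one plausible reading of "number of hyperplanes separating the classes"; the published method derives this count from the convex-hull and alpha-shape analysis described in the full article.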

Published

2019-11-07

How to Cite

Majalca, R. (2019). Convex Hulls and the Size of the Hidden Layer in a MLP Based Classifier. IEEE Latin America Transactions, 17(6), 991–999. Retrieved from https://latamt.ieeer9.org/index.php/transactions/article/view/1062