Augusto Montisci
Adv. Artif. Intell. Mach. Learn., 4 (1):2103-2112
Augusto Montisci : University of Cagliari - Italy
DOI: https://dx.doi.org/10.54364/AAIML.2024.41120
Article History: Received on: 08-Jan-24, Accepted on: 15-Mar-24, Published on: 22-Mar-24
Corresponding Author: Augusto Montisci
Email: augusto.montisci@unica.it
Citation: Augusto Montisci (2024). A free from local minima algorithm for training regressive MLP neural networks. Adv. Artif. Intell. Mach. Learn., 4 (1):2103-2112
In this article, an innovative method for training regressive MLP networks is presented which is not subject to local minima. The Error Back-Propagation algorithm, proposed by Rumelhart, Hinton, and Williams, has had the merit of fostering the development of machine learning techniques, which have permeated every branch of research and technology since the mid-1980s. This extraordinary success is largely due to the black-box approach, but the same factor came to be seen as a limitation as soon as more challenging problems were approached. One of the most critical aspects of training algorithms has been the presence of local minima in the loss function, typically the mean squared error of the output over the training set. Since the most popular training algorithms are driven by the derivatives of the loss function, there is no way to determine whether a reached minimum is local or global. The algorithm presented in this paper avoids the problem of local minima, as the training is based on the properties of the distribution of the training set, or rather of its image inside the neural network. The performance of the algorithm is demonstrated on a well-known benchmark.
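As a minimal illustration of the limitation discussed above (and not of the paper's algorithm), the sketch below runs plain gradient descent on a non-convex one-dimensional loss. The loss function, learning rate, and starting points are arbitrary choices made for this example: since the update uses only the local derivative, it settles into whichever basin the initialization falls in, with no indication of whether the minimum reached is global.

```python
def loss(w):
    # Non-convex quartic: local minimum near w = +1.35, global minimum near w = -1.47
    return w**4 - 4.0 * w**2 + w

def grad(w):
    # Analytic derivative of the loss above
    return 4.0 * w**3 - 8.0 * w + 1.0

for w0 in (-2.0, 2.0):            # two different initializations
    w = w0
    for _ in range(500):          # plain gradient descent with a fixed step size
        w -= 0.01 * grad(w)
    print(f"start w0 = {w0:+.1f}  ->  w* = {w:+.4f},  loss(w*) = {loss(w):+.4f}")
```

The two runs converge to different stationary points with different loss values, yet the gradient information available at either endpoint looks identical (both derivatives vanish). This is the gap that the distribution-based training scheme summarized in the abstract is meant to close.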