ISSN: 2582-9793

Experimental Indication to Improve the NN Learning Accuracy by Integrity Constraints From the NN Training Data

Original Research (Published on: 28-Jun-2024)
DOI : https://dx.doi.org/10.54364/AAIML.2024.42134

Alexander Maximilian Röser and Roman Alexander Englert

Adv. Artif. Intell. Mach. Learn., 4(2):2324-2337

Alexander Maximilian Röser: István Széchenyi Economics and Management Doctoral School, University of Sopron; isf - Institute for Strategic Finance, FOM University of Applied Science

Roman Alexander Englert: New Media and Information Systems, Faculty III, University of Siegen, Siegen, Germany; Computer Science, FOM University of Applied Science, Essen, North Rhine–Westphalia, Germany

Article History: Received on: 07-Apr-24, Accepted on: 21-Jun-24, Published on: 28-Jun-24

Corresponding Author: Alexander Maximilian Röser

Email: alexander_maximilian.roeser@fom-net.de

Citation: Alexander Maximilian Röser, Roman Alexander Englert. (2024). Experimental Indication to Improve the NN Learning Accuracy by Integrity Constraints From the NN Training Data. Adv. Artif. Intell. Mach. Learn., 4(2):2324-2337

Abstract

Various approaches exist to improve the classification rate of neural networks (NNs). Nevertheless, the application of integrity constraints for this purpose is novel. This paper investigates the effectiveness of integrity constraints (ICs, or short: constraints) in improving the performance of NNs, in particular through data reduction in the training data. The study starts with the application of ICs to the initial NN classification, focusing on the development of data set-specific constraints. These constraints are created with machine learning algorithms such as multiple linear regression. The method consists of applying these constraints to the misclassified data sets from different tests, with the aim of reducing the misclassification rate. The effectiveness of this approach is quantified by comparing the original misclassification rates with those obtained after applying the ICs; a significant reduction was observed in three different test cases. For example, in Test 1 the misclassification rate decreased from 0.78% to 0.19%, which corresponds to a reduction of 75.6%. Similar improvements were observed in the subsequent tests, underlining the potential of ICs to improve classification accuracy. Thus, this study provides convincing evidence that the NNIC approach is a valuable tool for mitigating misclassification problems in neural network applications. The combination of training an NN and subsequently applying ICs derived from the training data (the NNIC approach) is new, and the experimental results indicate evidence for an improved classification rate.
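
To illustrate the idea sketched in the abstract (train an NN, derive a data set-specific constraint from the training data via multiple linear regression, and apply it to the NN output), a minimal Python sketch is given below. The data set (Iris), the tolerance band, and the correction rule are illustrative assumptions for this sketch, not the authors' published NNIC implementation.

# Minimal sketch of the NNIC idea: the constraint form, tolerance, and
# correction rule below are assumptions made for illustration only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LinearRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: train the neural network classifier.
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(X_train, y_train)

# Step 2: derive a data set-specific integrity constraint from the training
# data, here via multiple linear regression of the class label on the features.
reg = LinearRegression().fit(X_train, y_train)
residuals = y_train - reg.predict(X_train)
tolerance = 3 * residuals.std()  # assumed tolerance band around the regression

# Step 3: apply the constraint to the NN predictions: a prediction that
# deviates too far from the regression estimate violates the constraint and
# is replaced by the nearest class consistent with it.
pred = nn.predict(X_test)
estimate = reg.predict(X_test)
violations = np.abs(pred - estimate) > tolerance
pred[violations] = np.clip(np.rint(estimate[violations]), y.min(), y.max()).astype(int)

rate_before = np.mean(nn.predict(X_test) != y_test)
rate_after = np.mean(pred != y_test)
print(f"misclassification rate before: {rate_before:.4f}, after: {rate_after:.4f}")

In this sketch the constraint acts only as a post-hoc filter on the NN output; the misclassification rates before and after applying it are compared in the same way the abstract describes for the three test cases.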
