Authors: Beltran-Royo, C.; Llopis-Ibor, L.; Ramirez, I.; Pantrigo, J.J.
Date accessioned: 2024-09-16
Date available: 2024-09-16
Date issued: 2024-01-11
Citation: Cesar Beltran-Royo, Laura Llopis-Ibor, Juan J. Pantrigo, Iván Ramírez, "DC Neural Networks avoid overfitting in one-dimensional nonlinear regression", Knowledge-Based Systems, Volume 283, 2024, 111154, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2023.111154
ISSN: 0950-7051 (print); 1872-7409 (online)
URI: https://hdl.handle.net/10115/39554
Abstract: In this paper, we analyze Difference of Convex (DC) Neural Networks in the context of one-dimensional nonlinear regression. Specifically, we show the surprising ability of the Difference of Convex Multilayer Perceptron (DC-MLP) to avoid overfitting in nonlinear regression. In other words, DC-MLPs self-regularize: they do not require additional regularization techniques. Thus, DC-MLPs could prove very useful for practical purposes based on one-dimensional nonlinear regression. It turns out that shallow MLPs with a convex activation (ReLU, softplus, etc.) fall in the class of DC-MLPs. On the other hand, we call SQ-MLP the shallow MLP with a squashing activation (logistic, hyperbolic tangent, etc.). In the numerical experiments, we show that DC-MLPs used for nonlinear regression avoid overfitting, in contrast with SQ-MLPs. We also compare DC-MLPs and SQ-MLPs from a theoretical point of view.
Language: eng
Rights: Attribution-NonCommercial-NoDerivs 4.0 International (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Title: DC Neural Networks avoid overfitting in one-dimensional nonlinear regression
Type: info:eu-repo/semantics/article
DOI: 10.1016/j.knosys.2023.111154
Access rights: info:eu-repo/semantics/embargoedAccess
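
Note: The following is a minimal illustrative sketch, not the authors' code. It only shows the two architecture classes named in the abstract: a shallow MLP with a convex activation (softplus), which falls in the DC-MLP class, versus one with a squashing activation (tanh), an SQ-MLP, fitted to a toy noisy 1-D regression task. The framework (PyTorch), hyperparameters, and data are assumptions for illustration.

    # Hypothetical sketch: contrasts the two shallow architectures from the abstract.
    import torch
    import torch.nn as nn

    def shallow_mlp(activation, hidden=50):
        # One hidden layer; a convex activation (Softplus, ReLU) yields a DC-MLP,
        # while a squashing activation (Tanh, Sigmoid) yields an SQ-MLP.
        return nn.Sequential(nn.Linear(1, hidden), activation, nn.Linear(hidden, 1))

    def fit(model, x, y, epochs=2000, lr=1e-2):
        # Plain MSE training with no explicit regularization, since the paper's
        # claim is that DC-MLPs self-regularize.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        return loss.item()

    torch.manual_seed(0)
    x = torch.linspace(-3, 3, 100).unsqueeze(1)
    y = torch.sin(x) + 0.1 * torch.randn_like(x)   # noisy 1-D nonlinear target

    dc_mlp = shallow_mlp(nn.Softplus())  # convex activation -> DC-MLP
    sq_mlp = shallow_mlp(nn.Tanh())      # squashing activation -> SQ-MLP
    print("DC-MLP train MSE:", fit(dc_mlp, x, y))
    print("SQ-MLP train MSE:", fit(sq_mlp, x, y))

Why this is a DC network: with a convex activation, the hidden layer outputs convex functions of the input, and the output layer combines them with positive and negative weights, so the network is a difference of two convex functions.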