Browsing by Author "Ramirez, I."

Showing 1 - 1 of 1
    DC Neural Networks avoid overfitting in one-dimensional nonlinear regression
    (Elsevier, 2024-01-11) Beltran-Royo, C.; Llopis-Ibor, L.; Ramirez, I.; Pantrigo, J.J.
In this paper, we analyze Difference of Convex Neural Networks in the context of one-dimensional nonlinear regression. Specifically, we show the surprising ability of the Difference of Convex Multilayer Perceptron (DC-MLP) to avoid overfitting in nonlinear regression. In other words, DC-MLPs self-regularize (they do not require additional regularization techniques). Thus, DC-MLPs could be very useful for practical purposes based on one-dimensional nonlinear regression. It turns out that shallow MLPs with a convex activation (ReLU, softplus, etc.) fall into the class of DC-MLPs. On the other hand, we call SQ-MLP a shallow MLP with a squashing activation (logistic, hyperbolic tangent, etc.). In the numerical experiments, we show that DC-MLPs used for nonlinear regression avoid overfitting, in contrast with SQ-MLPs. We also compare DC-MLPs and SQ-MLPs from a theoretical point of view.
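The abstract notes that shallow MLPs with a convex activation fall into the DC (difference of convex) class. A minimal NumPy sketch of the idea, with hypothetical random weights (not taken from the paper): splitting the output weights of a one-hidden-layer ReLU network by sign yields two nonnegative combinations of convex functions, so the network output is a difference of two convex functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shallow MLP weights for illustration (1-D input).
n_hidden = 16
W1 = rng.normal(size=n_hidden)   # input-to-hidden weights
b1 = rng.normal(size=n_hidden)   # hidden biases
w2 = rng.normal(size=n_hidden)   # hidden-to-output weights
b2 = 0.5                         # output bias

relu = lambda z: np.maximum(z, 0.0)  # convex activation

x = np.linspace(-2.0, 2.0, 101)
hidden = relu(np.outer(x, W1) + b1)  # shape (101, n_hidden)
y = hidden @ w2 + b2                 # MLP output on the grid

# Split the output weights by sign: each half is a nonnegative
# combination of convex functions, hence itself convex in x.
w2_pos = np.where(w2 > 0, w2, 0.0)
w2_neg = np.where(w2 < 0, -w2, 0.0)
g = hidden @ w2_pos + b2  # convex part
h = hidden @ w2_neg       # convex part

# y = g - h: the network is a difference of convex functions (DC).
assert np.allclose(y, g - h)
```

With a squashing activation such as tanh in place of ReLU, the hidden units are no longer convex in the input, so this sign-splitting argument does not apply; that is the distinction the abstract draws between DC-MLPs and SQ-MLPs.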

© Universidad Rey Juan Carlos
