A Universal Learning Rule that Minimizes Well-formed Cost Functions
Date: 2009-07-29
Abstract
In this paper, we analyze stochastic gradient learning rules for posterior probability estimation using networks with a single layer of weights and a general nonlinear activation function. We provide necessary and sufficient conditions on the learning rule and the activation function for the network outputs to be probability estimates. We also extend the concept of a well-formed cost function, proposed by Wittner and Denker, to multiclass problems, and we provide theoretical results showing the advantages of this kind of objective function.
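The abstract's setting can be illustrated with a minimal sketch: a single layer of weights followed by a softmax activation, trained by stochastic gradient descent on the cross-entropy cost (a standard example of a well-formed cost function), so that the outputs estimate class posterior probabilities. This is an assumed illustrative instance, not the paper's general learning rule or its conditions; the function names and hyperparameters below are hypothetical.

```python
import numpy as np

def sgd_posterior_estimator(X, y, lr=0.1, epochs=200, seed=0):
    """Train a single-layer softmax network by stochastic gradient
    descent on the cross-entropy cost, so that the outputs estimate
    class posterior probabilities (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = int(y.max()) + 1
    W = np.zeros((d, k))                      # single layer of weights
    Y = np.eye(k)[y]                          # one-hot targets
    for _ in range(epochs):
        for i in rng.permutation(n):          # one stochastic step per sample
            z = X[i] @ W
            z -= z.max()                      # numerical stability
            p = np.exp(z)
            p /= p.sum()                      # softmax activation
            # Gradient of the cross-entropy cost w.r.t. W is x (p - y)^T
            W -= lr * np.outer(X[i], p - Y[i])
    return W

def predict_proba(W, X):
    """Softmax outputs: rows are estimated posterior probabilities."""
    z = X @ W
    p = np.exp(z - z.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)
```

With the cross-entropy cost, the stochastic gradient takes the simple form `x (p - y)` because the softmax Jacobian cancels against the derivative of the log-loss; this cancellation is one reason such pairings of cost and activation yield probability estimates.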
Collections
- Journal Articles [3647]
Except where otherwise noted, the license of this document is described as Atribución-NoComercial-SinDerivadas 3.0 España.