A Universal Learning Rule that Minimizes Well-formed Cost Functions
Date: 2009-07-29
Abstract
In this paper, we analyze stochastic gradient learning rules for posterior probability estimation using networks with a single layer of weights and a general nonlinear activation function. We provide necessary and sufficient conditions on the learning rule and the activation function for the network outputs to be probability estimates. We also extend the concept of a well-formed cost function, proposed by Wittner and Denker, to multiclass problems, and we provide theoretical results showing the advantages of objective functions of this kind.
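The setting described in the abstract can be illustrated with a minimal sketch: a single layer of weights with a softmax activation, trained by gradient descent on the cross-entropy cost (a standard example of a cost whose minimization yields posterior probability estimates; the paper's analysis covers general activation functions and the stochastic, sample-by-sample version of the rule). The data generator, step size, and iteration count below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic 3-class data from Gaussians, so true posteriors are well defined
n, d, k = 3000, 2, 3
means = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
y = rng.integers(0, k, size=n)
X = means[y] + rng.normal(size=(n, d))

# Single layer of weights, softmax activation, one-hot targets
W = np.zeros((d, k))
b = np.zeros(k)
T = np.eye(k)[y]

eta = 0.05
for epoch in range(200):
    P = softmax(X @ W + b)   # network outputs
    G = (P - T) / n          # gradient of the mean cross-entropy w.r.t. pre-activations
    W -= eta * X.T @ G
    b -= eta * G.sum(axis=0)

P = softmax(X @ W + b)
print(P[:3].round(3))        # each row sums to 1: an estimated posterior distribution
print(np.mean(P.argmax(axis=1) == y))
```

With this pairing of cost and activation, the gradient simplifies to the output-minus-target form used above, so the trained outputs approximate the class posteriors rather than just the decision boundary.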
Collections
- Artículos de Revista (Journal Articles) [3516]
Except where otherwise noted, this item's license is described as Atribución-NoComercial-SinDerivadas 3.0 España