Browsing by Author "Zufiria, Pedro"
Showing 1 - 7 of 7
Item: Algorithms and Implementation Architectures for Hebbian Neural Networks (Springer, Berlin, Heidelberg, 2001)
Berzal, José Andrés; Zufiria, Pedro
Systolic architectures for the Sanger and Rubner Neural Networks (NNs) are proposed, and the local stability of their learning rules is established via the indirect Lyapunov method. In addition, these learning rules are improved for applications based on Principal Component Analysis (PCA). The local stability analysis and the systolic architectures for the Sanger NN and the Rubner NN are presented in a common framework.

Item: Analysis of Hebbian models with lateral weight connections (Springer, Berlin, Heidelberg, 2007)
Berzal, José Andrés; Zufiria, Pedro
In this paper, the behavior of some Hebbian artificial neural networks with lateral weights is analyzed. Hebbian neural networks are employed in communications and signal processing applications for implementing on-line Principal Component Analysis (PCA). Different improvements over the original Oja model have been developed in the last two decades. Among them, models with lateral weights have been designed to directly provide the eigenvectors of the correlation matrix [1,5,6,9]. The behavior of Hebbian models has traditionally been studied by resorting to an associated continuous-time formulation under some questionable assumptions which are not guaranteed in real implementations. In this paper we employ the alternative deterministic discrete-time (DDT) formulation, which characterizes the average evolution of these nets and gathers the influence of the time evolution of the learning gains [12]. The dynamic behavior of some of these Hebbian models is analytically characterized in this context, and several simulations complement this comparative study.

Item: Analysis of the Sanger Hebbian Neural Network (Springer, Berlin, Heidelberg, 2005)
Berzal, José Andrés; Zufiria, Pedro
In this paper, the behavior of the Sanger Hebbian artificial neural network is analyzed.
Hebbian neural networks are employed in communications and signal processing applications, among others, due to their capability to implement Principal Component Analysis (PCA). Different improvements over the original model due to Oja have been developed in the last two decades. Among them, the Sanger model was designed to directly provide the eigenvectors of the correlation matrix. The behavior of these models has traditionally been considered in a continuous-time formulation whose validity is justified via analytical procedures that presume, among other hypotheses, a specific asymptotic behavior of the learning gain. In practical applications, these assumptions cannot be guaranteed. This paper addresses the study of a deterministic discrete-time (DDT) formulation that characterizes the average evolution of the net, preserving the discrete-time form of the original network and capturing a more realistic behavior of the learning gain. The dynamic behavior of the Sanger model is analyzed in this more realistic context. The results thoroughly characterize the relationship between the learning gain and the eigenvalue structure of the correlation matrix.

Item: Convergence analysis of a linearized Rubner network with modified lateral weight behavior (Vigo Aguiar, Jesús, 2006)
Berzal, José Andrés; Zufiria, Pedro
In this work a novel Hebbian neural network architecture for Principal Component Analysis is presented. The proposed network is obtained via a linearization and modification of the standard Rubner model. The new anti-Hebbian connections and learning laws define a partially decoupled net structure.
This specific connectivity among the neurons allows for a stability analysis of the whole network with no need to assume a priori a time-scale hypothesis between the neuron dynamics.

Item: Local analysis of a new Rubner-type neural network via a DDT formulation (IEEE, 2005)
Berzal, José Andrés; Zufiria, Pedro
In this paper, the behavior of the Rubner Hebbian artificial neural network is analyzed. Hebbian neural networks are employed in communications and signal processing applications, among others, due to their capability to implement principal component analysis (PCA). Different improvements over the original model due to Oja have been developed in the last two decades. Among them, the Sanger and Rubner models were designed to directly provide the eigenvectors of the correlation matrix. The behavior of these models has traditionally been considered in a continuous-time formulation whose validity is justified via analytical procedures that presume, among other hypotheses, a specific asymptotic behavior of the learning gain. In practical applications, these assumptions cannot be guaranteed. This paper addresses the study of a deterministic discrete-time (DDT) formulation of the Rubner net that characterizes its average evolution, preserving the discrete-time form of the original network and gathering the influence of the learning gain. In this way, the dynamical behavior of the Rubner model is analyzed in this more realistic context.
The results thoroughly characterize the relationship between the learning gain and the eigenvalue structure of the correlation matrix.

Item: Satellite Data Processing for Meteorological Nowcasting and Very Short Range Forecasting Using Neural Networks (IOS Press, 2001)
Berzal, José Andrés; Zufiria, Pedro
This paper addresses the processing of satellite data for meteorological nowcasting and very short range forecasting in the context of the SAF NWC (Satellite Application Facility for NoWCasting) project for Meteosat Second Generation (EUMETSAT). Among the many aspects involved in nowcasting, air mass analysis (including vertical stability, water vapour distribution, and total water vapour content) is considered. Hence, the forecast characterization requires the quantification of the corresponding meteorological parameters. In general, this quantification has relied on traditional tools, such as linear regression models, which provide only partial information on the phenomena involved. Here, a Neural Network (NN) based model is proposed, in which a Hebbian Neural Network (HNN) is combined with a Multilayer Perceptron (MLP), a supervised NN. HNNs are used to perform a principal component analysis of the multi-spectral images so that the dimensionality of the problem is reduced while keeping the relevant information. Then, the MLP is trained to perform a diagnosis associated with each pixel. The proposed combined architecture is evaluated with real data.

Item: Video Sequence Compression via Supervised Training on Cellular Neural Networks (IOS Press, 1997)
Berzal, José Andrés; Zufiria, Pedro; Rodriguez, Luis
In this paper, a novel approach to video sequence compression using Cellular Neural Networks (CNNs) is presented. CNNs are networks characterized by local interconnections between neurons (usually called cells), and can be modeled as dynamical systems.
From among the many different types, a CNN model operating in discrete time (DT-CNN) has been chosen, with its parameters defined so that they are shared among all the cells in the network. The compression process proposed in this work is based on the possibility of replicating a given video sequence as a trajectory generated by the DT-CNN. For the CNN to follow a prescribed trajectory, a supervised training algorithm is implemented. Compression is achieved because all the information contained in the sequence can be stored in a small number of parameters and initial conditions once training is stopped. Different improvements upon the basic formulation are analyzed, and issues such as the feasibility and complexity of the compression problem are also addressed. Finally, some examples with real video sequences illustrate the applicability of the method.
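Several of the abstracts above study Hebbian PCA networks through a deterministic discrete-time (DDT) formulation, in which the stochastic input term x(t)x(t)^T of Sanger's rule is replaced by the correlation matrix C, so the average weight evolution can be iterated directly under an explicit learning gain. The following is a minimal illustrative sketch of that idea, not the authors' exact models: the correlation matrix, the gain schedule, and all variable names are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed symmetric positive-definite correlation matrix with known eigenstructure
# (illustrative stand-in for the input correlation matrix C of the abstracts).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
C = Q @ np.diag([9.0, 4.0, 0.25, 0.01]) @ Q.T

m = 2                                   # number of principal components to extract
W = rng.normal(scale=0.1, size=(m, 4))  # rows = weight vectors of the m neurons

for t in range(5000):
    eta = 1.0 / (50.0 + 0.1 * t)        # slowly decaying learning gain (one common choice)
    CW = W @ C                          # row i equals C @ w_i, since C is symmetric
    # DDT Sanger update: Hebbian term C w_i minus deflation by neurons k <= i,
    # i.e. w_i <- w_i + eta * (C w_i - sum_{k<=i} (w_k^T C w_i) w_k)
    G = np.tril(W @ CW.T)               # G[i, k] = w_k^T C w_i for k <= i, else 0
    W = W + eta * (CW - G @ W)

# Rows of W should converge (up to sign) to the two leading eigenvectors of C.
eigvals, eigvecs = np.linalg.eigh(C)
leading = eigvecs[:, ::-1][:, :m].T     # top-m eigenvectors of C, as rows
for w, v in zip(W, leading):
    print(round(abs(w @ v) / np.linalg.norm(w), 3))
```

With the eigenvalue gaps above, both rows align closely with the leading eigenvectors; shrinking the gap between the first two eigenvalues, or enlarging the initial gain relative to the largest eigenvalue, slows or destabilizes convergence, which is the gain/eigenstructure relationship these papers characterize.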