 1

Universal approximation with convex optimization: Gimmick or reality? [Discussion Forum]
[Journal], 2015, 10(2): 68-77. EI SCIE SCOPUS
Cited 19 times (Web of Science℠)

Abstract: This paper surveys in a tutorial fashion the recent history of universal learning machines, starting with the multilayer perceptron. The big push in recent years has been on the design of universal learning machines using optimization methods linear in the parameters, such as the Echo State Network, the Extreme Learning Machine and the Kernel Adaptive Filter. We call this class of learning machines convex universal learning machines, or CULMs. The purpose of the paper is to compare the methods behind these CULMs, highlighting their features using concepts of vector spaces (i.e., basis functions and projections), which are easy to understand by the computational intelligence community. We illustrate how two of the CULMs behave in a simple example, and we conclude that it is indeed practical to create universal mappers with convex adaptation, which is an improvement over backpropagation. © 2005-2012 IEEE.
Keywords: Kernel adaptive filters, Optimization method, Multilayer perceptron, Echo state networks, Discussion forum, Learning machines, Universal approximation, Extreme learning machine
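The defining trait of a CULM is that only the output layer is adapted, so training reduces to a convex least-squares problem. A minimal sketch of one member of the class, an Extreme Learning Machine style random-feature regressor (function names, sizes, and the tanh nonlinearity are illustrative choices, not taken from the paper):

```python
# ELM-style sketch: the hidden layer is random and fixed, so fitting the
# output weights is a convex (regularized linear least-squares) problem.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, reg=1e-3):
    """Fit output weights of a random-hidden-layer network by ridge regression."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                            # hidden-layer features
    # Convex step: solve (H^T H + reg*I) beta = H^T y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = elm_fit(X, y)
mse = float(np.mean((elm_predict(model, X) - y) ** 2))
```

The single linear solve replaces the nonconvex backpropagation loop, which is exactly the trade the survey examines.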
 2
 3

 Survival Kernel with Application to Kernel Adaptive Filtering

[Other] International Joint Conference on Neural Networks (IJCNN), 2013. EI CPCIS
Cited 0 times (Web of Science℠)

 4

 Learning Nonlinear Generative Models of Time Series With a Kalman Filter in RKHS
[Journal], 2014, 62(1): 141-155. EI SCIE SCOPUS 2.787
Cited 12 times (Web of Science℠)

Abstract: This paper presents a novel generative model for time series based on the Kalman filter algorithm in a reproducing kernel Hilbert space (RKHS) using the conditional embedding operator. The end result is a nonlinear model that quantifies the hidden-state uncertainty and propagates its probability distribution forward as in the Kalman algorithm. The embedded dynamics can be described by the estimated conditional embedding operator, constructed directly from the training measurement data. Using this operator as the counterpart of the state transition matrix, we reformulate the Kalman filter algorithm in RKHS. For the state model, the hidden states are the estimated embeddings of the measurement distribution, while the measurement model serves to connect the estimated measurement embeddings with the current mapped measurements in the RKHS. This novel algorithm is applied to noisy time-series estimation and prediction, and simulation results show that it outperforms other existing algorithms. In addition, improvements are proposed to reduce the size of the operator and the computational complexity.
Keywords: Kalman filter, RKHS, KRLS, conditional embedding operator
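The core ingredient is the empirical conditional embedding operator that stands in for a state-transition matrix: from consecutive samples it is built as a regularized kernel regression from the current state to the next. A minimal sketch of that one ingredient (kernel width, regularizer, and the toy linear dynamics are illustrative assumptions, not the paper's full filter):

```python
# Empirical conditional embedding sketch: predict the next measurement as
# k(x)^T (G + reg*I)^{-1} X_next, i.e., kernel ridge regression from x_t to
# x_{t+1} built purely from training pairs.
import numpy as np

def gauss_gram(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Training series from a toy linear system: x_{t+1} = 0.9 * x_t + small noise
rng = np.random.default_rng(1)
x = np.empty(201)
x[0] = 1.0
for t in range(200):
    x[t + 1] = 0.9 * x[t] + 0.01 * rng.standard_normal()
X_now, X_next = x[:-1, None], x[1:, None]

G = gauss_gram(X_now, X_now)
reg = 1e-3

def predict_next(x_query):
    """One 'transition' step through the empirical operator."""
    k = gauss_gram(X_now, np.atleast_2d(x_query)).ravel()
    alpha = np.linalg.solve(G + reg * np.eye(len(X_now)), k)
    return float(alpha @ X_next.ravel())

pred = predict_next([0.5])   # the true map sends 0.5 to about 0.45
```

The paper wraps this operator inside full Kalman-style predict/update recursions on distribution embeddings; the sketch shows only the data-driven transition step.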
 5

Sparse kernel recursive least squares using L1 regularization and a fixed-point subiteration
[Other] 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014, 2014: 5257-5261. EI SCOPUS CPCIS
Cited 0 times (Web of Science℠)

Abstract: A new kernel adaptive filtering (KAF) algorithm, namely the sparse kernel recursive least squares (SKRLS), is derived by adding an l1-norm penalty on the center coefficients to the least squares (LS) cost (i.e., the sum of the squared errors). In each iteration, the center coefficients are updated by a fixed-point subiteration. Compared with the original KRLS algorithm, the proposed algorithm can produce a much sparser network, in which many coefficients are negligibly small. A much more compact structure can thus be achieved by pruning these negligible centers. Simulation results show that the SKRLS performs very well, yielding a very sparse network while preserving a desirable performance. © 2014 IEEE.
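A common way to get a fixed-point update for an l1-penalized LS cost is to rewrite |a_i| as a_i^2/|a_i|, turning the penalty into a reweighted ridge term. A batch sketch under that assumption (the paper's algorithm is recursive and may differ in details such as constant factors; names and the toy problem are illustrative):

```python
# Fixed-point subiteration for  min_a ||y - K a||^2 + lam * ||a||_1,
# via the reweighting  a <- (K^T K + lam * diag(1/|a_i|))^{-1} K^T y,
# which drives most coefficients toward zero.
import numpy as np

def sparse_kernel_ls(K, y, lam=0.1, n_iter=200, eps=1e-8):
    # Ridge solution as a starting point
    a = np.linalg.solve(K.T @ K + lam * np.eye(len(y)), K.T @ y)
    for _ in range(n_iter):
        D = np.diag(lam / (np.abs(a) + eps))   # eps guards the division
        a = np.linalg.solve(K.T @ K + D, K.T @ y)
    return a

# Toy problem: Gaussian Gram matrix on a 1-D grid, target built from 2 centers
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 40)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.5**2))
a_true = np.zeros(40)
a_true[5], a_true[30] = 1.0, -0.7
y = K @ a_true + 0.01 * rng.standard_normal(40)

a = sparse_kernel_ls(K, y)
n_small = int(np.sum(np.abs(a) < 1e-2))   # most coefficients become negligible
```

Pruning the centers whose coefficients collapse is what yields the compact network the abstract describes.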
 6

 Survival kernel with application to kernel adaptive filtering
[Other] 2013 International Joint Conference on Neural Networks, IJCNN 2013, 2013. EI SCOPUS CPCIS
Cited 0 times (Web of Science℠)

Abstract: In this paper, we define a new Mercer kernel, namely the survival kernel, which is closely related to our recently proposed survival information potential (SIP). The new kernel function is parameter-free, simple to calculate, and strictly positive-definite (SPD) over R_+^m, hence it has potential utility in machine learning, especially in online kernel learning. In this work we apply the survival kernel to kernel adaptive filtering, in particular the kernel least mean square (KLMS) algorithm. Simulation results show that KLMS with the survival kernel may achieve satisfactory performance with little computation time and without the choice of free parameters. © 2013 IEEE.
 7

 A Note on the WS Lower Bound of the MEE Estimation
[Journal], 2014, 16(2): 814-824. SCIE SCOPUS 1.502
Cited 0 times (Web of Science℠)

Abstract: The minimum error entropy (MEE) estimation is concerned with the estimation of a certain random variable (the unknown variable) based on another random variable (the observation), so that the entropy of the estimation error is minimized. This estimation method may outperform the well-known minimum mean square error (MMSE) estimation, especially for non-Gaussian situations. There is an important performance bound on the MEE estimation, namely the WS lower bound, which is computed as the conditional entropy of the unknown variable given the observation. Though it has been known in the literature for a considerable time, up to now there has been little study of this performance bound. In this paper, we re-examine the WS lower bound. Some basic properties of the WS lower bound are presented, and the characterization of the Gaussian distribution using the WS lower bound is investigated.
Keywords: estimation, entropy, MEE estimation, WS lower bound
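In symbols, the bound the abstract describes reads as follows (notation assumed: X the unknown variable, Y the observation, g any estimator, H differential entropy):

```latex
H\big(X - g(Y)\big) \;\ge\; H\big(X - g(Y) \mid Y\big) \;=\; H(X \mid Y),
```

where the inequality holds because conditioning never increases entropy, and the equality because, given Y = y, subtracting the constant g(y) leaves the conditional entropy of X unchanged. The bound is thus independent of the estimator g.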
 8

 An Extended Result on the Optimal Estimation Under the Minimum Error Entropy Criterion
[Journal], 2014, 16(4): 2223-2233. SCIE SCOPUS 1.502
Cited 0 times (Web of Science℠)

Abstract: The minimum error entropy (MEE) criterion has been successfully used in fields such as parameter estimation, system identification and supervised machine learning. In general there is no explicit expression for the optimal MEE estimate unless some constraints on the conditional distribution are imposed. A recent paper proved that if the conditional density is conditionally symmetric and unimodal (CSUM), then the optimal MEE estimate (with Shannon entropy) equals the conditional median. In this study, we extend this result to the generalized MEE estimation, where the optimality criterion is the Renyi entropy or, equivalently, the alpha-order information potential (IP).
Keywords: estimation, minimum error entropy, Renyi entropy, information potential
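For reference, the two equivalent criteria the abstract names can be written out (standard definitions, not taken from the paper itself): for an error variable e with density p_e, the alpha-order Renyi entropy and information potential are

```latex
H_\alpha(e) \;=\; \frac{1}{1-\alpha}\,\log \int p_e^{\alpha}(e)\,de
           \;=\; \frac{1}{1-\alpha}\,\log V_\alpha(e),
\qquad
V_\alpha(e) \;=\; \int p_e^{\alpha}(e)\,de \;=\; \mathbb{E}\!\left[p_e^{\alpha-1}(e)\right],
```

so for alpha > 1 the factor 1/(1-alpha) is negative and minimizing H_alpha is equivalent to maximizing the information potential V_alpha, which is why the two criteria are interchangeable.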
 9

Fixed-budget quantized kernel least-mean-square algorithm
[Journal], 2013, 93(9): 2759-2770. EI SCIE SCOPUS 2.238
Cited 28 times (Web of Science℠)

Abstract: This paper presents a quantized kernel least mean square algorithm with a fixed memory budget, named QKLMS-FB. In order to deal with the growing support inherent in online kernel methods, the proposed algorithm utilizes a pruning criterion, called the significance measure, based on a weighted contribution of the existing data centers. The basic idea of the proposed methodology is to discard the center with the smallest influence on the whole system when a new sample is included in the dictionary. The significance measure can be updated recursively at each step, which is suitable for online operation. Furthermore, the proposed methodology does not need any a priori knowledge about the data, and its computational complexity is linear in the number of centers. Experiments show that the proposed algorithm successfully prunes the least "significant" centers and preserves the important ones, resulting in a compact KLMS model with little loss in accuracy. © 2013 Elsevier B.V. All rights reserved.
Keywords: Kernel methods, Quantized kernel least mean square, Fixed budget, Growing and pruning
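The fixed-budget idea can be sketched on top of plain KLMS: whenever the dictionary exceeds the budget, the least significant center is discarded. The paper's significance measure is a recursively updated weighted contribution; in this sketch the magnitude of a center's coefficient stands in as a simplified proxy, and all names and settings are illustrative:

```python
# Fixed-budget KLMS sketch: grow by one center per sample, then prune the
# center with the smallest |coefficient| (proxy for significance) when the
# dictionary exceeds the budget, so memory stays constant.
import numpy as np

def klms_fixed_budget(X, y, budget=30, eta=0.2, sigma=0.5):
    centers, coeffs = [], []
    for x_n, y_n in zip(X, y):
        k = [np.exp(-((x_n - c) ** 2) / (2 * sigma**2)) for c in centers]
        e_n = y_n - sum(a * ki for a, ki in zip(coeffs, k))
        centers.append(x_n)
        coeffs.append(eta * e_n)
        if len(centers) > budget:                  # over budget: prune one center
            drop = int(np.argmin(np.abs(coeffs)))  # least "significant" center
            centers.pop(drop)
            coeffs.pop(drop)
    return centers, coeffs

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 500)
y = np.sin(3 * X)
centers, coeffs = klms_fixed_budget(X, y)
n_centers = len(centers)   # stays at the budget despite 500 samples
```

Because one center is pruned per insertion once the budget is reached, both memory and per-step cost are bounded, matching the linear-complexity claim in the abstract.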
 10

 Quantized Kernel Recursive Least Squares Algorithm
[Journal], 2013, 24(9): 1484-1491. EI SCIE SCOPUS 4.37
Cited 92 times (Web of Science℠)

Abstract: In a recent paper, we developed a novel quantized kernel least mean square algorithm, in which the input space is quantized (partitioned into smaller regions) and the network size is upper bounded by the quantization codebook size (the number of regions). In this brief, we propose the quantized kernel least squares regression and derive the optimal solution. By incorporating a simple online vector quantization method, we derive a recursive algorithm to update the solution, namely the quantized kernel recursive least squares algorithm. The good performance of the new algorithm is demonstrated by Monte Carlo simulations.
Keywords: Kernel recursive least squares (KRLS), quantization, quantized kernel recursive least squares (QKRLS), sparsification
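The online vector quantization step that bounds the network size is a one-pass rule: merge a new input into its nearest codebook entry if it lies within a quantization radius, otherwise open a new entry. A 1-D sketch of that rule alone (radius, names, and the uniform toy data are illustrative; the brief combines this with recursive least-squares updates):

```python
# Online vector quantization: the codebook size is bounded by how many
# eps-balls are needed to cover the input region, not by the sample count.
import numpy as np

def quantize_stream(X, eps=0.3):
    codebook, assignments = [], []
    for x in X:
        if codebook:
            d = [abs(x - c) for c in codebook]
            j = int(np.argmin(d))
            if d[j] <= eps:                 # merge into nearest existing region
                assignments.append(j)
                continue
        codebook.append(x)                  # open a new region
        assignments.append(len(codebook) - 1)
    return codebook, assignments

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 1000)
codebook, _ = quantize_stream(X)
n_regions = len(codebook)   # bounded by a covering of [-1, 1] with eps-balls
```

In QKRLS the solution is then maintained over these codebook centers only, which is what keeps the recursive update tractable for long streams.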