 1

 Risk-Sensitive Loss in Kernel Space for Robust Adaptive Filtering

[Other]
IEEE International Conference on Digital Signal Processing (DSP), 2015: 921-925
EI  CPCI-S  SCOPUS
 Cited by 1 (Web of Science℠)

Abstract: Recently, a robust cost function called the C-Loss was proposed for signal processing and machine learning; it is essentially the mean square error (MSE) in a reproducing kernel Hilbert space (RKHS). In this paper, we propose a new cost function, the kernelized risk-sensitive (KRS) loss, which is in essence the risk-sensitive loss in kernel space. The risk-sensitive cost is a well-known optimization cost in the control and estimation communities; in estimation theory, it is defined as the expectation of an exponential function of the squared estimation error. The KRS cost is insensitive to large outliers and can be applied in robust adaptive filtering. Compared with the C-Loss, the KRS can achieve faster convergence, especially when the filter is far from the optimal solution.
Keywords: kernelized risk-sensitive (KRS), robustness, adaptive filtering
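The abstract describes the risk-sensitive cost as the expectation of an exponential of the squared error, evaluated in kernel space. A minimal sketch of one plausible form of such a loss, assuming a Gaussian kernel κ_σ and a risk parameter λ (the paper's exact normalization may differ), illustrates why a single outlier cannot dominate the cost the way it does under MSE:

```python
import math

def gaussian_kernel(e, sigma=1.0):
    """Gaussian kernel evaluated at the error e, i.e. kappa_sigma(e, 0)."""
    return math.exp(-e * e / (2.0 * sigma * sigma))

def krs_loss(e, lam=0.5, sigma=1.0):
    """Illustrative kernelized risk-sensitive loss:
    (1/lam) * exp(lam * (1 - kappa_sigma(e))) -- an exponential of the
    kernel-induced squared distance, bounded as |e| grows."""
    return (1.0 / lam) * math.exp(lam * (1.0 - gaussian_kernel(e, sigma)))

def mse_loss(e):
    return e * e

# The KRS-style loss saturates for large errors, so it is insensitive
# to gross outliers, while MSE grows quadratically.
small, outlier = 0.5, 50.0
assert mse_loss(outlier) / mse_loss(small) == 10000.0
assert krs_loss(outlier) / krs_loss(small) < 2.0  # bounded growth
```

Because the kernel term is bounded in [0, 1], the loss is bounded above by (1/λ)·exp(λ), which is the source of the robustness the abstract claims.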
 2

 Universal approximation with convex optimization: Gimmick or reality? [Discussion Forum]
 [Journal], 2015, 10(2): 68-77  EI  SCIE  SCOPUS
 Cited by 19 (Web of Science℠)

Abstract: This paper surveys, in a tutorial fashion, the recent history of universal learning machines, starting with the multilayer perceptron. The big push in recent years has been on the design of universal learning machines using optimization methods linear in the parameters, such as the Echo State Network, the Extreme Learning Machine, and the Kernel Adaptive Filter. We call this class of learning machines convex universal learning machines, or CULMs. The purpose of the paper is to compare the methods behind these CULMs, highlighting their features using concepts of vector spaces (i.e., basis functions and projections), which are easy to understand by the computational intelligence community. We illustrate how two of the CULMs behave in a simple example, and we conclude that it is indeed practical to create universal mappers with convex adaptation, which is an improvement over backpropagation. © 2005-2012 IEEE.
Keywords: Kernel adaptive filters, Optimization method, Multilayer perceptron, Echo state networks, Discussion forum, Learning machines, Universal approximation, Extreme learning machine
 3
 4

 Survival Kernel with Application to Kernel Adaptive Filtering

[Other]
International Joint Conference on Neural Networks (IJCNN), 2013
EI  CPCI-S
 Cited by 0 (Web of Science℠)

 5

 Kernel adaptive filtering with confidence intervals
 [Other] 2013 International Joint Conference on Neural Networks, IJCNN 2013, 2013  EI  SCOPUS  CPCI-S
 Cited by 0 (Web of Science℠)

Abstract: Since its introduction, kernel adaptive filtering (KAF) has attracted considerable attention. Its main advantages include universal nonlinear approximation using kernel methods, linearity with convex learning in the Reproducing Kernel Hilbert Space (RKHS), and online adaptation with moderate complexity. Among its applications, the kernel least mean square (KLMS) algorithm deserves particular attention due to its simplicity and effectiveness for learning complex systems. A major drawback of current implementations of KAF is the lack of a simple determination of the certainty of each estimate. In this paper, we present a novel kernel adaptive filtering architecture with confidence intervals. By introducing an auxiliary filter, the variance of each estimate can be computed using stochastic gradient descent in O(N). Results show that the proposed algorithm produces comparable estimates of the mean and variance functions using only a fraction of the computation associated with Gaussian process (GP) prediction, and is more versatile in cases of time-varying noise variance or heteroskedasticity. © 2013 IEEE.
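The confidence-interval mechanism with its auxiliary filter is not reproducible from the abstract alone, but the underlying KLMS recursion it builds on can be sketched in its generic textbook form (this is not the authors' code; the step size and kernel width below are invented for the demo):

```python
import math

def gauss(x, y, sigma=1.0):
    """Gaussian kernel between two tuples."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

class KLMS:
    """Kernel least mean square: a growing radial-basis expansion whose
    coefficients are the step-size-scaled prediction errors."""
    def __init__(self, eta=0.5, sigma=0.7):
        self.eta, self.sigma = eta, sigma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(a * gauss(x, c, self.sigma)
                   for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)           # prediction error
        self.centers.append(x)            # allocate a new center at the input
        self.coeffs.append(self.eta * e)  # coefficient = eta * error
        return e

# Learn y = sin(x) online by repeated passes over a sample grid.
f = KLMS()
xs = [i / 10.0 for i in range(-20, 21)]
for _ in range(20):
    for x in xs:
        f.update((x,), math.sin(x))
```

After a few passes the expansion approximates the target closely; the paper's contribution is to attach a variance estimate to each such prediction via a second, auxiliary filter.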
 6

 Learning Nonlinear Generative Models of Time Series With a Kalman Filter in RKHS
 [Journal], 2014, 62(1): 141-155  EI  SCIE  SCOPUS  (IF 2.787)
 Cited by 12 (Web of Science℠)

Abstract: This paper presents a novel generative model for time series based on the Kalman filter algorithm in a reproducing kernel Hilbert space (RKHS) using the conditional embedding operator. The end result is a nonlinear model that quantifies the hidden state uncertainty and propagates its probability distribution forward as in the Kalman algorithm. The embedded dynamics can be described by the estimated conditional embedding operator constructed directly from the training measurement data. Using this operator as the counterpart of the state transition matrix, we reformulate the Kalman filter algorithm in RKHS. For the state model, the hidden states are the estimated embeddings of the measurement distribution, while the measurement model serves to connect the estimated measurement embeddings with the current mapped measurements in the RKHS. This novel algorithm is applied to noisy time-series estimation and prediction, and simulation results show that it outperforms other existing algorithms. In addition, improvements are proposed to reduce the size of the operator and the computational complexity.
Keywords: Kalman filter, RKHS, KRLS, conditional embedding operator
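In finite samples, a conditional embedding operator of the kind the abstract mentions is typically estimated from Gram matrices via a kernel-ridge construction. The sketch below shows only that generic building block, predicting a conditional mean from training pairs, not the paper's full Kalman recursion (the data, kernel width, and regularizer λ are invented for the demo):

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    (fine for the small, well-conditioned systems used here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def k(a, b, s=1.0):
    return math.exp(-(a - b) ** 2 / (2 * s * s))

# Training pairs from a linear "transition" y = 0.8 x, for illustration only.
xs = [i / 4.0 for i in range(-8, 9)]
ys = [0.8 * x for x in xs]
lam = 1e-3
K = [[k(a, b) + (lam if i == j else 0.0) for j, b in enumerate(xs)]
     for i, a in enumerate(xs)]

def cond_mean(x):
    """E[y | x] estimated as a weighted sum of training outputs, with
    weights beta = (K + lam*I)^{-1} k_x -- the finite-sample analogue
    of applying an estimated conditional embedding operator."""
    beta = solve(K, [k(x, c) for c in xs])
    return sum(b * y for b, y in zip(beta, ys))
```

The Kalman reformulation in the paper then propagates such embeddings forward in time, with this operator playing the role of the state transition matrix.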
 7

 Sparse kernel recursive least squares using L1 regularization and a fixed-point sub-iteration
 [Other] 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014, 2014: 5257-5261  EI  SCOPUS  CPCI-S
 Cited by 0 (Web of Science℠)

Abstract: A new kernel adaptive filtering (KAF) algorithm, the sparse kernel recursive least squares (SKRLS), is derived by adding a 1-norm penalty on the center coefficients to the least squares (LS) cost (i.e., the sum of squared errors). In each iteration, the center coefficients are updated by a fixed-point sub-iteration. Compared with the original KRLS algorithm, the proposed algorithm produces a much sparser network in which many coefficients are negligibly small; a much more compact structure can thus be achieved by pruning these negligible centers. Simulation results show that the SKRLS performs very well, yielding a very sparse network while preserving desirable performance. © 2014 IEEE.
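The fixed-point sub-iteration itself is not given in the abstract. A standard way to minimize a 1-norm-penalized LS cost is proximal gradient descent with soft-thresholding, shown here only as an illustrative stand-in; the penalty weight γ, step size, and Gram matrix are all invented for the demo:

```python
import math

def soft(z, t):
    """Soft-threshold operator, the proximal map of the 1-norm."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def sparse_ls(K, d, gamma=0.05, step=0.1, iters=3000):
    """Minimise 0.5*||d - K a||^2 + gamma*||a||_1 by proximal gradient
    descent -- a generic stand-in for the paper's fixed-point sub-iteration."""
    m, n = len(K), len(K[0])
    a = [0.0] * n
    for _ in range(iters):
        r = [sum(K[i][j] * a[j] for j in range(n)) - d[i] for i in range(m)]
        g = [sum(K[i][j] * r[i] for i in range(m)) for j in range(n)]
        a = [soft(a[j] - step * g[j], step * gamma) for j in range(n)]
    return a

# Gaussian Gram matrix over 10 centers; the target uses only centers 2 and 7.
xs = [0.5 * i for i in range(10)]
K = [[math.exp(-(p - q) ** 2 / (2 * 0.3 ** 2)) for q in xs] for p in xs]
a_true = [0.0] * 10
a_true[2], a_true[7] = 1.0, -0.8
d = [sum(K[i][j] * a_true[j] for j in range(10)) for i in range(10)]

a = sparse_ls(K, d)
n_small = sum(1 for v in a if abs(v) < 1e-2)  # negligible centers to prune
```

Most coefficients are driven to (near) zero, which is exactly what makes the subsequent center-pruning step in SKRLS effective.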
 8

 Survival kernel with application to kernel adaptive filtering
 [Other] 2013 International Joint Conference on Neural Networks, IJCNN 2013, 2013  EI  SCOPUS  CPCI-S
 Cited by 0 (Web of Science℠)

Abstract: In this paper, we define a new Mercer kernel, the survival kernel, which is closely related to our recently proposed survival information potential (SIP). The new kernel function is parameter-free, simple to calculate, and strictly positive-definite (SPD) over R_+^m; hence it has potential utility in machine learning, especially in online kernel learning. In this work we apply the survival kernel to kernel adaptive filtering, in particular the kernel least mean square (KLMS) algorithm. Simulation results show that KLMS with the survival kernel can achieve satisfactory performance with little computational time and without the need to choose free parameters. © 2013 IEEE.
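The survival kernel's exact formula is not given in the abstract, so the code below does not reproduce it. The min (intersection) kernel shares the advertised properties, being parameter-free and strictly positive-definite on distinct points of R_+^m, and is used here purely as an illustrative example, together with a Cholesky-based check of positive-definiteness:

```python
import math

def min_kernel(x, y):
    """Parameter-free intersection (min) kernel on nonnegative inputs:
    k(x, y) = sum_j min(x_j, y_j). Shown only as an example of a
    parameter-free kernel that is strictly PD on R_+^m."""
    return sum(min(a, b) for a, b in zip(x, y))

def is_pos_def(G, eps=1e-12):
    """Attempt a Cholesky factorization; success implies G is PD."""
    n = len(G)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = G[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= eps:
                    return False
                L[i][i] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return True

# Gram matrix over a few distinct points in R_+^2 is positive-definite.
pts = [(0.2, 1.0), (0.7, 0.3), (1.5, 2.0), (0.9, 0.9)]
G = [[min_kernel(p, q) for q in pts] for p in pts]
```

Any such kernel can be dropped into a KLMS recursion in place of the Gaussian kernel, removing the kernel-width parameter, which is the practical benefit the abstract highlights.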
 9

 Robust Adaptive Algorithm for Smart Antenna System with α-Stable Noise
 [Journal], 2018, 65(11): 1783-1787  SCOPUS  SCIE
 Cited by 1 (Web of Science℠)

Abstract: One of the main problems facing beamforming in smart antenna systems is α-stable noise. To address this problem, a novel algorithm, the recursive continuous logarithmic mixed p-norm (RCLMP) algorithm, which employs a logarithmic cost, is proposed in this paper. The proposed algorithm combines the logarithmic p-norms (1 ≤ p ≤ 2), requires neither parameter selection nor prior knowledge of the α-stable noise, and exhibits good robustness against α-stable noise. Moreover, we derive some H∞ norm bounds for the proposed algorithm. Simulation results show that the RCLMP algorithm outperforms existing algorithms in terms of interference rejection capability and estimation accuracy.
Keywords: α-stable noise, Adaptive algorithm, Adaptive arrays, Array signal processing, Beamforming, Interference, Logarithmic cost, Mathematical model, Robustness, Sensor arrays, Smart antenna
 10

 Trimmed Diffusion Least Mean Squares for Distributed Estimation

[Other]
IEEE International Conference on Digital Signal Processing (DSP), 2015: 643-646
EI  CPCI-S  SCOPUS
 Cited by 0 (Web of Science℠)

Abstract: We consider the problem of distributed estimation, where a set of nodes must collectively estimate network parameters from noisy measurements. The problem is important when modeling a wide class of real-time sensor networks, where efficiency, robustness, and low power consumption are desired features. In this work, we focus on diffusion-based adaptive solutions that are capable of avoiding undue influence from outliers, especially in the presence of impulsive noise or dysfunction of certain nodes. We motivate and propose the trimmed diffusion least mean square (TDLMS) algorithm, which selects a normal neighborhood to update the system estimate. We provide a performance analysis together with simulation results comparing the method with existing approaches.
Keywords: Adaptive networks, diffusion adaptation, distributed estimation, diffusion least mean square
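The paper's exact neighbor-selection rule is not specified in the abstract; a trimmed-mean combiner applied after each node's local LMS adaptation illustrates the idea. All constants, the network topology (fully connected), and the corruption model below are invented for the demo:

```python
import random

def trimmed_mean(vals, trim=1):
    """Average after discarding the `trim` smallest and largest values."""
    s = sorted(vals)
    return sum(s[trim:len(s) - trim]) / (len(s) - 2 * trim)

def diffusion_lms(n_nodes=6, faulty=0, w_true=2.0, mu=0.05, steps=400, seed=1):
    """Adapt-then-combine diffusion LMS with a trimmed-mean combiner.
    Node `faulty` injects gross errors, mimicking a dysfunctional node."""
    rng = random.Random(seed)
    w = [0.0] * n_nodes
    for _ in range(steps):
        # adapt: local LMS step at every node
        psi = []
        for k in range(n_nodes):
            u = rng.uniform(-1, 1)
            d = w_true * u + rng.gauss(0, 0.05)
            if k == faulty:
                d += rng.choice([-20, 20])   # impulsive corruption
            psi.append(w[k] + mu * u * (d - w[k] * u))
        # combine: trimmed mean over the (fully connected) neighborhood,
        # which discards the faulty node's estimate whenever it is extreme
        w = [trimmed_mean(psi) for _ in range(n_nodes)]
    return w
```

With a plain (untrimmed) mean combiner, the faulty node's impulsive updates would bias every node; trimming the extremes keeps the network estimate near the true parameter.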