Abstract:
Kernel least mean square is a simple and effective adaptive algorithm, but it is hampered by its unbounded, ever-growing network size. Many schemes have been proposed to reduce the network size, but few take the distribution of the input data into account. The input data distribution generally matters both for sparsifying the model and for improving generalization performance. In this paper, we introduce an online density-dependent vector quantization scheme, which adopts a shrinkage threshold to adapt its output to the input data distribution. This scheme is then incorporated into the quantized kernel least mean square (QKLMS) to develop a density-dependent QKLMS (DQKLMS). Experiments on static function estimation and short-term chaotic time series prediction are presented to demonstrate the desirable performance of DQKLMS.
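The abstract builds on the standard QKLMS update: quantize each input to an existing codebook center when it falls within a quantization radius (merging its contribution into that center's coefficient), and add a new center otherwise. The following is a minimal sketch of that baseline, assuming a Gaussian kernel and a fixed radius `epsilon`; the paper's density-dependent shrinkage of this threshold is not detailed in the abstract, so it is not reproduced here.

```python
import numpy as np

def gauss(u, v, sigma=1.0):
    # Gaussian kernel k(u, v) = exp(-||u - v||^2 / (2 sigma^2))
    d = u - v
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

class QKLMS:
    """Baseline quantized kernel least mean square (fixed quantization radius).

    DQKLMS, per the abstract, would instead adapt this radius to the local
    density of the input data via a shrinkage threshold.
    """

    def __init__(self, eta=0.5, epsilon=0.05, sigma=0.2):
        self.eta = eta          # step size
        self.epsilon = epsilon  # quantization radius
        self.sigma = sigma      # kernel bandwidth
        self.centers = []       # codebook (quantized inputs)
        self.alphas = []        # expansion coefficients

    def predict(self, u):
        return sum(a * gauss(u, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, u, d):
        # a priori error on the current sample
        e = d - self.predict(u)
        if not self.centers:
            self.centers.append(u)
            self.alphas.append(self.eta * e)
            return e
        dists = [np.linalg.norm(u - c) for c in self.centers]
        j = int(np.argmin(dists))
        if dists[j] <= self.epsilon:
            # within quantization radius: merge into nearest center
            self.alphas[j] += self.eta * e
        else:
            # otherwise grow the network with a new center
            self.centers.append(np.array(u))
            self.alphas.append(self.eta * e)
        return e
```

Run on a static function estimation task (e.g. learning sin(2πx) from streaming samples), the codebook size saturates well below the number of processed samples, which is the network-size reduction the abstract refers to.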
Source:
2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)
ISSN: 2161-4393
Year: 2016
Page: 3564-3569
Language: English
Cited Count:
WoS CC Cited Count: 2
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0