Abstract:
This paper surveys in a tutorial fashion the recent history of universal learning machines, starting with the multilayer perceptron. The big push in recent years has been on the design of universal learning machines using optimization methods linear in the parameters, such as the Echo State Network, the Extreme Learning Machine, and the Kernel Adaptive Filter. We call this class of learning machines convex universal learning machines, or CULMs. The purpose of the paper is to compare the methods behind these CULMs, highlighting their features using concepts of vector spaces (i.e., basis functions and projections), which are easy to understand by the computational intelligence community. We illustrate how two of the CULMs behave in a simple example, and we conclude that indeed it is practical to create universal mappers with convex adaptation, which is an improvement over backpropagation.
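The abstract's key point is that CULMs keep a fixed nonlinear projection and train only a readout that is linear in the parameters, making adaptation a convex problem. A minimal sketch of that idea, in the style of an Extreme Learning Machine (a hypothetical illustration, not code from the paper; all names and sizes are assumptions):

```python
import numpy as np

# Sketch of a CULM in the Extreme Learning Machine style: a fixed random
# hidden layer provides the basis functions, so fitting the readout is an
# ordinary (convex) least-squares problem rather than backpropagation.
rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
b = rng.normal(size=n_hidden)                # random biases (never trained)

H = np.tanh(X @ W + b)                        # fixed nonlinear basis functions
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # convex step: linear least squares

y_hat = H @ beta
print("training MSE:", float(np.mean((y - y_hat) ** 2)))
```

The same pattern covers the other CULMs discussed: an Echo State Network uses a fixed recurrent reservoir instead of a random feedforward layer, and a Kernel Adaptive Filter uses kernel evaluations as the basis, but in each case only the linear readout is adapted.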
Source:
IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE
ISSN: 1556-603X
Year: 2015
Issue: 2
Volume: 10
Page: 68-77
JCR Impact Factor: 3.647 (JCR@2015); 11.356 (JCR@2020)
ESI Discipline: COMPUTER SCIENCE;
ESI HC Threshold: 138
JCR Journal Grade:2
CAS Journal Grade:2
Cited Count:
WoS CC Cited Count: 39
SCOPUS Cited Count: 48
ESI Highly Cited Papers on the List: 0