Abstract:
We present an integrated model for Bayesian learning of a sparse representation and a classifier, and apply it to the task of visual recognition. Most previous work learns the sparse representation and trains the classifier on top of it in two separate steps; we cast the two into a unified probabilistic model, so that the supervised labels can effectively shape the learning of the sparse representation. In the training phase, the joint posterior expectation of the dictionary, codes, classifier, and other variables, given the observed descriptors and labels, is inferred by Gibbs sampling. In the testing phase, the sparse code and class label of an image are obtained by Bayesian inference using the learned parameters. The proposed model is evaluated on the Caltech 101 dataset, and its efficacy is demonstrated by a careful analysis of the experimental results. © Springer International Publishing Switzerland 2013.
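The joint inference the abstract describes, where codes, dictionary, and classifier are updated together so that labels influence the learned representation, can be illustrated with a toy Gibbs-style sampler. Everything below (the function name, Gaussian rather than sparsity-inducing priors, real-valued labels, and the noise scales `sx`, `sy`, `ss`) is a hypothetical simplification chosen so the conditionals stay closed-form; it is a sketch of the general idea, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_joint_sparse_coding(X, y, n_atoms=8, n_iters=30,
                              sx=0.1, sy=0.5, ss=1.0):
    """Toy Gibbs-style sampler for joint dictionary/classifier learning.

    X : (d, n) descriptors, y : (n,) real-valued labels.
    Gaussian priors (not the paper's sparse priors) keep every
    conditional a closed-form Gaussian or least-squares update.
    """
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
    w = rng.standard_normal(n_atoms)          # linear "classifier" weights
    S = rng.standard_normal((n_atoms, n))     # codes, one column per sample

    for _ in range(n_iters):
        # 1) Sample codes from their Gaussian conditional. The label
        #    term (w * y / sy^2) is how supervision shapes the codes.
        P = D.T @ D / sx**2 + np.outer(w, w) / sy**2 + np.eye(n_atoms) / ss**2
        L = np.linalg.cholesky(np.linalg.inv(P))   # covariance factor
        for i in range(n):
            mu = np.linalg.solve(P, D.T @ X[:, i] / sx**2 + w * y[i] / sy**2)
            S[:, i] = mu + L @ rng.standard_normal(n_atoms)
        # 2) Dictionary update: least squares X ~ D S, then renormalise.
        D = X @ S.T @ np.linalg.pinv(S @ S.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12
        # 3) Classifier update: ridge regression of labels on the codes.
        w = np.linalg.solve(S @ S.T + 1e-3 * np.eye(n_atoms), S @ y)
    return D, S, w
```

At test time the same code-sampling step would be run without the label term, and the label predicted from `w @ s`, mirroring the two-phase use described in the abstract.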
Source:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ISSN: 0302-9743
Year: 2013
Publish Date: 2013
Volume: 8294 LNCS
Page: 620-628
Language: English
Impact Factor: 0.402 (JCR@2005)
JCR Journal Grade: 2
Cited Count:
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0