Abstract:
This paper proposes a sparse representation layer in the feature extraction stage of a convolutional neural network (CNN). Our goal is to add sparse transforms to a target network to improve its performance without introducing an extra computational burden. First, the proposed method was realized by inserting the sparse representation layers into a target network's shallow layers, and the network was trained end-to-end with a supervised learning algorithm. Second, in the forward pass, the network captured features through the convolutional layers and the sparse representation layers, which were implemented with wavelet and shearlet transforms. Third, in the backward pass, the weights of the network's learned kernels were updated through the back-propagated error, while the sparse representation layers were fixed and did not require updating. The proposed method was verified on five image classification datasets: FOOD-101, CIFAR10/100, DTD, Brodatz and ImageNet. The experimental results show that the proposed method leads to higher recognition accuracy in image classification, and the additional computational cost is small relative to the baseline CNN model. © 2020 Elsevier B.V.
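The abstract describes inserting fixed (non-trainable) wavelet/shearlet sparse-representation layers into a CNN's shallow layers, with only the convolutional kernels updated by back-propagation. Below is a minimal PyTorch sketch of one way such a layer could be wired in, using a single-level Haar wavelet decomposition as a frozen depthwise convolution; the class names, layer sizes and architecture are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarWaveletLayer(nn.Module):
    """Fixed single-level Haar wavelet decomposition, applied depthwise so
    each input channel yields LL/LH/HL/HH sub-bands (hypothetical stand-in
    for the paper's wavelet/shearlet sparse representation layer)."""
    def __init__(self, channels):
        super().__init__()
        h = 0.5
        ll = torch.tensor([[h, h], [h, h]])
        lh = torch.tensor([[h, h], [-h, -h]])
        hl = torch.tensor([[h, -h], [h, -h]])
        hh = torch.tensor([[h, -h], [-h, h]])
        bank = torch.stack([ll, lh, hl, hh]).unsqueeze(1)   # (4, 1, 2, 2)
        bank = bank.repeat(channels, 1, 1, 1)               # (4*C, 1, 2, 2)
        # Registered as a buffer: used in the forward pass but receives no
        # gradient updates, mirroring the fixed sparse representation layer.
        self.register_buffer("bank", bank)
        self.channels = channels

    def forward(self, x):
        # Depthwise conv with stride 2: spatial size halves, channels x4.
        return F.conv2d(x, self.bank, stride=2, groups=self.channels)

class SmallCNN(nn.Module):
    """Toy target network with the fixed transform inserted after the first
    (shallow) convolution; only conv/fc weights are learned."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.wavelet = HaarWaveletLayer(16)   # 16 -> 64 channels, H/2 x W/2
        self.conv2 = nn.Conv2d(64, 32, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.wavelet(x)                   # fixed sparse transform
        x = F.relu(self.conv2(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)

if __name__ == "__main__":
    model = SmallCNN()
    out = model(torch.randn(2, 3, 32, 32))    # e.g. CIFAR-sized input
    print(out.shape)                          # torch.Size([2, 10])
```

Because the wavelet bank is a buffer rather than a parameter, training this model updates only `conv1`, `conv2` and `fc`, consistent with the abstract's statement that the sparse representation layers are fixed during the backward pass.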
Source:
Knowledge-Based Systems
ISSN: 0950-7051
Year: 2020
Volume: 209
Impact Factor: 8.038 (JCR 2020)
ESI Discipline: COMPUTER SCIENCE;
ESI HC Threshold: 70
CAS Journal Grade: 2
Cited Count:
WoS CC Cited Count: 6
SCOPUS Cited Count: 15
ESI Highly Cited Papers on the List: 0