Abstract:
CT images are widely used in clinical diagnosis. However, owing to hardware constraints and limited scanning time, CT images acquired in real scenarios have limited spatial resolution, which prevents doctors from accurately analyzing tiny lesion areas and pathological features. Deep-learning-based image super-resolution (SR) is an effective way to address this problem. Although many excellent networks have been proposed, most emphasize image quality metrics rather than perceptual quality. In contrast, the super-resolution generative adversarial network (SRGAN) has achieved substantial improvements in image perceptual quality. Accordingly, this paper proposes a CT image super-resolution algorithm based on an improved SRGAN. A dilated convolution module is introduced to improve the visual quality of CT images, and a mean structural similarity (MSSIM) loss is incorporated into the perceptual loss function to improve the overall visual effect. Experimental results on a public CT image dataset demonstrate that our model outperforms the baseline SRGAN not only in mean opinion score (MOS) but also in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). © 2020 IEEE.
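The abstract mentions adding an MSSIM term to the perceptual loss. The paper itself does not give the formula here, so the following is a minimal numpy sketch under common assumptions: SSIM is computed per patch with the standard constants (C1 = (0.01·255)², C2 = (0.03·255)² for 8-bit images), MSSIM is the mean over non-overlapping windows, and the blending weight `alpha` in `perceptual_loss` is a hypothetical parameter, not taken from the paper.

```python
import numpy as np

def ssim_window(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # SSIM for one patch: luminance/contrast/structure terms in the
    # standard combined form, with stabilizing constants c1, c2.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mssim(img1, img2, win=8):
    # Mean SSIM: average patch SSIM over non-overlapping win x win windows.
    h, w = img1.shape
    scores = [ssim_window(img1[i:i + win, j:j + win],
                          img2[i:i + win, j:j + win])
              for i in range(0, h - win + 1, win)
              for j in range(0, w - win + 1, win)]
    return float(np.mean(scores))

def perceptual_loss(content_loss, img_sr, img_hr, alpha=0.5):
    # Hypothetical combination (alpha is an assumption, not from the paper):
    # blend an existing content loss with (1 - MSSIM), so that higher
    # structural similarity between SR output and HR target lowers the loss.
    return alpha * content_loss + (1 - alpha) * (1.0 - mssim(img_sr, img_hr))
```

Because SSIM of an image with itself is 1, the MSSIM term vanishes for a perfect reconstruction, leaving only the scaled content loss; in practice the paper applies such a loss to the generator alongside the adversarial objective.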
Year: 2020
Page: 363-367
Language: English
Cited Count:
WoS CC Cited Count: 3
SCOPUS Cited Count: 18
ESI Highly Cited Papers on the List: 0