
Query:

Scholar name: Sun Jian (孙剑)

A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation CPCI-S
Conference Paper | 2021, 12903, 127-137 | International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)

Abstract:

Cross-contrast image translation is an important task for completing missing contrasts in clinical diagnosis. However, most existing methods learn a separate translator for each pair of contrasts, which is inefficient given the many possible contrast pairs in real scenarios. In this work, we propose a unified Hyper-GAN model for effectively and efficiently translating between different contrast pairs. Hyper-GAN consists of a hyper-encoder and a hyper-decoder that first map the source contrast to a common feature space and then map further to the target contrast image. To facilitate translation between different contrast pairs, contrast modulators are designed to adaptively tune the hyper-encoder and hyper-decoder to different contrasts. We also design a common space loss to enforce that the multi-contrast images of a subject share a common feature space, implicitly modeling the shared underlying anatomical structures. Experiments on the IXI and BraTS 2019 datasets show that Hyper-GAN achieves state-of-the-art results in both accuracy and efficiency, e.g., improving PSNR by more than 1.47 and 1.09 dB on the two datasets with less than half the number of parameters.
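The abstract does not give the common space loss in closed form; a minimal sketch, assuming a plain mean-squared distance between the hyper-encoder features of two contrasts of the same subject, could look like the following (all tensors and the 0.01 noise scale are illustrative stand-ins, not values from the paper):

```python
import numpy as np

def common_space_loss(feat_a, feat_b):
    """Mean squared distance between hyper-encoder features of two contrasts
    of the same subject, pushing both into one shared feature space."""
    return float(np.mean((feat_a - feat_b) ** 2))

rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 16, 16))                     # ideal common-space feature
feat_t1 = shared + 0.01 * rng.normal(size=shared.shape)   # encoder output for T1
feat_t2 = shared + 0.01 * rng.normal(size=shared.shape)   # encoder output for T2
loss = common_space_loss(feat_t1, feat_t2)
```

Minimizing such a term over all contrast pairs of a subject is one way to realize the "shared underlying anatomical structure" constraint described above.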

Keyword:

Multi-contrast MR; Unpaired image translation; Unified hyper-GAN

Cite:

GB/T 7714: Yang, Heran, Sun, Jian, Yang, Liwei, et al. A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation [C]. 2021: 127-137.
MLA: Yang, Heran, et al. "A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation." (2021): 127-137.
APA: Yang, Heran, Sun, Jian, Yang, Liwei, & Xu, Zongben. A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation. (2021): 127-137.
Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis EI SCIE PubMed
Journal Article | 2021, 21 (12) | SENSORS

Abstract:

Shape classification and segmentation of point cloud data are two of the most demanding tasks in photogrammetry and remote sensing applications, which aim to recognize object categories or point labels. Point convolution is an essential operation when designing a network on point clouds for these tasks, which helps to explore 3D local points for feature learning. In this paper, we propose a novel point convolution (PSConv) using separable weights learned with polynomials for 3D point cloud analysis. Specifically, we generalize the traditional convolution defined on the regular data to a 3D point cloud by learning the point convolution kernels based on the polynomials of transformed local point coordinates. We further propose a separable assumption on the convolution kernels to reduce the parameter size and computational cost for our point convolution. Using this novel point convolution, a hierarchical network (PSNet) defined on the point cloud is proposed for 3D shape analysis tasks such as 3D shape classification and segmentation. Experiments are conducted on standard datasets, including synthetic and real scanned ones, and our PSNet achieves state-of-the-art accuracies for shape classification, as well as competitive results for shape segmentation compared with previous methods.
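The separable kernel idea above can be illustrated with a small sketch: the weight of each neighboring point is the product of three 1-D polynomials of its local (x, y, z) offsets. The coefficients below are fixed placeholders (in PSConv they would be learned), and the feature sizes are arbitrary:

```python
import numpy as np

def poly_eval(coeffs, t):
    # Evaluate sum_k coeffs[k] * t**k, a 1-D polynomial in one coordinate.
    return sum(c * t ** k for k, c in enumerate(coeffs))

def separable_kernel_weights(local_xyz, cx, cy, cz):
    """Separable point-convolution weights: the weight of each neighbor is
    the product of three per-axis polynomials of its coordinate offsets."""
    x, y, z = local_xyz[:, 0], local_xyz[:, 1], local_xyz[:, 2]
    return poly_eval(cx, x) * poly_eval(cy, y) * poly_eval(cz, z)

rng = np.random.default_rng(1)
neighbors = rng.uniform(-1, 1, size=(16, 3))   # local coordinates of 16 neighbors
cx, cy, cz = [1.0, 0.5, 0.25], [1.0, -0.3], [1.0, 0.2, 0.1]
w = separable_kernel_weights(neighbors, cx, cy, cz)
feats = rng.normal(size=(16, 8))               # per-neighbor input features
out = w @ feats                                # weighted aggregation (8 channels)
```

Note the parameter saving that motivates the separable assumption: this kernel needs only len(cx) + len(cy) + len(cz) coefficients instead of one coefficient per 3-D monomial.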

Keyword:

separable; point cloud; point convolution; polynomial

Cite:

GB/T 7714: Yu, Ruixuan, Sun, Jian. Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis [J]. SENSORS, 2021, 21 (12).
MLA: Yu, Ruixuan, et al. "Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis." SENSORS 21.12 (2021).
APA: Yu, Ruixuan, & Sun, Jian. Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis. SENSORS, 2021, 21 (12).
Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency EI SCIE
Journal Article | 2021, 30, 3419-3433 | IEEE Transactions on Image Processing

Abstract:

Depth and defocus map estimation are two fundamental tasks in computer vision. Recently, many methods have explored the two tasks separately, exploiting the powerful feature-learning ability of deep learning, and have achieved impressive progress. However, due to the difficulty of densely labeling depth and defocus on real images, these methods are mostly trained on synthetic datasets, and the performance of the learned networks degrades significantly on real images. In this paper, we tackle a new task that jointly estimates depth and defocus from a single image. We design a dual network with two subnets for estimating depth and defocus respectively. The network is jointly trained on a synthetic dataset with a physical constraint that enforces consistency between depth and defocus. Moreover, we design a simple method to label depth and defocus order on a real image dataset, and two novel metrics to measure the accuracy of depth and defocus estimation on real images. Comprehensive experiments demonstrate that joint training with the physical consistency constraint enables the two subnets to guide each other and effectively improves their depth and defocus estimation performance on a real defocused image dataset.
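The physical consistency between depth and defocus can be sketched with the standard thin-lens circle-of-confusion relation (the paper's exact constraint may differ; the camera parameters below are arbitrary illustrative values):

```python
import numpy as np

def coc_from_depth(depth, focus_dist, focal_len, aperture):
    """Thin-lens circle of confusion implied by a depth map:
    c = A * |d - d_f| / d * f / (d_f - f)."""
    return aperture * np.abs(depth - focus_dist) / depth \
        * focal_len / (focus_dist - focal_len)

def physical_consistency_loss(pred_depth, pred_defocus,
                              focus_dist, focal_len, aperture):
    # Penalize disagreement between the defocus subnet's output and the
    # defocus implied by the depth subnet through the thin-lens model.
    implied = coc_from_depth(pred_depth, focus_dist, focal_len, aperture)
    return float(np.mean(np.abs(pred_defocus - implied)))

depth = np.array([[1.0, 2.0], [3.0, 4.0]])  # depths in meters (toy example)
defocus = coc_from_depth(depth, focus_dist=2.0, focal_len=0.05, aperture=0.02)
loss = physical_consistency_loss(depth, defocus, 2.0, 0.05, 0.02)
```

A perfectly consistent depth/defocus pair drives the loss to zero, which is what lets each subnet supervise the other during joint training.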

Keyword:

Deep learning; Image enhancement

Cite:

GB/T 7714: Zhang, Anmei, Sun, Jian. Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency [J]. IEEE Transactions on Image Processing, 2021, 30: 3419-3433.
MLA: Zhang, Anmei, et al. "Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency." IEEE Transactions on Image Processing 30 (2021): 3419-3433.
APA: Zhang, Anmei, & Sun, Jian. Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency. IEEE Transactions on Image Processing, 2021, 30, 3419-3433.
Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis EI Scopus
Conference Paper | 2020, 12355 LNCS, 217-233 | 16th European Conference on Computer Vision, ECCV 2020

Abstract :

In this paper we propose a rotation-invariant deep network for point clouds analysis. Point-based deep networks are commonly designed to recognize roughly aligned 3D shapes based on point coordinates, but suffer from performance drops with shape rotations. Some geometric features, e.g., distances and angles of points as inputs of network, are rotation-invariant but lose positional information of points. In this work, we propose a novel deep network for point clouds by incorporating positional information of points as inputs while yielding rotation-invariance. The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block. Both modules and the whole network are proven to be rotation-invariant when processing point clouds as input. Experiments show state-of-the-art classification and segmentation performances on benchmark datasets, and ablation studies demonstrate effectiveness of the network design. © 2020, Springer Nature Switzerland AG.
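The distance-and-angle features mentioned above can be checked for rotation invariance directly; this sketch (not the paper's actual embedding blocks) builds such features for a point pair relative to the cloud centroid and verifies they are unchanged under a random orthogonal transform:

```python
import numpy as np

def pair_features(p_i, p_j, centroid):
    """Rotation-invariant features of a point pair: pairwise distance,
    distances to the centroid, and the angle between the centroid vectors,
    instead of raw coordinates."""
    a, b = p_i - centroid, p_j - centroid
    d_ab = np.linalg.norm(p_i - p_j)
    d_a, d_b = np.linalg.norm(a), np.linalg.norm(b)
    cos_ab = a @ b / (d_a * d_b + 1e-12)
    return np.array([d_ab, d_a, d_b, cos_ab])

def random_orthogonal(rng):
    # QR of a Gaussian matrix gives a random orthogonal matrix; any
    # orthogonal map preserves distances and dot products.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q

rng = np.random.default_rng(2)
pts = rng.normal(size=(10, 3))
f = pair_features(pts[0], pts[1], pts.mean(axis=0))
R = random_orthogonal(rng)
pts_rot = pts @ R.T                             # rotate the whole cloud
f_rot = pair_features(pts_rot[0], pts_rot[1], pts_rot.mean(axis=0))
```

The trade-off the abstract points out is visible here: `f` is invariant, but it no longer tells the network *where* the points lie, which is the gap the positional feature embedding block is designed to close.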

Keyword:

Deep learning; Classification (of information); Embeddings; Computer vision; Rotation

Cite:

GB/T 7714: Yu, Ruixuan, Wei, Xin, Tombari, Federico, et al. Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis [C]. 2020: 217-233.
MLA: Yu, Ruixuan, et al. "Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis." (2020): 217-233.
APA: Yu, Ruixuan, Wei, Xin, Tombari, Federico, & Sun, Jian. Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis. (2020): 217-233.
Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN EI SCIE
Journal Article | 2020, 39 (12), 4249-4261 | IEEE TRANSACTIONS ON MEDICAL IMAGING | IF: 10.048
WoS CC Cited Count: 4

Abstract:

Synthesizing a CT image from an available MR image has recently emerged as a key goal in radiotherapy treatment planning for cancer patients. CycleGANs have achieved promising results on unsupervised MR-to-CT image synthesis; however, because they impose no direct constraints between the input and synthetic images, they do not guarantee structural consistency between the two. This means that anatomical geometry can be shifted in the synthetic CT images, a highly undesirable outcome in this application. In this paper, we propose a structure-constrained CycleGAN for unsupervised MR-to-CT synthesis that adds a structure-consistency loss based on the modality-independent neighborhood descriptor (MIND). We also use spectral normalization to stabilize training and a self-attention module to model long-range spatial dependencies in the synthetic images. Results on unpaired brain and abdomen MR-to-CT synthesis show that our method produces better synthetic CT images in both accuracy and visual quality than other unsupervised synthesis methods. We also show that an approximate affine pre-registration of the unpaired training data can further improve the synthesis results.
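The structure-consistency idea can be sketched with a heavily simplified MIND descriptor (a 4-neighborhood self-similarity map; the real MIND uses larger patch comparisons and a different variance estimate), penalizing descriptor differences between the input MR and the synthetic CT:

```python
import numpy as np

SHIFTS = [(0, 1), (0, -1), (1, 0), (-1, 0)]     # 4-neighborhood (simplified MIND)

def mind(img, eps=1e-8):
    """Simplified modality-independent neighborhood descriptor: per-pixel
    self-similarity to shifted copies, normalized by a local variance."""
    diffs = np.stack([(img - np.roll(img, s, axis=(0, 1))) ** 2 for s in SHIFTS])
    v = diffs.mean(axis=0) + eps                 # local variance estimate
    d = np.exp(-diffs / v)
    return d / d.max(axis=0, keepdims=True)      # per-pixel normalization

def structure_consistency_loss(src, syn):
    return float(np.mean(np.abs(mind(src) - mind(syn))))

rng = np.random.default_rng(3)
mr = rng.uniform(size=(16, 16))
ct_same_structure = 2.0 * mr + 5.0               # new intensities, same structure
shuffled = rng.permutation(mr.ravel()).reshape(16, 16)  # same intensities, broken structure
```

Because the descriptor depends only on local intensity *relations*, an intensity remapping of the image (a stand-in for an MR-to-CT contrast change) leaves it nearly unchanged, while a structural change does not — which is exactly the property the loss exploits.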

Keyword:

Array signal processing; 5G mobile communication; MR-to-CT synthesis; Antenna arrays; MIND; Lenses; Matching pursuit algorithms; CycleGAN; deep learning; MIMO communication; Channel estimation

Cite:

GB/T 7714: Yang, Heran, Sun, Jian, Carass, Aaron, et al. Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (12): 4249-4261.
MLA: Yang, Heran, et al. "Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN." IEEE TRANSACTIONS ON MEDICAL IMAGING 39.12 (2020): 4249-4261.
APA: Yang, Heran, Sun, Jian, Carass, Aaron, Zhao, Can, Lee, Junghoon, Prince, Jerry L., et al. Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (12), 4249-4261.
View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis EI CPCI-S Scopus
Conference Paper | 2020, 1847-1856 | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
WoS CC Cited Count: 4 | SCOPUS Cited Count: 17

Abstract:

The view-based approach, which recognizes a 3D shape through its projected 2D images, has achieved state-of-the-art results for 3D shape recognition. Its major challenge is how to aggregate multi-view features into a global shape descriptor. In this work, we propose a novel view-based graph convolutional neural network, dubbed view-GCN, to recognize 3D shapes from a graph representation of multiple views in flexible view configurations. We first construct a view-graph with the views as graph nodes, then design a graph convolutional network over the view-graph to hierarchically learn a discriminative shape descriptor that accounts for the relations among views. View-GCN is a hierarchical network based on local and non-local graph convolution for feature transformation, and selective view-sampling for graph coarsening. Extensive experiments on benchmark datasets show that view-GCN achieves state-of-the-art results for 3D shape classification and retrieval.
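One local graph-convolution step on a view-graph can be sketched as follows (the adjacency, features, and weight matrix are random stand-ins; view-GCN's actual layers and view-sampling are more elaborate):

```python
import numpy as np

def view_graph_conv(view_feats, adj, w):
    """One local graph convolution on the view-graph: each view's feature is
    averaged with its neighbors' (row-normalized adjacency with self-loops),
    linearly transformed, and passed through a ReLU."""
    a = adj + np.eye(adj.shape[0])              # add self-loops
    a = a / a.sum(axis=1, keepdims=True)        # row-normalize
    return np.maximum((a @ view_feats) @ w, 0.0)

rng = np.random.default_rng(5)
n_views, dim = 12, 32
feats = rng.normal(size=(n_views, dim))         # per-view CNN features (stand-in)
adj = (rng.uniform(size=(n_views, n_views)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)                    # undirected view-graph
w = rng.normal(scale=0.1, size=(dim, dim))      # learned weight (random stand-in)
out = view_graph_conv(feats, adj, w)
shape_descriptor = out.max(axis=0)              # aggregate all views by max-pooling
```

The final pooling over view nodes is what turns per-view features into the single global shape descriptor the abstract refers to.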

Cite:

GB/T 7714: Wei, Xin, Yu, Ruixuan, Sun, Jian. View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis [C]. 2020: 1847-1856.
MLA: Wei, Xin, et al. "View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis." (2020): 1847-1856.
APA: Wei, Xin, Yu, Ruixuan, & Sun, Jian. View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis. (2020): 1847-1856.
Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image EI
Conference Paper | 2020, 12262 LNCS, 188-198 | 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020

Abstract:

Speeding up Magnetic Resonance Imaging (MRI) is an essential task when capturing multi-contrast MR images for medical diagnosis. Some MRI sequences, e.g., T2-weighted imaging, require long scanning times, while T1-weighted images are captured by short-time sequences. To accelerate MRI, we propose a model-driven deep attention network, dubbed MD-DAN, that reconstructs a highly under-sampled long-time-sequence MR image with the guidance of a short-time-sequence MR image. MD-DAN is a deep architecture inspired by the iterative algorithm optimizing a novel MRI reconstruction model regularized by a cross-contrast prior using a guidance contrast image. The network automatically learns the cross-contrast prior by learning the corresponding proximal operator, whose backbone is designed as a dual-path convolutional network with channel and spatial attention modules. Experimental results on a brain MRI dataset substantiate the superiority of our method, with significantly improved accuracy; for example, MD-DAN achieves a PSNR of up to 35.04 dB at the ultra-fast 1/32 sampling rate.
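The model-driven unrolling described above alternates a data-consistency step with a learned proximal operator. A minimal sketch of the skeleton (single-coil, guidance branch omitted, and the learned proximal network replaced by a fixed smoothing step — all simplifications relative to MD-DAN):

```python
import numpy as np

def data_consistency(x, y, mask):
    """Replace sampled k-space entries of the current estimate with the
    measured ones: the exact proximal step for the CS-MRI data-fit term."""
    k = np.fft.fft2(x)
    k = np.where(mask, y, k)
    return np.fft.ifft2(k)

def unrolled_recon(y, mask, prox, n_iters=8):
    # Alternate data consistency and a (learned) proximal operator; here the
    # proximal network is stood in for by a fixed smoothing step.
    x = np.fft.ifft2(y)                          # zero-filled initialization
    for _ in range(n_iters):
        x = prox(data_consistency(x, y, mask))
    return x

def box_smooth(x):                               # stand-in for the prior network
    return 0.5 * x + 0.25 * (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0))

rng = np.random.default_rng(6)
gt = rng.uniform(size=(32, 32))
mask = rng.uniform(size=(32, 32)) < 0.25         # under-sampling pattern
y = np.fft.fft2(gt) * mask                       # measured under-sampled k-space
rec = unrolled_recon(y, mask, box_smooth)
```

Whatever the proximal network does, the data-consistency step guarantees the reconstruction agrees with the measurements at sampled k-space locations, which is the "model-driven" part of the design.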

Keyword:

Diagnosis; Image reconstruction; Iterative methods; Magnetic resonance imaging; Medical computing; Convolutional neural networks; Medical imaging; Compressed sensing

Cite:

GB/T 7714: Yang, Yan, Wang, Na, Yang, Heran, et al. Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image [C]. 2020: 188-198.
MLA: Yang, Yan, et al. "Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image." (2020): 188-198.
APA: Yang, Yan, Wang, Na, Yang, Heran, Sun, Jian, & Xu, Zongben. Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image. (2020): 188-198.
Neural Diffusion Distance for Image Segmentation EI CPCI-S
Conference Paper | 2019, 32 | 33rd Conference on Neural Information Processing Systems (NeurIPS)

Abstract:

Diffusion distance is a spectral method for measuring the distance between nodes on a graph while accounting for the global data structure. In this work, we propose spec-diff-net for computing diffusion distance on a graph based on approximate spectral decomposition. The network is a differentiable deep architecture consisting of feature extraction and diffusion distance modules, and computes diffusion distance on an image by end-to-end training. We design a low-resolution kernel matching loss and a high-resolution segment matching loss to enforce that the network's output is consistent with human-labeled image segments. To compute high-resolution diffusion distance or segmentation masks, we design an up-sampling strategy based on feature-attentional interpolation, which is learned when training spec-diff-net. With the learned diffusion distance, we propose a hierarchical image segmentation method that outperforms previous segmentation methods. Moreover, a weakly supervised semantic segmentation network designed using diffusion distance achieves promising results on the PASCAL VOC 2012 segmentation dataset.
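The classical diffusion distance that spec-diff-net approximates can be computed exactly on a small graph from the spectral decomposition of the normalized affinity (spec-diff-net replaces this exact eigendecomposition with an approximate, differentiable one; the toy two-cluster data below is illustrative):

```python
import numpy as np

def diffusion_distance(affinity, t=2):
    """All-pairs diffusion distance at time t from the spectral decomposition
    of the symmetrically normalized affinity matrix."""
    d = affinity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    s = d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    lam, phi = np.linalg.eigh(s)                 # spectrum of normalized affinity
    psi = d_inv_sqrt[:, None] * phi              # right eigenvectors of random walk
    coords = psi * lam[None, :] ** t             # diffusion map at time t
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

rng = np.random.default_rng(7)
pts = np.concatenate([rng.normal(0, 0.1, (5, 2)),   # cluster A
                      rng.normal(3, 0.1, (5, 2))])  # cluster B
aff = np.exp(-np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) ** 2)
dd = diffusion_distance(aff)
```

Because the distance integrates over all random-walk paths, points in the same cluster end up far closer to each other than to points across the cluster gap — the "global data structure" property the abstract highlights.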

Cite:

GB/T 7714: Sun, Jian, Xu, Zongben. Neural Diffusion Distance for Image Segmentation [C]. 2019.
MLA: Sun, Jian, et al. "Neural Diffusion Distance for Image Segmentation." (2019).
APA: Sun, Jian, & Xu, Zongben. Neural Diffusion Distance for Image Segmentation. (2019).
A Prior Learning Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging EI CPCI-S
Conference Paper | 2019, 11767, 732-740 | 10th International Workshop on Machine Learning in Medical Imaging (MLMI) / 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)
WoS CC Cited Count: 3

Abstract:

Parallel imaging is a fast magnetic resonance imaging technique based on spatial sensitivity encoding with multiple coils. To reconstruct a high-quality MR image from under-sampled k-space data, we propose a novel deep network, dubbed Blind-PMRI-Net, that simultaneously reconstructs the MR image and the sensitivity maps in a blind setting for parallel imaging. Blind-PMRI-Net is a deep architecture inspired by the iterative algorithm optimizing a novel energy model for joint image and sensitivity estimation based on image and sensitivity priors. The network automatically learns these two priors by learning their corresponding proximal operators using convolutional neural networks, naturally combining the physical constraint of parallel imaging and prior learning in a single deep architecture. Experiments on a knee MRI dataset show that our network reconstructs MR images with higher accuracy than previous methods, at fast computational speed. For example, Blind-PMRI-Net takes 0.72 s on a GPU to reconstruct 15-channel sensitivity maps and a complex-valued MR image of size 320 x 320.
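The joint estimation can be sketched as plain least-squares alternation on the SENSE forward model (fully sampled, no k-space masking, and no priors — the priors are exactly what Blind-PMRI-Net learns; sizes and coil count here are toy values):

```python
import numpy as np

def coil_images(x, sens):
    """SENSE forward model per coil: pointwise multiply image by coil
    sensitivity map."""
    return sens * x[None]

def update_image(coil_imgs, sens, eps=1e-8):
    # Least-squares image update with sensitivities held fixed.
    return (np.conj(sens) * coil_imgs).sum(0) / ((np.abs(sens) ** 2).sum(0) + eps)

def update_sens(coil_imgs, x, eps=1e-8):
    # Least-squares sensitivity update with the image held fixed.
    return coil_imgs * np.conj(x)[None] / (np.abs(x)[None] ** 2 + eps)

rng = np.random.default_rng(8)
x_true = rng.uniform(0.5, 1.0, size=(16, 16))
s_true = rng.uniform(0.5, 1.0, size=(4, 16, 16))  # 4 coil sensitivity maps
obs = coil_images(x_true, s_true)

x = np.ones((16, 16))                             # blind alternating estimation
for _ in range(5):
    s = update_sens(obs, x)
    x = update_image(obs, s)
```

The alternation fits the data perfectly, but the image/sensitivity split it finds is arbitrary (any rescaling of one can be absorbed by the other) — which illustrates why the blind problem is ill-posed without the image and sensitivity priors the network learns.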

Keyword:

Prior learning; Deep learning; Parallel imaging

Cite:

GB/T 7714: Meng, Nan, Yang, Yan, Xu, Zongben, et al. A Prior Learning Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging [C]. 2019: 732-740.
MLA: Meng, Nan, et al. "A Prior Learning Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging." (2019): 732-740.
APA: Meng, Nan, Yang, Yan, Xu, Zongben, & Sun, Jian. A Prior Learning Network for Joint Image and Sensitivity Estimation in Parallel MR Imaging. (2019): 732-740.
A tensor-based nonlocal total variation model for multi-channel image recovery EI SCIE Scopus
Journal Article | 2018, 153, 321-335 | SIGNAL PROCESSING | IF: 4.086
WoS CC Cited Count: 4 | SCOPUS Cited Count: 5

Abstract:

In this paper, a new nonlocal total variation (NLTV) regularizer is proposed for solving inverse problems in multi-channel image processing. Unlike existing nonlocal total variation regularizers that rely on the graph gradient, the proposed regularizer involves the standard image gradient and simultaneously exploits three important properties inherent in multi-channel images through a tensor nuclear norm; hence we call the proposed functional tensor-based nonlocal total variation (TenNLTV). Specifically, these three properties are local structural image regularity, nonlocal image self-similarity, and inter-channel correlation. By fully utilizing these three properties, TenNLTV provides a more robust measure of image variation. Based on TenNLTV, a novel regularization model for inverse imaging problems is then presented. Moreover, an effective algorithm is designed for the proposed model, and a closed-form solution is derived for a second-order complex eigen-system in the algorithm. Extensive experimental results on several inverse imaging problems demonstrate that the proposed regularizer consistently outperforms other competing local and nonlocal regularization approaches, both quantitatively and visually.
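A key ingredient above — coupling the channels through a low-rank penalty on stacked gradients — can be illustrated in isolation (this sketch covers only the nuclear-norm/channel-correlation part, not the full nonlocal regularizer or its optimization algorithm):

```python
import numpy as np

def nuclear_norm(mat):
    """Sum of singular values: the convex surrogate for matrix rank that
    couples the channels in the tensor-based regularizer."""
    return float(np.linalg.svd(mat, compute_uv=False).sum())

def stacked_gradient_norm(img):
    # Stack the horizontal and vertical gradients of every channel as rows of
    # one matrix and take its nuclear norm: gradients of correlated channels
    # are near low-rank, so the norm stays small for clean multi-channel images.
    grads = [np.diff(img[c], axis=a).ravel()
             for c in range(img.shape[0]) for a in (0, 1)]
    n = min(len(g) for g in grads)
    return nuclear_norm(np.stack([g[:n] for g in grads]))

rng = np.random.default_rng(9)
base = rng.uniform(size=(16, 16))
correlated = np.stack([base, 0.9 * base, 1.1 * base])   # strongly correlated channels
noisy = correlated + 0.5 * rng.normal(size=correlated.shape)
```

Noise breaks the inter-channel correlation and inflates the singular-value spectrum, so penalizing this norm pushes a recovery toward channel-consistent, piecewise-regular solutions.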

Keyword:

Inverse problems; Total variation; Tensor; Multi-channel; Nonlocal regularization; Image reconstruction

Cite:

GB/T 7714: Cao, Wenfei, Yao, Jing, Sun, Jian, et al. A tensor-based nonlocal total variation model for multi-channel image recovery [J]. SIGNAL PROCESSING, 2018, 153: 321-335.
MLA: Cao, Wenfei, et al. "A tensor-based nonlocal total variation model for multi-channel image recovery." SIGNAL PROCESSING 153 (2018): 321-335.
APA: Cao, Wenfei, Yao, Jing, Sun, Jian, & Han, Guodong. A tensor-based nonlocal total variation model for multi-channel image recovery. SIGNAL PROCESSING, 2018, 153, 321-335.
Address: Xi'an Jiaotong University Library, No. 28 Xianning West Road, Xi'an, Shaanxi, 710049 | Contact: 029-82667865
Copyright: Xi'an Jiaotong University Library | Technical support: Beijing Aegean Software Co., Ltd.