Query:

Scholar name: Sun Jian

Modality-Adaptive Feature Interaction for Brain Tumor Segmentation with Missing Modalities CPCI-S Scopus
Journal article | 2022, 13435, 183-192 | MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT V
SCOPUS Cited Count: 12

Abstract:

Multi-modal Magnetic Resonance Imaging (MRI) plays a crucial role in brain tumor segmentation. However, missing modalities are a common phenomenon in clinical practice, leading to performance degradation in tumor segmentation. Since complementary information exists among modalities, feature interaction among modalities is important for tumor segmentation. In this work, we propose Modality-adaptive Feature Interaction (MFI) with a multi-modal code to adaptively interact features among modalities under different modality-missing situations. MFI is a simple yet effective unit, based on a graph structure and an attention mechanism, that learns and interacts complementary features between graph nodes (modalities). Meanwhile, the proposed multi-modal code, indicating whether each modality is missing, guides MFI to learn adaptive complementary information between nodes in different missing situations. Applying MFI with the multi-modal code at different stages of a U-shaped architecture, we design a novel network, U-Net-MFI, to interact multi-modal features hierarchically and adaptively for brain tumor segmentation with missing modalities. Experiments show that our model outperforms the current state-of-the-art methods for brain tumor segmentation with missing modalities.
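As a rough sketch of the masked interaction idea above, the snippet below gates a single-head graph attention over per-modality features with a binary multi-modal code; the class name, sizes, and single-head design are illustrative assumptions, not the authors' MFI implementation.

```python
import torch
import torch.nn as nn

class ModalityFeatureInteraction(nn.Module):
    """Sketch of graph-attention interaction among per-modality features,
    gated by a binary multi-modal code (1 = present, 0 = missing).
    Illustrative only; not the authors' MFI implementation."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, feats, code):
        # feats: (B, M, D) modality features; code: (B, M) availability mask
        q, k, v = self.q(feats), self.k(feats), self.v(feats)
        attn = torch.einsum('bmd,bnd->bmn', q, k) / feats.shape[-1] ** 0.5
        # nodes may only attend to available modalities (assumes >= 1 present)
        attn = attn.masked_fill(code[:, None, :] == 0, float('-inf'))
        out = torch.einsum('bmn,bnd->bmd', attn.softmax(dim=-1), v)
        return out * code[..., None]  # zero out features of missing modalities
```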

Keyword:

Brain tumor segmentation; Graph; Missing modalities; Multi-modal feature interaction

Cite:

GB/T 7714 Zhao, Zechen, Yang, Heran, Sun, Jian. Modality-Adaptive Feature Interaction for Brain Tumor Segmentation with Missing Modalities [J]. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT V, 2022, 13435: 183-192.
MLA Zhao, Zechen et al. "Modality-Adaptive Feature Interaction for Brain Tumor Segmentation with Missing Modalities". MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT V 13435 (2022): 183-192.
APA Zhao, Zechen, Yang, Heran, Sun, Jian. Modality-Adaptive Feature Interaction for Brain Tumor Segmentation with Missing Modalities. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT V, 2022, 13435, 183-192.
A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation CPCI-S
Conference paper | 2021, 12903, 127-137 | International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)

Abstract:

Cross-contrast image translation is an important task for completing missing contrasts in clinical diagnosis. However, most existing methods learn a separate translator for each pair of contrasts, which is inefficient given the many possible contrast pairs in real scenarios. In this work, we propose a unified Hyper-GAN model for effectively and efficiently translating between different contrast pairs. Hyper-GAN consists of a hyper-encoder and a hyper-decoder that first map from the source contrast to a common feature space and then map further to the target contrast image. To facilitate translation between different contrast pairs, contrast-modulators are designed to adapt the hyper-encoder and hyper-decoder to different contrasts. We also design a common space loss to enforce that multi-contrast images of a subject share a common feature space, implicitly modeling the shared underlying anatomical structures. Experiments on the IXI and BraTS 2019 datasets show that Hyper-GAN achieves state-of-the-art results in both accuracy and efficiency, e.g., improving PSNR by more than 1.47 and 1.09 dB on the two datasets with less than half the parameters.
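A minimal sketch of the common-space idea, assuming a hyper_encoder callable that takes an image and a contrast code: multi-contrast images of the same subject should map to nearby latent features. The L1 distance and the interface are assumptions, not the paper's exact loss.

```python
import torch.nn.functional as F

def common_space_loss(hyper_encoder, img_a, img_b, code_a, code_b):
    """Encourage two contrasts of the same subject to share a latent space.
    hyper_encoder, the contrast codes, and the L1 distance are illustrative
    assumptions, not the paper's exact formulation."""
    feat_a = hyper_encoder(img_a, code_a)  # encoder modulated by source contrast
    feat_b = hyper_encoder(img_b, code_b)
    return F.l1_loss(feat_a, feat_b)
```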

Keyword:

Multi-contrast MR; Unified hyper-GAN; Unpaired image translation

Cite:

GB/T 7714 Yang, Heran, Sun, Jian, Yang, Liwei et al. A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation [C]. 2021: 127-137.
MLA Yang, Heran et al. "A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation". (2021): 127-137.
APA Yang, Heran, Sun, Jian, Yang, Liwei, Xu, Zongben. A Unified Hyper-GAN Model for Unpaired Multi-contrast MR Image Translation. (2021): 127-137.
Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency EI SCIE
Journal article | 2021, 30, 3419-3433 | IEEE Transactions on Image Processing
WoS CC Cited Count: 1

Abstract:

Depth and defocus map estimation are two fundamental tasks in computer vision. Recently, many methods have explored these two tasks separately with the help of the powerful feature-learning ability of deep learning, and they have achieved impressive progress. However, due to the difficulty of densely labeling depth and defocus on real images, these methods are mostly trained on synthetic datasets, and the performance of the learned networks degrades significantly on real images. In this paper, we tackle a new task that jointly estimates depth and defocus from a single image. We design a dual network with two subnets for estimating depth and defocus, respectively. The network is jointly trained on a synthetic dataset with a physical constraint that enforces consistency between depth and defocus. Moreover, we design a simple method to label depth and defocus order on a real image dataset, and two novel metrics to measure the accuracy of depth and defocus estimation on real images. Comprehensive experiments demonstrate that joint training for depth and defocus estimation with the physical consistency constraint enables the two subnets to guide each other and effectively improves their depth and defocus estimation performance on a real defocused image dataset.
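For readers unfamiliar with the depth-defocus link, the standard thin-lens circle-of-confusion relation below is the kind of physical constraint that can tie a depth map to a defocus map; the paper's exact formulation may differ.

```python
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter vs. scene depth (same units
    throughout). A standard optics relation linking depth to defocus blur;
    the paper's exact physical constraint may be formulated differently."""
    return aperture * focal_len * np.abs(depth - focus_dist) / (
        depth * (focus_dist - focal_len))

# example: lens focused at 2 m, f = 50 mm, aperture = 25 mm (f/2)
print(circle_of_confusion(np.linspace(0.5, 10.0, 5), 2.0, 0.05, 0.025))
```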

Keyword:

Deep learning; Image enhancement

Cite:

GB/T 7714 Zhang, Anmei, Sun, Jian. Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency [J]. IEEE Transactions on Image Processing, 2021, 30: 3419-3433.
MLA Zhang, Anmei et al. "Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency". IEEE Transactions on Image Processing 30 (2021): 3419-3433.
APA Zhang, Anmei, Sun, Jian. Joint Depth and Defocus Estimation from a Single Image Using Physical Consistency. IEEE Transactions on Image Processing, 2021, 30, 3419-3433.
An Interpretable Early Dynamic Sequential Predictor for Sepsis-Induced Coagulopathy Progression in the Real-World Using Machine Learning SCIE
Journal article | 2021, 8 | FRONTIERS IN MEDICINE

Abstract:

Sepsis-associated coagulation dysfunction greatly increases the mortality of sepsis, and irregular clinical time-series data remain a major challenge for AI medical applications. To enable early detection and management of sepsis-induced coagulopathy (SIC) and sepsis-associated disseminated intravascular coagulation (DIC), we developed an interpretable real-time sequential warning model for real-world irregular data. Eight machine learning models, including novel algorithms, were devised to detect SIC and sepsis-associated DIC 8n (1 ≤ n ≤ 6) hours prior to onset. Models were developed on data from Xi'an Jiaotong University Medical College (XJTUMC) and verified on data from Beth Israel Deaconess Medical Center (BIDMC). A total of 12,154 SIC and 7,878 International Society on Thrombosis and Haemostasis (ISTH) overt-DIC labels were annotated in the training set according to the SIC and ISTH overt-DIC scoring systems. The area under the receiver operating characteristic curve (AUROC) was used as the model evaluation metric. The eXtreme Gradient Boosting (XGBoost) model can predict SIC and sepsis-associated DIC events up to 48 h in advance with AUROCs of 0.929 and 0.910, respectively, reaching 0.973 and 0.955 at 8 h in advance, the highest performance to date. The novel ODE-RNN model achieves continuous prediction at arbitrary time points, with AUROCs of 0.962 and 0.936 for SIC and DIC predicted 8 h in advance, respectively. In conclusion, our model can predict SIC and sepsis-associated DIC onset up to 48 h in advance, which helps maximize the time window for early management by physicians.
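As a hedged illustration of the gradient-boosting pipeline named above, the snippet below trains an XGBoost classifier and reports AUROC on random stand-in data; the study's real features, cohorts, and hyperparameters differ, so everything here is illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# random stand-ins for aggregated lab/vital features and SIC labels
rng = np.random.default_rng(0)
X, y = rng.random((1000, 20)), rng.integers(0, 2, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric='logloss')
clf.fit(X_tr, y_tr)
print('AUROC:', roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```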

Keyword:

early real-time prediction; irregular time-series data; machine learning; sepsis-associated DIC; SIC

Cite:

GB/T 7714 Cui, Ruixia, Hua, Wenbo, Qu, Kai et al. An Interpretable Early Dynamic Sequential Predictor for Sepsis-Induced Coagulopathy Progression in the Real-World Using Machine Learning [J]. FRONTIERS IN MEDICINE, 2021, 8.
MLA Cui, Ruixia et al. "An Interpretable Early Dynamic Sequential Predictor for Sepsis-Induced Coagulopathy Progression in the Real-World Using Machine Learning". FRONTIERS IN MEDICINE 8 (2021).
APA Cui, Ruixia, Hua, Wenbo, Qu, Kai, Yang, Heran, Tong, Yingmu, Li, Qinglin et al. An Interpretable Early Dynamic Sequential Predictor for Sepsis-Induced Coagulopathy Progression in the Real-World Using Machine Learning. FRONTIERS IN MEDICINE, 2021, 8.
Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis EI SCIE PubMed
Journal article | 2021, 21 (12) | SENSORS

Abstract:

Shape classification and segmentation of point cloud data are two of the most demanding tasks in photogrammetry and remote sensing applications, which aim to recognize object categories or point labels. Point convolution is an essential operation when designing a network on point clouds for these tasks, which helps to explore 3D local points for feature learning. In this paper, we propose a novel point convolution (PSConv) using separable weights learned with polynomials for 3D point cloud analysis. Specifically, we generalize the traditional convolution defined on the regular data to a 3D point cloud by learning the point convolution kernels based on the polynomials of transformed local point coordinates. We further propose a separable assumption on the convolution kernels to reduce the parameter size and computational cost for our point convolution. Using this novel point convolution, a hierarchical network (PSNet) defined on the point cloud is proposed for 3D shape analysis tasks such as 3D shape classification and segmentation. Experiments are conducted on standard datasets, including synthetic and real scanned ones, and our PSNet achieves state-of-the-art accuracies for shape classification, as well as competitive results for shape segmentation compared with previous methods.
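The sketch below illustrates one plausible reading of a polynomial-based separable point convolution: kernel weights are polynomials of per-axis neighbor offsets, multiplied across axes (the separable assumption). Degrees, shapes, and the aggregation are assumptions, not the exact PSConv design.

```python
import torch
import torch.nn as nn

class PolySeparableConv(nn.Module):
    """Sketch of a polynomial-based separable point convolution: the kernel
    weight at a neighbor is a learned polynomial of its (x, y, z) offset,
    one polynomial per axis, multiplied together. Illustrative only."""
    def __init__(self, in_dim, out_dim, degree=3):
        super().__init__()
        # per-axis polynomial coefficients for each output channel
        self.coef = nn.Parameter(torch.randn(3, degree + 1, out_dim) * 0.1)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, feats, offsets):
        # feats: (B, N, K, Cin) neighbor features; offsets: (B, N, K, 3)
        powers = torch.stack([offsets ** d
                              for d in range(self.coef.shape[1])], dim=-1)
        # per-axis polynomial values: (B, N, K, 3, Cout)
        axis_poly = torch.einsum('bnkxd,xdo->bnkxo', powers, self.coef)
        w = axis_poly.prod(dim=3)                # separable: product over axes
        return (self.lin(feats) * w).sum(dim=2)  # aggregate over K neighbors
```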

Keyword:

point cloud; point convolution; polynomial; separable

Cite:

GB/T 7714 Yu, Ruixuan, Sun, Jian. Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis [J]. SENSORS, 2021, 21 (12).
MLA Yu, Ruixuan et al. "Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis". SENSORS 21.12 (2021).
APA Yu, Ruixuan, Sun, Jian. Learning Polynomial-Based Separable Convolution for 3D Point Cloud Analysis. SENSORS, 2021, 21 (12).
View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis EI CPCI-S Scopus
Conference paper | 2020, 1847-1856 | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
WoS CC Cited Count: 25 | SCOPUS Cited Count: 124

Abstract:

The view-based approach, which recognizes a 3D shape through its projected 2D images, has achieved state-of-the-art results in 3D shape recognition. Its major challenge is how to aggregate multi-view features into a global shape descriptor. In this work, we propose a novel view-based Graph Convolutional Neural Network, dubbed view-GCN, to recognize 3D shapes based on a graph representation of multiple views in flexible view configurations. We first construct a view-graph with multiple views as graph nodes, then design a graph convolutional network over the view-graph to hierarchically learn a discriminative shape descriptor that accounts for the relations among views. The view-GCN is a hierarchical network based on local and non-local graph convolution for feature transform, and selective view-sampling for graph coarsening. Extensive experiments on benchmark datasets show that view-GCN achieves state-of-the-art results for 3D shape classification and retrieval.
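A toy stand-in for the local graph convolution over a view-graph described above: nodes are per-view CNN features, and a row-normalized adjacency mixes neighboring views. The module name and sizes are illustrative, not view-GCN itself.

```python
import torch
import torch.nn as nn

class ViewGraphConv(nn.Module):
    """Minimal graph convolution over a view-graph: aggregate neighbor view
    features through an adjacency matrix, then transform. Illustrative only."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(2 * dim, dim)

    def forward(self, views, adj):
        # views: (B, V, D) view features; adj: (B, V, V) row-normalized adjacency
        neigh = torch.bmm(adj, views)  # aggregate features of neighboring views
        return torch.relu(self.fc(torch.cat([views, neigh], dim=-1)))
```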

Cite:

GB/T 7714 Wei, Xin, Yu, Ruixuan, Sun, Jian. View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis [C]. 2020: 1847-1856.
MLA Wei, Xin et al. "View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis". (2020): 1847-1856.
APA Wei, Xin, Yu, Ruixuan, Sun, Jian. View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis. (2020): 1847-1856.
Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image EI
Conference paper | 2020, 12262 LNCS, 188-198 | 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020

Abstract:

Speeding up Magnetic Resonance Imaging (MRI) is an essential task in capturing multi-contrast MR images for medical diagnosis. In MRI, some sequences, e.g., T2-weighted imaging, require a long scanning time, while T1-weighted images are captured by short-time sequences. To accelerate MRI, in this paper we propose a model-driven deep attention network, dubbed MD-DAN, to reconstruct a highly under-sampled long-time-sequence MR image with the guidance of a short-time-sequence MR image. MD-DAN is a novel deep architecture inspired by the iterative algorithm optimizing a novel MRI reconstruction model regularized by a cross-contrast prior derived from a guidance contrast image. The network is designed to automatically learn the cross-contrast prior by learning the corresponding proximal operator. The backbone network modeling the proximal operator is designed as a dual-path convolutional network with channel and spatial attention modules. Experimental results on a brain MRI dataset substantiate the superiority of our method, with significantly improved accuracy. For example, MD-DAN achieves a PSNR of up to 35.04 dB at the ultra-fast 1/32 sampling rate.
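The snippet below sketches one generic unrolled iteration of model-driven CS-MRI reconstruction: a data-consistency gradient step followed by a learned proximal operator conditioned on a guidance contrast. It is a schematic stand-in, not MD-DAN's dual-path attention network.

```python
import torch
import torch.nn as nn

class UnrolledIteration(nn.Module):
    """One unrolled step of model-driven CS-MRI reconstruction: gradient step
    on data fidelity, then a learned proximal operator guided by another
    contrast. Generic sketch; not MD-DAN itself."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))
        self.prox = nn.Sequential(                 # stand-in proximal network
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x, y, mask, guide):
        # x: (B,1,H,W) current image; y: undersampled k-space (complex);
        # mask: sampling mask; guide: (B,1,H,W) guidance contrast image
        k = torch.fft.fft2(x)
        grad = torch.fft.ifft2(mask * (k - y)).real  # data-consistency gradient
        x = x - self.step * grad
        return self.prox(torch.cat([x, guide], dim=1))  # guided proximal step
```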

Keyword:

Compressed sensing; Convolutional neural networks; Diagnosis; Image reconstruction; Iterative methods; Magnetic resonance imaging; Medical computing; Medical imaging

Cite:

GB/T 7714 Yang, Yan, Wang, Na, Yang, Heran et al. Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image [C]. 2020: 188-198.
MLA Yang, Yan et al. "Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image". (2020): 188-198.
APA Yang, Yan, Wang, Na, Yang, Heran, Sun, Jian, Xu, Zongben. Model-Driven Deep Attention Network for Ultra-fast Compressive Sensing MRI Guided by Cross-contrast MR Image. (2020): 188-198.
Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN EI SCIE
Journal article | 2020, 39 (12), 4249-4261 | IEEE TRANSACTIONS ON MEDICAL IMAGING | IF: 10.048
WoS CC Cited Count: 31

Abstract:

Synthesizing a CT image from an available MR image has recently emerged as a key goal in radiotherapy treatment planning for cancer patients. CycleGANs have achieved promising results on unsupervised MR-to-CT image synthesis; however, because they have no direct constraints between input and synthetic images, cycleGANs do not guarantee structural consistency between these two images. This means that anatomical geometry can be shifted in the synthetic CT images, clearly a highly undesirable outcome in the given application. In this paper, we propose a structure-constrained cycleGAN for unsupervised MR-to-CT synthesis by defining an extra structure-consistency loss based on the modality independent neighborhood descriptor. We also utilize a spectral normalization technique to stabilize the training process and a self-attention module to model the long-range spatial dependencies in the synthetic images. Results on unpaired brain and abdomen MR-to-CT image synthesis show that our method produces better synthetic CT images in both accuracy and visual quality as compared to other unsupervised synthesis methods. We also show that an approximate affine pre-registration for unpaired training data can improve synthesis results.
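To make the structure-consistency idea concrete, the sketch below compares very simplified MIND-like descriptors of the input MR and the synthetic CT. The real modality independent neighborhood descriptor uses Gaussian-weighted patch distances and a local variance estimate, so treat this as a loose approximation.

```python
import torch
import torch.nn.functional as F

def mind_like_descriptor(img, radius=2):
    """Very simplified MIND-style descriptor: per pixel, exp(-patch distance)
    to a few shifted copies of the image. Illustrative approximation only."""
    shifts = [(0, radius), (0, -radius), (radius, 0), (-radius, 0)]
    feats = []
    for dy, dx in shifts:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(-2, -1))
        # local mean of squared differences over a 3x3 patch
        dist = F.avg_pool2d((img - shifted) ** 2, 3, stride=1, padding=1)
        feats.append(torch.exp(-dist))
    return torch.cat(feats, dim=1)

def structure_consistency_loss(real_mr, synth_ct):
    # compare structural descriptors of the input MR and the synthetic CT
    return F.l1_loss(mind_like_descriptor(real_mr), mind_like_descriptor(synth_ct))
```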

Keyword:

5G mobile communication; Antenna arrays; Array signal processing; Channel estimation; CycleGAN; deep learning; Lenses; Matching pursuit algorithms; MIMO communication; MIND; MR-to-CT synthesis

Cite:

GB/T 7714 Yang, Heran, Sun, Jian, Carass, Aaron et al. Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN [J]. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (12): 4249-4261.
MLA Yang, Heran et al. "Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN". IEEE TRANSACTIONS ON MEDICAL IMAGING 39.12 (2020): 4249-4261.
APA Yang, Heran, Sun, Jian, Carass, Aaron, Zhao, Can, Lee, Junghoon, Prince, Jerry L. et al. Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN. IEEE TRANSACTIONS ON MEDICAL IMAGING, 2020, 39 (12), 4249-4261.
Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis EI Scopus
Conference paper | 2020, 12355 LNCS, 217-233 | 16th European Conference on Computer Vision, ECCV 2020
SCOPUS Cited Count: 25

Abstract:

In this paper we propose a rotation-invariant deep network for point cloud analysis. Point-based deep networks are commonly designed to recognize roughly aligned 3D shapes based on point coordinates, but suffer performance drops under shape rotations. Some geometric features, e.g., distances and angles of points used as network inputs, are rotation-invariant but lose the positional information of points. In this work, we propose a novel deep network for point clouds that incorporates positional information of points as input while remaining rotation-invariant. The network is hierarchical and relies on two modules: a positional feature embedding block and a relational feature embedding block. Both modules, and the whole network, are proven to be rotation-invariant when processing point clouds as input. Experiments show state-of-the-art classification and segmentation performance on benchmark datasets, and ablation studies demonstrate the effectiveness of the network design.
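A minimal example of why distance- and angle-based features are rotation-invariant: for any rotation about the origin applied to the whole cloud, the quantities below are unchanged. This is a toy illustration, not the paper's embedding blocks.

```python
import torch

def rotation_invariant_features(neighbors, center):
    """Toy rotation-invariant point features: neighbor distances and the cosine
    of the angle to the center direction are unchanged by any rotation about
    the origin. Not the paper's positional/relational embeddings."""
    # neighbors: (N, K, 3) local neighbor coordinates; center: (N, 3)
    rel = neighbors - center[:, None, :]
    dist = rel.norm(dim=-1, keepdim=True)                      # ||p_k - c||
    cen_dir = center / center.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    cosang = (rel * cen_dir[:, None, :]).sum(-1, keepdim=True) \
        / dist.clamp_min(1e-8)
    return torch.cat([dist, cosang], dim=-1)                   # (N, K, 2)
```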

Keyword:

Classification (of information); Computer vision; Deep learning; Embeddings; Rotation

Cite:

GB/T 7714 Yu, Ruixuan, Wei, Xin, Tombari, Federico et al. Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis [C]. 2020: 217-233.
MLA Yu, Ruixuan et al. "Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis". (2020): 217-233.
APA Yu, Ruixuan, Wei, Xin, Tombari, Federico, Sun, Jian. Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis. (2020): 217-233.
Neural Diffusion Distance for Image Segmentation EI CPCI-S
Conference paper | 2019, 32 | 33rd Conference on Neural Information Processing Systems (NeurIPS)
WoS CC Cited Count: 1

Abstract:

Diffusion distance is a spectral method for measuring distance among nodes on a graph while accounting for global data structure. In this work, we propose spec-diff-net for computing diffusion distance on a graph based on approximate spectral decomposition. The network is a differentiable deep architecture consisting of feature extraction and diffusion distance modules that compute diffusion distance on an image by end-to-end training. We design a low-resolution kernel matching loss and a high-resolution segment matching loss to enforce consistency between the network's output and human-labeled image segments. To compute a high-resolution diffusion distance or segmentation mask, we design an up-sampling strategy based on feature-attentional interpolation that is learned when training spec-diff-net. With the learned diffusion distance, we propose a hierarchical image segmentation method that outperforms previous segmentation methods. Moreover, a weakly supervised semantic segmentation network designed using diffusion distance achieves promising results on the PASCAL VOC 2012 segmentation dataset.
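For reference, the classical diffusion distance that spec-diff-net approximates can be computed directly from a small affinity matrix by spectral decomposition, as in the sketch below (normalization conventions vary across the literature).

```python
import numpy as np

def diffusion_distance(W, t=8):
    """Classical diffusion distance between graph nodes via spectral
    decomposition of the random-walk operator; spec-diff-net approximates
    this with a learned network. Dense-matrix sketch for small graphs."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))        # D^{-1/2} W D^{-1/2}, symmetric
    lam, U = np.linalg.eigh(S)
    phi = U / np.sqrt(d)[:, None]          # right eigenvectors of P = D^{-1} W
    emb = phi * lam[None, :] ** t          # diffusion map at scale t
    diff = emb[:, None, :] - emb[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))    # pairwise diffusion distances

W = np.random.rand(6, 6); W = (W + W.T) / 2   # toy symmetric affinity matrix
print(diffusion_distance(W).round(3))
```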

Cite:

GB/T 7714 Sun, Jian, Xu, Zongben. Neural Diffusion Distance for Image Segmentation [C]. 2019.
MLA Sun, Jian et al. "Neural Diffusion Distance for Image Segmentation". (2019).
APA Sun, Jian, Xu, Zongben. Neural Diffusion Distance for Image Segmentation. (2019).