
Query:

Scholar name: Wang Jinjun

Total: 8 pages of results.
Collaborative Attention Network for Person Re-identification EI
Conference paper | 2021, 1848 (1) | 2021 4th International Conference on Advanced Algorithms and Control Engineering, ICAACE 2021

Abstract :

The quality of visual feature representation has always been a key factor in many computer vision tasks. In the person re-identification (Re-ID) problem, combining global and local features to improve model performance has become a popular approach, because earlier works relied on global features alone, which are limited at extracting discriminative local patterns from the obtained representation. Some existing works collect local patterns by explicitly slicing the global feature into several local pieces in a handcrafted way. By adopting such slicing and duplication operations, models can achieve relatively higher accuracy, but we argue that this still does not take full advantage of partial patterns, because the rules and strategies by which local slices are defined are fixed by hand. In this paper, we show that by first over-segmenting the global region with the proposed multi-branch structure, and then learning to combine local features from neighbouring regions using the proposed Collaborative Attention Network (CAN), the final feature representation for Re-ID can be further improved. Experimental results on several widely used public datasets show that our method outperforms many existing state-of-the-art methods. © Published under licence by IOP Publishing Ltd.
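The slicing-and-recombination idea in the abstract above can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the stripe count, feature sizes, and the fixed neighbour weights (standing in for the learned collaborative attention) are all assumptions.

```python
import numpy as np

def stripe_features(feat, n_stripes):
    """Over-segment a (C, H, W) feature map into horizontal stripes,
    average-pooling each stripe into a C-dimensional local descriptor."""
    C, H, W = feat.shape
    bounds = np.linspace(0, H, n_stripes + 1).astype(int)
    return np.stack([feat[:, a:b, :].mean(axis=(1, 2))
                     for a, b in zip(bounds[:-1], bounds[1:])])  # (n_stripes, C)

def combine_neighbours(stripes, w):
    """Attention-weighted combination of each stripe with its neighbours.
    `w` holds normalised weights for (previous, self, next)."""
    n, C = stripes.shape
    padded = np.vstack([stripes[:1], stripes, stripes[-1:]])  # replicate-pad ends
    out = np.empty_like(stripes)
    for i in range(n):
        neigh = padded[i:i + 3]            # (3, C): prev, self, next
        out[i] = (w[:, None] * neigh).sum(axis=0)
    return out

feat = np.random.rand(256, 24, 8)          # toy global feature map
local = stripe_features(feat, n_stripes=6) # 6 over-segmented parts
w = np.exp([0.5, 2.0, 0.5]); w /= w.sum()  # fixed weights stand in for learned attention
fused = combine_neighbours(local, w)
print(local.shape, fused.shape)            # (6, 256) (6, 256)
```

In the actual network the combination weights would be predicted per stripe by the attention branch rather than fixed.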

Keyword :

Physics

Cite:


GB/T 7714: Li, Wenpeng, Sun, Yongli, Wang, Jinjun, et al. Collaborative Attention Network for Person Re-identification [C]. 2021.
MLA: Li, Wenpeng, et al. "Collaborative Attention Network for Person Re-identification." (2021).
APA: Li, Wenpeng, Sun, Yongli, Wang, Jinjun, Cao, Junliang, Xu, Han, Yang, Xiangru, et al. Collaborative Attention Network for Person Re-identification. (2021).
Hierarchical and Interactive Refinement Network for Edge-Preserving Salient Object Detection SCIE
Journal article | 2021, 30, 1-14 | IEEE TRANSACTIONS ON IMAGE PROCESSING
WoS CC Cited Count: 2

Abstract :

Salient object detection has undergone very rapid development with the rise of Deep Neural Networks (DNNs), and it usually serves as an important preprocessing procedure in various computer vision tasks. However, down-sampling operations, such as pooling and striding, blur the final predictions at edges, which seriously degrades the performance of salient object detection. In this paper, we propose a simple yet effective approach, the Hierarchical and Interactive Refinement Network (HIRN), to preserve edge structures when detecting salient objects. In particular, a novel multi-stage and dual-path network structure is designed to estimate the salient edges and regions from the low-level and high-level feature maps, respectively. As a result, the predicted regions become more accurate by enhancing the weak responses at edges, while the predicted edges become more semantic by suppressing false positives in the background. Once the salient maps of edges and regions are obtained at the output layers, a novel edge-guided inference algorithm is introduced to further filter the resulting regions along the predicted edges. Extensive experiments on several benchmark datasets show that our method significantly outperforms a variety of state-of-the-art approaches.

Keyword :

Feature extraction; Object detection; Prediction algorithms; Semantics; Training; Salient object detection; Edge-guided inference; Inference algorithms; Image edge detection; Hierarchical and Interactive Refinement Network

Cite:


GB/T 7714: Zhou, Sanping, Wang, Jinjun, Wang, Le, et al. Hierarchical and Interactive Refinement Network for Edge-Preserving Salient Object Detection [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 1-14.
MLA: Zhou, Sanping, et al. "Hierarchical and Interactive Refinement Network for Edge-Preserving Salient Object Detection." IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021): 1-14.
APA: Zhou, Sanping, Wang, Jinjun, Wang, Le, Zhang, Jimuyang, Wang, Fei, Huang, Dong, et al. Hierarchical and Interactive Refinement Network for Edge-Preserving Salient Object Detection. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30, 1-14.
Tracking Beyond Detection: Learning a Global Response Map for End-to-End Multi-Object Tracking EI SCIE
Journal article | 2021, 30, 8222-8235 | IEEE TRANSACTIONS ON IMAGE PROCESSING

Abstract :

Most existing Multi-Object Tracking (MOT) approaches follow the Tracking-by-Detection and Data Association paradigm, in which objects are first detected and then associated across frames during tracking. In recent years, deep neural networks have been utilized to obtain more discriminative appearance features for cross-frame association, and noticeable performance improvements have been reported. On the other hand, the Tracking-by-Detection framework is still not completely end-to-end, which leads to heavy computation and limited performance, especially in the inference (tracking) process. To address this problem, we present an effective end-to-end deep learning framework that directly takes an image sequence or video as input and outputs the located and tracked objects of learned types. Specifically, a novel global response network is learned to project multiple objects in the image sequence into a continuous response map, from which the trajectory of each tracked object can then be easily picked out. The overall process is similar to how a detector takes an image as input and outputs the bounding boxes of each detected object. Experimental results on the MOT16 and MOT17 benchmarks show that our proposed online tracker achieves state-of-the-art performance on several tracking metrics.

Keyword :

Deep neural network; Multi-object tracking; Feature extraction; Measurement; Task analysis; Target tracking; Object detection; Data models; Trajectory; Global response map

Cite:


GB/T 7714: Wan, Xingyu, Cao, Jiakai, Zhou, Sanping, et al. Tracking Beyond Detection: Learning a Global Response Map for End-to-End Multi-Object Tracking [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 8222-8235.
MLA: Wan, Xingyu, et al. "Tracking Beyond Detection: Learning a Global Response Map for End-to-End Multi-Object Tracking." IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021): 8222-8235.
APA: Wan, Xingyu, Cao, Jiakai, Zhou, Sanping, Wang, Jinjun, Zheng, Nanning. Tracking Beyond Detection: Learning a Global Response Map for End-to-End Multi-Object Tracking. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30, 8222-8235.
Single-Image super-resolution-When model adaptation matters EI SCIE
Journal article | 2021, 116 | PATTERN RECOGNITION
WoS CC Cited Count: 1

Abstract :

In recent years, impressive advances have been made in single-image super-resolution, with deep learning behind much of this success. Deep(er) architecture design and external prior modeling are the key ingredients. However, the internal contents of the low-resolution input image are neglected in deep modeling, despite earlier works that show the power of using such internal priors. In this paper, we propose a variation of deep residual convolutional neural networks, carefully designed for robustness and efficiency in both learning and testing. Moreover, we propose multiple strategies for adapting the model to the internal contents of the low-resolution input image and analyze their strong points and weaknesses. By trading runtime and using internal priors, we achieve improvements of 0.1 to 0.3 dB PSNR over the reported results on standard datasets. Our adaptation especially favors images with repetitive structures or high resolutions. This suggests a practical use case in which our adaptation approach is applied to sequences or videos whose adjacent frames are strongly correlated in content. Moreover, the approach can be combined with other simple techniques, such as back-projection and enhanced prediction, to realize further improvements. (c) 2021 Published by Elsevier Ltd.
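Back-projection, mentioned in the abstract above as a complementary technique, is straightforward to sketch. This is an illustrative sketch rather than the paper's code: the box-filter downsampling stands in for whatever degradation operator actually produced the low-resolution image.

```python
import numpy as np

def downsample(img, s):
    """Box-filter downsampling by integer factor s (a stand-in for the
    true degradation operator)."""
    H, W = img.shape
    return img[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def upsample(img, s):
    """Nearest-neighbour upsampling, a crude transpose of `downsample`."""
    return img.repeat(s, axis=0).repeat(s, axis=1)

def back_project(sr, lr, s, iters=10):
    """Iterative back-projection: nudge the SR estimate so that its
    downsampled version reproduces the observed LR image."""
    for _ in range(iters):
        residual = lr - downsample(sr, s)
        sr = sr + upsample(residual, s)
    return sr

rng = np.random.default_rng(0)
hr = rng.random((32, 32))
lr = downsample(hr, 2)                            # observed low-resolution input
sr0 = upsample(lr, 2) + 0.1 * rng.random((32, 32))  # imperfect SR guess
sr = back_project(sr0, lr, s=2)
print(np.abs(downsample(sr, 2) - lr).max())       # near zero: consistency restored
```

In practice `sr0` would be a network output, and the degradation operator should match the one assumed when the LR image was formed.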

Keyword :

Model adaptation; Deep convolutional neural network; Projection skip connection; Internal prior

Cite:


GB/T 7714: Liang, Yudong, Timofte, Radu, Wang, Jinjun, et al. Single-Image super-resolution-When model adaptation matters [J]. PATTERN RECOGNITION, 2021, 116.
MLA: Liang, Yudong, et al. "Single-Image super-resolution-When model adaptation matters." PATTERN RECOGNITION 116 (2021).
APA: Liang, Yudong, Timofte, Radu, Wang, Jinjun, Zhou, Sanping, Gong, Yihong, Zheng, Nanning. Single-Image super-resolution-When model adaptation matters. PATTERN RECOGNITION, 2021, 116.
Weather Classification for Outdoor Power Monitoring based on Improved SqueezeNet EI
Conference paper | 2020, 11-15 | 5th International Conference on Information Science, Computer Technology and Transportation, ISCTT 2020

Abstract :

To solve the weather classification problem in outdoor power monitoring, this paper proposes a weather classification algorithm based on an improved SqueezeNet. Three modifications are made to the network. First, the input size of the first convolution layer is increased and the convolution kernel is reduced, making the network more suitable for high-resolution image classification. Second, the combination of global average pooling and small fully connected layers yields a proper trade-off between computational burden and classification performance. Third, the introduction of batch normalization not only suppresses over-fitting but also increases classification accuracy and convergence speed. According to the actual application scenario, a multi-weather image dataset is constructed and used for training and testing. Experimental results verify the effectiveness of the proposed network and show that, compared with the original SqueezeNet, it improves classification accuracy and suppresses over-fitting. © 2020 IEEE.
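The second and third modifications (global average pooling followed by a small fully connected layer, with batch normalization) can be sketched as follows. This is a hedged illustration: the channel count, class count, and weights are made-up stand-ins, not values from the paper.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Per-channel batch normalisation (inference-style, no learned scale/shift)."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def classifier_head(feat, W, b):
    """Global average pooling followed by a small fully connected layer,
    in the spirit of the modified SqueezeNet head described above."""
    pooled = feat.mean(axis=(2, 3))        # (N, C): one value per channel
    logits = pooled @ W + b                # (N, n_classes)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # softmax probabilities

rng = np.random.default_rng(1)
feat = rng.random((4, 512, 13, 13))        # hypothetical final feature maps
feat = batch_norm(feat)
W = rng.normal(size=(512, 5)) * 0.01       # 5 weather classes (assumed)
b = np.zeros(5)
probs = classifier_head(feat, W, b)
print(probs.shape)                         # (4, 5); rows sum to 1
```

Replacing large fully connected layers with global average pooling is the standard way this trade-off between parameters and accuracy is realised.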

Keyword :

Statistical tests; Convolution

Cite:


GB/T 7714: Fang, Chao, Lv, Changfeng, Cai, Fudong, et al. Weather Classification for Outdoor Power Monitoring based on Improved SqueezeNet [C]. 2020: 11-15.
MLA: Fang, Chao, et al. "Weather Classification for Outdoor Power Monitoring based on Improved SqueezeNet." (2020): 11-15.
APA: Fang, Chao, Lv, Changfeng, Cai, Fudong, Liu, Huanyun, Wang, Jinjun, Shuai, Minwei. Weather Classification for Outdoor Power Monitoring based on Improved SqueezeNet. (2020): 11-15.
Temporal Aggregation with Clip-level Attention for Video-based Person Re-identification CPCI-S
Conference paper | 2020, 3365-3373 | IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

Abstract :

Video-based person re-identification (Re-ID) methods can extract richer features from short video clips than image-based ones. Existing methods usually apply simple strategies, such as average/max pooling, to obtain tracklet-level features, which have proved ineffective at aggregating information from all video frames. In this paper, we propose a simple yet effective Temporal Aggregation with Clip-level Attention Network (TACAN) that solves the temporal aggregation problem in a hierarchical way. Specifically, a tracklet is first broken into different numbers of clips; through a two-stage temporal aggregation network we then obtain the tracklet-level feature representation. A novel min-max loss is introduced to learn both a clip-level attention extractor and a clip-level feature representer during training. Afterwards, the resulting clip-level weights are used to average the clip-level features, which yields a robust tracklet-level feature representation at the testing stage. Experimental results on four benchmark datasets, including MARS, iLIDS-VID, PRID-2011 and DukeMTMC-VideoReID, show that TACAN achieves significant improvements over state-of-the-art approaches.
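The clip-level aggregation described above can be sketched roughly as follows. The linear scorer `v` is a hypothetical stand-in for the learned clip-level attention extractor, and the clip length and feature dimensions are arbitrary.

```python
import numpy as np

def clip_attention_pool(frames, clip_len, v):
    """Split a tracklet of frame features (T, D) into clips, mean-pool each
    clip, score the clips with a linear attention vector `v`, and average the
    clip features with the softmax-normalised scores."""
    T, D = frames.shape
    n_clips = T // clip_len
    clips = frames[:n_clips * clip_len].reshape(n_clips, clip_len, D).mean(axis=1)
    scores = clips @ v                               # (n_clips,)
    w = np.exp(scores - scores.max()); w /= w.sum()  # softmax attention weights
    return (w[:, None] * clips).sum(axis=0), w       # tracklet feature, weights

rng = np.random.default_rng(2)
tracklet = rng.random((16, 128))      # 16 frames of 128-dim features (toy)
v = rng.normal(size=128)
feat, w = clip_attention_pool(tracklet, clip_len=4, v=v)
print(feat.shape, w.shape)            # (128,) (4,)
```

The hierarchy is the point: frames are pooled within clips first, and attention operates only across the much smaller set of clips.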

Cite:


GB/T 7714: Li, Mengliu, Xu, Han, Wang, Jinjun, et al. Temporal Aggregation with Clip-level Attention for Video-based Person Re-identification [C]. 2020: 3365-3373.
MLA: Li, Mengliu, et al. "Temporal Aggregation with Clip-level Attention for Video-based Person Re-identification." (2020): 3365-3373.
APA: Li, Mengliu, Xu, Han, Wang, Jinjun, Li, Wenpeng, Sun, Yongli. Temporal Aggregation with Clip-level Attention for Video-based Person Re-identification. (2020): 3365-3373.
Hierarchical U-Shape Attention Network for Salient Object Detection EI SCIE Scopus
Journal article | 2020, 29, 8417-8428 | IEEE Transactions on Image Processing | IF: 10.856

Abstract :

Salient object detection aims at locating the most conspicuous objects in natural images, and usually acts as a very important pre-processing procedure in many computer vision tasks. In this paper, we propose a simple yet effective Hierarchical U-shape Attention Network (HUAN) to learn a robust mapping function for salient object detection. First, a novel attention mechanism is formulated to improve the well-known U-shape network, in which memory consumption is substantially reduced and mask quality is significantly improved by the resulting U-shape Attention Network (UAN). Second, a novel hierarchical structure is constructed to bridge the low-level and high-level feature representations between different UANs, in which both intra-network and inter-network connections are considered to explore salient patterns from a local to a global view. Third, a novel Mask Fusion Network (MFN) is designed to fuse the intermediate prediction results, so as to generate a salient mask of higher quality than any of its inputs. Our HUAN can be trained together with any backbone network in an end-to-end manner, and high-quality masks are finally learned to represent the salient objects. Extensive experimental results on several benchmark datasets show that our method significantly outperforms most state-of-the-art approaches. © 1992-2012 IEEE.

Keyword :

Object recognition; Object detection

Cite:


GB/T 7714: Zhou, Sanping, Wang, Jinjun, Zhang, Jimuyang, et al. Hierarchical U-Shape Attention Network for Salient Object Detection [J]. IEEE Transactions on Image Processing, 2020, 29: 8417-8428.
MLA: Zhou, Sanping, et al. "Hierarchical U-Shape Attention Network for Salient Object Detection." IEEE Transactions on Image Processing 29 (2020): 8417-8428.
APA: Zhou, Sanping, Wang, Jinjun, Zhang, Jimuyang, Wang, Le, Huang, Dong, Du, Shaoyi, et al. Hierarchical U-Shape Attention Network for Salient Object Detection. IEEE Transactions on Image Processing, 2020, 29, 8417-8428.
Image Inpainting Using Parallel Network EI
Conference paper | 2020, 2020-October, 1088-1092 | 2020 IEEE International Conference on Image Processing, ICIP 2020

Abstract :

Due to the lack of contextual information and the difficulty of directly learning the distribution of a complete image, existing image inpainting methods usually adopt a two-stage approach that predicts missing pixels in a coarse-to-fine manner. In this paper, we propose a novel inpainting method with two parallel pipelines. The first pipeline is a standard image completion path that takes the corrupted image as input and outputs the predicted complete image. The second pipeline exists only during the training phase: it takes the complementary image of the corrupted one as input and still outputs the same complete image. The two pipelines operate simultaneously and share an identical encoder and most parameters of the decoder. Furthermore, inspired by VAEs, random Gaussian noise is added to the features, not only to improve the robustness of the model but also to enable generating diverse and plausible results. We evaluated our model on several public datasets and demonstrated that the proposed method outperforms several state-of-the-art approaches. © 2020 IEEE.
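The two-pipeline idea can be sketched with toy linear encoder/decoder maps. Everything here (the linear maps, the noise scale, the loss) is an illustrative assumption; only the structure (shared weights, complementary masks, noise added to the features) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

def encoder(x, We):
    """Toy shared encoder: a single linear map on the flattened image."""
    return np.tanh(x.reshape(-1) @ We)

def decoder(z, Wd, shape):
    """Toy shared decoder; Gaussian noise is added to the latent features,
    the VAE-inspired trick the abstract mentions."""
    z = z + 0.01 * rng.normal(size=z.shape)   # noise for robustness/diversity
    return (z @ Wd).reshape(shape)

img = rng.random((8, 8))
mask = rng.random((8, 8)) > 0.5               # observed pixels
corrupted = img * mask                        # pipeline 1 input
complement = img * ~mask                      # pipeline 2 input (training only)

We = rng.normal(size=(64, 32)) * 0.1          # shared encoder weights
Wd = rng.normal(size=(32, 64)) * 0.1          # shared decoder weights
out1 = decoder(encoder(corrupted, We), Wd, img.shape)
out2 = decoder(encoder(complement, We), Wd, img.shape)
loss = np.mean((out1 - img) ** 2) + np.mean((out2 - img) ** 2)
print(out1.shape, out2.shape)                 # (8, 8) (8, 8)
```

At test time only the first pipeline runs; the second exists purely to regularise the shared weights during training.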

Keyword :

Gaussian noise (electronic); Image processing; Pipelines; Arts computing

Cite:


GB/T 7714: Deng, Ye, Wang, Jinjun. Image Inpainting Using Parallel Network [C]. 2020: 1088-1092.
MLA: Deng, Ye, and Wang, Jinjun. "Image Inpainting Using Parallel Network." (2020): 1088-1092.
APA: Deng, Ye, Wang, Jinjun. Image Inpainting Using Parallel Network. (2020): 1088-1092.
Channel-Modulated Multibranch Convolutional Neural Network EI Scopus
Conference paper | 2020, 1854-1859 | 2020 Chinese Automation Congress, CAC 2020

Abstract :

Depth-wise convolution has become very popular recently, owing to research on efficient convolutional networks. However, it performs convolution in each channel separately and therefore decreases the representational power of features. In this paper, we conduct a systematic study of the architecture and shortcomings of depth-wise convolution. We observe that combining the benefits of a multibranch depth-wise operation, shared channel modulation, and feature fusion brings considerable improvements in both accuracy and convolution efficiency. This simple yet effective idea leads us to propose a novel channel-modulated multibranch convolution (CMMB-Conv). In our approach, we first apply a multibranch depth-wise operation to the input feature maps to increase the channel width. Next, we gather the spatial information of the feature maps from multiple branches through max and average pooling layers. A shared hidden perceptron is then employed to modulate the inter-channel relationships of the feature maps. Finally, we concatenate the multibranch-modulated feature maps and fuse them using point-wise convolution. Compared with feature maps extracted by depth-wise separable convolution, the feature maps produced by our CMMB-Conv have stronger representation capability, improving the accuracy of existing MobileNets on both ImageNet and CIFAR classification. Extensive experimental results on ImageNet2012 show that CMMB-Conv jointly improves accuracy and efficiency. Specifically, Top-1 accuracy is increased by 3.3% to 73.91% relative to depth-wise convolution, while the number of parameters and floating-point operations are reduced by 29.02% and 30.77%, respectively, compared with standard convolution. © 2020 IEEE.
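The channel-modulation step reads much like a channel-attention gate (max and average pooling feeding a shared perceptron), and a minimal sketch under that assumption looks as follows; the branch count, channel sizes, and perceptron weights are all illustrative, not the paper's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_modulation(feat, W1, W2):
    """Shared-perceptron channel modulation: gather spatial statistics with
    max and average pooling, pass both through the same hidden perceptron,
    and rescale the channels with a sigmoid gate."""
    avg = feat.mean(axis=(1, 2))                     # (C,) average-pooled stats
    mx = feat.max(axis=(1, 2))                       # (C,) max-pooled stats
    gate = sigmoid(np.maximum(avg @ W1, 0) @ W2 +
                   np.maximum(mx @ W1, 0) @ W2)      # shared MLP on both stats
    return feat * gate[:, None, None]                # per-channel rescaling

rng = np.random.default_rng(4)
branch_feats = [rng.random((32, 14, 14)) for _ in range(3)]  # 3 depth-wise branches
W1 = rng.normal(size=(32, 8)) * 0.1                  # shared hidden perceptron
W2 = rng.normal(size=(8, 32)) * 0.1
modulated = [channel_modulation(f, W1, W2) for f in branch_feats]
fused = np.concatenate(modulated, axis=0)            # concat before point-wise conv
print(fused.shape)                                   # (96, 14, 14)
```

A point-wise (1x1) convolution over the concatenated channels would then perform the final fusion step.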

Keyword :

Convolution; Digital arithmetic; Image enhancement; Convolutional neural networks; Efficiency

Cite:


GB/T 7714: Huang, Wenli, Wang, Jinjun, Xin, Xiaomeng, et al. Channel-Modulated Multibranch Convolutional Neural Network [C]. 2020: 1854-1859.
MLA: Huang, Wenli, et al. "Channel-Modulated Multibranch Convolutional Neural Network." (2020): 1854-1859.
APA: Huang, Wenli, Wang, Jinjun, Xin, Xiaomeng, Wan, Xingyu, Li, Mengliu. Channel-Modulated Multibranch Convolutional Neural Network. (2020): 1854-1859.
Image Matching Algorithm based on ORB and K-Means Clustering EI
Conference paper | 2020, 460-464 | 5th International Conference on Information Science, Computer Technology and Transportation, ISCTT 2020

Abstract :

With the rapid development of science and technology, image processing plays an important role in the field of computer vision. To improve matching speed and meet real-time requirements, this paper proposes an image matching algorithm based on ORB and K-means clustering, which effectively improves the accuracy of image feature point localization, improves the accuracy and efficiency of image feature matching, and reduces time consumption. The algorithm uses sub-pixel interpolation to refine the traditional ORB algorithm, which improves the accuracy of the clustering computation. © 2020 IEEE.
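The clustering half of such a pipeline can be sketched with plain K-means over keypoint coordinates. The random blobs below stand in for ORB keypoints; real keypoints would come from a detector such as OpenCV's cv2.ORB_create, and the grouping would then restrict matching to corresponding clusters.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain Lloyd's K-means over 2-D keypoint coordinates."""
    # deterministic init: spread the seed centers across the point list
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)  # (N, k)
        labels = d.argmin(axis=1)               # nearest center per point
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(5)
# stand-in keypoint coordinates: two spatial blobs of "detected corners"
pts = np.vstack([rng.normal((10, 10), 1.0, (50, 2)),
                 rng.normal((40, 40), 1.0, (50, 2))])
labels, centers = kmeans(pts, k=2)
print(centers.round(1))   # two centers, one near each blob
```

Grouping keypoints this way lets the matcher compare descriptors only between spatially corresponding clusters, which is where the claimed speed-up comes from.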

Keyword :

Image matching; Image enhancement; K-means clustering

Cite:


GB/T 7714: Zhang, Liye, Cai, Fudong, Wang, Jinjun, et al. Image Matching Algorithm based on ORB and K-Means Clustering [C]. 2020: 460-464.
MLA: Zhang, Liye, et al. "Image Matching Algorithm based on ORB and K-Means Clustering." (2020): 460-464.
APA: Zhang, Liye, Cai, Fudong, Wang, Jinjun, Lv, Changfeng, Liu, Wei, Guo, Guoxin, et al. Image Matching Algorithm based on ORB and K-Means Clustering. (2020): 460-464.
Address: Xi'an Jiaotong University Library, No. 28 Xianning West Road, Xi'an, Shaanxi, 710049. Contact: 029-82667865.
Copyright: Xi'an Jiaotong University Library. Technical support: Beijing Aegean Software Co., Ltd.