
Query:

Scholar name: Wang Jinjun

Semi-supervised person Re-Identification using multi-view clustering EI SCIE
Journal Article | 2019, 88, 285-297 | Pattern Recognition

Abstract :

Person Re-Identification (Re-Id) is a challenging task focusing on identifying the same person among disjoint camera views. A number of deep learning algorithms have been reported for this task in fully-supervised fashion which requires a large amount of labeled training data, while obtaining high quality labels for Re-Id is extremely time consuming. To address this problem, we propose a semi-supervised Re-Id framework by using only a small portion of labeled data and some additional unlabeled samples. This paper approaches the problem by constructing a set of heterogeneous Convolutional Neural Networks (CNNs) fine-tuned using the labeled portion, and then propagating the labels to the unlabeled portion for further fine-tuning the overall system. In this work, label estimation is a key component during the propagation process. We propose a novel multi-view clustering method, which integrates features of multiple heterogeneous CNNs to cluster and generate pseudo labels for unlabeled samples. Then we fine-tune each of the multiple heterogeneous CNNs by minimizing an identification loss and a verification loss simultaneously, using training data with both true labels and pseudo labels. The procedure is iterated until the estimation of pseudo labels no longer changes. Extensive experiments on three large-scale person Re-Id datasets demonstrate the effectiveness of the proposed method. © 2018 Elsevier Ltd
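The label-estimation step described above can be sketched as follows. This is a toy illustration only, assuming simple Euclidean distances averaged across views; the function and variable names are illustrative, not the paper's implementation.

```python
from math import dist

def multi_view_pseudo_labels(view_feats, view_centroids):
    """Assign each unlabeled sample the label of the nearest cluster
    centroid, averaging distances over all heterogeneous CNN views."""
    n_views, n_samples = len(view_feats), len(view_feats[0])
    n_clusters = len(view_centroids[0])
    labels = []
    for i in range(n_samples):
        # mean distance to each centroid across every CNN's feature space
        scores = [sum(dist(view_feats[v][i], view_centroids[v][c])
                      for v in range(n_views)) / n_views
                  for c in range(n_clusters)]
        labels.append(min(range(n_clusters), key=scores.__getitem__))
    return labels
```

The pseudo labels produced this way would then join the true labels for the next round of fine-tuning, iterating until the assignments stabilize.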

Keyword :

Convolutional neural network; Labeled training data; Multi-view clustering; Person re-identification; Propagation process; Semi-supervised learning; Semi-supervised; Unlabeled samples

Cite:


GB/T 7714 Xin, Xiaomeng, Wang, Jinjun, Xie, Ruji, et al. Semi-supervised person Re-Identification using multi-view clustering [J]. Pattern Recognition, 2019, 88: 285-297.
MLA Xin, Xiaomeng, et al. "Semi-supervised person Re-Identification using multi-view clustering." Pattern Recognition 88 (2019): 285-297.
APA Xin, Xiaomeng, Wang, Jinjun, Xie, Ruji, Zhou, Sanping, Huang, Wenli, & Zheng, Nanning. Semi-supervised person Re-Identification using multi-view clustering. Pattern Recognition, 2019, 88, 285-297.
Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification EI SCIE Scopus
Journal Article | 2018, 20 (3), 593-604 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 4 | Scopus Cited Count: 4

Abstract :

Person reidentification aims at matching images of the same person across disjoint camera views, which is a challenging problem in multimedia analysis, multimedia editing, and content-based media retrieval communities. The major challenge lies in how to preserve similarity of the same person across video footages with large appearance variations, while discriminating different individuals. To address this problem, conventional methods usually consider the pairwise similarity between persons by only measuring the point-to-point distance. In this paper, we propose using a deep learning technique to model a novel set-to-set (S2S) distance, in which the underlying objective focuses on preserving the compactness of intraclass samples for each camera view, while maximizing the margin between the intraclass set and interclass set. The S2S distance metric consists of three terms, namely, the class-identity term, the relative distance term, and the regularization term. The class-identity term keeps the intraclass samples within each camera view gathering together, the relative distance term maximizes the distance between the intraclass set and interclass set across different camera views, and the regularization term smoothes the parameters of the deep convolutional neural network. As a result, the final learned deep model can effectively find out the matched target to the probe object among various candidates in the video gallery by learning discriminative and stable feature representations. Using the CUHK01, CUHK03, PRID2011, and Market1501 benchmark datasets, we extensively conducted comparative evaluations to demonstrate the advantages of our method over the state-of-the-art approaches.
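The three-term objective described above can be sketched numerically. This is a minimal illustration assuming precomputed set distances and a hinge-style margin; the exact functional forms in the paper may differ.

```python
def s2s_margin_loss(intra_dists, inter_dists, params, margin=1.0, lam=0.01):
    """Toy S2S objective: compactness + set-level margin + weight decay."""
    # class-identity term: keep same-person samples compact
    identity = sum(intra_dists) / len(intra_dists)
    # relative distance term: hinge margin between the intra-class set
    # (its farthest pair) and the inter-class set (its closest pair)
    relative = max(0.0, margin + max(intra_dists) - min(inter_dists))
    # regularization term: smooths (shrinks) the network parameters
    reg = lam * sum(p * p for p in params)
    return identity + relative + reg
```

When the closest inter-class distance already exceeds the farthest intra-class distance by the margin, only the compactness and regularization terms remain active.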

Keyword :

deep learning; metric learning; Person re-identification; set-to-set similarity comparison

Cite:


GB/T 7714 Zhou, Sanping, Wang, Jinjun, Shi, Rui, et al. Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2018, 20 (3): 593-604.
MLA Zhou, Sanping, et al. "Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification." IEEE TRANSACTIONS ON MULTIMEDIA 20.3 (2018): 593-604.
APA Zhou, Sanping, Wang, Jinjun, Shi, Rui, Hou, Qiqi, Gong, Yihong, & Zheng, Nanning. Large Margin Learning in Set-to-Set Similarity Comparison for Person Reidentification. IEEE TRANSACTIONS ON MULTIMEDIA, 2018, 20 (3), 593-604.
Deep ranking model by large adaptive margin learning for person re-identification EI SCIE Scopus
Journal Article | 2018, 74, 241-252 | PATTERN RECOGNITION
WoS CC Cited Count: 1 | Scopus Cited Count: 3

Abstract :

Person re-identification aims to match images of the same person across disjoint camera views, which is a challenging problem in video surveillance. The major challenge of this task lies in how to preserve the similarity of the same person against large variations caused by complex backgrounds, mutual occlusions and different illuminations, while discriminating the different individuals. In this paper, we present a novel deep ranking model with feature learning and fusion by learning a large adaptive margin between the intra-class distance and inter-class distance to solve the person re-identification problem. Specifically, we organize the training images into a batch of pairwise samples. Treating these pairwise samples as inputs, we build a novel part-based deep convolutional neural network (CNN) to learn the layered feature representations by preserving a large adaptive margin. As a result, the final learned model can effectively find out the matched target to the anchor image among a number of candidates in the gallery image set by learning discriminative and stable feature representations. Overcoming the weaknesses of conventional fixed-margin loss functions, our adaptive margin loss function is more appropriate for the dynamic feature space. On four benchmark datasets, PRID2011, Market1501, CUHK01 and 3DPeS, we extensively conduct comparative evaluations to demonstrate the advantages of the proposed method over the state-of-the-art approaches in person re-identification. (C) 2017 Published by Elsevier Ltd.
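The idea of a margin that adapts to the dynamic feature space, rather than a fixed constant, can be sketched as below. The scaling rule here (margin proportional to the current distance spread) is an illustrative assumption, not the paper's exact formula.

```python
def adaptive_margin_loss(d_pos, d_neg, alpha=0.5):
    """Ranking hinge for one (anchor, positive, negative) comparison,
    where d_pos / d_neg are intra- and inter-class distances and the
    margin scales with their current magnitude."""
    margin = alpha * (d_pos + d_neg) / 2.0  # adapts as features evolve
    return max(0.0, d_pos - d_neg + margin)
```

A fixed-margin hinge can stop providing gradient once distances shrink or grow during training; tying the margin to the distances keeps the constraint meaningful throughout.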

Keyword :

Deep ranking model; Person re-identification; Metric learning

Cite:


GB/T 7714 Wang, Jiayun, Zhou, Sanping, Wang, Jinjun, et al. Deep ranking model by large adaptive margin learning for person re-identification [J]. PATTERN RECOGNITION, 2018, 74: 241-252.
MLA Wang, Jiayun, et al. "Deep ranking model by large adaptive margin learning for person re-identification." PATTERN RECOGNITION 74 (2018): 241-252.
APA Wang, Jiayun, Zhou, Sanping, Wang, Jinjun, & Hou, Qiqi. Deep ranking model by large adaptive margin learning for person re-identification. PATTERN RECOGNITION, 2018, 74, 241-252.
Face alignment recurrent network EI SCIE Scopus
Journal Article | 2018, 74, 448-458 | PATTERN RECOGNITION

Abstract :

This paper presents a new facial landmark detection method for images and videos under uncontrolled conditions, based on a proposed Face Alignment Recurrent Network (FARN). The network works in recurrent fashion and is end-to-end trained to help avoid over-strong early stage regressors and over-weak later stage regressors as in many existing works. A Long Short-Term Memory (LSTM) model is employed in our network to make full use of the spatial and temporal middle stage information in a natural way, where by spatial we mean that for each image (frame), the predicted landmark position in the current stage will be used to guide the estimation for the next stage, and by temporal we mean that the predicted landmark position in the current frame will be used to guide the estimation for the next frame, thus providing a unified framework for facial landmark detection in both images and videos. We conduct experiments on public image datasets (COFW, Helen, 300-W) as well as on video datasets (300-VW), and results show clear improvement over most of the current state-of-the-art approaches. In addition, it works in 18 ms per image (frame). (C) 2017 Published by Elsevier Ltd.
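The recurrent, stage-by-stage refinement described above can be sketched in skeleton form. This is a minimal cascade illustration: each stage's correction is conditioned on the previous estimate, standing in for the LSTM-guided regressors (names and shapes are illustrative).

```python
def recurrent_refine(init_landmarks, stage_increments):
    """Refine 2-D landmark estimates across stages (or frames), with each
    stage adding a predicted increment on top of the current estimate."""
    est = list(init_landmarks)
    for increments in stage_increments:  # one list of deltas per stage/frame
        est = [(x + dx, y + dy) for (x, y), (dx, dy) in zip(est, increments)]
    return est
```

In the actual network the increments would come from learned regressors whose hidden state carries spatial and temporal context forward; here they are plain inputs.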

Keyword :

Recurrent network; Face alignment

Cite:


GB/T 7714 Hou, Qiqi, Wang, Jinjun, Bai, Ruibin, et al. Face alignment recurrent network [J]. PATTERN RECOGNITION, 2018, 74: 448-458.
MLA Hou, Qiqi, et al. "Face alignment recurrent network." PATTERN RECOGNITION 74 (2018): 448-458.
APA Hou, Qiqi, Wang, Jinjun, Bai, Ruibin, Zhou, Sanping, & Gong, Yihong. Face alignment recurrent network. PATTERN RECOGNITION, 2018, 74, 448-458.
Deep self-paced learning for person re-identification EI SCIE Scopus
Journal Article | 2018, 76, 739-751 | PATTERN RECOGNITION
WoS CC Cited Count: 5 | Scopus Cited Count: 6

Abstract :

Person re-identification (Re-ID) usually suffers from noisy samples with background clutter and mutual occlusion, which makes it extremely difficult to distinguish different individuals across the disjoint camera views. In this paper, we propose a novel deep self-paced learning (DSPL) algorithm to alleviate this problem, in which we apply a self-paced constraint and symmetric regularization to help the relative distance metric train the deep neural network, so as to learn the stable and discriminative features for person Re-ID. Firstly, we propose a soft polynomial regularizer term which can derive the adaptive weights to samples based on both the training loss and model age. As a result, the high-confidence fidelity samples will be emphasized and the low-confidence noisy samples will be suppressed at early stage of the whole training process. Such a learning regime is naturally implemented under a self-paced learning (SPL) framework, in which sample weights are adaptively updated based on both model age and sample loss using an alternative optimization method. Secondly, we introduce a symmetric regularizer term to revise the asymmetric gradient back-propagation derived by the relative distance metric, so as to simultaneously minimize the intra-class distance and maximize the inter-class distance in each triplet unit. Finally, we build a part-based deep neural network, in which the features of different body parts are first discriminately learned in the lower convolutional layers and then fused in the higher fully connected layers. Experiments on several benchmark datasets have demonstrated the superior performance of our method as compared with the state-of-the-art approaches. (C) 2017 Published by Elsevier Ltd.
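The adaptive sample weighting described above can be sketched with one common soft self-paced weighting rule. This is an assumption for illustration (a standard soft SPL form), not necessarily the paper's exact polynomial regularizer.

```python
def spl_weight(loss, pace, t=2.0):
    """Soft self-paced weight for one sample. The pace parameter grows
    with model age, so harder samples are admitted as training proceeds."""
    if loss >= pace:
        return 0.0  # high-loss (likely noisy) sample suppressed early on
    # weight decays smoothly from 1 (easy sample) to 0 (at the pace cutoff)
    return (1.0 - loss / pace) ** (1.0 / (t - 1.0))
```

Early in training the pace is small, so only low-loss, high-confidence samples carry weight; as the model ages the pace increases and previously suppressed samples re-enter with nonzero weight.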

Keyword :

Self-paced learning; Convolutional neural network; Person re-identification; Metric learning

Cite:


GB/T 7714 Zhou, Sanping, Wang, Jinjun, Meng, Deyu, et al. Deep self-paced learning for person re-identification [J]. PATTERN RECOGNITION, 2018, 76: 739-751.
MLA Zhou, Sanping, et al. "Deep self-paced learning for person re-identification." PATTERN RECOGNITION 76 (2018): 739-751.
APA Zhou, Sanping, Wang, Jinjun, Meng, Deyu, Xin, Xiaomeng, Li, Yubing, Gong, Yihong, et al. Deep self-paced learning for person re-identification. PATTERN RECOGNITION, 2018, 76, 739-751.
Improving CNN Performance Accuracies With Min-Max Objective EI SCIE Scopus
Journal Article | 2018, 29 (7), 2872-2885 | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
WoS CC Cited Count: 1 | Scopus Cited Count: 3

Abstract :

We propose a novel method for improving performance accuracies of convolutional neural network (CNN) without the need to increase the network complexity. We accomplish the goal by applying the proposed Min-Max objective to a layer below the output layer of a CNN model in the course of training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and able to be applied to different CNNs with insignificant increases in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets with both the image classification and face verification tasks reveal that employing the proposed Min-Max objective in the training process can remarkably improve performance accuracies of a CNN model in comparison with the same model trained without using this objective.
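The objective described above can be sketched as a pairwise distance penalty. This toy version uses sums of squared Euclidean distances over a small batch; the paper's manifold-based formulation may differ in detail.

```python
from math import dist

def min_max_objective(feats, labels, lam=1.0):
    """Toy Min-Max term: penalize within-class (within-manifold) spread
    and reward between-class (between-manifold) spread."""
    within = between = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d2 = dist(feats[i], feats[j]) ** 2
            if labels[i] == labels[j]:
                within += d2   # minimized: same-class compactness
            else:
                between += d2  # maximized: separation between classes
    return within - lam * between
```

Added to the usual classification loss at a layer below the output, a term like this shapes the learned feature maps directly without adding parameters to the network.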

Keyword :

image classification; Convolutional neural network (CNN); incremental minibatch training procedure; face verification; min-max objective

Cite:


GB/T 7714 Shi, Weiwei, Gong, Yihong, Tao, Xiaoyu, et al. Improving CNN Performance Accuracies With Min-Max Objective [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (7): 2872-2885.
MLA Shi, Weiwei, et al. "Improving CNN Performance Accuracies With Min-Max Objective." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 29.7 (2018): 2872-2885.
APA Shi, Weiwei, Gong, Yihong, Tao, Xiaoyu, Wang, Jinjun, & Zheng, Nanning. Improving CNN Performance Accuracies With Min-Max Objective. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (7), 2872-2885.
Continuous Action Recognition and Segmentation in Untrimmed Videos CPCI-S
Conference Paper | 2018, 2534-2539 | 24th International Conference on Pattern Recognition (ICPR)

Abstract :

Recognizing continuous human action is a fundamental task in many real-world computer vision applications including video surveillance, video retrieval, and human-computer interaction, etc. It requires to recognize each action performed as well as their segmentation boundaries in a continuous sequence. In previous works, great progress has been reported for single action recognition, by using deep convolutional networks. In order to further improve the performance for continuous action recognition, in this paper, we introduce a discriminative approach consisting of three modules. The first feature extraction module uses a two stream Convolutional Neural Network to capture the appearance and the short-term motion information from the raw video input. Based on the obtained features, the second classification module performs spatial and temporal recognition and then fuses the two scores from respective feature stream. In the final segmentation module, a semi-Markov Conditional Field model, capable of handling long-term action interactions, is built to partition the action sequence. As can be seen in the experimental results, our approach obtains state-of-the-art performance on public datasets including 50Salads, Breakfast, and MERL Shopping. We have also visualized the continuous actions segmentation results for more insightful discussion in the paper.
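The fuse-then-segment pipeline described above can be sketched in simplified form. Note the segmentation step here is a plain greedy merge of consecutive frame labels, standing in for the semi-Markov Conditional Field; all names are illustrative.

```python
def fuse_and_segment(spatial_scores, temporal_scores, w=0.5):
    """Fuse two-stream per-frame class scores, then merge runs of frames
    sharing the same argmax class into (start, end, action) segments."""
    fused = [[w * s + (1 - w) * t for s, t in zip(fs, ft)]
             for fs, ft in zip(spatial_scores, temporal_scores)]
    frame_labels = [max(range(len(f)), key=f.__getitem__) for f in fused]
    segments, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((start, i, frame_labels[start]))
            start = i
    return segments
```

A semi-Markov model would additionally score whole candidate segments (capturing long-term action interactions and durations) rather than committing frame by frame.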

Cite:


GB/T 7714 Bai, Ruibin, Zhao, Qing, Zhou, Sanping, et al. Continuous Action Recognition and Segmentation in Untrimmed Videos [C]. 2018: 2534-2539.
MLA Bai, Ruibin, et al. "Continuous Action Recognition and Segmentation in Untrimmed Videos." (2018): 2534-2539.
APA Bai, Ruibin, Zhao, Qing, Zhou, Sanping, Li, Yubing, Zhao, Xueji, & Wang, Jinjun. Continuous Action Recognition and Segmentation in Untrimmed Videos. (2018): 2534-2539.
MULTI-OBJECT TRACKING USING ONLINE METRIC LEARNING WITH LONG SHORT-TERM MEMORY CPCI-S
Conference Paper | 2018, 788-792 | 25th IEEE International Conference on Image Processing (ICIP)

Abstract :

The capacity to model temporal dependency by Recurrent Neural Networks (RNNs) makes it a plausible selection for the multi-object tracking (MOT) problem. Due to the nonlinear transformations and the unique memory mechanism, Long Short-Term Memory (LSTM) can consider a window of history when learning discriminative features, which suggests that the LSTM is suitable for state estimation of target objects as they move around. This paper focuses on association based MOT, and we propose a novel Siamese LSTM Network to interpret both temporal and spatial components nonlinearly by learning the feature of trajectories, and outputs the similarity score of two trajectories for data association. In addition, we also introduce an online metric learning scheme to update the state estimation of each trajectory dynamically. Experimental evaluation on MOT16 benchmark shows that the proposed method achieves competitive performance compared with other state-of-the-art works.
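Given trajectory-pair similarity scores such as those the Siamese LSTM would output, the data-association step can be sketched as a greedy one-to-one matching. This is a simplified stand-in for the assignment stage (Hungarian matching is the usual exact alternative); names are illustrative.

```python
def greedy_associate(similarity):
    """Match row-trajectories to column-trajectories one-to-one,
    taking pairs in descending order of similarity score."""
    pairs = sorted(((s, i, j)
                    for i, row in enumerate(similarity)
                    for j, s in enumerate(row)), reverse=True)
    used_i, used_j, matches = set(), set(), []
    for s, i, j in pairs:
        if i not in used_i and j not in used_j:
            matches.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return sorted(matches)
```

After association, an online metric-learning update would refresh each trajectory's state estimate before the next batch of detections arrives.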

Keyword :

Multiple Object Tracking; Long Short-Term Memory; Metric Learning; Data Association

Cite:


GB/T 7714 Wan, Xingyu, Zhao, Qing, Wang, Jinjun, et al. MULTI-OBJECT TRACKING USING ONLINE METRIC LEARNING WITH LONG SHORT-TERM MEMORY [C]. 2018: 788-792.
MLA Wan, Xingyu, et al. "MULTI-OBJECT TRACKING USING ONLINE METRIC LEARNING WITH LONG SHORT-TERM MEMORY." (2018): 788-792.
APA Wan, Xingyu, Zhao, Qing, Wang, Jinjun, Deng, Shunming, & Kong, Zhifeng. MULTI-OBJECT TRACKING USING ONLINE METRIC LEARNING WITH LONG SHORT-TERM MEMORY. (2018): 788-792.
An Online and Flexible Multi-Object Tracking Framework using Long Short-Term Memory CPCI-S
Conference Paper | 2018, 1311-1319 | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Abstract :

The capacity to model temporal dependency by Recurrent Neural Networks (RNNs) makes it a plausible selection for the multi-object tracking (MOT) problem. Due to the non-linear transformations and the unique memory mechanism, Long Short-Term Memory (LSTM) can consider a window of history when learning discriminative features, which suggests that the LSTM is suitable for state estimation of target objects as they move around. This paper focuses on association based MOT, and we propose a novel Siamese LSTM Network to interpret both temporal and spatial components nonlinearly by learning the feature of trajectories, and outputs the similarity score of two trajectories for data association. In addition, we also introduce an online metric learning scheme to update the state estimation of each trajectory dynamically. Experimental evaluation on MOT16 benchmark shows that the proposed method achieves competitive performance compared with other state-of-the-art works.

Cite:


GB/T 7714 Wan, Xingyu, Wang, Jinjun, Zhou, Sanping. An Online and Flexible Multi-Object Tracking Framework using Long Short-Term Memory [C]. 2018: 1311-1319.
MLA Wan, Xingyu, et al. "An Online and Flexible Multi-Object Tracking Framework using Long Short-Term Memory." (2018): 1311-1319.
APA Wan, Xingyu, Wang, Jinjun, & Zhou, Sanping. An Online and Flexible Multi-Object Tracking Framework using Long Short-Term Memory. (2018): 1311-1319.
Part-aware trajectories association across non-overlapping uncalibrated cameras EI SCIE Scopus
Journal Article | 2017, 230, 30-39 | NEUROCOMPUTING | IF: 3.241
Scopus Cited Count: 1

Abstract :

This paper focuses on the problem of multi-person tracking across non-overlapping uncalibrated cameras using data association method. The problem is extremely difficult as we have very limited cues to associate persons between cameras. To tackle the problem, our system consists of firstly building multiple trajectories from each camera independently, and then finding associations of trajectories between every two cameras of interest, where the later is the most challenging process. Our contributions are mainly two folds: First, we introduce a method to explore the human part configurations on every trajectory to describe the inter-camera spatial temporal constraints for trajectories association. Second, we formulate trajectories association across non overlapping cameras as a multi-class classification problem via the Markov Random Field (MRF) to effectively utilize domain priors such as group activity between persons. With the proposed part-aware correspondences and pair-wise group activity constraints of trajectories, we can achieve robust multi-person tracking. Experimental results on a benchmark dataset validates the effectiveness of our proposed approach.
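The MRF formulation above can be sketched as an energy over candidate labelings: a unary appearance cost per trajectory plus a pairwise penalty that discourages splitting trajectories linked by group activity. The cost structure below is an illustrative simplification.

```python
def mrf_energy(assignment, unary, pairwise):
    """Energy of one identity labeling over trajectories.

    assignment -- list: assignment[i] is the identity label of trajectory i
    unary      -- unary[i][k]: appearance/part-config cost of label k for i
    pairwise   -- {(i, j): penalty} applied when co-moving trajectories
                  i and j receive different identities
    """
    energy = sum(unary[i][label] for i, label in enumerate(assignment))
    for (i, j), penalty in pairwise.items():
        if assignment[i] != assignment[j]:
            energy += penalty
    return energy
```

Inference would then search for the labeling minimizing this energy (e.g., via loopy belief propagation or graph cuts); the group-activity term ties together trajectories that move as a group across camera views.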

Keyword :

Part-aware; MRF; Group activity; Association

Cite:


GB/T 7714 Cheng, De, Gong, Yihong, Wang, Jinjun, et al. Part-aware trajectories association across non-overlapping uncalibrated cameras [J]. NEUROCOMPUTING, 2017, 230: 30-39.
MLA Cheng, De, et al. "Part-aware trajectories association across non-overlapping uncalibrated cameras." NEUROCOMPUTING 230 (2017): 30-39.
APA Cheng, De, Gong, Yihong, Wang, Jinjun, Hou, Qiqi, & Zheng, Nanning. Part-aware trajectories association across non-overlapping uncalibrated cameras. NEUROCOMPUTING, 2017, 230, 30-39.

Address: Xi'an Jiaotong University Library (No. 28, Xianning West Road, Xi'an, Shaanxi, 710049). Contact: 029-82667865
Copyright: Xi'an Jiaotong University Library. Technical support: Beijing Aegean Software Co., Ltd.