
Query:

Scholar name: 薛建儒 (Xue Jianru)

Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation (EI, Scopus, SCIE)
Journal article | 2019, 28 (2), 841-852 | IEEE Transactions on Image Processing

Abstract:

Embedding and aggregating a set of local descriptors (e.g., SIFT) into a single vector is a standard way to represent images in image search. Standard aggregation operations include sum and weighted aggregation. While highly efficient, sum aggregation lacks discriminative power. In contrast, weighted aggregation shows promising retrieval performance but suffers from extremely high time cost. In this paper, we present a general mixed aggregation method that unifies sum and weighted aggregation. Owing to its general formulation, our method is able to balance the trade-off between retrieval quality and the efficiency of computing image representations. Additionally, to improve query performance, we propose computing multiple weighting coefficients rather than one for each vector to be aggregated, by partitioning the vectors into several components at negligible computational cost. Extensive experimental results on standard public image retrieval benchmarks demonstrate that our aggregation method achieves state-of-the-art performance while showing a more-than-tenfold speedup over baselines.
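
Sketch:

To make the trade-off concrete, here is a minimal numpy sketch of the aggregation regimes the abstract contrasts. The function names, the interpolation parameter alpha, and the norm-based placeholder weights are our own illustration, not the paper's formulation.

```python
import numpy as np

def sum_aggregate(X):
    """Sum aggregation: one pass over the (n, d) descriptor matrix."""
    return X.sum(axis=0)

def weighted_aggregate(X, w):
    """Weighted aggregation: one coefficient per local descriptor."""
    return (w[:, None] * X).sum(axis=0)

def mixed_aggregate(X, w, alpha=0.5):
    """Hypothetical mixed aggregation: alpha=1 recovers sum
    aggregation, alpha=0 recovers weighted aggregation."""
    mixed_w = alpha + (1.0 - alpha) * w
    return (mixed_w[:, None] * X).sum(axis=0)

def multiweight_aggregate(X, k=4):
    """Sketch of the multi-weight idea: split each descriptor into k
    components and give each component its own (placeholder,
    norm-based) coefficient before summing."""
    n, d = X.shape
    assert d % k == 0
    parts = X.reshape(n, k, d // k)
    w = 1.0 / (np.linalg.norm(parts, axis=2) + 1e-8)
    return (w[:, :, None] * parts).reshape(n, d).sum(axis=0)
```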

Keywords:

Aggregation; image representation; image search; multiple weights

Cite:

GB/T 7714: Pang, Shanmin, Xue, Jianru, Zhu, Jihua, et al. Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation [J]. IEEE Transactions on Image Processing, 2019, 28 (2): 841-852.
MLA: Pang, Shanmin, et al. "Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation." IEEE Transactions on Image Processing 28.2 (2019): 841-852.
APA: Pang, Shanmin, Xue, Jianru, Zhu, Jihua, Zhu, Li, & Tian, Qi. Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation. IEEE Transactions on Image Processing, 2019, 28 (2), 841-852.
Adding attentiveness to the neurons in recurrent neural networks (EI, Scopus)
Conference paper | 2018, LNCS 11213, 136-152 | 15th European Conference on Computer Vision (ECCV 2018)

Abstract:

Recurrent neural networks (RNNs) are capable of modeling the temporal dynamics of complex sequential information. However, the structures of existing RNN neurons mainly focus on controlling the contributions of current and historical information, and do not explore the different importance levels of the different elements of an input vector at a time slot. We propose adding a simple yet effective Element-wise-Attention Gate (EleAttG) to an RNN block (e.g., all RNN neurons in a network layer), which gives the RNN neurons an attentiveness capability. For an RNN block, an EleAttG is added to adaptively modulate the input by assigning a different level of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. Specifically, the modulation of the input is content adaptive and is performed at fine granularity, being element-wise rather than input-wise. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structure, e.g., a standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to action recognition tasks on both 3D human skeleton data and RGB videos. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly boosts the power of RNNs.
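
Sketch:

A minimal numpy sketch of a GRU cell augmented with an element-wise attention gate, in the spirit of the EleAttG described above. Weight shapes, initialization, and the exact inputs to the gate are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class EleAttGRUCell:
    """GRU cell preceded by an Element-wise-Attention Gate: the gate
    produces one attention value per input dimension and rescales the
    input before the standard GRU update."""

    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        s = lambda *shape: rng.normal(0.0, 0.1, shape)
        self.Wa, self.Ua = s(d_in, d_in), s(d_hid, d_in)    # attention gate
        self.Wz, self.Uz = s(d_in, d_hid), s(d_hid, d_hid)  # update gate
        self.Wr, self.Ur = s(d_in, d_hid), s(d_hid, d_hid)  # reset gate
        self.Wh, self.Uh = s(d_in, d_hid), s(d_hid, d_hid)  # candidate state

    def step(self, x, h):
        a = sigmoid(x @ self.Wa + h @ self.Ua)  # element-wise attention
        x = a * x                               # content-adaptive modulation
        z = sigmoid(x @ self.Wz + h @ self.Uz)
        r = sigmoid(x @ self.Wr + h @ self.Ur)
        h_cand = np.tanh(x @ self.Wh + (r * h) @ self.Uh)
        return (1.0 - z) * h + z * h_cand
```

The same gate could precede a plain RNN or LSTM update instead of a GRU, which is the generality the abstract claims.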

Keywords:

Action recognition; Element-wise-Attention Gate (EleAttG); historical information; recurrent neural networks (RNNs); RGB video; sequential information; skeleton; temporal dynamics

Cite:

GB/T 7714: Xue, Jianru, Lan, Cuiling, Zeng, Wenjun, et al. Adding attentiveness to the neurons in recurrent neural networks [C]. 2018: 136-152.
MLA: Xue, Jianru, et al. "Adding attentiveness to the neurons in recurrent neural networks." (2018): 136-152.
APA: Xue, Jianru, Lan, Cuiling, Zeng, Wenjun, Gao, Zhanning, & Zheng, Nanning. Adding attentiveness to the neurons in recurrent neural networks. (2018): 136-152.
A Survey of Scene Understanding by Event Reasoning in Autonomous Driving (EI, Scopus, CSCD)
Journal article | 2018, 15 (3), 249-266 | International Journal of Automation and Computing

Abstract:

Realizing autonomy has been a hot research topic for autonomous vehicles in recent years. For a long time, most efforts toward this goal have concentrated on understanding the scene surrounding the ego-vehicle (the autonomous vehicle itself). By completing low-level vision tasks, such as detection, tracking, and segmentation of the surrounding traffic participants, e.g., pedestrians, cyclists, and vehicles, the scene can be interpreted. However, for an autonomous vehicle, low-level vision tasks are largely insufficient for comprehensive scene understanding. What have the scene participants been doing, what are they doing now, and what will they do next? Answering this deeper question is what steers a vehicle toward truly full automation, as a human driver would. With this in mind, this paper investigates the interpretation of traffic scenes in autonomous driving from an event reasoning view. To reach this goal, we review the most relevant literature and the state of the art in scene representation, event detection, and intention prediction in autonomous driving. In addition, we discuss the open challenges and problems in this field and endeavor to provide possible solutions.

Keywords:

Autonomous vehicles; event reasoning; intention prediction; scene representation; scene understanding

Cite:

GB/T 7714: Xue, Jian-Ru, Fang, Jian-Wu, Zhang, Pu. A Survey of Scene Understanding by Event Reasoning in Autonomous Driving [J]. International Journal of Automation and Computing, 2018, 15 (3): 249-266.
MLA: Xue, Jian-Ru, et al. "A Survey of Scene Understanding by Event Reasoning in Autonomous Driving." International Journal of Automation and Computing 15.3 (2018): 249-266.
APA: Xue, Jian-Ru, Fang, Jian-Wu, & Zhang, Pu. A Survey of Scene Understanding by Event Reasoning in Autonomous Driving. International Journal of Automation and Computing, 2018, 15 (3), 249-266.
Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization (EI)
Conference paper | 2018, 2018-June, 734-739 | 2018 IEEE Intelligent Vehicles Symposium (IV 2018)

Abstract:

In this paper, we propose a robust point set registration algorithm that combines correntropy and the point-to-plane distance, and can register rigid point sets contaminated by noise and outliers. Firstly, as correntropy performs well in handling data with non-Gaussian noise, we introduce it to model the rigid point set registration problem based on the point-to-plane distance. Secondly, we propose an iterative algorithm to solve this problem, which alternately computes correspondences and transformation parameters, each via a closed-form solution. Simulated experimental results demonstrate the high precision and robustness of the proposed algorithm. In addition, LiDAR-based localization experiments on an automated vehicle show satisfactory localization accuracy and time consumption.
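
Sketch:

One iteration of such a registration loop, sketched in numpy under simplifying assumptions: brute-force nearest-neighbour correspondences, Gaussian-kernel (correntropy) weights on the point-to-plane residuals, and a small-angle linearised pose update standing in for the paper's closed-form solution.

```python
import numpy as np

def register_step(P, Q, NQ, R, t, sigma=0.1):
    """P: source points (n, 3); Q, NQ: target points / unit normals (m, 3);
    R, t: current rigid transform. Returns an updated (R, t)."""
    Pt = P @ R.T + t                                   # transformed source
    idx = ((Pt[:, None] - Q[None]) ** 2).sum(-1).argmin(axis=1)
    q, n = Q[idx], NQ[idx]                             # correspondences
    r = ((Pt - q) * n).sum(axis=1)                     # point-to-plane residual
    w = np.exp(-r ** 2 / (2.0 * sigma ** 2))           # correntropy weights
    # Linearised residual: r + (p x n) . a + n . dt, unknowns xi = [a, dt]
    A = np.hstack([np.cross(Pt, n), n])
    sw = np.sqrt(w)
    xi, *_ = np.linalg.lstsq(A * sw[:, None], -sw * r, rcond=None)
    a, dt = xi[:3], xi[3:]
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return (np.eye(3) + K) @ R, t + dt                 # small-angle update
```

Outliers get residuals far from zero, so their correntropy weights vanish and they barely influence the update; that is the robustness mechanism the abstract describes.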

Keywords:

Automated vehicles; closed-form solutions; iterative algorithms; localization accuracy; non-Gaussian noise; point set registration; time consumption; transformation parameters

Cite:

GB/T 7714: Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization [C]. 2018: 734-739.
MLA: Xu, Guanglin, et al. "Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization." (2018): 734-739.
APA: Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, Zhang, Sirui, Chen, Badong, Zhang, Xuetao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization. (2018): 734-739.
Temporality-enhanced knowledge memory network for factoid question answering (EI, SCIE, Scopus, CSCD)
Journal article | 2018, 19 (1), 104-115 | Frontiers of Information Technology & Electronic Engineering
WoS CC Cited Count: 1 | Scopus Cited Count: 2

Abstract:

Question answering is an important problem that aims to deliver specific answers to questions posed by humans in natural language. How to efficiently identify the exact answer for a given question has become an active line of research. Previous approaches to factoid question answering typically focus on modeling the semantic relevance or syntactic relationship between a given question and its corresponding answer. Most of these models suffer when a question contains very little content that is indicative of the answer. In this paper, we devise an architecture named the temporality-enhanced knowledge memory network (TE-KMN) and apply the model to a factoid question answering dataset from a trivia competition called quiz bowl. Unlike most existing approaches, our model encodes not only the content of questions and answers, but also the temporal cues in a sequence of ordered sentences that gradually reveal the answer. Moreover, our model collaboratively uses external knowledge for a better understanding of a given question. The experimental results demonstrate that our method achieves better performance than several state-of-the-art methods.
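
Sketch:

A generic key-value memory read over the ordered question sentences, to make the idea of accumulating temporal cues plus external knowledge concrete. This is a textbook memory-network pattern, not the TE-KMN equations; all names and the update rule are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def score_answers(sentences, memory, answers):
    """sentences: (T, d) encodings of the question sentences in order;
    memory: (m, d) external knowledge entries; answers: (a, d) candidate
    answer embeddings. Evidence accumulates sentence by sentence, so
    early (hard) clues and late (easy) ones both contribute."""
    state = np.zeros(sentences.shape[1])
    for s in sentences:
        attn = softmax(memory @ (state + s))    # attend over knowledge
        state = state + s + attn @ memory       # fold the read back in
    return softmax(answers @ state)             # one score per answer
```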

Keywords:

Temporality interaction; question answering; knowledge memory

Cite:

GB/T 7714: Duan, Xin-yu, Tang, Si-liang, Zhang, Sheng-yu, et al. Temporality-enhanced knowledge memory network for factoid question answering [J]. Frontiers of Information Technology & Electronic Engineering, 2018, 19 (1): 104-115.
MLA: Duan, Xin-yu, et al. "Temporality-enhanced knowledge memory network for factoid question answering." Frontiers of Information Technology & Electronic Engineering 19.1 (2018): 104-115.
APA: Duan, Xin-yu, Tang, Si-liang, Zhang, Sheng-yu, Zhang, Yin, Zhao, Zhou, Xue, Jian-ru, et al. Temporality-enhanced knowledge memory network for factoid question answering. Frontiers of Information Technology & Electronic Engineering, 2018, 19 (1), 104-115.
Large-scale vocabularies with local graph diffusion and mode seeking (EI, SCIE, Scopus)
Journal article | 2018, 63, 1-8 | Signal Processing: Image Communication
Scopus Cited Count: 1

Abstract:

In this work, we propose a large-scale clustering method for image retrieval that captures the intrinsic manifold structure of local features by graph diffusion. The proposed method is a mode seeking-like algorithm: it finds the mode of each data point using a stochastic matrix produced by the same local graph diffusion process. While mode seeking algorithms are normally costly, our method generates large-scale vocabularies efficiently because it is not iterative and its major computational steps are done in parallel. Furthermore, unlike other clustering methods, such as k-means and spectral clustering, the proposed algorithm does not need the number of clusters to be set empirically beforehand, and its time complexity is independent of the number of clusters. Experimental results on standard image retrieval datasets demonstrate that the proposed method compares favorably to previous large-scale clustering methods.
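
Sketch:

A compact numpy sketch of clustering by local graph diffusion and mode seeking, under our own simplifying assumptions (dense matrices, a fixed number of diffusion steps): build a kNN affinity graph, row-normalise it into a stochastic matrix, diffuse, and let each point follow the peak of its diffused distribution to a cluster representative. The paper's method is non-iterative and parallel; the matrix powers below are purely illustrative.

```python
import numpy as np

def diffusion_mode_clusters(X, k=10, steps=3, sigma=1.0):
    n = len(X)
    D = ((X[:, None] - X[None]) ** 2).sum(-1)       # pairwise sq. distances
    W = np.exp(-D / (2.0 * sigma ** 2))             # Gaussian affinities
    keep = np.argsort(D, axis=1)[:, :k]             # local graph: kNN only
    M = np.zeros_like(W)
    M[np.arange(n)[:, None], keep] = W[np.arange(n)[:, None], keep]
    P = M / M.sum(axis=1, keepdims=True)            # stochastic matrix
    P = np.linalg.matrix_power(P, steps)            # local graph diffusion
    modes = P.argmax(axis=1)                        # mode of each point
    for _ in range(n):                              # follow mode pointers
        nxt = modes[modes]
        if np.array_equal(nxt, modes):
            break
        modes = nxt
    return modes    # points sharing a representative form one cluster
```

Note that the number of clusters emerges from how many distinct representatives survive, matching the abstract's claim that it need not be specified beforehand.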

Keywords:

Image retrieval; large-scale clustering; local graph diffusion; mode seeking

Cite:

GB/T 7714: Pang, Shanmin, Xue, Jianru, Gao, Zhanning, et al. Large-scale vocabularies with local graph diffusion and mode seeking [J]. Signal Processing: Image Communication, 2018, 63: 1-8.
MLA: Pang, Shanmin, et al. "Large-scale vocabularies with local graph diffusion and mode seeking." Signal Processing: Image Communication 63 (2018): 1-8.
APA: Pang, Shanmin, Xue, Jianru, Gao, Zhanning, Zheng, Lihong, & Zhu, Li. Large-scale vocabularies with local graph diffusion and mode seeking. Signal Processing: Image Communication, 2018, 63, 1-8.
Color constancy via multibranch deep probability network (EI, SCIE, Scopus)
Journal article | 2018, 27 (4) | Journal of Electronic Imaging

Abstract:

A learning-based multibranch deep probability network is proposed to estimate the color of the illuminating light source in a scene, a problem commonly referred to as color constancy. The method consists of two coupled subnetworks: a deep multibranch illumination estimating network (DMBEN) and a deep probability computing network (DPN). One branch of DMBEN estimates the global illuminant through a pooling layer and a fully connected layer, whereas the other branch is built as an end-to-end residual network (ResNet) to evaluate the local illumination. The adjoint subnetwork DPN separately computes the probability that each DMBEN result is similar to the ground truth, and then determines the better estimate from the two probabilities under a new criterion. The results of extensive experiments on the Color Checker and NUS 8-Camera datasets show that the proposed approach is superior to state-of-the-art methods in both efficiency and effectiveness.

Keywords:

Multibranch; illuminant estimation; probability network; color constancy

Cite:

GB/T 7714: Wang, Fei, Wang, Wei, Qiu, Zhiliang, et al. Color constancy via multibranch deep probability network [J]. Journal of Electronic Imaging, 2018, 27 (4).
MLA: Wang, Fei, et al. "Color constancy via multibranch deep probability network." Journal of Electronic Imaging 27.4 (2018).
APA: Wang, Fei, Wang, Wei, Qiu, Zhiliang, Fang, Jianwu, Xue, Jianru, & Zhang, Jingru. Color constancy via multibranch deep probability network. Journal of Electronic Imaging, 2018, 27 (4).
Research on Human-Machine Collaborative Annotation for Traffic Scene Data (CPCI-S)
Conference paper | 2018, 2900-2905 | Chinese Automation Congress (CAC)

Abstract:

Computer vision models based on deep learning require large amounts of high-quality data for training. However, obtaining large amounts of well-annotated data is expensive. State-of-the-art automatic annotation tools can accurately detect and segment a few object classes. We bring annotation tools and crowd engineering together into one framework for object detection and instance-level segmentation. The inputs of the model are the image to be annotated and the annotation constraints: precision, utility, and cost. The outputs of the model are the set of detected objects and the set of instance-level segmentation results. The model integrates the computer vision annotation model with the manual annotation model. We validate the human-machine collaborative annotation model on the Cityscapes dataset.
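
Sketch:

A hypothetical routing rule showing how precision and cost constraints could arbitrate between machine and human annotation. The thresholds, field names, and budget model are our assumptions; the paper's actual decision model is not specified here.

```python
def route(detections, precision_target=0.95, budget=100.0, human_cost=1.0):
    """detections: list of dicts with a 'score' confidence field.
    Keep confident machine annotations as-is; spend budget on human
    annotators for the uncertain rest; drop what the budget cannot cover."""
    machine, human = [], []
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        if det["score"] >= precision_target:
            machine.append(det)          # accepted machine annotation
        elif budget >= human_cost:
            human.append(det)            # sent to a human annotator
            budget -= human_cost
    return machine, human
```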

Keywords:

Human-machine collaborative annotation; object detection; instance-level segmentation

Cite:

GB/T 7714: Pan, Yuxin, Fang, Jianwu, Dou, Jian, et al. Research on Human-Machine Collaborative Annotation for Traffic Scene Data [C]. 2018: 2900-2905.
MLA: Pan, Yuxin, et al. "Research on Human-Machine Collaborative Annotation for Traffic Scene Data." (2018): 2900-2905.
APA: Pan, Yuxin, Fang, Jianwu, Dou, Jian, Ye, Zhen, & Xue, Jianru. Research on Human-Machine Collaborative Annotation for Traffic Scene Data. (2018): 2900-2905.
A Decision Fusion Model for 3D Detection of Autonomous Driving (CPCI-S)
Conference paper | 2018, 3773-3777 | Chinese Automation Congress (CAC)

Abstract:

This paper proposes a multimodal fusion model for 3D car detection that takes both point clouds and RGB images as input and generates the corresponding 3D bounding boxes. Our model is composed of two subnetworks, one a point-based method and the other a multi-view based method, whose outputs are combined by a decision fusion model. The decision model absorbs the advantages of the two subnetworks and effectively restrains their shortcomings. Experiments on the KITTI 3D car detection benchmark show that our work achieves state-of-the-art performance.
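
Sketch:

An illustrative late-fusion rule for two detector outputs: pool both detection sets, let overlapping detections from the two subnetworks reinforce each other's scores, then run greedy NMS. Axis-aligned boxes and the 0.5 reinforcement factor are simplifying assumptions, not the paper's fusion model.

```python
import numpy as np

def iou3d(a, b):
    """IoU of two axis-aligned 3D boxes (x1, y1, z1, x2, y2, z2)."""
    lo, hi = np.maximum(a[:3], b[:3]), np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol = lambda c: np.prod(c[3:] - c[:3])
    return inter / (vol(a) + vol(b) - inter + 1e-9)

def fuse(boxes_a, scores_a, boxes_b, scores_b, thr=0.5):
    boxes = np.vstack([boxes_a, boxes_b])
    scores = np.concatenate([scores_a, scores_b]).astype(float)
    for i, ba in enumerate(boxes_a):        # agreement bonus for cars
        for j, bb in enumerate(boxes_b):    # that both subnetworks found
            if iou3d(ba, bb) > thr:
                scores[i] += 0.5 * scores_b[j]
                scores[len(boxes_a) + j] += 0.5 * scores_a[i]
    order, keep = list(np.argsort(-scores)), []
    while order:                            # greedy NMS on the pooled set
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou3d(boxes[i], boxes[j]) <= thr]
    return boxes[keep], scores[keep]
```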

Keywords:

Autonomous driving; decision fusion; 3D car detection network

Cite:

GB/T 7714: Ye, Zhen, Xue, Jianru, Fang, Jianwu, et al. A Decision Fusion Model for 3D Detection of Autonomous Driving [C]. 2018: 3773-3777.
MLA: Ye, Zhen, et al. "A Decision Fusion Model for 3D Detection of Autonomous Driving." (2018): 3773-3777.
APA: Ye, Zhen, Xue, Jianru, Fang, Jianwu, Dou, Jian, & Pan, Yuxin. A Decision Fusion Model for 3D Detection of Autonomous Driving. (2018): 3773-3777.
Building discriminative CNN image representations for object retrieval using the replicator equation (EI, SCIE, Scopus)
Journal article | 2018, 83, 150-160 | Pattern Recognition
Scopus Cited Count: 1

Abstract:

We present a generic unsupervised method to increase the discriminative power of image vectors obtained from a broad family of deep neural networks for object retrieval. This goal is accomplished by simultaneously selecting and weighting informative deep convolutional features using the replicator equation, commonly used to capture the essence of selection in evolutionary game theory. The proposed method includes three major steps: first, efficiently detecting features within Regions of Interest (ROIs) using a simple algorithm, as well as trivially collecting a subset of background features; second, assigning the unassigned features by optimizing a standard quadratic problem using the replicator equation; finally, using the replicator equation again to partially address the issue of feature burstiness. We provide a theoretical time complexity analysis to show that our method is efficient. Experimental results on several common object retrieval benchmarks, using both pre-trained and fine-tuned deep networks, show that our method compares favorably to the state of the art. We also publish an easy-to-use Matlab implementation of the proposed method for reproducing our results.
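
Sketch:

The replicator equation itself is standard, so it can be shown directly; how the pairwise similarity matrix A is assembled from ROI and background features is the paper's contribution and is omitted here.

```python
import numpy as np

def replicator_weights(A, iters=1000, tol=1e-8):
    """Replicator dynamics w <- w * (A w) / (w^T A w) on a non-negative
    n x n feature-similarity matrix A. The stationary w concentrates
    mass on a mutually consistent subset of features, which is the
    simultaneous selection-and-weighting role described above."""
    w = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        Aw = A @ w
        w_next = w * Aw / (w @ Aw + 1e-12)
        if np.abs(w_next - w).sum() < tol:
            break
        w = w_next
    return w   # use as per-feature weights when building the image vector
```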

Keywords:

Deep feature selection; object retrieval; deep feature weighting; replicator equation

Cite:

GB/T 7714: Pang, Shanmin, Zhu, Jihua, Wang, Jiaxing, et al. Building discriminative CNN image representations for object retrieval using the replicator equation [J]. Pattern Recognition, 2018, 83: 150-160.
MLA: Pang, Shanmin, et al. "Building discriminative CNN image representations for object retrieval using the replicator equation." Pattern Recognition 83 (2018): 150-160.
APA: Pang, Shanmin, Zhu, Jihua, Wang, Jiaxing, Ordonez, Vicente, & Xue, Jianru. Building discriminative CNN image representations for object retrieval using the replicator equation. Pattern Recognition, 2018, 83, 150-160.