Query:

Scholar name: 薛建儒 (Xue Jianru)

Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation EI Scopus SCIE
Journal article | 2019, 28(2), 841-852 | IEEE Transactions on Image Processing
WoS CC Cited Count: 1 | SCOPUS Cited Count: 1
Abstract:

Embedding and aggregating a set of local descriptors (e.g., SIFT) into a single vector is a standard way to represent images in image search. Standard aggregation operations include sum and weighted aggregation. While highly efficient, sum aggregation lacks discriminative power. In contrast, weighted aggregation shows promising retrieval performance but suffers from extremely high time cost. In this paper, we present a general mixed aggregation method that unifies sum and weighted aggregation. Owing to its general formulation, our method is able to balance the trade-off between retrieval quality and the efficiency of computing image representations. Additionally, to improve query performance, we propose computing multiple weighting coefficients rather than one for each vector to be aggregated, by partitioning the vectors into several components at negligible computational cost. Extensive experimental results on standard public image retrieval benchmarks demonstrate that our aggregation method achieves state-of-the-art performance while running more than ten times faster than the baselines.
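The sum-vs-weighted trade-off described in the abstract can be sketched in a few lines of numpy. The interpolation parameter `alpha` and the burstiness-based weighting rule below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def sum_aggregate(descriptors):
    """Sum aggregation: fast, but with limited discriminative power."""
    return descriptors.sum(axis=0)

def mixed_aggregate(descriptors, alpha=0.5):
    """Hypothetical mixed aggregation: alpha=0 reduces to plain sum
    aggregation; alpha=1 applies a per-descriptor weight that
    down-weights descriptors similar to many others (a burstiness
    proxy). The weighting rule is illustrative only."""
    sims = np.abs(descriptors @ descriptors.T)
    burst = sims.sum(axis=1)                 # accumulated similarity to the set
    weights = 1.0 / np.maximum(burst, 1e-8)  # bursty descriptors get small weight
    weights /= weights.mean()                # keep the overall scale comparable
    mixed_w = (1.0 - alpha) + alpha * weights
    agg = (mixed_w[:, None] * descriptors).sum(axis=0)
    return agg / np.linalg.norm(agg)         # L2-normalise for cosine retrieval
```

With `alpha=0` the result equals the L2-normalised sum aggregation, so a single scalar spans the efficiency/quality range the abstract refers to.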

Keywords:

Aggregation, image representation, image search, multiple weights

Cite:

GB/T 7714: Pang, Shanmin, Xue, Jianru, Zhu, Jihua, et al. Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation [J]. IEEE Transactions on Image Processing, 2019, 28(2): 841-852.
MLA: Pang, Shanmin, et al. "Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation." IEEE Transactions on Image Processing 28.2 (2019): 841-852.
APA: Pang, Shanmin, Xue, Jianru, Zhu, Jihua, Zhu, Li, & Tian, Qi. Unifying Sum and Weighted Aggregations for Efficient Yet Effective Image Representation Computation. IEEE Transactions on Image Processing, 2019, 28(2), 841-852.
Deep Feature Aggregation and Image Re-ranking with Heat Diffusion for Image Retrieval EI Scopus
Journal article | 2018 | IEEE Transactions on Multimedia
Abstract:

Image retrieval based on deep convolutional features has demonstrated state-of-the-art performance on popular benchmarks. In this paper, we present a unified solution to deep convolutional feature aggregation and image re-ranking by simulating the dynamics of heat diffusion. A distinctive problem in image retrieval is that repetitive or bursty features tend to dominate the final image representation, making representations less distinguishable. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoid over-representation of bursty features. We additionally provide a practical solution for the proposed aggregation method and further demonstrate its efficiency in experimental evaluation. Inspired by this aggregation method, we also propose re-ranking the top-ranked images for a given query by treating the query as the heat source. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks and show superior performance compared to previous work.
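As an illustration of the re-ranking idea, here is a minimal diffusion-style re-ranker on a similarity graph. The power-iteration-with-restart scheme and the `alpha` parameter are stand-in assumptions, not the paper's heat-diffusion model:

```python
import numpy as np

def diffusion_rerank(sim, query_idx, alpha=0.85, iters=50):
    """Spread 'heat' from the query node over a similarity graph and
    score images by how much heat they receive. sim: (n, n) non-negative
    similarity matrix; query_idx: index of the query (the heat source)."""
    n = sim.shape[0]
    W = sim / np.maximum(sim.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    heat = np.zeros(n)
    heat[query_idx] = 1.0                  # all heat starts at the query
    f = heat.copy()
    for _ in range(iters):                 # diffuse, restarting at the source
        f = alpha * (W.T @ f) + (1.0 - alpha) * heat
    return f                               # higher score = closer to the query
```

The final ranking is then `np.argsort(-f)`; images strongly connected to the query (directly or through intermediate neighbours) accumulate the most heat.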

Keywords:

Aggregation methods, Experimental evaluation, Feature aggregation, Heat equation, Image representations, Practical solutions, Re-ranking, State-of-the-art performance

Cite:

GB/T 7714: Pang, Shanmin, Ma, Jin, Xue, Jianru, et al. Deep Feature Aggregation and Image Re-ranking with Heat Diffusion for Image Retrieval [J]. IEEE Transactions on Multimedia, 2018.
MLA: Pang, Shanmin, et al. "Deep Feature Aggregation and Image Re-ranking with Heat Diffusion for Image Retrieval." IEEE Transactions on Multimedia (2018).
APA: Pang, Shanmin, Ma, Jin, Xue, Jianru, Zhu, Jihua, & Ordonez, Vicente. Deep Feature Aggregation and Image Re-ranking with Heat Diffusion for Image Retrieval. IEEE Transactions on Multimedia, 2018.
Data-Driven State-Increment Statistical Model and Its Application in Autonomous Driving EI Scopus SCIE
Journal article | 2018, 19(12), 3872-3882 | IEEE Transactions on Intelligent Transportation Systems
Abstract:

The aim of trajectory planning is to generate a feasible, collision-free trajectory that guides an autonomous vehicle safely from its initial state to a goal state. However, it is difficult to guarantee that a planned trajectory is feasible for the vehicle and that the real path the vehicle follows remains collision-free. In this paper, a state-increment statistical model (SISM) is proposed to describe the kinodynamic constraints of a vehicle by jointly modeling the controller, the actuator, and the vehicle model. The SISM consists of Gaussian distributions of lateral-error increments in all state subspaces, where a state is composed of the curvature radius, the velocity, and the lateral error. It is a data-driven modeling approach: the SISM improves as the number of samples of the increment-state, composed of the state and its corresponding lateral-error increment, grows. Based on the SISM, experience cost functions are designed to evaluate candidate trajectories and search for the one with the lowest cost, and the real path can be predicted directly from the planned trajectory and the vehicle state. The predicted path can then be used effectively to evaluate the safety of the vehicle's motion.
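A toy version of such a data-driven model can be written as a per-bin Gaussian fit over observed increments. The discretisation into equal-width bins below is an assumption for illustration, not the paper's exact state-subspace construction:

```python
import numpy as np
from collections import defaultdict

def fit_sism(states, increments, n_bins=8):
    """Fit a Gaussian (mean, std) of lateral-error increments in each
    discretised state bin. states: (n, 3) rows of (curvature radius,
    velocity, lateral error); increments: (n,) lateral-error increments."""
    edges = [np.linspace(states[:, d].min(), states[:, d].max(), n_bins + 1)
             for d in range(states.shape[1])]
    buckets = defaultdict(list)
    for s, inc in zip(states, increments):
        key = tuple(int(np.clip(np.searchsorted(e, v) - 1, 0, n_bins - 1))
                    for e, v in zip(edges, s))
        buckets[key].append(inc)          # samples sharing a state subspace
    # one Gaussian per populated state subspace
    return {k: (float(np.mean(v)), float(np.std(v))) for k, v in buckets.items()}
```

More driving data populates more bins with tighter estimates, which matches the abstract's claim that the model improves as the number of increment-state samples grows.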

Keywords:

Autonomous vehicles, Data driven, Path prediction, Predictive models, Statistical modeling

Cite:

GB/T 7714: Ma, Chao, Xue, Jianru, Liu, Yuehu, et al. Data-Driven State-Increment Statistical Model and Its Application in Autonomous Driving [J]. IEEE Transactions on Intelligent Transportation Systems, 2018, 19(12): 3872-3882.
MLA: Ma, Chao, et al. "Data-Driven State-Increment Statistical Model and Its Application in Autonomous Driving." IEEE Transactions on Intelligent Transportation Systems 19.12 (2018): 3872-3882.
APA: Ma, Chao, Xue, Jianru, Liu, Yuehu, Yang, Jing, Li, Yongqiang, & Zheng, Nanning. Data-Driven State-Increment Statistical Model and Its Application in Autonomous Driving. IEEE Transactions on Intelligent Transportation Systems, 2018, 19(12), 3872-3882.
Adding attentiveness to the neurons in recurrent neural networks EI Scopus
Conference paper | 2018, 11213 LNCS, 136-152 | 15th European Conference on Computer Vision, ECCV 2018
Abstract:

Recurrent neural networks (RNNs) are capable of modeling the temporal dynamics of complex sequential information. However, the structures of existing RNN neurons mainly focus on controlling the contributions of current and historical information, and do not explore the differing importance of the elements within the input vector of a time step. We propose adding a simple yet effective Element-wise-Attention Gate (EleAttG) to an RNN block (e.g., all the RNN neurons in a network layer), empowering the RNN neurons with an attention capability. For an RNN block, an EleAttG is added to adaptively modulate the input by assigning a different level of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. The modulation of the input is content adaptive and is performed at fine granularity, being element-wise rather than input-wise. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structure, e.g., a standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to action recognition on both 3D human skeleton data and RGB videos. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly boosts the power of RNNs.
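The element-wise gate can be sketched for a single time step as follows; the weight names (`Wa`, `Ua`, `ba`) and their shapes are placeholder assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def eleatt_step(x, h_prev, Wa, Ua, ba):
    """Element-wise-Attention Gate (EleAttG), sketched: compute one
    attention value per input dimension from the current input x and
    the previous hidden state h_prev, then modulate x element-wise.
    The attended input would then feed a standard RNN/LSTM/GRU cell."""
    a = sigmoid(Wa @ x + Ua @ h_prev + ba)  # attention in (0, 1), same dim as x
    return a * x                            # element-wise modulated input
```

Because `a` lies in (0, 1), each input dimension is attenuated independently, which is the fine-granularity, element-wise (rather than input-wise) modulation the abstract describes.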

Keywords:

Action recognition, Element-wise-Attention Gate (EleAttG), Historical information, Recurrent neural networks (RNNs), RGB video, Sequential information, Skeleton, Temporal dynamics

Cite:

GB/T 7714: Xue, Jianru, Lan, Cuiling, Zeng, Wenjun, et al. Adding attentiveness to the neurons in recurrent neural networks [C]. 2018: 136-152.
MLA: Xue, Jianru, et al. "Adding attentiveness to the neurons in recurrent neural networks." (2018): 136-152.
APA: Xue, Jianru, Lan, Cuiling, Zeng, Wenjun, Gao, Zhanning, & Zheng, Nanning. Adding attentiveness to the neurons in recurrent neural networks. 2018: 136-152.
Color constancy via multibranch deep probability network EI SCIE Scopus
Journal article | 2018, 27(4) | JOURNAL OF ELECTRONIC IMAGING
Abstract:

A learning-based multibranch deep probability network is proposed to estimate the illumination color of the light source in a scene, a task commonly referred to as color constancy. The method consists of two coupled subnetworks: a deep multibranch illumination-estimating network (DMBEN) and a deep probability computing network (DPN). One branch of the DMBEN estimates the global illuminant through a pooling layer and a fully connected layer, whereas the other branch is built as an end-to-end residual network (ResNet) to estimate the local illumination. The adjoint subnetwork, DPN, computes the probabilities that the two DMBEN estimates are close to the ground truth, then selects the better estimate according to these probabilities under a new criterion. Results of extensive experiments on the Color Checker and NUS 8-Camera datasets show that the proposed approach is superior to state-of-the-art methods in both efficiency and effectiveness.

Keywords:

multibranch, illuminant estimation, probability network, color constancy

Cite:

GB/T 7714: Wang, Fei, Wang, Wei, Qiu, Zhiliang, et al. Color constancy via multibranch deep probability network [J]. JOURNAL OF ELECTRONIC IMAGING, 2018, 27(4).
MLA: Wang, Fei, et al. "Color constancy via multibranch deep probability network." JOURNAL OF ELECTRONIC IMAGING 27.4 (2018).
APA: Wang, Fei, Wang, Wei, Qiu, Zhiliang, Fang, Jianwu, Xue, Jianru, & Zhang, Jingru. Color constancy via multibranch deep probability network. JOURNAL OF ELECTRONIC IMAGING, 2018, 27(4).
Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization EI
Conference paper | 2018, 2018-June, 734-739 | 2018 IEEE Intelligent Vehicles Symposium, IV 2018
Abstract:

In this paper, we propose a robust point set registration algorithm that combines correntropy with the point-to-plane distance and can register rigid point sets contaminated by noise and outliers. First, since correntropy performs well on data with non-Gaussian noise, we introduce it to model the rigid point set registration problem based on the point-to-plane distance. Second, we propose an iterative algorithm that alternately computes correspondences and transformation parameters, each in closed form. Simulated experiments demonstrate the high precision and robustness of the proposed algorithm. In addition, LiDAR-based localization experiments on an automated vehicle show satisfactory localization accuracy and running time.
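The robust cost at the heart of the method can be illustrated as follows. This is a sketch of a correntropy-weighted point-to-plane objective for already-corresponded points, not the paper's full alternating solver; `sigma` is an assumed kernel bandwidth:

```python
import numpy as np

def correntropy_point_to_plane(src, dst, normals, sigma=0.1):
    """src, dst: (n, 3) corresponded points; normals: (n, 3) unit
    normals at dst. Returns a correntropy-style cost and the
    per-point weights (outliers receive weights near zero)."""
    r = np.einsum('ij,ij->i', src - dst, normals)  # signed point-to-plane distances
    w = np.exp(-r**2 / (2.0 * sigma**2))           # Gaussian (correntropy) kernel
    cost = float(np.sum(1.0 - w))                  # maximising correntropy = minimising this
    return cost, w
```

In an ICP-style loop the weights `w` would multiply each linearised point-to-plane constraint, so gross outliers barely influence the closed-form transform update, which is what makes the kernel robust to non-Gaussian noise.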

Keywords:

Automated vehicles, Closed form solutions, Iterative algorithm, Localization accuracy, Non-Gaussian noise, Point-set registrations, Time consumption, Transformation parameters

Cite:

GB/T 7714: Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization [C]. 2018: 734-739.
MLA: Xu, Guanglin, et al. "Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization." 2018: 734-739.
APA: Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, Zhang, Sirui, Chen, Badong, Zhang, Xuetao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization. 2018: 734-739.
Research on Human-Machine Collaborative Annotation for Traffic Scene Data CPCI-S
Conference paper | 2018, 2900-2905 | Chinese Automation Congress (CAC)
Abstract:

Computer vision models based on deep learning require large amounts of high-quality training data, but obtaining well-annotated data at scale is expensive. State-of-the-art automatic annotation tools can accurately detect and segment only a few object categories. We bring together annotation tools and crowd engineering in a framework for object detection and instance-level segmentation. The inputs of the model are the image to be annotated and the annotation constraints: precision, utility, and cost. The outputs are the set of detected objects and the set of instance-level segmentation results. The model integrates a computer vision annotation model with a manual annotation model. We validate the human-machine collaborative annotation model on the Cityscapes dataset.

Keywords:

human-machine collaborative annotation, object detection, instance-level segmentation

Cite:

GB/T 7714: Pan, Yuxin, Fang, Jianwu, Dou, Jian, et al. Research on Human-Machine Collaborative Annotation for Traffic Scene Data [C]. 2018: 2900-2905.
MLA: Pan, Yuxin, et al. "Research on Human-Machine Collaborative Annotation for Traffic Scene Data." 2018: 2900-2905.
APA: Pan, Yuxin, Fang, Jianwu, Dou, Jian, Ye, Zhen, & Xue, Jianru. Research on Human-Machine Collaborative Annotation for Traffic Scene Data. 2018: 2900-2905.
Accurate Localization in Underground Garages via Cylinder Feature based Map Matching EI
Conference paper | 2018, 2018-June, 314-319 | 2018 IEEE Intelligent Vehicles Symposium, IV 2018
Abstract:

Autonomous driving in underground garages usually relies on a 2D/3D occupancy map for localization. However, the real scene changes over time and may not be consistent with the map. Vehicles and other objects not contained in the map act as obstacles that increase the difficulty of localization and degrade its accuracy. In this paper, we propose a cylinder rotational projection statistics (Cy-RoPS) feature descriptor, a local surface feature descriptor that improves localization accuracy. The local surface feature, motivated by the RoPS feature, is invariant to rotation of the point set enclosed in a cylinder. We also employ the local surface feature for localization in a real underground garage. Experimental results show that the proposed method is robust to dynamic obstacles in the underground garage and achieves higher localization accuracy than state-of-the-art methods.

Keywords:

Autonomous driving, Dynamic obstacles, Feature descriptors, Feature-based, Local surfaces, Map matching, State-of-the-art methods, Underground garages

Cite:

GB/T 7714: Tao, Zhongxing, Xue, Jianru, Wang, Di, et al. Accurate Localization in Underground Garages via Cylinder Feature based Map Matching [C]. 2018: 314-319.
MLA: Tao, Zhongxing, et al. "Accurate Localization in Underground Garages via Cylinder Feature based Map Matching." 2018: 314-319.
APA: Tao, Zhongxing, Xue, Jianru, Wang, Di, Zhang, Shuyang, Cui, Dixiao, & Du, Shaoyi. Accurate Localization in Underground Garages via Cylinder Feature based Map Matching. 2018: 314-319.
A Decision Fusion Model for 3D Detection of Autonomous Driving CPCI-S
Conference paper | 2018, 3773-3777 | Chinese Automation Congress (CAC)
Abstract:

This paper proposes a multimodal fusion model for 3D car detection that takes both point clouds and RGB images as input and generates the corresponding 3D bounding boxes. Our model is composed of two subnetworks, a point-based method and a multi-view-based method, which are combined by a decision fusion model. The decision model absorbs the advantages of the two subnetworks while effectively restraining their shortcomings. Experiments on the KITTI 3D car detection benchmark show that our approach achieves state-of-the-art performance.

Keywords:

Autonomous driving, decision fusion, 3D car detection network

Cite:

GB/T 7714: Ye, Zhen, Xue, Jianru, Fang, Jianwu, et al. A Decision Fusion Model for 3D Detection of Autonomous Driving [C]. 2018: 3773-3777.
MLA: Ye, Zhen, et al. "A Decision Fusion Model for 3D Detection of Autonomous Driving." 2018: 3773-3777.
APA: Ye, Zhen, Xue, Jianru, Fang, Jianwu, Dou, Jian, & Pan, Yuxin. A Decision Fusion Model for 3D Detection of Autonomous Driving. 2018: 3773-3777.
A Survey of Scene Understanding by Event Reasoning in Autonomous Driving EI Scopus CSCD
Journal article | 2018, 15(3), 249-266 | International Journal of Automation and Computing
SCOPUS Cited Count: 1
Abstract:

Realizing autonomy has been a hot research topic for automated vehicles in recent years. For a long time, most efforts toward this goal have concentrated on understanding the scene surrounding the ego-vehicle (the autonomous vehicle itself). By completing low-level vision tasks, such as detection, tracking, and segmentation of the surrounding traffic participants, e.g., pedestrians, cyclists, and vehicles, the scene can be interpreted. However, for an autonomous vehicle, low-level vision tasks are largely insufficient for comprehensive scene understanding: what happened in the past, what is happening now, and what will the scene participants do next? This deeper question is what steers vehicles toward truly full automation, just as it does human drivers. With this in mind, this paper investigates the interpretation of traffic scenes in autonomous driving from an event-reasoning perspective. To this end, we survey the most relevant literature and the state of the art on scene representation, event detection, and intention prediction in autonomous driving. In addition, we discuss the open challenges and problems in this field and endeavor to provide possible solutions.

Keywords:

Autonomous vehicles, event reasoning, intention prediction, scene representation, scene understanding

Cite:

GB/T 7714: Xue, Jian-Ru, Fang, Jian-Wu, Zhang, Pu. A Survey of Scene Understanding by Event Reasoning in Autonomous Driving [J]. International Journal of Automation and Computing, 2018, 15(3): 249-266.
MLA: Xue, Jian-Ru, et al. "A Survey of Scene Understanding by Event Reasoning in Autonomous Driving." International Journal of Automation and Computing 15.3 (2018): 249-266.
APA: Xue, Jian-Ru, Fang, Jian-Wu, & Zhang, Pu. A Survey of Scene Understanding by Event Reasoning in Autonomous Driving. International Journal of Automation and Computing, 2018, 15(3), 249-266.
Address: Xi'an Jiaotong University Library, No. 28 Xianning West Road, Xi'an, Shaanxi 710049. Contact: 029-82667865
Copyright: Xi'an Jiaotong University Library. Technical support: Beijing Aegean Software Co., Ltd.