A two-level finite element method for the Allen–Cahn equation EI Scopus SCIE
Journal article | 2019, 96(1), 158-169 | International Journal of Computer Mathematics

Abstract :

We consider the fully implicit treatment of the nonlinear term of the Allen–Cahn equation. To solve the resulting nonlinear problem efficiently, a two-level scheme is employed. We obtain the discrete energy law of both the fully implicit scheme and the two-level scheme with the finite element method, and the convergence of the two-level method is presented. Finally, some numerical experiments are provided to confirm the theoretical analysis. © 2018 Informa UK Limited, trading as Taylor & Francis Group
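As a point of reference, the sketch below states the Allen–Cahn equation in a common scaling and the weak form of a generic fully implicit finite element step of the kind the abstract describes; the exact formulation, scaling, and two-level splitting used in the paper may differ.

```latex
% Allen–Cahn equation with interface parameter \epsilon and double-well
% nonlinearity f(u) = u^3 - u (a common scaling; the paper's may differ):
\[
  u_t \;-\; \Delta u \;+\; \frac{1}{\epsilon^{2}}\bigl(u^{3}-u\bigr) \;=\; 0 .
\]
% A fully implicit finite element step u_h^n -> u_h^{n+1} solves, for all
% test functions v_h in the finite element space,
\[
  \Bigl(\frac{u_h^{n+1}-u_h^{n}}{\Delta t},\,v_h\Bigr)
  + \bigl(\nabla u_h^{n+1},\,\nabla v_h\bigr)
  + \frac{1}{\epsilon^{2}}\bigl((u_h^{n+1})^{3}-u_h^{n+1},\,v_h\bigr) = 0 .
\]
% A two-level variant would first solve this nonlinear system on a coarse
% mesh and then a linearized problem on the fine mesh; the particular
% splitting analyzed in the paper is not reproduced here.
```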

Keyword :

Discrete energies; Fully implicit scheme; Implicit treatment; Nonlinear problems; Numerical experiments; Numerical tests; Stability and convergence; Two level methods

Cite:


GB/T 7714: Liu, Qingfang, Zhang, Ke, Wang, Zhiheng, et al. A two-level finite element method for the Allen–Cahn equation [J]. International Journal of Computer Mathematics, 2019, 96(1): 158-169.
MLA: Liu, Qingfang, et al. "A two-level finite element method for the Allen–Cahn equation." International Journal of Computer Mathematics 96.1 (2019): 158-169.
APA: Liu, Qingfang, Zhang, Ke, Wang, Zhiheng, & Zhao, Jiakun. A two-level finite element method for the Allen–Cahn equation. International Journal of Computer Mathematics, 2019, 96(1), 158-169.
Robust point cloud registration based on both hard and soft assignments EI Scopus SCIE
Journal article | 2019, 110, 202-208 | Optics and Laser Technology

Abstract :

For the registration of partially overlapping point clouds, this paper proposes an effective approach based on both hard and soft assignments. Given two initially posed point clouds, it first establishes the forward correspondence for each point in the data shape and computes a binary variable indicating whether that correspondence lies in the overlapping area. It then establishes the bilateral correspondence and computes bidirectional distances for each point in the overlapping area. Based on the ratio of bidirectional distances, an exponential function is used to calculate a probability value indicating the reliability of the point correspondence. Both the hard and soft assignment values are then embedded into the proposed objective function for registration of partially overlapping point clouds, which is solved by the proposed variant of the ICP algorithm to obtain the optimal rigid transformation. The proposed approach achieves good registration of point clouds even when their overlap percentage is low. Experimental results on public datasets illustrate its superiority over previous approaches in accuracy and robustness. © 2018 Elsevier Ltd
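The sketch below is a minimal NumPy/SciPy illustration of the two ingredients the abstract mentions: a hard (binary) overlap indicator obtained from forward correspondences and a soft reliability weight computed from the ratio of bidirectional distances via an exponential function, both fed into one weighted ICP-style update. The threshold `overlap_dist`, the exact exponential weighting, and the single-step structure are illustrative assumptions, not the paper's objective function or solver.

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_rigid_transform(P, Q, w):
    """Closed-form (R, t) minimizing sum_i w_i ||R p_i + t - q_i||^2 (weighted Kabsch)."""
    w = w / w.sum()
    mp, mq = w @ P, w @ Q                          # weighted centroids
    H = ((P - mp) * w[:, None]).T @ (Q - mq)       # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, mq - R @ mp

def hard_soft_icp_step(data, model, overlap_dist=0.05):
    """One ICP-style update combining a hard (binary) overlap indicator with a
    soft reliability weight from the ratio of bidirectional distances."""
    model_tree, data_tree = cKDTree(model), cKDTree(data)
    d_fwd, nn = model_tree.query(data)             # forward correspondence: data -> model
    hard = (d_fwd < overlap_dist).astype(float)    # binary: is the correspondence in the overlap area?
    d_bwd, _ = data_tree.query(model[nn])          # backward distance of the matched model point
    ratio = np.minimum(d_fwd, d_bwd) / (np.maximum(d_fwd, d_bwd) + 1e-12)
    soft = np.exp(ratio - 1.0)                     # reliability in (0, 1]; 1 when both distances agree
    w = hard * soft
    if w.sum() < 1e-9:                             # no overlap found at this threshold
        return np.eye(3), np.zeros(3)
    return weighted_rigid_transform(data, model[nn], w)

# Usage: iterate, applying the estimated transform to `data` each round:
#   R, t = hard_soft_icp_step(data, model); data = (R @ data.T).T + t
```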

Keyword :

Bidirectional distances; Hard assignment; Overlap percentage; Point cloud registration; Soft assignments

Cite:


GB/T 7714: Zhu, Jihua, Jin, Congcong, Jiang, Zutao, et al. Robust point cloud registration based on both hard and soft assignments [J]. Optics and Laser Technology, 2019, 110: 202-208.
MLA: Zhu, Jihua, et al. "Robust point cloud registration based on both hard and soft assignments." Optics and Laser Technology 110 (2019): 202-208.
APA: Zhu, Jihua, Jin, Congcong, Jiang, Zutao, Xu, Siyu, Xu, Minmin, & Pang, Shanmin. Robust point cloud registration based on both hard and soft assignments. Optics and Laser Technology, 2019, 110, 202-208.
Traffic Sensory Data Classification by Quantifying Scenario Complexity EI
Conference paper | 2018, 2018-June, 1543-1548 | 2018 IEEE Intelligent Vehicles Symposium, IV 2018

Abstract :

For unmanned ground vehicle (UGV) off-line testing and performance evaluation, a massive amount of traffic scenario data is often required. The annotations in current off-line traffic sensory datasets typically include (i) roadway types, (ii) scene types, and (iii) specific characteristics that are generally considered challenging for cognitive algorithms. While such annotations help in manually selecting data, they are insufficient for a comprehensive, quantitative measurement of per-roadway-segment scenario complexity. To resolve this limitation, we propose a traffic sensory data classification paradigm that quantifies the scenario complexity of each roadway segment, jointly based on road semantic complexity and traffic element complexity. Road semantic complexity is a proposed measure of the complexity incurred by static elements such as curvy roads, intersections, merges, and splits, and is predicted with support vector regression (SVR). Traffic element complexity measures the complexity due to dynamic traffic elements, such as nearby vehicles and pedestrians. Experimental results and a case study verify the efficacy of the proposed method. © 2018 IEEE.
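The following is a small scikit-learn sketch of the two-part score the abstract describes: an SVR predicting road semantic complexity from static per-segment features, combined with a simple term for dynamic traffic elements. The feature set, the toy training data, the saturating traffic term, and the mixing weight `alpha` are all placeholders, not the paper's definitions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Static, per-road-segment features (illustrative only):
# [mean curvature, intersection count, merge/split count, lane count]
X_train = np.array([[0.02, 0, 0, 2],
                    [0.10, 1, 0, 3],
                    [0.05, 2, 1, 4],
                    [0.30, 1, 2, 2]])
# Annotated road semantic complexity scores used as regression targets.
y_train = np.array([0.1, 0.4, 0.6, 0.8])

road_svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
road_svr.fit(X_train, y_train)

def scenario_complexity(static_feats, n_vehicles, n_pedestrians, alpha=0.5):
    """Combine the SVR-predicted road semantic complexity with a simple
    traffic-element term based on counts of dynamic objects; the saturating
    form and the weight alpha are assumptions for illustration."""
    semantic = float(road_svr.predict([static_feats])[0])
    traffic = 1.0 - np.exp(-0.1 * (n_vehicles + n_pedestrians))
    return alpha * semantic + (1 - alpha) * traffic

print(scenario_complexity([0.15, 1, 1, 3], n_vehicles=8, n_pedestrians=2))
```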

Keyword :

AND splits; Dynamic traffic; Line testing; Performance evaluations; Sensory data; Support vector regression (SVR); Unmanned ground vehicles

Cite:


GB/T 7714: Wang, Jiajie, Zhang, Chi, Liu, Yuehu, et al. Traffic Sensory Data Classification by Quantifying Scenario Complexity [C]. 2018: 1543-1548.
MLA: Wang, Jiajie, et al. "Traffic Sensory Data Classification by Quantifying Scenario Complexity." (2018): 1543-1548.
APA: Wang, Jiajie, Zhang, Chi, Liu, Yuehu, & Zhang, Qilin. Traffic Sensory Data Classification by Quantifying Scenario Complexity. (2018): 1543-1548.
Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization EI
Conference paper | 2018, 2018-June, 734-739 | 2018 IEEE Intelligent Vehicles Symposium, IV 2018

Abstract :

In this paper, we propose a robust point set registration algorithm that combines correntropy and the point-to-plane distance and can register rigid point sets with noise and outliers. First, since correntropy performs well in handling data with non-Gaussian noise, we introduce it to model the rigid point set registration problem based on the point-to-plane distance. Second, we propose an iterative algorithm to solve this problem, which alternately computes correspondences and transformation parameters, each with a closed-form solution. Simulated experimental results demonstrate the high precision and robustness of the proposed algorithm. In addition, LiDAR-based localization experiments on an automated vehicle show satisfactory localization accuracy and time consumption. © 2018 IEEE.
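Below is a minimal NumPy sketch of one update of the kind described: correntropy (Gaussian-kernel) weights applied to point-to-plane residuals, followed by a weighted, small-angle linearized least-squares solve for the rigid transform. The kernel width `sigma` and the linearization are illustrative assumptions; the paper's exact objective and its alternating correspondence/transformation steps are not reproduced.

```python
import numpy as np

def correntropy_point_to_plane_step(src, dst, normals, sigma=0.1):
    """One registration update: Gaussian-kernel (correntropy) weights on
    point-to-plane residuals, then a weighted small-angle least-squares solve
    for a rotation vector r and translation t.
    src, dst: (N, 3) corresponding points; normals: (N, 3) unit normals at dst."""
    res = np.einsum('ij,ij->i', src - dst, normals)      # signed point-to-plane residuals
    w = np.exp(-res**2 / (2.0 * sigma**2))               # correntropy weights: outliers and
                                                         # heavy-tailed noise get small weights
    # Linearization: (R p + t - q).n ~ r.(p x n) + t.n + (p - q).n  with  R ~ I + [r]x
    A = np.hstack([np.cross(src, normals), normals])     # (N, 6) design matrix
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], -res * sw, rcond=None)
    r, t = x[:3], x[3:]
    # Recover a proper rotation from the small-angle vector (Rodrigues' formula).
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3), t
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K
    return R, t
```

In a full algorithm this step would be iterated, with correspondences recomputed by nearest-neighbour search after each update.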

Keyword :

Automated vehicles; Closed form solutions; Iterative algorithm; Localization accuracy; Non-Gaussian noise; Point-set registrations; Time consumption; Transformation parameters

Cite:


GB/T 7714: Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization [C]. 2018: 734-739.
MLA: Xu, Guanglin, et al. "Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization." (2018): 734-739.
APA: Xu, Guanglin, Du, Shaoyi, Cui, Dixiao, Zhang, Sirui, Chen, Badong, Zhang, Xuetao, et al. Precise Point Set Registration Using Point-to-Plane Distance and Correntropy for LiDAR Based Localization. (2018): 734-739.
Design and Implementation of an OCR-Based VAT Invoice Recognition and Management System (Dissertation repository)
Dissertation | 2018 | Advisor: 祝继华

Abstract :

With the adjustment of the industrial structure and the continuing deepening of tax reform, value-added tax (VAT) has become the most important category of taxation in China. For large enterprises, managing VAT invoices has become a heavy workload. To improve the efficiency of enterprise financial centers, this thesis designs and implements, on a B/S architecture, a management system providing VAT invoice image acquisition, intelligent recognition and storage of invoice data, invoice management, and report analysis. The thesis first reviews the theoretical foundations and techniques involved in the system, such as image recognition and system programming. On this basis, the invoice recognition algorithm is studied and implemented. Combined with the customers' actual needs, the functional and non-functional requirements are analyzed, yielding four major use cases (invoice entry, invoice recognition, invoice query, and invoice reporting), whose functional, static, and dynamic models are built with the Unified Modeling Language. Based on the requirements analysis, a preliminary design determines the overall architecture of the system. The main modules and core functions are then designed in detail and implemented; the principles and implementation schemes of the invoice entry, invoice recognition, invoice query, and report analysis modules are discussed in detail, together with the relevant code and running results. In the recognition module, the thesis combines two different techniques, QR code recognition and OCR, raising the system's intelligent recognition accuracy to about 97%, greatly reducing manual intervention during entry and improving the enterprise's office efficiency. Finally, the system is tested in real scenarios: test plans for the main modules are given and executed, and the results show that the system fully meets the expected functional and performance requirements.
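A minimal sketch of the recognition step described above, combining QR code decoding with OCR. The thesis does not name its libraries; `pyzbar` and `pytesseract` are assumed stand-ins here, and the comma-separated field layout of the invoice QR code is an assumption for illustration.

```python
from PIL import Image
import pytesseract                     # OCR engine wrapper (assumed stand-in)
from pyzbar import pyzbar              # QR / barcode decoder (assumed stand-in)

def recognize_invoice(path):
    """Combine QR-code decoding with OCR, as the thesis describes; returns the
    structured fields recovered from the QR code plus the raw OCR text."""
    image = Image.open(path)
    result = {}

    # 1. QR code first: Chinese VAT invoices carry key fields (invoice code,
    #    number, amount, date) in a QR code. The field positions below are an
    #    assumption for illustration only.
    for code in pyzbar.decode(image):
        fields = code.data.decode("utf-8").split(",")
        if len(fields) >= 6:
            result.update(invoice_code=fields[2], invoice_number=fields[3],
                          amount=fields[4], date=fields[5])

    # 2. OCR for the remaining printed content (buyer/seller names, tax IDs,
    #    line items); a Simplified Chinese language pack is assumed installed.
    result["ocr_text"] = pytesseract.image_to_string(image, lang="chi_sim")
    return result

print(recognize_invoice("invoice_sample.jpg"))
```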

Keyword :

VAT invoice; Intelligent entry; OCR; QR code recognition

Cite:


GB/T 7714: 程立业. Design and Implementation of an OCR-Based VAT Invoice Recognition and Management System [D]. 2018.
MLA: 程立业. "Design and Implementation of an OCR-Based VAT Invoice Recognition and Management System." 2018.
APA: 程立业. Design and Implementation of an OCR-Based VAT Invoice Recognition and Management System. 2018.
Deep forest with local experts based on ELM for pedestrian detection EI
Conference paper | 2018, 11165 LNCS, 803-814 | 19th Pacific-Rim Conference on Multimedia, PCM 2018

Abstract :

Despite recent significant advances, pedestrian detection remains an extremely challenging problem in real scenarios. Recently, some authors have shown the advantages of combining part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In early 2017, the deep forest was proposed to fill the gap left by decision trees in the field of deep learning. Deep forests have far fewer parameters than deep neural networks and offer higher classification accuracy. In this paper, we propose a novel pedestrian detection approach that combines the flexibility of a part-based model with the fast execution time of a deep forest classifier. In this combination, the role of the part evaluations is taken over by local expert evaluations at the nodes of the decision tree. We first perform feature selection based on extreme learning machines to obtain feature sets. Afterwards, we use the deep forest to classify the feature sets and obtain the scores that constitute the local expert results. We tested the proposed method on well-known challenging datasets such as TUD and INRIA. The final experimental results on these two challenging pedestrian datasets indicate that the proposed method achieves state-of-the-art or competitive performance. © Springer Nature Switzerland AG 2018.
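As a reference point, the sketch below implements a minimal extreme learning machine (a fixed random hidden layer plus a closed-form ridge readout), the kind of component the abstract uses for feature selection and local expert scoring; the hidden size, activation, ridge term, and toy data are assumptions, and the deep forest cascade itself is not reproduced.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden projection followed by a
    ridge-regression readout solved in closed form (no backpropagation)."""
    def __init__(self, n_hidden=256, ridge=1e-3, seed=0):
        self.n_hidden, self.ridge = n_hidden, ridge
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Closed-form readout: beta = (H^T H + ridge * I)^-1 H^T y
        A = H.T @ H + self.ridge * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def decision_function(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: ELM scores could serve as the per-window "local expert" outputs
# that a deep forest then combines (that cascade is not shown here).
X = np.random.default_rng(1).normal(size=(200, 64))    # e.g. HOG-like window features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)        # synthetic labels
scores = ELM().fit(X, y).decision_function(X)
```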

Keyword :

Classification accuracy; Competitive performance; Deep forest; Extreme learning machine; Fast execution time; Forest classifiers; Partial occlusions; Pedestrian detection

Cite:


GB/T 7714: Zheng, Wenbo, Cao, Sisi, Jin, Xin, et al. Deep forest with local experts based on ELM for pedestrian detection [C]. 2018: 803-814.
MLA: Zheng, Wenbo, et al. "Deep forest with local experts based on ELM for pedestrian detection." (2018): 803-814.
APA: Zheng, Wenbo, Cao, Sisi, Jin, Xin, Mo, Shaocong, Gao, Han, Qu, Yili, et al. Deep forest with local experts based on ELM for pedestrian detection. (2018): 803-814.
A low-latency computing framework for time-evolving graphs EI
Journal article | 2018 | Journal of Supercomputing

Abstract :

The demand to deliver fast responses when processing time-evolving graphs is higher than ever before in a large number of big data applications. This demand promotes extensive use of an incremental computing model in distributed time-evolving graph computing systems, which executes the underlying graph algorithm on the newly updated graph structure using the results computed on the outdated graph structure as initial values. In this paper, we experimentally study how the initial values of the computation on a newly updated graph structure influence the convergence of iterative graph analysis, and we develop an optimization framework on top of the incremental computing model that accelerates convergence when processing time-evolving graphs, thus achieving high performance for time-evolving graph analysis. In contrast to the traditional incremental computing model, which directly reuses the results computed on the outdated graph structure, the proposed framework predicts optimal initial values for the computation on the new graph structure and thereby reduces the number of iterations. Two different prediction approaches are designed to optimize the initial values based on a combination of the results computed on the previous graph data and the newly incoming graph data. We evaluated our optimization framework with the PageRank and KMeans graph algorithms on Amazon EC2 clusters. The experiments demonstrate that the incremental computing implementation with initial value prediction reduces the number of iterations by 30% for the PageRank algorithm and by 13.7% for the KMeans algorithm, and reduces the response time by 12.7% and 10.6%, respectively, compared to the traditional incremental computing model. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
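The sketch below illustrates the incremental idea the abstract builds on: run power-iteration PageRank on the updated graph, but start from the ranks computed on the previous graph instead of the uniform vector, and compare iteration counts. The paper's two prediction approaches for the initial values are not reproduced; here the old result is simply reused after renormalization, and the toy graphs, damping factor, and tolerance are assumptions.

```python
import numpy as np

def pagerank(adj, x0=None, d=0.85, tol=1e-8, max_iter=500):
    """Power-iteration PageRank on a dense adjacency matrix; returns the rank
    vector and the number of iterations until the L1 change falls below tol."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, out_deg, out=np.full_like(adj, 1.0 / n, dtype=float),
                  where=out_deg > 0)                 # dangling nodes -> uniform row
    x = np.full(n, 1.0 / n) if x0 is None else x0 / x0.sum()
    for it in range(1, max_iter + 1):
        x_new = (1 - d) / n + d * (P.T @ x)
        if np.abs(x_new - x).sum() < tol:
            return x_new, it
        x = x_new
    return x, max_iter

# Toy time-evolving example: compute ranks on G_t, then reuse them as the
# initial values for the slightly updated graph G_{t+1}.
rng = np.random.default_rng(0)
G0 = (rng.random((300, 300)) < 0.02).astype(float)
G1 = G0.copy()
G1[rng.integers(0, 300, 30), rng.integers(0, 300, 30)] = 1.0   # a few new edges

ranks0, _ = pagerank(G0)
_, it_cold = pagerank(G1)                 # cold start: uniform initial values
_, it_warm = pagerank(G1, x0=ranks0)      # incremental: warm start from old ranks
print(f"cold start: {it_cold} iterations, warm start: {it_warm} iterations")
```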

Keyword :

Big data applications; Computing frameworks; Incremental computing; k-Means algorithm; Number of iterations; Optimization framework; PageRank algorithm; Time evolving graphs

Cite:


GB/T 7714: Ji, Shuo, Zhao, Yinliang, Zhao, Xiaomei. A low-latency computing framework for time-evolving graphs [J]. Journal of Supercomputing, 2018.
MLA: Ji, Shuo, et al. "A low-latency computing framework for time-evolving graphs." Journal of Supercomputing (2018).
APA: Ji, Shuo, Zhao, Yinliang, & Zhao, Xiaomei. A low-latency computing framework for time-evolving graphs. Journal of Supercomputing, 2018.
False information detection on social media via a hybrid deep model EI
Conference paper | 2018, 11186 LNCS, 323-333 | 10th Conference on Social Informatics, SocInfo 2018

Abstract :

Social media carries not only low-cost, easy-access, real-time, and valuable information, but also a large amount of false information, which causes great harm to individuals, society, and the country. How, then, can false information be detected? In this paper, we analyze false information further. We select three information evaluation metrics to distinguish false information, divide information into five types, and introduce each type in detail in terms of its definition, focus, and features. Moreover, we propose a hybrid deep model that represents the text semantics of information in context and captures sentiment semantic features for false information detection. Finally, we apply the model to a benchmark dataset and a Weibo dataset, which shows that the model performs well. © Springer Nature Switzerland AG 2018.

Keyword :

Benchmark datasets; False information; Information credibilities; Information detection; Information evaluation; Large amounts; Social media; Text classification

Cite:


GB/T 7714: Wu, Lianwei, Rao, Yuan, Yu, Hualei, et al. False information detection on social media via a hybrid deep model [C]. 2018: 323-333.
MLA: Wu, Lianwei, et al. "False information detection on social media via a hybrid deep model." (2018): 323-333.
APA: Wu, Lianwei, Rao, Yuan, Yu, Hualei, Wang, Yiming, & Nazir, Ambreen. False information detection on social media via a hybrid deep model. (2018): 323-333.
Understanding the behaviors of BGP-based DDoS protection services EI
Conference paper | 2018, 11058 LNCS, 463-473 | 12th International Conference on Network and System Security, NSS 2018

Abstract :

Distributed denial-of-service (DDoS) attacks have been one of the greatest challenges faced by the Internet for decades. Recently, DDoS protection services (DPS) have emerged to mitigate large-scale DDoS attacks by diverting the massive malicious traffic aimed at victims to networks that can absorb it. One common approach is to reroute the traffic by changing BGP policies, which may cause abnormal BGP routing dynamics. However, little is known about such behaviors and their consequences. To fill this gap, this paper conducts the first study of the behaviors of BGP-based DPS in two steps. First, we propose a machine learning based approach to identify DDoS events, because data characterizing real DDoS events are usually lacking. Second, we design a new algorithm to analyze the behavior of DPS against typical DDoS attacks. In a case study of real DDoS attacks, we carefully analyze the policies used to mitigate the attacks and obtain several meaningful findings. This research sheds light on the design of effective DDoS attack mitigation schemes. © Springer Nature Switzerland AG 2018.

Keyword :

DDoS attack; DDoS attack mitigations; Distributed denial of service attack; DPS behavior; Malicious traffic; Routing dynamics

Cite:


GB/T 7714: Tung, Tony Miu, Wang, Chenxu, Wang, Jinhe. Understanding the behaviors of BGP-based DDoS protection services [C]. 2018: 463-473.
MLA: Tung, Tony Miu, et al. "Understanding the behaviors of BGP-based DDoS protection services." (2018): 463-473.
APA: Tung, Tony Miu, Wang, Chenxu, & Wang, Jinhe. Understanding the behaviors of BGP-based DDoS protection services. (2018): 463-473.
Unsupervised semantic-based convolutional features aggregation for image retrieval EI
Journal article | 2018 | Multimedia Tools and Applications

Abstract :

Deep features extracted from the convolutional layers of pre-trained CNNs have been widely used in the image retrieval task. These features, however, are numerous and generally cannot be used directly for similarity evaluation without a loss of efficiency. It is therefore important to study how to aggregate deep features into a global yet distinctive image vector. This paper first introduces a simple but effective method that selects informative features based on the semantic content of feature maps. Then, we propose an effective channel weighting (CW) method for the selected features by analyzing the relations between discriminative activations and the distribution parameters of feature maps, including the standard variance, the non-zero responses, and the sum value. Furthermore, we provide a solution for picking semantic detectors that are independent of gallery images. Based on these three strategies, we derive a global image vector generation method and demonstrate its state-of-the-art performance on benchmark datasets. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
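A rough NumPy sketch of the pipeline shape the abstract outlines: select spatial locations of a convolutional feature map, weight channels by distribution statistics of their maps (variance, non-zero responses, sum), then sum-pool and L2-normalize into a global descriptor. The selection rule and the channel-weight formula below are illustrative stand-ins, not the method proposed in the paper.

```python
import numpy as np

def aggregate_conv_features(fmap, keep_ratio=0.5, eps=1e-12):
    """Aggregate a conv feature map (C, H, W) into a single global descriptor:
    1) keep spatial locations with the strongest summed activation (a crude
       stand-in for the paper's semantic selection),
    2) weight channels by simple statistics of their feature maps,
    3) sum-pool and L2-normalize."""
    C, H, W = fmap.shape
    flat = fmap.reshape(C, -1)                               # (C, H*W)

    # 1) spatial selection by per-location activation mass
    location_energy = flat.sum(axis=0)
    n_keep = max(1, int(keep_ratio * H * W))
    selected = flat[:, np.argsort(location_energy)[-n_keep:]]

    # 2) channel weights from distribution statistics (illustrative combination)
    var = flat.var(axis=1)
    nonzero = (flat > 0).mean(axis=1)
    total = flat.sum(axis=1)
    cw = np.log1p(var) * nonzero / (total + eps)
    cw /= cw.max() + eps

    # 3) weighted sum-pooling followed by L2 normalization
    desc = selected.sum(axis=1) * cw
    return desc / (np.linalg.norm(desc) + eps)

# Usage on a random ReLU-like feature map (e.g. a VGG16 conv5_3 output: 512x14x14)
fmap = np.maximum(np.random.default_rng(0).normal(size=(512, 14, 14)), 0.0)
print(aggregate_conv_features(fmap).shape)   # (512,)
```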

Keyword :

Deep convolutional features; Distribution parameters; Object localization; Semantic detectors; Similarity evaluation; Standard variances; State-of-the-art performance; VGG16

Cite:


GB/T 7714: Wang, Xinsheng, Pang, Shanmin, Zhu, Jihua, et al. Unsupervised semantic-based convolutional features aggregation for image retrieval [J]. Multimedia Tools and Applications, 2018.
MLA: Wang, Xinsheng, et al. "Unsupervised semantic-based convolutional features aggregation for image retrieval." Multimedia Tools and Applications (2018).
APA: Wang, Xinsheng, Pang, Shanmin, Zhu, Jihua, Wang, Jiaxing, & Wang, Lin. Unsupervised semantic-based convolutional features aggregation for image retrieval. Multimedia Tools and Applications, 2018.