Abstract:
Human action recognition and prediction from skeleton-based data have been widely applied in intelligent robotics and machine vision. Current advanced methods mainly use recurrent neural networks (RNNs) to predict human motion from skeleton data alone. However, objects that interact with the human body strongly influence human behavior, so extracting body information without considering the constraints imposed by environmental objects reduces the accuracy of behavior prediction. In this paper, we propose a scene-perception architecture that performs end-to-end human behavior prediction based on both objects and the human skeleton. Specifically, we propose a Scene-Perception Graph Convolutional Network (SPGCN) that formulates the natural constraints between skeleton nodes and scene objects to predict human postures. SPGCN combines a scene-aware graph convolutional network (GCN) with an RNN: the GCN learns the dependency relationships between the human body and objects, and the learned features are then fed into the RNN to predict future poses and action labels. We evaluate SPGCN on the CAD-120 dataset. Experiments show that our proposed method achieves promising results compared with state-of-the-art methods. © 2021 IEEE.
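The GCN-then-RNN pipeline sketched in the abstract can be illustrated with a minimal numerical example. This is a hypothetical sketch, not the authors' SPGCN implementation: node counts, edge choices, and all weight shapes are assumptions, and the scene graph simply treats skeleton joints and scene objects as nodes of one graph with human-object edges.

```python
import numpy as np

# Hypothetical sketch of a scene-aware GCN followed by one RNN step
# (illustrative only; not the SPGCN code from the paper).

rng = np.random.default_rng(0)

n_joints, n_objects, feat = 15, 2, 3       # skeleton joints + scene objects (assumed)
n_nodes = n_joints + n_objects
hidden = 8

# Adjacency with self-loops: a sample bone edge and a sample human-object edge.
A = np.eye(n_nodes)
A[0, 1] = A[1, 0] = 1.0                    # bone edge between two joints
A[0, n_joints] = A[n_joints, 0] = 1.0      # edge between a joint and an object

# Symmetric normalization D^{-1/2} A D^{-1/2}, as is standard for GCN layers.
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt

X = rng.standard_normal((n_nodes, feat))   # node features, e.g. 3-D positions
W_gcn = rng.standard_normal((feat, hidden))

# One graph-convolution layer: aggregate neighbors, project, ReLU.
H = np.maximum(A_hat @ X @ W_gcn, 0.0)     # shape (n_nodes, hidden)

# Feed the pooled graph feature into a single vanilla RNN step
# that regresses the next pose (future joint positions).
W_xh = rng.standard_normal((hidden, hidden))
W_hh = rng.standard_normal((hidden, hidden))
W_out = rng.standard_normal((hidden, n_joints * 3))

h_prev = np.zeros(hidden)
h = np.tanh(H.mean(axis=0) @ W_xh + h_prev @ W_hh)
next_pose = (h @ W_out).reshape(n_joints, 3)

print(next_pose.shape)  # (15, 3): one 3-D position per predicted joint
```

In the full model the recurrence would run over a sequence of frames and a second output head would classify the action label; the single step above only shows how the graph-convolved human-object features drive the temporal predictor.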
Year: 2021
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 3
ESI Highly Cited Papers on the List: 0