Human interaction prediction using deep temporal features

    Research output: Chapter in Book/Conference paper › Conference paper › peer-review

    36 Citations (Scopus)

    Abstract

    Interaction prediction has a wide range of applications, such as robot control and the prevention of dangerous events. In this paper, we introduce a new method to capture deep temporal information in videos for human interaction prediction. We propose to use flow coding images to represent the low-level motion information in videos and to extract deep temporal features using a deep convolutional neural network architecture. We tested our method on the UT-Interaction dataset and the challenging TV human interaction dataset, and demonstrated the advantages of the proposed deep temporal features based on flow coding images. The proposed method, though using only temporal information, outperforms state-of-the-art methods for human interaction prediction.
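    The abstract does not specify the exact flow-coding scheme or network used by the authors. As a rough illustration of the general idea only, the sketch below encodes dense optical flow between consecutive frames as an RGB image using the common HSV convention (hue for motion direction, value for magnitude); such an image could then be passed to a convolutional network to extract temporal features. The helper name flow_coding_image, the Farneback flow estimator, and all parameter values are assumptions for illustration, not the paper's implementation.

    # Illustrative sketch: flow coding image from two consecutive grayscale frames.
    import cv2
    import numpy as np

    def flow_coding_image(prev_gray, next_gray):
        """Encode dense optical flow between two grayscale frames as a BGR image."""
        # Dense optical flow (Farneback); parameters are generic defaults.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        hsv = np.zeros((*prev_gray.shape, 3), dtype=np.uint8)
        hsv[..., 0] = ang * 180 / np.pi / 2                               # hue: flow direction
        hsv[..., 1] = 255                                                 # full saturation
        hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)   # value: flow magnitude
        # The resulting image can be fed to a CNN (e.g. an ImageNet-pretrained
        # backbone) to extract deep temporal features.
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)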

    Original language: English
    Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Editors: Gang Hua, Herve Jegou
    Publisher: Springer-Verlag London Ltd.
    Pages: 403-414
    Number of pages: 12
    Volume: 9914 LNCS
    ISBN (Print): 9783319488806
    Publication status: Published - 2016
    Event: 14th European Conference on Computer Vision, ECCV 2016 - Amsterdam, Netherlands
    Duration: 8 Oct 2016 - 16 Oct 2016

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 9914 LNCS
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Conference

    Conference: 14th European Conference on Computer Vision, ECCV 2016
    Country/Territory: Netherlands
    City: Amsterdam
    Period: 8/10/16 - 16/10/16
