Hallucinating IDT Descriptors and I3D Optical Flow Features for Action Recognition with CNNs

Lei Wang, Piotr Koniusz, Du Huynh

Research output: Chapter in Book/Conference paper › Conference paper


Abstract

In this paper, we revive the use of old-fashioned handcrafted video representations for action recognition and put new life into these techniques via a CNN-based hallucination step. Despite the use of RGB and optical flow frames, the I3D model (amongst others) thrives on combining its output with Improved Dense Trajectory (IDT) low-level video descriptors encoded via Bag-of-Words (BoW) and Fisher Vectors (FV). Such a fusion of CNNs and handcrafted representations is time-consuming due to pre-processing, descriptor extraction, encoding and parameter tuning. Thus, we propose an end-to-end trainable network with streams which learn the IDT-based BoW/FV representations at the training stage and are simple to integrate with the I3D model. Specifically, each stream takes I3D feature maps ahead of the last 1D conv. layer and learns to 'translate' these maps to BoW/FV representations. Thus, our model can hallucinate and use such synthesized BoW/FV representations at the testing stage. We show that even the features of the entire I3D optical flow stream can be hallucinated, thus simplifying the pipeline. Our model saves 20-55 hours of computation and yields state-of-the-art results on four publicly available datasets.
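The core idea in the abstract — train a regression "stream" that maps I3D features to IDT-based BoW/FV codes, then skip the costly IDT pipeline at test time by hallucinating those codes — can be sketched numerically. The sketch below is illustrative only: the dimensions, the synthetic data, and the simple linear regressor are assumptions for demonstration, not the paper's actual architecture or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: I3D feature dimension and BoW/FV code dimension
# (placeholders, not the values used in the paper).
D_I3D, D_BOW, N = 64, 32, 256

# Training data: I3D features paired with BoW/FV targets that would normally
# come from the expensive IDT extraction/encoding pipeline (training only).
X = rng.normal(size=(N, D_I3D))
W_true = rng.normal(size=(D_I3D, D_BOW)) / np.sqrt(D_I3D)
Y = np.maximum(X @ W_true, 0.0)  # stand-in for precomputed BoW/FV codes

# One 'hallucination stream': a linear map trained by MSE regression to
# translate I3D feature maps into the BoW/FV representation.
W = np.zeros((D_I3D, D_BOW))
lr = 0.01

def mse(W):
    return float(np.mean((X @ W - Y) ** 2))

loss_before = mse(W)
for _ in range(200):
    grad = (2.0 / N) * X.T @ (X @ W - Y)  # gradient of the MSE loss
    W -= lr * grad
loss_after = mse(W)  # should be well below loss_before

# Test time: no IDT pipeline. The stream hallucinates the BoW/FV vector
# directly from the I3D features, which is then fused (here: concatenated)
# with the I3D output for classification.
x_test = rng.normal(size=(1, D_I3D))
hallucinated_bow = x_test @ W
fused = np.concatenate([x_test, hallucinated_bow], axis=1)
```

In the paper this regression is a learned network branch trained jointly with the classifier, so the hallucinated BoW/FV codes are optimized end-to-end rather than fitted post hoc as above.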
Original language: English
Title of host publication: Proceedings of the 2019 International Conference on Computer Vision
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 12
Publication status: E-pub ahead of print - Oct 2019
Event: IEEE International Conference on Computer Vision 2019 - Seoul, Korea, Republic of
Duration: 27 Oct 2019 - 2 Nov 2019
http://iccv2019.thecvf.com/

Conference

Conference: IEEE International Conference on Computer Vision 2019
Abbreviated title: ICCV2019
Country: Korea, Republic of
City: Seoul
Period: 27/10/19 - 2/11/19
Other: ICCV is the premier international computer vision event comprising the main conference and several co-located workshops and tutorials. With its high quality and low cost, it provides exceptional value for students, academics and industry researchers.


Cite this

Wang, L., Koniusz, P., & Huynh, D. (2019). Hallucinating IDT Descriptors and I3D Optical Flow Features for Action Recognition with CNNs. In Proceedings of the 2019 International Conference on Computer Vision. IEEE, Institute of Electrical and Electronics Engineers.