Time Series Representation Learning with Supervised Contrastive Temporal Transformer

Yuansan Liu, Sudanthi Wijewickrema, Christofer Bester, Stephen J. O'Leary, James Bailey

    Research output: Chapter in Book/Conference paper › Conference paper › peer-review

    1 Citation (Scopus)

    Abstract

    Finding effective representations for time series data is a useful but challenging task. Several works utilize self-supervised or unsupervised learning methods to address this. However, it remains an open question how to leverage available label information for better representations. To answer this question, we exploit pre-existing techniques in the time series and representation learning domains and develop a simple yet novel fusion model called the Supervised COntrastive Temporal Transformer (SCOTT). We first investigate suitable augmentation methods for various types of time series data to assist with learning change-invariant representations. Secondly, we combine Transformer and Temporal Convolutional Networks in a simple way to efficiently learn both global and local features. Finally, we simplify the Supervised Contrastive Loss for representation learning of labelled time series data. As a preliminary evaluation, we test SCOTT on a downstream task, Time Series Classification, using 45 datasets from the UCR archive. The results show that with the representations learnt by SCOTT, even a weak classifier can perform similarly to or better than existing state-of-the-art models (best performance on 23/45 datasets and highest rank against 9 baseline models). Afterwards, we investigate SCOTT's ability to address a real-world task, online Change Point Detection (CPD), on two datasets: a human activity dataset and a surgical patient dataset. We show that the model performs with high reliability and efficiency on the online CPD problem (∼98% and ∼97% area under the precision-recall curve, respectively). Furthermore, we demonstrate the model's potential in tackling early detection and show that it performs best compared to the other candidates.
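
    The abstract names a simplified Supervised Contrastive Loss as SCOTT's training objective but does not spell it out. Below is a minimal PyTorch sketch of the standard supervised contrastive loss (Khosla et al., 2020) that the paper builds on; the function name, temperature default, and handling of anchors without positives are illustrative assumptions, not the paper's exact simplification.

        import torch
        import torch.nn.functional as F

        def supervised_contrastive_loss(z, labels, temperature=0.1):
            # z: (batch, dim) encoder embeddings; labels: (batch,) class ids.
            z = F.normalize(z, dim=1)                   # compare on the unit sphere
            sim = z @ z.t() / temperature               # pairwise scaled similarities
            diag = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
            sim = sim.masked_fill(diag, float('-inf'))  # exclude self-pairs
            # Positives: other samples in the batch sharing the anchor's label.
            pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~diag
            log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
            pos_counts = pos.sum(dim=1)
            valid = pos_counts > 0                      # skip anchors with no positive
            summed = torch.where(pos, log_prob, torch.zeros_like(log_prob)).sum(dim=1)
            return -(summed[valid] / pos_counts[valid]).mean()

        # Example: a batch of 8 embeddings of width 128 with binary labels.
        # loss = supervised_contrastive_loss(torch.randn(8, 128), torch.randint(0, 2, (8,)))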

    Original language: English
    Title of host publication: 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Proceedings
    Place of Publication: Canada
    Publisher: IEEE, Institute of Electrical and Electronics Engineers
    ISBN (Electronic): 9798350359312
    Publication status: E-pub ahead of print - 9 Sept 2024
    Event: 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Yokohama, Japan
    Duration: 30 Jun 2024 - 5 Jul 2024

    Publication series

    Name: Proceedings of the International Joint Conference on Neural Networks

    Conference

    Conference: 2024 International Joint Conference on Neural Networks, IJCNN 2024
    Country/Territory: Japan
    City: Yokohama
    Period: 30/06/24 - 5/07/24
