TY - GEN
T1 - Time Series Representation Learning with Supervised Contrastive Temporal Transformer
AU - Liu, Yuansan
AU - Wijewickrema, Sudanthi
AU - Bester, Christofer
AU - O'Leary, Stephen J.
AU - Bailey, James
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/9/9
Y1 - 2024/9/9
N2 - Finding effective representations for time series data is a useful but challenging task. Several works utilize self-supervised or unsupervised learning methods to address this. However, the question of how to leverage available label information for better representations remains open. To answer it, we exploit pre-existing techniques from the time series and representation learning domains and develop a simple yet novel fusion model called the Supervised COntrastive Temporal Transformer (SCOTT). We first investigate suitable augmentation methods for various types of time series data to assist with learning change-invariant representations. Second, we combine Transformer and Temporal Convolutional Networks in a simple way to efficiently learn both global and local features. Finally, we simplify the Supervised Contrastive Loss for representation learning on labelled time series data. We initially evaluate SCOTT on a downstream task, Time Series Classification, using 45 datasets from the UCR archive. The results show that with the representations learnt by SCOTT, even a weak classifier can perform similarly to or better than existing state-of-the-art models (best performance on 23/45 datasets and highest rank against 9 baseline models). We then investigate SCOTT's ability to address a real-world task, online Change Point Detection (CPD), on two datasets: a human activity dataset and a surgical patient dataset. We show that the model performs with high reliability and efficiency on the online CPD problem (∼98% and ∼97% area under the precision-recall curve, respectively). Furthermore, we demonstrate the model's potential in tackling early detection and show that it performs best compared to other candidates.
AB - Finding effective representations for time series data is a useful but challenging task. Several works utilize self-supervised or unsupervised learning methods to address this. However, the question of how to leverage available label information for better representations remains open. To answer it, we exploit pre-existing techniques from the time series and representation learning domains and develop a simple yet novel fusion model called the Supervised COntrastive Temporal Transformer (SCOTT). We first investigate suitable augmentation methods for various types of time series data to assist with learning change-invariant representations. Second, we combine Transformer and Temporal Convolutional Networks in a simple way to efficiently learn both global and local features. Finally, we simplify the Supervised Contrastive Loss for representation learning on labelled time series data. We initially evaluate SCOTT on a downstream task, Time Series Classification, using 45 datasets from the UCR archive. The results show that with the representations learnt by SCOTT, even a weak classifier can perform similarly to or better than existing state-of-the-art models (best performance on 23/45 datasets and highest rank against 9 baseline models). We then investigate SCOTT's ability to address a real-world task, online Change Point Detection (CPD), on two datasets: a human activity dataset and a surgical patient dataset. We show that the model performs with high reliability and efficiency on the online CPD problem (∼98% and ∼97% area under the precision-recall curve, respectively). Furthermore, we demonstrate the model's potential in tackling early detection and show that it performs best compared to other candidates.
KW - representation learning
KW - supervised contrastive learning
KW - temporal convolution
KW - time series
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85205012501&partnerID=8YFLogxK
U2 - 10.1109/IJCNN60899.2024.10650516
DO - 10.1109/IJCNN60899.2024.10650516
M3 - Conference paper
AN - SCOPUS:85205012501
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2024 International Joint Conference on Neural Networks, IJCNN 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers (IEEE)
CY - Canada
T2 - 2024 International Joint Conference on Neural Networks, IJCNN 2024
Y2 - 30 June 2024 through 5 July 2024
ER -