A Co-training Approach for Noisy Time Series Learning
By Weiqi Zhang et al.
Published on June 10, 2023
Table of Contents
ABSTRACT
1 INTRODUCTION
2 RELATED WORK
3 METHOD
3.1 Problem Definition
3.2 Single-View Contrastive Learning
3.3 Multi-View Contrastive Learning via Co-training
3.3.1 Why Prototype-based Co-training?
3.3.2 Unsupervised Representation Learning
Summary
In this paper, Weiqi Zhang and colleagues propose TS-CoT, a co-training approach for learning robust representations from noisy time series. The method constructs two views of the input time series using different encoders and performs co-training-based contrastive learning between them, leveraging the complementary information in the two views to mitigate the impact of data noise and corruption. Prototypes are central to this process: each view's representations are softly assigned to a shared set of prototypes, and the views supervise each other through these assignments. Experimental evaluations on time series benchmarks show that TS-CoT outperforms existing methods on both unsupervised and semi-supervised tasks, demonstrating superior empirical performance and robustness to noisy input.
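The core idea described above, two encoders producing two views whose prototype assignments supervise each other, can be sketched in a minimal toy form. This is not the authors' TS-CoT implementation; the linear "encoders", the helper names (`prototype_assignments`, `cross_view_loss`), and all shapes and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy stand-in for a view encoder: linear map + L2 normalization."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def prototype_assignments(z, prototypes, temperature=0.1):
    """Soft assignment of embeddings to prototypes via a softmax over
    cosine similarities (a common prototype-based contrastive ingredient)."""
    logits = z @ prototypes.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def cross_view_loss(p_a, p_b, eps=1e-9):
    """Symmetric cross-entropy between the two views' assignments:
    each view is trained to predict the other's prototype assignment,
    which is the co-training signal."""
    return -0.5 * np.mean(
        np.sum(p_a * np.log(p_b + eps), axis=1)
        + np.sum(p_b * np.log(p_a + eps), axis=1)
    )

# Toy data: 8 noisy time-series windows of length 16, two random
# (untrained) encoders, and 3 unit-norm prototypes in a 4-d space.
x = rng.normal(size=(8, 16))
w1, w2 = rng.normal(size=(16, 4)), rng.normal(size=(16, 4))
prototypes = rng.normal(size=(3, 4))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

z1, z2 = encode(x, w1), encode(x, w2)
p1 = prototype_assignments(z1, prototypes)
p2 = prototype_assignments(z2, prototypes)
loss = cross_view_loss(p1, p2)
print(f"cross-view co-training loss: {loss:.4f}")
```

In a real training loop the encoders and prototypes would be learned by minimizing this loss (together with per-view contrastive terms); the sketch only shows how soft prototype assignments let one noisy view act as a teacher for the other.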