Transformer networks for generic time series prediction

2024-06-17 — 2024-06-17

Wherein transformer architectures are applied to multivariate, irregularly sampled sequences, attention mechanisms are adapted to handle missing timestamps, and model performance is measured on benchmark datasets.

language
machine learning
meta learning
neural nets
NLP
stringology
time series

Transformers for generic time series prediction.

Placeholder.
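
As a stand-in for the eventual discussion, here is a minimal PyTorch sketch of one common adaptation hinted at in the abstract: real-valued timestamps replace integer positions in a sinusoidal encoding, and a key-padding mask hides missing observations from attention. Everything here (the `IrregularSeriesTransformer` class, the `time_encoding` helper, the hyperparameters) is a hypothetical illustration under those assumptions, not a reference implementation.

```python
import math

import torch
from torch import nn


def time_encoding(t: torch.Tensor, d_model: int) -> torch.Tensor:
    """Sinusoidal encoding of real-valued timestamps t of shape (batch, steps)."""
    half = d_model // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, device=t.device) / half)
    angles = t.unsqueeze(-1) * freqs                    # (batch, steps, half)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)


class IrregularSeriesTransformer(nn.Module):
    """Encoder-only transformer over irregularly sampled multivariate series."""

    def __init__(self, n_channels: int, d_model: int = 64, n_heads: int = 4,
                 n_layers: int = 2, horizon: int = 1):
        super().__init__()
        self.d_model, self.horizon, self.n_channels = d_model, horizon, n_channels
        self.input_proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, horizon * n_channels)

    def forward(self, values, timestamps, observed):
        # values: (batch, steps, channels); timestamps: (batch, steps), real-valued;
        # observed: (batch, steps) bool, True where an observation actually exists.
        x = self.input_proj(values) + time_encoding(timestamps, self.d_model)
        # key_padding_mask expects True at positions attention should ignore,
        # i.e. the missing timestamps.
        h = self.encoder(x, src_key_padding_mask=~observed)
        # Mean-pool over observed steps only, then predict the next `horizon` values.
        w = observed.unsqueeze(-1).float()
        pooled = (h * w).sum(dim=1) / w.sum(dim=1).clamp(min=1.0)
        return self.head(pooled).view(-1, self.horizon, self.n_channels)


# Example usage on synthetic data with irregular timestamps and ~20% missingness.
model = IrregularSeriesTransformer(n_channels=3)
values = torch.randn(8, 50, 3)
timestamps = torch.sort(torch.rand(8, 50), dim=1).values   # irregular times in [0, 1]
observed = torch.rand(8, 50) > 0.2
observed[:, 0] = True                                      # avoid fully masked rows
prediction = model(values, timestamps, observed)           # shape (8, 1, 3)
```

The design choice worth noting is that irregular sampling is handled entirely through the inputs (continuous-time encodings plus a mask) rather than by changing the attention operator itself; alternatives such as learned time embeddings or ODE-style interpolation would slot into the same skeleton.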
