⏳⌛️Time-Series Forecasting Wrap-Up
TheSequence is the best way to build and reinforce your knowledge about machine learning and AI
Occasionally, we like to wrap up in one newsletter a topic that we’ve covered in a mini-series. These collections will help you navigate the articles and fill in the gaps if you missed something.
💡 Time-Series Forecasting
Time-Series Forecasting is our longest mini-series so far; it consists of five Edges. But first, a short introduction to the category as a whole:
Long considered one of the classic use cases for machine learning, time-series forecasting is surprisingly tricky to master. Part of the challenge is that time-series forecasting is one of those disciplines that spans from classical statistics to modern deep learning. As a result, the number and diversity of methods are overwhelming. At the same time, it feels as if recent advances in deep learning research haven’t done as much for time-series forecasting as they have for other disciplines, such as computer vision or natural language understanding. Nonetheless, time-series forecasting remains one of the most popular use cases for machine learning techniques.
How should we understand time-series forecasting? Conceptually, a time-series forecasting model attempts to predict the value of a target variable for a given entity at a given time. Typically, entities represent logical groupings of temporal information, such as the orders in a stock order book or the measurements from a temperature sensor. The two most important dimensions for understanding a time-series forecasting model are the nature of the problem and the methods used. Even though there are many types of time-series problems, most of them fall into one of the following categories (illustrated with a small code sketch after the list):
Univariate: Problems that model a single series of information over time.
Multivariate: Problems that model multiple, inter-related series over time.
Multi-step: Problems that attempt to forecast multiple steps into the future.
Multivariate, Multi-step: Problems that forecast multiple steps into the future for different series.
Classification: Problems that predict a discrete class given an input time-series.
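To make the univariate, multi-step case concrete, here is a minimal sketch of how such a problem is typically framed as supervised learning with a sliding window. The toy series and the helper make_windows are our own illustration; they don’t come from any of the libraries covered in this series.

```python
import numpy as np

def make_windows(series, n_lags, n_steps_ahead):
    """Frame a univariate series as supervised learning: each row of X
    holds n_lags past values, each row of y holds the next n_steps_ahead
    values (the multi-step target)."""
    X, y = [], []
    for t in range(len(series) - n_lags - n_steps_ahead + 1):
        X.append(series[t : t + n_lags])
        y.append(series[t + n_lags : t + n_lags + n_steps_ahead])
    return np.array(X), np.array(y)

# A toy univariate series: 100 noisy observations of a linear trend.
series = np.arange(100) + np.random.normal(scale=2.0, size=100)
X, y = make_windows(series, n_lags=12, n_steps_ahead=3)
print(X.shape, y.shape)  # (86, 12) (86, 3)
```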
From traditional statistics, time-series forecasting methods can be classified into the following categories (sketched in code after the list):
Benchmark Forecasting: Methods such as the naïve forecast or the geometric random walk that serve as simple baselines and help build forecasting intuition before additional layers of complexity are added. These methods are rarely used in complex scenarios.
Exponential Smoothing Forecasting: Methods that smooth out the short-term variability within a series. This group includes techniques such as simple exponential smoothing or Holt’s linear trend.
Autoregressive Forecasting: Methods such as the famous ARIMA or SARIMA that predict future values as a regression on observations from previous time steps, often combined with differencing and moving-average terms.
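As a rough sketch of how these classical methods look in practice, the snippet below compares a naïve benchmark, exponential smoothing, and an ARIMA model using statsmodels. The toy series and the order=(2, 1, 1) choice are arbitrary assumptions for illustration, not settings recommended in the Edges.

```python
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing, Holt
from statsmodels.tsa.arima.model import ARIMA

# A toy univariate series with a linear trend plus noise.
series = np.arange(120, dtype=float) + np.random.normal(scale=3.0, size=120)

# Benchmark: the naive forecast simply repeats the last observed value.
naive_forecast = np.repeat(series[-1], 6)

# Exponential smoothing: simple smoothing and Holt's linear trend.
ses_forecast = SimpleExpSmoothing(series).fit().forecast(6)
holt_forecast = Holt(series).fit().forecast(6)

# Autoregressive: ARIMA(p, d, q) combines lagged observations,
# differencing, and moving-average terms.
arima_forecast = ARIMA(series, order=(2, 1, 1)).fit().forecast(steps=6)

print(naive_forecast[:3], ses_forecast[:3], holt_forecast[:3], arima_forecast[:3])
```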
In recent years, deep neural networks have become one of the most effective mechanisms for tackling time-series forecasting problems. Techniques such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and even attention-based models are rapidly expanding the different categories of time-series forecasting techniques. These days, it’s very common to find techniques such as long short-term memory networks (LSTMs) and CNNs tackling the same problems typically handled by models such as ARIMA or SARIMA.
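As a minimal sketch (not one of the specific architectures discussed in the Edges below), here is how an LSTM forecaster that maps a window of past observations to a multi-step forecast might look in PyTorch; all layer sizes and the training data are illustrative placeholders.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Minimal LSTM that maps a window of past values to a multi-step forecast."""
    def __init__(self, hidden_size=32, horizon=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x):
        # x: (batch, n_lags, 1) -- one feature per time step
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # (batch, horizon)

# Toy training loop on random data, just to show the shapes involved.
model = LSTMForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 12, 1)  # 64 windows of 12 past observations
y = torch.randn(64, 3)      # 3-step-ahead targets
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```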
Edge#49 (read without subscription): an introduction to time-series forecasting models; how Uber uses neural networks to forecast during extreme events; Uber’s M3 time-series platform.
Edge#51: time-series forecasting and ARIMA; architectures to accelerate time series models using AutoML by Google researchers; GluonTS, Amazon’s preferred framework for time-series forecasting.
Edge#53: the concept of Prophet; Facebook’s Prophet time-series algorithm; PyTorch Forecasting, which enables deep learning models for time-series forecasting.
Edge#55: the concept of DeepAR; an overview of Amazon’s research on multi-dimensional time-series forecasting; and sktime – a unified time-series framework for Scikit-Learn.
Edge#57: Transformers for time-series; how Uber manages uncertainty in time-series prediction models; and tsfresh – a magical library for feature extraction in time-series datasets.
By reading TheSequence Edge regularly, you become smarter about ML and AI. Trusted by the major AI labs and universities of the world.