How to Develop a Skillful Machine Learning Time Series Forecasting Model

Author: Jason Brownlee

You are handed data and told to develop a forecast model.

What do you do?

This is a common situation; far more common than most people think.

  • Perhaps you are sent a CSV file.
  • Perhaps you are given access to a database.
  • Perhaps you are starting a competition.

The problem can be reasonably well defined:

  • You have or can access historical time series data.
  • You know or can find out what needs to be forecasted.
  • You know or can find out what is most important in evaluating a candidate model.

So how do you tackle this problem?

Unless you have been through this trial by fire, you may struggle.

  • You may struggle because you are new to the fields of machine learning and time series.
  • You may struggle even if you have machine learning experience because time series data is different.
  • You may struggle even if you have a background in time series forecasting because machine learning methods may outperform the classical approaches on your data.

In all of these cases, you will benefit from working through the problem carefully and systematically.

In this post, I want to give you a specific and actionable procedure that you can use to work through your time series forecasting problem.

Let’s get started.

Photo by Make it Kenya, some rights reserved.

Process Overview

The goal of this process is to get a “good enough” forecast model as fast as possible.

This process may or may not deliver the best possible model, but it will deliver a good model: a model that is better than a baseline prediction, if such a model exists.

Typically, this process will deliver a model that is 80% to 90% of what can be achieved on the problem.

The process is fast. As such, it focuses on automation. Hyperparameters are searched rather than specified based on careful analysis. You are encouraged to test suites of models in parallel, rapidly getting an idea of what works and what doesn’t.

Nevertheless, the process is flexible, allowing you to circle back or go as deep as you like on a given step if you have the time and resources.

This process is divided into four parts; they are:

  1. Define Problem
  2. Design Test Harness
  3. Test Models
  4. Finalize Model

You will notice that the process is different from a classical linear work-through of a predictive modeling problem. This is because it is designed to get a working forecast model fast and then slow down and see if you can get a better model.

What is your process for working through a new time series forecasting problem?
Share it below in the comments.

How to Use This Process

The biggest mistake is skipping steps.

For example, the mistake that almost all beginners make is going straight to modeling without a strong idea of what problem is being solved or how to robustly evaluate candidate solutions. This almost always results in a lot of wasted time.

Slow down, follow the process, and complete each step.

I recommend having separate code for each experiment that can be re-run at any time.

This is important so that you can circle back when you discover a bug, fix the code, and re-run an experiment. You are running experiments and iterating quickly, but if you are sloppy, then you cannot trust any of your results. This is especially important when it comes to the design of your test harness for evaluating candidate models.

Let’s take a closer look at each step of the process.

1. Define Problem

Define your time series problem.

Some topics to consider and motivating questions within each topic are as follows:

  1. Inputs vs. Outputs
    1. What are the inputs and outputs for a forecast?
  2. Endogenous vs. Exogenous
    1. What are the endogenous and exogenous variables?
  3. Unstructured vs. Structured
    1. Are the time series variables unstructured or structured?
  4. Regression vs. Classification
    1. Are you working on a regression or classification predictive modeling problem?
    2. What are some alternate ways to frame your time series forecasting problem?
  5. Univariate vs. Multivariate
    1. Are you working on a univariate or multivariate time series problem?
  6. Single-step vs. Multi-step
    1. Do you require a single-step or a multi-step forecast?
  7. Static vs. Dynamic
    1. Do you require a static or a dynamically updated model?

Answer each question even if you have to estimate or guess.

Some useful tools to help get answers include:

  • Data visualizations (e.g. line plots, etc.).
  • Statistical analysis (e.g. ACF/PACF plots, etc.).
  • Domain experts.
  • Project stakeholders.

Update your answers to these questions as you learn more.
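
To make this concrete, the line plots and ACF/PACF plots mentioned above take only a few lines of Python. The sketch below is a minimal example, assuming a univariate series stored in a file named series.csv with a date index and a single value column; the file name and column layout are placeholders for your own data.

```python
# A minimal sketch of the diagnostic tools above, assuming a univariate series
# in 'series.csv' with a date index column and one value column (both are
# placeholder names for your own data).
from pandas import read_csv
from matplotlib import pyplot
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

series = read_csv('series.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')

# line plot to eyeball trend, seasonality, level shifts, and outliers
series.plot()
pyplot.show()

# ACF/PACF plots to get a rough idea of useful lags and seasonal structure
# (assumes the series has comfortably more than 100 observations)
plot_acf(series, lags=50)
plot_pacf(series, lags=50)
pyplot.show()
```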

2. Design Test Harness

Design a test harness that you can use to evaluate candidate models.

This includes both the method used to estimate model skill and the metric used to evaluate predictions.

Below is a common time series forecasting model evaluation scheme if you are looking for ideas:

  1. Split the dataset into a train and test set.
  2. Fit a candidate approach on the training dataset.
  3. Make predictions on the test set directly or using walk-forward validation.
  4. Calculate a metric that compares the predictions to the expected values.

The test harness must be robust and you must have complete trust in the results it provides.

An important consideration is to ensure that any coefficients used for data preparation are estimated from the training dataset only and then applied to the test set. This might include the mean and standard deviation in the case of data standardization.
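
As an illustration, the sketch below is one minimal way to wire these four steps together, assuming a univariate series held in a NumPy array. It uses a persistence forecast as a stand-in for any candidate model, walk-forward validation, RMSE as the metric, and standardization coefficients estimated on the training split only; the function name and toy data are illustrative.

```python
# A minimal test-harness sketch, assuming `data` is a 1D NumPy array of
# observations ordered by time. The persistence forecast is just a stand-in
# for any candidate model you want to evaluate.
from math import sqrt
from numpy import array, mean, std
from sklearn.metrics import mean_squared_error

def evaluate_candidate(data, n_test):
    train, test = data[:-n_test], data[-n_test:]
    # estimate standardization coefficients on the training split only
    mu, sigma = mean(train), std(train)
    history = list((train - mu) / sigma)
    predictions = []
    # walk-forward validation: forecast one step, then reveal the true value
    for t in range(len(test)):
        yhat = history[-1]                     # persistence forecast (standardized)
        predictions.append(yhat * sigma + mu)  # invert the transform for scoring
        history.append((test[t] - mu) / sigma)
    return sqrt(mean_squared_error(test, predictions))

# usage on hypothetical data
data = array([float(i) + (i % 7) for i in range(100)])
print('RMSE: %.3f' % evaluate_candidate(data, n_test=12))
```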

3. Test Models

Test many models using your test harness.

I recommend carefully designing experiments to test a suite of configurations for standard models and letting them run. Each experiment can record results to a file, allowing you to quickly discover the top three to five most skillful configurations from each run.

Some common classes of methods that you can design experiments around include the following:

  • Baseline.
    • Persistence (grid search the lag observation that is persisted)
    • Rolling moving average.
  • Autoregression.
    • ARMA for stationary data.
    • ARIMA for data with a trend.
    • SARIMA for data with seasonality.
  • Exponential Smoothing.
    • Simple Smoothing
    • Holt-Winters Smoothing
  • Linear Machine Learning.
    • Linear Regression
    • Ridge Regression
    • Lasso Regression
    • Elastic Net Regression
    • ….
  • Nonlinear Machine Learning.
    • k-Nearest Neighbors
    • Classification and Regression Trees
    • Support Vector Regression
  • Ensemble Machine Learning.
    • Bagging
    • Boosting
    • Random Forest
    • Gradient Boosting
  • Deep Learning.
    • MLP
    • CNN
    • LSTM
    • Hybrids

This list is based on a univariate time series forecasting problem, but you can adapt it for the specifics of your problem, e.g. use VAR/VARMA/etc. in the case of multivariate time series forecasting.

Slot in more of your favorite classical time series forecasting methods and machine learning methods as you see fit.

The order here is important: it is structured in increasing complexity, from classical to modern methods. Early approaches are simple and give good results fast; later approaches are slower and more complex, but also have a higher bar to clear to be considered skillful.

The resulting model skill can be used as a ratchet. For example, the skill of the best persistence configuration provides a baseline that all other models must outperform. If an autoregression model does better than persistence, it becomes the new level to outperform in order for a method to be considered skillful.
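
For example, the persistence baseline at the top of the list can be grid searched in a few lines. The sketch below is a minimal, illustrative version: it tries a range of persisted lags over a toy series and keeps the best RMSE as the bar that every later model must clear; the data and lag grid are placeholders.

```python
# A minimal sketch of grid searching the persisted lag for the baseline,
# assuming `data` is a 1D NumPy array and `n_test` defines the test split.
from math import sqrt
from numpy import array
from sklearn.metrics import mean_squared_error

def persistence_rmse(data, n_test, lag):
    train, test = data[:-n_test], data[-n_test:]
    history = list(train)
    predictions = []
    for t in range(len(test)):
        predictions.append(history[-lag])  # persist the observation `lag` steps back
        history.append(test[t])            # walk-forward: reveal the true value
    return sqrt(mean_squared_error(test, predictions))

data = array([float(i % 12) for i in range(120)])  # hypothetical seasonal-looking data
n_test = 24
scores = sorted(((lag, persistence_rmse(data, n_test, lag)) for lag in range(1, 13)),
                key=lambda item: item[1])
# in a longer run you might also write `scores` to a results file
for lag, rmse in scores[:3]:
    print('lag=%d RMSE=%.3f' % (lag, rmse))
# the best score becomes the baseline that all later models must outperform
```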

Ideally, you want to exhaust each level before moving on to the next. For example, get the most out of Autoregression methods and use the results as a new baseline to define “skillful” before moving on to Exponential Smoothing methods.

I put deep learning at the end as generally neural networks are poor at time series forecasting, but there is still a lot of room for improvement and experimentation in this area.

The more time and resources that you have, the more configurations that you can evaluate.

For example, with more time and resources, you could:

  • Search model configurations at a finer resolution around a configuration already known to perform well.
  • Search more model hyperparameter configurations.
  • Use analysis to set better bounds on model hyperparameters to be searched.
  • Use domain knowledge to better prepare data or engineer input features.
  • Explore different potentially more complex methods.
  • Explore ensembles of well-performing base models.

I also encourage you to include data preparation schemes as hyperparameters for model runs.

Some methods perform basic data preparation themselves, such as differencing in ARIMA; nevertheless, it is often unclear exactly which data preparation schemes, or combinations of schemes, are required to best present a dataset to a modeling algorithm. Rather than guess, grid search and decide based on real results (a minimal sketch follows the list below).

Some data preparation schemes to consider include:

  • Differencing to remove a trend.
  • Seasonal differencing to remove seasonality.
  • Standardize to center.
  • Normalize to rescale.
  • Power Transform to make normal.
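
To make the idea concrete, the sketch below treats a log transform and first differencing as hyperparameters alongside a simple persistence-style forecast, and grid searches the four combinations with walk-forward validation. The forecast, toy data, and grid are illustrative stand-ins, not a recommended configuration.

```python
# A minimal sketch of grid searching data preparation choices, assuming `data`
# is a 1D NumPy array of positive values.
from math import sqrt
from itertools import product
from numpy import array, log, exp
from sklearn.metrics import mean_squared_error

def evaluate(data, n_test, use_log, use_diff):
    series = log(data) if use_log else data       # optional power-style transform
    train, test = series[:-n_test], series[-n_test:]
    history = list(train)
    predictions = []
    for t in range(len(test)):
        if use_diff:
            # persistence on the first-differenced series, inverted back to the
            # original scale: last value plus the last observed change
            yhat = history[-1] + (history[-1] - history[-2])
        else:
            yhat = history[-1]                    # plain persistence
        predictions.append(yhat)
        history.append(test[t])                   # walk-forward validation
    predictions = array(predictions)
    if use_log:
        predictions = exp(predictions)            # invert the transform for scoring
    return sqrt(mean_squared_error(data[-n_test:], predictions))

data = array([10.0 + 0.5 * i + (i % 7) for i in range(120)])  # hypothetical data
results = sorted(
    (((use_log, use_diff), evaluate(data, 24, use_log, use_diff))
     for use_log, use_diff in product([False, True], [False, True])),
    key=lambda item: item[1])
for (use_log, use_diff), rmse in results:
    print('log=%s diff=%s RMSE=%.3f' % (use_log, use_diff, rmse))
```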

So much searching can be slow.

Some ideas to speed up the evaluation of models include:

  • Use multiple machines in parallel via cloud hardware (such as Amazon EC2).
  • Reduce the size of the train or test dataset to make the evaluation process faster.
  • Use a coarser grid of hyperparameters and circle back if you have time later.
  • Perhaps do not refit a model for each step in walk-forward validation.

4. Finalize Model

At the end of the previous step, you know whether your time series is predictable.

If it is predictable, you will have a list of the top 5 to 10 candidate models that are skillful on the problem.

You can pick one or multiple models and finalize them. This involves training a new final model on all available historical data (train and test).

The model is ready for use; for example:

  • Make a prediction for the future.
  • Save the model to file for later use in making predictions.
  • Incorporate the model into software for making predictions.
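
Putting the steps above together, a finalization script can be short. The sketch below assumes the winning candidate was a SARIMA configuration; the order values, toy data, and file name are all placeholders for whatever your search actually found.

```python
# A minimal finalization sketch, assuming the chosen candidate was a SARIMA
# configuration. The data, orders, and file name are placeholders.
from numpy import array
from statsmodels.tsa.statespace.sarimax import SARIMAX, SARIMAXResults

data = array([float(i % 12) + 0.1 * i for i in range(120)])  # full history (train + test)

# fit the final model on all available historical data
model_fit = SARIMAX(data, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)

# save the fitted model to file for later use
model_fit.save('final_model.pkl')

# make a prediction for the future (the next three steps beyond the known data)
print(model_fit.forecast(steps=3))

# later, in the software that serves predictions
loaded = SARIMAXResults.load('final_model.pkl')
print(loaded.forecast(steps=1))
```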

If you have time, you can always circle back to the previous step and see if you can further improve upon the final model.

This may be required periodically if the data changes significantly over time.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this post, you discovered a simple four-step process that you can use to quickly discover a skillful predictive model for your time series forecasting problem.

Did you find this process useful?
Let me know below.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
