{"id":1161,"date":"2018-10-14T18:00:09","date_gmt":"2018-10-14T18:00:09","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2018\/10\/14\/how-to-develop-baseline-forecasts-for-multi-site-multivariate-air-pollution-time-series-forecasting\/"},"modified":"2018-10-14T18:00:09","modified_gmt":"2018-10-14T18:00:09","slug":"how-to-develop-baseline-forecasts-for-multi-site-multivariate-air-pollution-time-series-forecasting","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2018\/10\/14\/how-to-develop-baseline-forecasts-for-multi-site-multivariate-air-pollution-time-series-forecasting\/","title":{"rendered":"How to Develop Baseline Forecasts for Multi-Site Multivariate Air Pollution Time Series Forecasting"},"content":{"rendered":"<p>Author: Jason Brownlee<\/p>\n<div>\n<p>Real-world time series forecasting is challenging for a whole host of reasons not limited to problem features such as having multiple input variables, the requirement to predict multiple time steps, and the need to perform the same type of prediction for multiple physical sites.<\/p>\n<p>The EMC Data Science Global Hackathon dataset, or the \u2018<em>Air Quality Prediction<\/em>\u2018 dataset for short, describes weather conditions at multiple sites and requires a prediction of air quality measurements over the subsequent three days.<\/p>\n<p>An important first step when working with a new time series forecasting dataset is to develop a baseline in model performance by which the skill of all other more sophisticated strategies can be compared. Baseline forecasting strategies are simple and fast. 
They are referred to as \u2018naive\u2019 strategies because they assume very little or nothing about the specific forecasting problem.<\/p>\n<p>In this tutorial, you will discover how to develop naive forecasting methods for the multistep multivariate air pollution time series forecasting problem.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>How to develop a test harness for evaluating forecasting strategies for the air pollution dataset.<\/li>\n<li>How to develop global naive forecast strategies that use data from the entire training dataset.<\/li>\n<li>How to develop local naive forecast strategies that use data from the specific interval that is being forecasted.<\/li>\n<\/ul>\n<p>Let\u2019s get started.<\/p>\n<div id=\"attachment_6322\" style=\"width: 650px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6322\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/10\/How-to-Develop-Baseline-Forecasts-for-Multi-Site-Multivariate-Air-Pollution-Time-Series-Forecasting.jpg\" alt=\"How to Develop Baseline Forecasts for Multi-Site Multivariate Air Pollution Time Series Forecasting\" width=\"640\" height=\"480\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/10\/How-to-Develop-Baseline-Forecasts-for-Multi-Site-Multivariate-Air-Pollution-Time-Series-Forecasting.jpg 640w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/10\/How-to-Develop-Baseline-Forecasts-for-Multi-Site-Multivariate-Air-Pollution-Time-Series-Forecasting-300x225.jpg 300w\" sizes=\"(max-width: 640px) 100vw, 640px\"><\/p>\n<p class=\"wp-caption-text\">How to Develop Baseline Forecasts for Multi-Site Multivariate Air Pollution Time Series Forecasting<br \/>Photo by <a href=\"https:\/\/www.flickr.com\/photos\/zongo\/38524476520\/\">DAVID HOLT<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2>Tutorial 
Overview<\/h2>\n<p>This tutorial is divided into six parts; they are:<\/p>\n<ul>\n<li>Problem Description<\/li>\n<li>Naive Methods<\/li>\n<li>Model Evaluation<\/li>\n<li>Global Naive Methods<\/li>\n<li>Chunk Naive Methods<\/li>\n<li>Summary of Results<\/li>\n<\/ul>\n<h2>Problem Description<\/h2>\n<p>The Air Quality Prediction dataset describes weather conditions at multiple sites and requires a prediction of air quality measurements over the subsequent three days.<\/p>\n<p>Specifically, weather observations such as temperature, pressure, wind speed, and wind direction are provided hourly for eight days for multiple sites. The objective is to predict air quality measurements for the next 3 days at multiple sites. The forecast lead times are not contiguous; instead, specific lead times must be forecast over the 72 hour forecast period. They are:<\/p>\n<pre class=\"crayon-plain-tag\">+1, +2, +3, +4, +5, +10, +17, +24, +48, +72<\/pre>\n<p>Further, the dataset is divided into disjoint but contiguous chunks of data, with eight days of data followed by three days that require a forecast.<\/p>\n<p>Not all observations are available at all sites or chunks and not all output variables are available at all sites and chunks. There are large portions of missing data that must be addressed.<\/p>\n<p>The dataset was used as the basis for a <a href=\"https:\/\/www.kaggle.com\/c\/dsg-hackathon\">short duration machine learning competition<\/a> (or hackathon) on the Kaggle website in 2012.<\/p>\n<p>Submissions for the competition were evaluated against the true observations that were withheld from participants and scored using Mean Absolute Error (MAE). Submissions required the value of -1,000,000 to be specified in those cases where a forecast was not possible due to missing data. 
In fact, a template of where to insert missing values was provided and required to be adopted for all submissions (what a pain).<\/p>\n<p>A winning entrant achieved a MAE of 0.21058 on the withheld test set (<a href=\"https:\/\/www.kaggle.com\/c\/dsg-hackathon\/leaderboard\">private leaderboard<\/a>) using random forest on lagged observations. A writeup of this solution is available in the post:<\/p>\n<ul>\n<li><a href=\"http:\/\/blog.kaggle.com\/2012\/05\/01\/chucking-everything-into-a-random-forest-ben-hamner-on-winning-the-air-quality-prediction-hackathon\/\">Chucking everything into a Random Forest: Ben Hamner on Winning The Air Quality Prediction Hackathon<\/a>, 2012.<\/li>\n<\/ul>\n<p>In this tutorial, we will explore how to develop naive forecasts for the problem that can be used as a baseline to determine whether a model has skill on the problem or not.<\/p>\n<h2>Naive Forecast Methods<\/h2>\n<p>A baseline in forecast performance provides a point of comparison.<\/p>\n<p>It is a point of reference for all other modeling techniques on your problem. If a model achieves performance at or below the baseline, the technique should be fixed or abandoned.<\/p>\n<p>The technique used to generate a forecast to calculate the baseline performance must be easy to implement and naive of problem-specific details. The principle is that if a sophisticated forecast method cannot outperform a model that uses little or no problem-specific information, then it does not have skill.<\/p>\n<p>There are problem-agnostic forecast methods that can and should be used first, followed by naive methods that use a modicum of problem-specific information.<\/p>\n<p>Two examples of problem agnostic naive forecast methods that could be used include:<\/p>\n<ul>\n<li>Persist the last observed value for each series.<\/li>\n<li>Forecast the average of observed values for each series.<\/li>\n<\/ul>\n<p>The data is divided into chunks, or intervals, of time. Each chunk of time has multiple variables at multiple sites to forecast. 
The persistence forecast method makes sense at this chunk-level of organization of the data.<\/p>\n<p>Other persistence methods could be explored; for example:<\/p>\n<ul>\n<li>Forecast observations from the previous day for the next three days for each series.<\/li>\n<li>Forecast observations from the previous three days for the next three days for each series.<\/li>\n<\/ul>\n<p>These are desirable baseline methods to explore, but the large amount of missing data and discontiguous structure of most of the data chunks make them challenging to implement without non-trivial data preparation.<\/p>\n<p>Forecasting the average observations for each series can be elaborated further; for example:<\/p>\n<ul>\n<li>Forecast the global (across-chunk) average value for each series.<\/li>\n<li>Forecast the local (within-chunk) average value for each series.<\/li>\n<\/ul>\n<p>A three-day forecast is required for each series with different start-times, e.g. times of day. As such, the forecast lead times for each chunk will fall on different hours of the day.<\/p>\n<p>A further elaboration of forecasting the average value is to incorporate the hour of day that is being forecasted; for example:<\/p>\n<ul>\n<li>Forecast the global (across-chunk) average value for the hour of day for each forecast lead time.<\/li>\n<li>Forecast the local (within-chunk) average value for the hour of day for each forecast lead time.<\/li>\n<\/ul>\n<p>Many variables are measured at multiple sites; as such, it may be possible to use information across series, such as in the calculation of averages or averages per hour of day for forecast lead times. These are interesting, but may exceed the mandate of naive.<\/p>\n<p>This is a good starting point, although there may be further elaborations of the naive methods that you may want to consider and explore as an exercise. 
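<\/p>\n<p>As a toy illustration (independent of the air quality dataset), the two problem-agnostic strategies listed above can be sketched on a made-up series; the series values and the count of ten lead times are invented purely for illustration.<\/p>\n

```python
# sketch of the two problem-agnostic naive strategies on a toy series
from numpy import array, isnan, nanmean

series = array([3.0, 4.0, float('nan'), 5.0, 6.0])
n_lead_times = 10  # the competition scores 10 specific lead times

# persistence: repeat the last observed (non-NaN) value for every lead time
last_obs = series[~isnan(series)][-1]
persistence = [last_obs for _ in range(n_lead_times)]

# average: repeat the mean of the observed values, ignoring NaNs
average = [nanmean(series) for _ in range(n_lead_times)]

print(persistence[:3], average[:3])
```

\n<p>Either list of repeated values would then be scored against the withheld observations at the required lead times.<\/p>\n<p>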
Remember, the goal is to use very little problem specific information in order to develop a forecast baseline.<\/p>\n<p>In summary, we will investigate five different naive forecasting methods for this problem, the best of which will provide a lower-bound on performance by which other models can be compared. They are:<\/p>\n<ol>\n<li>Global Average Value per Series<\/li>\n<li>Global Average Value for Forecast Lead Time per Series<\/li>\n<li>Local Persisted Value per Series<\/li>\n<li>Local Average Value per Series<\/li>\n<li>Local Average Value for Forecast Lead Time per Series<\/li>\n<\/ol>\n<h2>Model Evaluation<\/h2>\n<p>Before we can evaluate naive forecasting methods, we must develop a test harness.<\/p>\n<p>This includes at least how the data will be prepared and how forecasts will be evaluated.<\/p>\n<h3>Load Dataset<\/h3>\n<p>The first step is to download the dataset and load it into memory.<\/p>\n<p>The dataset can be downloaded for free from the Kaggle website. You may have to create an account and log in, in order to be able to download the dataset.<\/p>\n<p>Download the entire dataset, e.g. 
\u201c<em>Download All<\/em>\u201d to your workstation and unzip the archive in your current working directory with the folder named \u2018<em>AirQualityPrediction<\/em>\u2018.<\/p>\n<ul>\n<li><a href=\"https:\/\/www.kaggle.com\/c\/dsg-hackathon\/data\">EMC Data Science Global Hackathon (Air Quality Prediction) Data<\/a><\/li>\n<\/ul>\n<p>Our focus will be the \u2018<em>TrainingData.csv<\/em>\u2018 file that contains the training dataset, specifically data in chunks where each chunk is eight contiguous days of observations and target variables.<\/p>\n<p>We can load the data file into memory using the Pandas <a href=\"https:\/\/pandas.pydata.org\/pandas-docs\/stable\/generated\/pandas.read_csv.html\">read_csv() function<\/a> and specify the header row on line 0.<\/p>\n<pre class=\"crayon-plain-tag\"># load dataset\r\ndataset = read_csv('AirQualityPrediction\/TrainingData.csv', header=0)<\/pre>\n<p>We can group data by the \u2018chunkID\u2019 variable (column index 1).<\/p>\n<p>First, let\u2019s get a list of the unique chunk identifiers.<\/p>\n<pre class=\"crayon-plain-tag\">chunk_ids = unique(values[:, 1])<\/pre>\n<p>We can then collect all rows for each chunk identifier and store them in a dictionary for easy access.<\/p>\n<pre class=\"crayon-plain-tag\">chunks = dict()\r\n# sort rows by chunk id\r\nfor chunk_id in chunk_ids:\r\n\tselection = values[:, chunk_ix] == chunk_id\r\n\tchunks[chunk_id] = values[selection, :]<\/pre>\n<p>Below defines a function named <em>to_chunks()<\/em> that takes a NumPy array of the loaded data and returns a dictionary of <em>chunk_id<\/em> to rows for the chunk.<\/p>\n<pre class=\"crayon-plain-tag\"># split the dataset by 'chunkID', return a dict of id to rows\r\ndef to_chunks(values, chunk_ix=1):\r\n\tchunks = dict()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks[chunk_id] = 
values[selection, :]\r\n\treturn chunks<\/pre>\n<p>The complete example that loads the dataset and splits it into chunks is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># load data and split into chunks\r\nfrom numpy import unique\r\nfrom pandas import read_csv\r\n\r\n# split the dataset by 'chunkID', return a dict of id to rows\r\ndef to_chunks(values, chunk_ix=1):\r\n\tchunks = dict()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks[chunk_id] = values[selection, :]\r\n\treturn chunks\r\n\r\n# load dataset\r\ndataset = read_csv('AirQualityPrediction\/TrainingData.csv', header=0)\r\n# group data by chunks\r\nvalues = dataset.values\r\nchunks = to_chunks(values)\r\nprint('Total Chunks: %d' % len(chunks))<\/pre>\n<p>Running the example prints the number of chunks in the dataset.<\/p>\n<pre class=\"crayon-plain-tag\">Total Chunks: 208<\/pre>\n<h3>Data Preparation<\/h3>\n<p>Now that we know how to load the data and split it into chunks, we can separate it into train and test datasets.<\/p>\n<p>Each chunk covers an interval of eight days of hourly observations, although the number of actual observations within each chunk may vary widely.<\/p>\n<p>We can split each chunk into the first five days of observations for training and the last three for test.<\/p>\n<p>Each observation has a column called \u2018<em>position_within_chunk<\/em>\u2018 that varies from 1 to 192 (8 days * 24 hours). We can therefore take all rows with a value in this column that is less than or equal to 120 (5 * 24) as training data and any value more than 120 as test data.<\/p>\n<p>Further, any chunks that don\u2019t have any observations in the train or test split can be dropped as not viable.<\/p>\n<p>When working with the naive models, we are only interested in the target variables, and none of the input meteorological variables. 
Therefore, we can remove the input data and have the train and test data only comprised of the 39 target variables for each chunk, as well as the position within chunk and hour of observation.<\/p>\n<p>The <em>split_train_test()<\/em> function below implements this behavior; given a dictionary of chunks, it will split each into a list of train and test chunk data.<\/p>\n<pre class=\"crayon-plain-tag\"># split each chunk into train\/test sets\r\ndef split_train_test(chunks, row_in_chunk_ix=2):\r\n\ttrain, test = list(), list()\r\n\t# first 5 days of hourly observations for train\r\n\tcut_point = 5 * 24\r\n\t# enumerate chunks\r\n\tfor k,rows in chunks.items():\r\n\t\t# split chunk rows by 'position_within_chunk'\r\n\t\ttrain_rows = rows[rows[:,row_in_chunk_ix] <= cut_point, :]\r\n\t\ttest_rows = rows[rows[:,row_in_chunk_ix] > cut_point, :]\r\n\t\tif len(train_rows) == 0 or len(test_rows) == 0:\r\n\t\t\tprint('>dropping chunk=%d: train=%s, test=%s' % (k, train_rows.shape, test_rows.shape))\r\n\t\t\tcontinue\r\n\t\t# store with chunk id, position in chunk, hour and all targets\r\n\t\tindices = [1,2,5] + [x for x in range(56,train_rows.shape[1])]\r\n\t\ttrain.append(train_rows[:, indices])\r\n\t\ttest.append(test_rows[:, indices])\r\n\treturn train, test<\/pre>\n<p>We do not require the entire test dataset; instead, we only require the observations at specific lead times over the three day period, specifically the lead times:<\/p>\n<pre class=\"crayon-plain-tag\">+1, +2, +3, +4, +5, +10, +17, +24, +48, +72<\/pre>\n<p>Where, each lead time is relative to the end of the training period.<\/p>\n<p>First, we can put these lead times into a function for easy reference:<\/p>\n<pre class=\"crayon-plain-tag\"># return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2 ,3, 4, 5, 10, 17, 24, 48, 72]<\/pre>\n<p>Next, we can reduce the test dataset down to just the data at the preferred lead times.<\/p>\n<p>We can do that by looking at the 
\u2018<em>position_within_chunk<\/em>\u2018 column and using the lead time as an offset from the end of the training dataset, e.g. 120 + 1, 120 +2, etc.<\/p>\n<p>If we find a matching row in the test set, it is saved, otherwise a row of NaN observations is generated.<\/p>\n<p>The function <em>to_forecasts()<\/em> below implements this and returns a NumPy array with one row for each forecast lead time for each chunk.<\/p>\n<pre class=\"crayon-plain-tag\"># convert the rows in a test chunk to forecasts\r\ndef to_forecasts(test_chunks, row_in_chunk_ix=1):\r\n\t# get lead times\r\n\tlead_times = get_lead_times()\r\n\t# first 5 days of hourly observations for train\r\n\tcut_point = 5 * 24\r\n\tforecasts = list()\r\n\t# enumerate each chunk\r\n\tfor rows in test_chunks:\r\n\t\tchunk_id = rows[0, 0]\r\n\t\t# enumerate each lead time\r\n\t\tfor tau in lead_times:\r\n\t\t\t# determine the row in chunk we want for the lead time\r\n\t\t\toffset = cut_point + tau\r\n\t\t\t# retrieve data for the lead time using row number in chunk\r\n\t\t\trow_for_tau = rows[rows[:,row_in_chunk_ix]==offset, :]\r\n\t\t\t# check if we have data\r\n\t\t\tif len(row_for_tau) == 0:\r\n\t\t\t\t# create a mock row [chunk, position, hour] + [nan...]\r\n\t\t\t\trow = [chunk_id, offset, nan] + [nan for _ in range(39)]\r\n\t\t\t\tforecasts.append(row)\r\n\t\t\telse:\r\n\t\t\t\t# store the forecast row\r\n\t\t\t\tforecasts.append(row_for_tau[0])\r\n\treturn array(forecasts)<\/pre>\n<p>We can tie all of this together and split the dataset into train and test sets and save the results to new files.<\/p>\n<p>The complete code example is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># split data into train and test sets\r\nfrom numpy import unique\r\nfrom numpy import nan\r\nfrom numpy import array\r\nfrom numpy import savetxt\r\nfrom pandas import read_csv\r\n\r\n# split the dataset by 'chunkID', return a dict of id to rows\r\ndef to_chunks(values, chunk_ix=1):\r\n\tchunks = dict()\r\n\t# get the 
unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks[chunk_id] = values[selection, :]\r\n\treturn chunks\r\n\r\n# split each chunk into train\/test sets\r\ndef split_train_test(chunks, row_in_chunk_ix=2):\r\n\ttrain, test = list(), list()\r\n\t# first 5 days of hourly observations for train\r\n\tcut_point = 5 * 24\r\n\t# enumerate chunks\r\n\tfor k,rows in chunks.items():\r\n\t\t# split chunk rows by 'position_within_chunk'\r\n\t\ttrain_rows = rows[rows[:,row_in_chunk_ix] <= cut_point, :]\r\n\t\ttest_rows = rows[rows[:,row_in_chunk_ix] > cut_point, :]\r\n\t\tif len(train_rows) == 0 or len(test_rows) == 0:\r\n\t\t\tprint('>dropping chunk=%d: train=%s, test=%s' % (k, train_rows.shape, test_rows.shape))\r\n\t\t\tcontinue\r\n\t\t# store with chunk id, position in chunk, hour and all targets\r\n\t\tindices = [1,2,5] + [x for x in range(56,train_rows.shape[1])]\r\n\t\ttrain.append(train_rows[:, indices])\r\n\t\ttest.append(test_rows[:, indices])\r\n\treturn train, test\r\n\r\n# return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2 ,3, 4, 5, 10, 17, 24, 48, 72]\r\n\r\n# convert the rows in a test chunk to forecasts\r\ndef to_forecasts(test_chunks, row_in_chunk_ix=1):\r\n\t# get lead times\r\n\tlead_times = get_lead_times()\r\n\t# first 5 days of hourly observations for train\r\n\tcut_point = 5 * 24\r\n\tforecasts = list()\r\n\t# enumerate each chunk\r\n\tfor rows in test_chunks:\r\n\t\tchunk_id = rows[0, 0]\r\n\t\t# enumerate each lead time\r\n\t\tfor tau in lead_times:\r\n\t\t\t# determine the row in chunk we want for the lead time\r\n\t\t\toffset = cut_point + tau\r\n\t\t\t# retrieve data for the lead time using row number in chunk\r\n\t\t\trow_for_tau = rows[rows[:,row_in_chunk_ix]==offset, :]\r\n\t\t\t# check if we have data\r\n\t\t\tif len(row_for_tau) == 0:\r\n\t\t\t\t# create a mock row [chunk, 
position, hour] + [nan...]\r\n\t\t\t\trow = [chunk_id, offset, nan] + [nan for _ in range(39)]\r\n\t\t\t\tforecasts.append(row)\r\n\t\t\telse:\r\n\t\t\t\t# store the forecast row\r\n\t\t\t\tforecasts.append(row_for_tau[0])\r\n\treturn array(forecasts)\r\n\r\n# load dataset\r\ndataset = read_csv('AirQualityPrediction\/TrainingData.csv', header=0)\r\n# group data by chunks\r\nvalues = dataset.values\r\nchunks = to_chunks(values)\r\n# split into train\/test\r\ntrain, test = split_train_test(chunks)\r\n# flatten training chunks to rows\r\ntrain_rows = array([row for rows in train for row in rows])\r\n# print(train_rows.shape)\r\nprint('Train Rows: %s' % str(train_rows.shape))\r\n# reduce test to forecast lead times only\r\ntest_rows = to_forecasts(test)\r\nprint('Test Rows: %s' % str(test_rows.shape))\r\n# save datasets\r\nsavetxt('AirQualityPrediction\/naive_train.csv', train_rows, delimiter=',')\r\nsavetxt('AirQualityPrediction\/naive_test.csv', test_rows, delimiter=',')<\/pre>\n<p>Running the example first reports that chunk 69 is removed from the dataset for having insufficient data.<\/p>\n<p>We can then see that we have 42 columns in each of the train and test sets: one each for the chunk id, position within chunk, and hour of day, plus the 39 target variables.<\/p>\n<p>We can also see the dramatically smaller version of the test dataset with rows only at the forecast lead times.<\/p>\n<p>The new train and test datasets are saved in the \u2018<em>naive_train.csv<\/em>\u2018 and \u2018<em>naive_test.csv<\/em>\u2018 files respectively.<\/p>\n<pre class=\"crayon-plain-tag\">>dropping chunk=69: train=(0, 95), test=(28, 95)\r\nTrain Rows: (23514, 42)\r\nTest Rows: (2070, 42)<\/pre>\n<h3>Forecast Evaluation<\/h3>\n<p>Once forecasts have been made, they need to be evaluated.<\/p>\n<p>It is helpful to have a simpler format when evaluating forecasts. 
For example, we will use the three-dimensional structure of <em>[chunks][variables][time]<\/em>, where variable is the target variable number from 0 to 38 and time is the lead time index from 0 to 9.<\/p>\n<p>Models are expected to make predictions in this format.<\/p>\n<p>We can also restructure the test dataset into this same format for comparison. The <em>prepare_test_forecasts()<\/em> function below implements this.<\/p>\n<pre class=\"crayon-plain-tag\"># convert the test dataset in chunks to [chunk][variable][time] format\r\ndef prepare_test_forecasts(test_chunks):\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor rows in test_chunks:\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(3, rows.shape[1]):\r\n\t\t\tyhat = rows[:, j]\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)<\/pre>\n<p>We will evaluate a model using the mean absolute error, or MAE. This is the metric that was used in the competition and is a sensible choice given the non-Gaussian distribution of the target variables.<\/p>\n<p>If a lead time contains no data in the test set (e.g. <em>NaN<\/em>), then no error will be calculated for that forecast. If the lead time does have data in the test set but no data in the forecast, then the full magnitude of the observation will be taken as error. 
Finally, if the test set has an observation and a forecast was made, then the absolute difference will be recorded as the error.<\/p>\n<p>The <em>calculate_error()<\/em> function implements these rules and returns the error for a given forecast.<\/p>\n<pre class=\"crayon-plain-tag\"># calculate the error between an actual and predicted value\r\ndef calculate_error(actual, predicted):\r\n\t# give the full actual value if predicted is nan\r\n\tif isnan(predicted):\r\n\t\treturn abs(actual)\r\n\t# calculate abs difference\r\n\treturn abs(actual - predicted)<\/pre>\n<p>Errors are summed across all chunks and all lead times, then averaged.<\/p>\n<p>The overall MAE will be calculated, but we will also calculate a MAE for each forecast lead time. This can help with model selection generally as some models may perform differently at different lead times.<\/p>\n<p>The <em>evaluate_forecasts()<\/em> function below implements this, calculating the MAE and per-lead-time MAE for the provided predictions and expected values in <em>[chunk][variable][time]<\/em> format.<\/p>\n<pre class=\"crayon-plain-tag\"># evaluate a forecast in the format [chunk][variable][time]\r\ndef evaluate_forecasts(predictions, testset):\r\n\tlead_times = get_lead_times()\r\n\ttotal_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]\r\n\ttotal_c, times_c = 0, [0 for _ in range(len(lead_times))]\r\n\t# enumerate test chunks\r\n\tfor i in range(len(testset)):\r\n\t\t# convert to forecasts\r\n\t\tactual = testset[i]\r\n\t\tpredicted = predictions[i]\r\n\t\t# enumerate target variables\r\n\t\tfor j in range(predicted.shape[0]):\r\n\t\t\t# enumerate lead times\r\n\t\t\tfor k in range(len(lead_times)):\r\n\t\t\t\t# skip if actual is nan\r\n\t\t\t\tif isnan(actual[j, k]):\r\n\t\t\t\t\tcontinue\r\n\t\t\t\t# calculate error\r\n\t\t\t\terror = calculate_error(actual[j, k], predicted[j, k])\r\n\t\t\t\t# update statistics\r\n\t\t\t\ttotal_mae += error\r\n\t\t\t\ttimes_mae[k] += error\r\n\t\t\t\ttotal_c += 
1\r\n\t\t\t\ttimes_c[k] += 1\r\n\t# normalize summed absolute errors\r\n\ttotal_mae \/= total_c\r\n\ttimes_mae = [times_mae[i]\/times_c[i] for i in range(len(times_mae))]\r\n\treturn total_mae, times_mae<\/pre>\n<p>Once we have the evaluation of a model, we can present it.<\/p>\n<p>The <em>summarize_error()<\/em> function below first prints a one-line summary of a model\u2019s performance then creates a plot of MAE per forecast lead time.<\/p>\n<pre class=\"crayon-plain-tag\"># summarize scores\r\ndef summarize_error(name, total_mae, times_mae):\r\n\t# print summary\r\n\tlead_times = get_lead_times()\r\n\tformatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]\r\n\ts_scores = ', '.join(formatted)\r\n\tprint('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))\r\n\t# plot summary\r\n\tpyplot.plot([str(x) for x in lead_times], times_mae, marker='.')\r\n\tpyplot.show()<\/pre>\n<p>We are now ready to start exploring the performance of naive forecasting methods.<\/p>\n<h2>Global Naive Methods<\/h2>\n<p>In this section, we will explore naive forecast methods that use all data in the training dataset, not constrained to the chunk for which we are making a prediction.<\/p>\n<p>We will look at two approaches:<\/p>\n<ul>\n<li>Forecast Average Value per Series<\/li>\n<li>Forecast Average Value for Hour-of-Day per Series<\/li>\n<\/ul>\n<h3>Forecast Average Value per Series<\/h3>\n<p>The first step is to implement a general function for making a forecast for each chunk.<\/p>\n<p>The function takes the training dataset and the input columns (chunk id, position in chunk, and hour) for the test set and returns forecasts for all chunks with the expected 3D format of <em>[chunk][variable][time]<\/em>.<\/p>\n<p>The function enumerates the chunks in the forecast, then enumerates the 39 target columns, calling another new function named <em>forecast_variable()<\/em> in order to make a prediction for each lead time for a given target variable.<\/p>\n<p>The 
complete function is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast for each chunk, returns [chunk][variable][time]\r\ndef forecast_chunks(train_chunks, test_input):\r\n\tlead_times = get_lead_times()\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor i in range(len(train_chunks)):\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(39):\r\n\t\t\tyhat = forecast_variable(train_chunks, train_chunks[i], test_input[i], lead_times, j)\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)<\/pre>\n<p>We can now implement a version of the <em>forecast_variable()<\/em> that calculates the mean for a given series and forecasts that mean for each lead time.<\/p>\n<p>First, we must collect all observations in the target column across all chunks, then calculate the average of the observations while also ignoring the NaN values. 
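<\/p>\n<p>As a quick, made-up illustration of averaging while skipping missing entries:<\/p>\n

```python
# nanmean averages only the non-NaN entries of an array
from numpy import nan, nanmean

obs = [1.0, 2.0, nan, 3.0]
value = nanmean(obs)
print(value)  # mean of 1.0, 2.0 and 3.0
```

\n<p>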
The <em>nanmean()<\/em> NumPy function will calculate the mean of an array and ignore <em>NaN<\/em> values.<\/p>\n<p>The <em>forecast_variable()<\/em> function below implements this behavior.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# collect obs from all chunks\r\n\tall_obs = list()\r\n\tfor chunk in train_chunks:\r\n\t\tall_obs += [x for x in chunk[:, col_ix]]\r\n\t# return the average, ignoring nan\r\n\tvalue = nanmean(all_obs)\r\n\treturn [value for _ in lead_times]<\/pre>\n<p>We now have everything we need.<\/p>\n<p>The complete example of forecasting the global mean for each series across all forecast lead times is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast global mean\r\nfrom numpy import loadtxt\r\nfrom numpy import nan\r\nfrom numpy import isnan\r\nfrom numpy import count_nonzero\r\nfrom numpy import unique\r\nfrom numpy import array\r\nfrom numpy import nanmean\r\nfrom matplotlib import pyplot\r\n\r\n# split the dataset by 'chunkID', return a list of chunks\r\ndef to_chunks(values, chunk_ix=0):\r\n\tchunks = list()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks.append(values[selection, :])\r\n\treturn chunks\r\n\r\n# return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2 ,3, 4, 5, 10, 17, 24, 48, 72]\r\n\r\n# forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# collect obs from all chunks\r\n\tall_obs = list()\r\n\tfor chunk in train_chunks:\r\n\t\tall_obs += [x for x in chunk[:, 
col_ix]]\r\n\t# return the average, ignoring nan\r\n\tvalue = nanmean(all_obs)\r\n\treturn [value for _ in lead_times]\r\n\r\n# forecast for each chunk, returns [chunk][variable][time]\r\ndef forecast_chunks(train_chunks, test_input):\r\n\tlead_times = get_lead_times()\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor i in range(len(train_chunks)):\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(39):\r\n\t\t\tyhat = forecast_variable(train_chunks, train_chunks[i], test_input[i], lead_times, j)\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# convert the test dataset in chunks to [chunk][variable][time] format\r\ndef prepare_test_forecasts(test_chunks):\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor rows in test_chunks:\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(3, rows.shape[1]):\r\n\t\t\tyhat = rows[:, j]\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# calculate the error between an actual and predicted value\r\ndef calculate_error(actual, predicted):\r\n\t# give the full actual value if predicted is nan\r\n\tif isnan(predicted):\r\n\t\treturn abs(actual)\r\n\t# calculate abs difference\r\n\treturn abs(actual - predicted)\r\n\r\n# evaluate a forecast in the format [chunk][variable][time]\r\ndef evaluate_forecasts(predictions, testset):\r\n\tlead_times = get_lead_times()\r\n\ttotal_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]\r\n\ttotal_c, times_c = 0, [0 for _ in range(len(lead_times))]\r\n\t# enumerate test chunks\r\n\tfor i in range(len(testset)):\r\n\t\t# convert to forecasts\r\n\t\tactual = testset[i]\r\n\t\tpredicted = predictions[i]\r\n\t\t# 
enumerate target variables\r\n\t\tfor j in range(predicted.shape[0]):\r\n\t\t\t# enumerate lead times\r\n\t\t\tfor k in range(len(lead_times)):\r\n\t\t\t\t# skip if actual is nan\r\n\t\t\t\tif isnan(actual[j, k]):\r\n\t\t\t\t\tcontinue\r\n\t\t\t\t# calculate error\r\n\t\t\t\terror = calculate_error(actual[j, k], predicted[j, k])\r\n\t\t\t\t# update statistics\r\n\t\t\t\ttotal_mae += error\r\n\t\t\t\ttimes_mae[k] += error\r\n\t\t\t\ttotal_c += 1\r\n\t\t\t\ttimes_c[k] += 1\r\n\t# normalize summed absolute errors\r\n\ttotal_mae \/= total_c\r\n\ttimes_mae = [times_mae[i]\/times_c[i] for i in range(len(times_mae))]\r\n\treturn total_mae, times_mae\r\n\r\n# summarize scores\r\ndef summarize_error(name, total_mae, times_mae):\r\n\t# print summary\r\n\tlead_times = get_lead_times()\r\n\tformatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]\r\n\ts_scores = ', '.join(formatted)\r\n\tprint('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))\r\n\t# plot summary\r\n\tpyplot.plot([str(x) for x in lead_times], times_mae, marker='.')\r\n\tpyplot.show()\r\n\r\n# load dataset\r\ntrain = loadtxt('AirQualityPrediction\/naive_train.csv', delimiter=',')\r\ntest = loadtxt('AirQualityPrediction\/naive_test.csv', delimiter=',')\r\n# group data by chunks\r\ntrain_chunks = to_chunks(train)\r\ntest_chunks = to_chunks(test)\r\n# forecast\r\ntest_input = [rows[:, :3] for rows in test_chunks]\r\nforecast = forecast_chunks(train_chunks, test_input)\r\n# evaluate forecast\r\nactual = prepare_test_forecasts(test_chunks)\r\ntotal_mae, times_mae = evaluate_forecasts(forecast, actual)\r\n# summarize forecast\r\nsummarize_error('Global Mean', total_mae, times_mae)<\/pre>\n<p>Running the example first prints the overall MAE of 0.634, followed by the MAE scores for each forecast lead time.<\/p>\n<pre class=\"crayon-plain-tag\"># Global Mean: [0.634 MAE] +1 0.635, +2 0.629, +3 0.638, +4 0.650, +5 0.649, +10 0.635, +17 0.634, +24 0.641, +48 0.613, +72 0.618<\/pre>\n<p>A 
line plot is created showing the MAE scores for each forecast lead time from +1 hour to +72 hours.<\/p>\n<p>We cannot see any obvious relationship between forecast lead time and forecast error, as we might expect with a more skillful model.<\/p>\n<div id=\"attachment_6314\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6314\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Mean.png\" alt=\"MAE by Forecast Lead Time With Global Mean\" width=\"1280\" height=\"960\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Mean.png 1280w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Mean-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Mean-768x576.png 768w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Mean-1024x768.png 1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p class=\"wp-caption-text\">MAE by Forecast Lead Time With Global Mean<\/p>\n<\/div>\n<p>We can update the example to forecast the global median instead of the mean.<\/p>\n<p>The median may make more sense to use as a central tendency than the mean for this data given the non-Gaussian-like distribution the data seems to show.<\/p>\n<p>NumPy provides the <em>nanmedian()<\/em> function that we can use in place of <em>nanmean()<\/em> in the <em>forecast_variable()<\/em> function.<\/p>\n<p>The complete updated example is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast global median\r\nfrom numpy import loadtxt\r\nfrom numpy import nan\r\nfrom numpy import isnan\r\nfrom numpy import 
count_nonzero\r\nfrom numpy import unique\r\nfrom numpy import array\r\nfrom numpy import nanmedian\r\nfrom matplotlib import pyplot\r\n\r\n# split the dataset by 'chunkID', return a list of chunks\r\ndef to_chunks(values, chunk_ix=0):\r\n\tchunks = list()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks.append(values[selection, :])\r\n\treturn chunks\r\n\r\n# return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2 ,3, 4, 5, 10, 17, 24, 48, 72]\r\n\r\n# forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# collect obs from all chunks\r\n\tall_obs = list()\r\n\tfor chunk in train_chunks:\r\n\t\tall_obs += [x for x in chunk[:, col_ix]]\r\n\t# return the average, ignoring nan\r\n\tvalue = nanmedian(all_obs)\r\n\treturn [value for _ in lead_times]\r\n\r\n# forecast for each chunk, returns [chunk][variable][time]\r\ndef forecast_chunks(train_chunks, test_input):\r\n\tlead_times = get_lead_times()\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor i in range(len(train_chunks)):\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(39):\r\n\t\t\tyhat = forecast_variable(train_chunks, train_chunks[i], test_input[i], lead_times, j)\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# convert the test dataset in chunks to [chunk][variable][time] format\r\ndef prepare_test_forecasts(test_chunks):\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor rows in test_chunks:\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = 
list()\r\n\t\tfor j in range(3, rows.shape[1]):\r\n\t\t\tyhat = rows[:, j]\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# calculate the error between an actual and predicted value\r\ndef calculate_error(actual, predicted):\r\n\t# give the full actual value if predicted is nan\r\n\tif isnan(predicted):\r\n\t\treturn abs(actual)\r\n\t# calculate abs difference\r\n\treturn abs(actual - predicted)\r\n\r\n# evaluate a forecast in the format [chunk][variable][time]\r\ndef evaluate_forecasts(predictions, testset):\r\n\tlead_times = get_lead_times()\r\n\ttotal_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]\r\n\ttotal_c, times_c = 0, [0 for _ in range(len(lead_times))]\r\n\t# enumerate test chunks\r\n\tfor i in range(len(predictions)):\r\n\t\t# convert to forecasts\r\n\t\tactual = testset[i]\r\n\t\tpredicted = predictions[i]\r\n\t\t# enumerate target variables\r\n\t\tfor j in range(predicted.shape[0]):\r\n\t\t\t# enumerate lead times\r\n\t\t\tfor k in range(len(lead_times)):\r\n\t\t\t\t# skip if actual is nan\r\n\t\t\t\tif isnan(actual[j, k]):\r\n\t\t\t\t\tcontinue\r\n\t\t\t\t# calculate error\r\n\t\t\t\terror = calculate_error(actual[j, k], predicted[j, k])\r\n\t\t\t\t# update statistics\r\n\t\t\t\ttotal_mae += error\r\n\t\t\t\ttimes_mae[k] += error\r\n\t\t\t\ttotal_c += 1\r\n\t\t\t\ttimes_c[k] += 1\r\n\t# normalize summed absolute errors\r\n\ttotal_mae \/= total_c\r\n\ttimes_mae = [times_mae[i]\/times_c[i] for i in range(len(times_mae))]\r\n\treturn total_mae, times_mae\r\n\r\n# summarize scores\r\ndef summarize_error(name, total_mae, times_mae):\r\n\t# print summary\r\n\tlead_times = get_lead_times()\r\n\tformatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]\r\n\ts_scores = ', '.join(formatted)\r\n\tprint('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))\r\n\t# plot 
summary\r\n\tpyplot.plot([str(x) for x in lead_times], times_mae, marker='.')\r\n\tpyplot.show()\r\n\r\n# load dataset\r\ntrain = loadtxt('AirQualityPrediction\/naive_train.csv', delimiter=',')\r\ntest = loadtxt('AirQualityPrediction\/naive_test.csv', delimiter=',')\r\n# group data by chunks\r\ntrain_chunks = to_chunks(train)\r\ntest_chunks = to_chunks(test)\r\n# forecast\r\ntest_input = [rows[:, :3] for rows in test_chunks]\r\nforecast = forecast_chunks(train_chunks, test_input)\r\n# evaluate forecast\r\nactual = prepare_test_forecasts(test_chunks)\r\ntotal_mae, times_mae = evaluate_forecasts(forecast, actual)\r\n# summarize forecast\r\nsummarize_error('Global Median', total_mae, times_mae)<\/pre>\n<p>Running the example shows a drop in MAE to about 0.59, suggesting that indeed using the median as the central tendency may be a better baseline strategy.<\/p>\n<pre class=\"crayon-plain-tag\">Global Median: [0.598 MAE] +1 0.601, +2 0.594, +3 0.600, +4 0.611, +5 0.615, +10 0.594, +17 0.592, +24 0.602, +48 0.585, +72 0.580<\/pre>\n<p>A line plot of MAE per lead time is also created.<\/p>\n<div id=\"attachment_6315\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6315\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median.png\" alt=\"MAE by Forecast Lead Time With Global Median\" width=\"1280\" height=\"960\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median.png 1280w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-768x576.png 768w, 
http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-1024x768.png 1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p class=\"wp-caption-text\">MAE by Forecast Lead Time With Global Median<\/p>\n<\/div>\n<h3>Forecast Average Value for Hour-of-Day per Series<\/h3>\n<p>We can update the naive model that calculates a central tendency for each series so that it only includes rows that have the same hour of day as the forecast lead time.<\/p>\n<p>For example, if the +1 lead time has the hour 6 (e.g. 0600 or 6AM), then we can find all other rows in the training dataset across all chunks for that hour and calculate the median value for a given target variable from those rows.<\/p>\n<p>We record the hour of day in the test dataset and make it available to the model when making forecasts. One wrinkle is that in some cases the test dataset did not have a record for a given lead time and one had to be invented with <em>NaN<\/em> values, including a <em>NaN<\/em> value for the hour. In these cases, no forecast is required, so we will skip them and forecast a <em>NaN<\/em> value.<\/p>\n<p>The <em>forecast_variable()<\/em> function below implements this behavior, returning forecasts for each lead time for a given variable.<\/p>\n<p>It is not very efficient; it would be much faster to pre-calculate the median values for each hour of each variable once and then forecast using a lookup table. 
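<\/p>\n<p>As a rough illustration of that lookup-table idea, the sketch below pre-computes the per-hour median of one target column across all training chunks, so each forecast becomes a dictionary lookup. The helper name <em>build_hour_lookup()<\/em> is hypothetical and not part of the tutorial code:<\/p>

```python
# sketch: pre-compute per-hour medians for one target column
# (hypothetical helper; assumes the hour of day is in column 2 as in the tutorial)
from numpy import array, isnan, nanmedian, unique

def build_hour_lookup(train_chunks, col_ix, hour_ix=2):
	# stack every row from every chunk into one 2D array
	rows = array([row for chunk in train_chunks for row in chunk])
	lookup = dict()
	for hour in unique(rows[:, hour_ix]):
		if isnan(hour):
			continue
		# median of the target column over all rows with this hour
		lookup[hour] = nanmedian(rows[rows[:, hour_ix] == hour, col_ix])
	return lookup
```

<p>A forecast for a lead time whose hour is <em>h<\/em> is then simply <em>lookup[h]<\/em>, computed once per column instead of being re-scanned for every chunk.<\/p>\n<p>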
Efficiency is not a concern at this point as we are looking for a baseline of model performance.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\tforecast = list()\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# enumerate lead times\r\n\tfor i in range(len(lead_times)):\r\n\t\t# get the hour for this forecast lead time\r\n\t\thour = chunk_test[i, 2]\r\n\t\t# check for no test data\r\n\t\tif isnan(hour):\r\n\t\t\tforecast.append(nan)\r\n\t\t\tcontinue\r\n\t\t# get all rows in training for this hour\r\n\t\tall_rows = list()\r\n\t\tfor rows in train_chunks:\r\n\t\t\t[all_rows.append(row) for row in rows[rows[:,2]==hour]]\r\n\t\t# calculate the central tendency for target\r\n\t\tall_rows = array(all_rows)\r\n\t\tvalue = nanmedian(all_rows[:, col_ix])\r\n\t\tforecast.append(value)\r\n\treturn forecast<\/pre>\n<p>The complete example of forecasting the global median value by hour of the day across all chunks is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast global median by hour of day\r\nfrom numpy import loadtxt\r\nfrom numpy import nan\r\nfrom numpy import isnan\r\nfrom numpy import count_nonzero\r\nfrom numpy import unique\r\nfrom numpy import array\r\nfrom numpy import nanmedian\r\nfrom matplotlib import pyplot\r\n\r\n# split the dataset by 'chunkID', return a list of chunks\r\ndef to_chunks(values, chunk_ix=0):\r\n\tchunks = list()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks.append(values[selection, :])\r\n\treturn chunks\r\n\r\n# return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2, 3, 4, 5, 10, 17, 24, 48, 72]\r\n\r\n# forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, 
chunk_train, chunk_test, lead_times, target_ix):\r\n\tforecast = list()\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# enumerate lead times\r\n\tfor i in range(len(lead_times)):\r\n\t\t# get the hour for this forecast lead time\r\n\t\thour = chunk_test[i, 2]\r\n\t\t# check for no test data\r\n\t\tif isnan(hour):\r\n\t\t\tforecast.append(nan)\r\n\t\t\tcontinue\r\n\t\t# get all rows in training for this hour\r\n\t\tall_rows = list()\r\n\t\tfor rows in train_chunks:\r\n\t\t\t[all_rows.append(row) for row in rows[rows[:,2]==hour]]\r\n\t\t# calculate the central tendency for target\r\n\t\tall_rows = array(all_rows)\r\n\t\tvalue = nanmedian(all_rows[:, col_ix])\r\n\t\tforecast.append(value)\r\n\treturn forecast\r\n\r\n# forecast for each chunk, returns [chunk][variable][time]\r\ndef forecast_chunks(train_chunks, test_input):\r\n\tlead_times = get_lead_times()\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor i in range(len(train_chunks)):\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(39):\r\n\t\t\tyhat = forecast_variable(train_chunks, train_chunks[i], test_input[i], lead_times, j)\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# convert the test dataset in chunks to [chunk][variable][time] format\r\ndef prepare_test_forecasts(test_chunks):\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor rows in test_chunks:\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(3, rows.shape[1]):\r\n\t\t\tyhat = rows[:, j]\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# calculate the error between an actual and predicted value\r\ndef calculate_error(actual, 
predicted):\r\n\t# give the full actual value if predicted is nan\r\n\tif isnan(predicted):\r\n\t\treturn abs(actual)\r\n\t# calculate abs difference\r\n\treturn abs(actual - predicted)\r\n\r\n# evaluate a forecast in the format [chunk][variable][time]\r\ndef evaluate_forecasts(predictions, testset):\r\n\tlead_times = get_lead_times()\r\n\ttotal_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]\r\n\ttotal_c, times_c = 0, [0 for _ in range(len(lead_times))]\r\n\t# enumerate test chunks\r\n\tfor i in range(len(predictions)):\r\n\t\t# convert to forecasts\r\n\t\tactual = testset[i]\r\n\t\tpredicted = predictions[i]\r\n\t\t# enumerate target variables\r\n\t\tfor j in range(predicted.shape[0]):\r\n\t\t\t# enumerate lead times\r\n\t\t\tfor k in range(len(lead_times)):\r\n\t\t\t\t# skip if actual is nan\r\n\t\t\t\tif isnan(actual[j, k]):\r\n\t\t\t\t\tcontinue\r\n\t\t\t\t# calculate error\r\n\t\t\t\terror = calculate_error(actual[j, k], predicted[j, k])\r\n\t\t\t\t# update statistics\r\n\t\t\t\ttotal_mae += error\r\n\t\t\t\ttimes_mae[k] += error\r\n\t\t\t\ttotal_c += 1\r\n\t\t\t\ttimes_c[k] += 1\r\n\t# normalize summed absolute errors\r\n\ttotal_mae \/= total_c\r\n\ttimes_mae = [times_mae[i]\/times_c[i] for i in range(len(times_mae))]\r\n\treturn total_mae, times_mae\r\n\r\n# summarize scores\r\ndef summarize_error(name, total_mae, times_mae):\r\n\t# print summary\r\n\tlead_times = get_lead_times()\r\n\tformatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]\r\n\ts_scores = ', '.join(formatted)\r\n\tprint('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))\r\n\t# plot summary\r\n\tpyplot.plot([str(x) for x in lead_times], times_mae, marker='.')\r\n\tpyplot.show()\r\n\r\n# load dataset\r\ntrain = loadtxt('AirQualityPrediction\/naive_train.csv', delimiter=',')\r\ntest = loadtxt('AirQualityPrediction\/naive_test.csv', delimiter=',')\r\n# group data by chunks\r\ntrain_chunks = to_chunks(train)\r\ntest_chunks = to_chunks(test)\r\n# 
forecast\r\ntest_input = [rows[:, :3] for rows in test_chunks]\r\nforecast = forecast_chunks(train_chunks, test_input)\r\n# evaluate forecast\r\nactual = prepare_test_forecasts(test_chunks)\r\ntotal_mae, times_mae = evaluate_forecasts(forecast, actual)\r\n# summarize forecast\r\nsummarize_error('Global Median by Hour', total_mae, times_mae)<\/pre>\n<p>Running the example summarizes the performance of the model with a MAE of 0.567, which is an improvement over the global median for each series.<\/p>\n<pre class=\"crayon-plain-tag\">Global Median by Hour: [0.567 MAE] +1 0.573, +2 0.565, +3 0.567, +4 0.579, +5 0.589, +10 0.559, +17 0.565, +24 0.567, +48 0.558, +72 0.551<\/pre>\n<p>A line plot of the MAE by forecast lead time is also created, showing that +72 had the lowest overall forecast error. This is interesting and suggests that hour-based information may be useful in more sophisticated models.<\/p>\n<div id=\"attachment_6316\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6316\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-By-Hour-of-Day.png\" alt=\"MAE by Forecast Lead Time With Global Median By Hour of Day\" width=\"1280\" height=\"960\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-By-Hour-of-Day.png 1280w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-By-Hour-of-Day-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-By-Hour-of-Day-768x576.png 768w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-With-Global-Median-By-Hour-of-Day-1024x768.png 
1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p class=\"wp-caption-text\">MAE by Forecast Lead Time With Global Median By Hour of Day<\/p>\n<\/div>\n<h2>Chunk Naive Methods<\/h2>\n<p>It is possible that using information specific to the chunk may have more predictive power than using global information from the entire training dataset.<\/p>\n<p>We can explore this with three local or chunk-specific naive forecasting methods; they are:<\/p>\n<ul>\n<li>Forecast Last Observation per Series<\/li>\n<li>Forecast Average Value per Series<\/li>\n<li>Forecast Average Value for Hour-of-Day per Series<\/li>\n<\/ul>\n<p>The last two of which are the chunk-specific version of the global strategies that were evaluated in the previous section.<\/p>\n<h3>Forecast Last Observation per Series<\/h3>\n<p>Forecasting the last non-NaN observation for a chunk is perhaps the simplest model, classically called the persistence model or the naive model.<\/p>\n<p>The <em>forecast_variable()<\/em> function below implements this forecast strategy.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# extract the history for the series\r\n\thistory = chunk_train[:, col_ix]\r\n\t# persist a nan if we do not find any valid data\r\n\tpersisted = nan\r\n\t# enumerate history in verse order looking for the first non-nan\r\n\tfor value in reversed(history):\r\n\t\tif not isnan(value):\r\n\t\t\tpersisted = value\r\n\t\t\tbreak\r\n\t# persist the same value for all lead times\r\n\tforecast = [persisted for _ in range(len(lead_times))]\r\n\treturn forecast<\/pre>\n<p>The complete example for evaluating the persistence forecast strategy on the test set is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># persist last observation\r\nfrom numpy import loadtxt\r\nfrom numpy import nan\r\nfrom 
numpy import isnan\r\nfrom numpy import count_nonzero\r\nfrom numpy import unique\r\nfrom numpy import array\r\nfrom numpy import nanmedian\r\nfrom matplotlib import pyplot\r\n\r\n# split the dataset by 'chunkID', return a list of chunks\r\ndef to_chunks(values, chunk_ix=0):\r\n\tchunks = list()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks.append(values[selection, :])\r\n\treturn chunks\r\n\r\n# return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2, 3, 4, 5, 10, 17, 24, 48, 72]\r\n\r\n# forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# extract the history for the series\r\n\thistory = chunk_train[:, col_ix]\r\n\t# persist a nan if we do not find any valid data\r\n\tpersisted = nan\r\n\t# enumerate history in reverse order looking for the first non-nan\r\n\tfor value in reversed(history):\r\n\t\tif not isnan(value):\r\n\t\t\tpersisted = value\r\n\t\t\tbreak\r\n\t# persist the same value for all lead times\r\n\tforecast = [persisted for _ in range(len(lead_times))]\r\n\treturn forecast\r\n\r\n# forecast for each chunk, returns [chunk][variable][time]\r\ndef forecast_chunks(train_chunks, test_input):\r\n\tlead_times = get_lead_times()\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor i in range(len(train_chunks)):\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(39):\r\n\t\t\tyhat = forecast_variable(train_chunks, train_chunks[i], test_input[i], lead_times, j)\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# convert the test 
dataset in chunks to [chunk][variable][time] format\r\ndef prepare_test_forecasts(test_chunks):\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor rows in test_chunks:\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(3, rows.shape[1]):\r\n\t\t\tyhat = rows[:, j]\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# calculate the error between an actual and predicted value\r\ndef calculate_error(actual, predicted):\r\n\t# give the full actual value if predicted is nan\r\n\tif isnan(predicted):\r\n\t\treturn abs(actual)\r\n\t# calculate abs difference\r\n\treturn abs(actual - predicted)\r\n\r\n# evaluate a forecast in the format [chunk][variable][time]\r\ndef evaluate_forecasts(predictions, testset):\r\n\tlead_times = get_lead_times()\r\n\ttotal_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]\r\n\ttotal_c, times_c = 0, [0 for _ in range(len(lead_times))]\r\n\t# enumerate test chunks\r\n\tfor i in range(len(predictions)):\r\n\t\t# convert to forecasts\r\n\t\tactual = testset[i]\r\n\t\tpredicted = predictions[i]\r\n\t\t# enumerate target variables\r\n\t\tfor j in range(predicted.shape[0]):\r\n\t\t\t# enumerate lead times\r\n\t\t\tfor k in range(len(lead_times)):\r\n\t\t\t\t# skip if actual is nan\r\n\t\t\t\tif isnan(actual[j, k]):\r\n\t\t\t\t\tcontinue\r\n\t\t\t\t# calculate error\r\n\t\t\t\terror = calculate_error(actual[j, k], predicted[j, k])\r\n\t\t\t\t# update statistics\r\n\t\t\t\ttotal_mae += error\r\n\t\t\t\ttimes_mae[k] += error\r\n\t\t\t\ttotal_c += 1\r\n\t\t\t\ttimes_c[k] += 1\r\n\t# normalize summed absolute errors\r\n\ttotal_mae \/= total_c\r\n\ttimes_mae = [times_mae[i]\/times_c[i] for i in range(len(times_mae))]\r\n\treturn total_mae, times_mae\r\n\r\n# summarize scores\r\ndef summarize_error(name, total_mae, times_mae):\r\n\t# print summary\r\n\tlead_times = 
get_lead_times()\r\n\tformatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]\r\n\ts_scores = ', '.join(formatted)\r\n\tprint('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))\r\n\t# plot summary\r\n\tpyplot.plot([str(x) for x in lead_times], times_mae, marker='.')\r\n\tpyplot.show()\r\n\r\n# load dataset\r\ntrain = loadtxt('AirQualityPrediction\/naive_train.csv', delimiter=',')\r\ntest = loadtxt('AirQualityPrediction\/naive_test.csv', delimiter=',')\r\n# group data by chunks\r\ntrain_chunks = to_chunks(train)\r\ntest_chunks = to_chunks(test)\r\n# forecast\r\ntest_input = [rows[:, :3] for rows in test_chunks]\r\nforecast = forecast_chunks(train_chunks, test_input)\r\n# evaluate forecast\r\nactual = prepare_test_forecasts(test_chunks)\r\ntotal_mae, times_mae = evaluate_forecasts(forecast, actual)\r\n# summarize forecast\r\nsummarize_error('Persistence', total_mae, times_mae)<\/pre>\n<p>Running the example prints the overall MAE and the MAE per forecast lead time.<\/p>\n<p>We can see that the persistence forecast appears to outperform all of the global strategies evaluated in the previous section.<\/p>\n<p>This adds some support to the reasonable assumption that chunk-specific information is important in modeling this problem.<\/p>\n<pre class=\"crayon-plain-tag\">Persistence: [0.520 MAE] +1 0.217, +2 0.330, +3 0.400, +4 0.471, +5 0.515, +10 0.648, +17 0.656, +24 0.589, +48 0.671, +72 0.708<\/pre>\n<p>A line plot of MAE per forecast lead time is created.<\/p>\n<p>Importantly, this plot shows the expected behavior of increasing error with the increase in forecast lead time. 
Namely, the further one predicts into the future, the more challenging it is, and in turn, the more error one would be expected to make.<\/p>\n<div id=\"attachment_6317\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6317\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Persistence.png\" alt=\"MAE by Forecast Lead Time via Persistence\" width=\"1280\" height=\"960\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Persistence.png 1280w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Persistence-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Persistence-768x576.png 768w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Persistence-1024x768.png 1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p class=\"wp-caption-text\">MAE by Forecast Lead Time via Persistence<\/p>\n<\/div>\n<h3>Forecast Average Value per Series<\/h3>\n<p>Instead of persisting the last observation for the series, we can persist the average value for the series using only the data in the chunk.<\/p>\n<p>Specifically, we can calculate the median of the series, which, as we found in the previous section, seems to lead to better performance.<\/p>\n<p>The <em>forecast_variable()<\/em> function below implements this local strategy.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# extract the history for the series\r\n\thistory = chunk_train[:, 
col_ix]\r\n\t# calculate the central tendency\r\n\tvalue = nanmedian(history)\r\n\t# persist the same value for all lead times\r\n\tforecast = [value for _ in range(len(lead_times))]\r\n\treturn forecast<\/pre>\n<p>The complete example is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast local median\r\nfrom numpy import loadtxt\r\nfrom numpy import nan\r\nfrom numpy import isnan\r\nfrom numpy import count_nonzero\r\nfrom numpy import unique\r\nfrom numpy import array\r\nfrom numpy import nanmedian\r\nfrom matplotlib import pyplot\r\n\r\n# split the dataset by 'chunkID', return a list of chunks\r\ndef to_chunks(values, chunk_ix=0):\r\n\tchunks = list()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks.append(values[selection, :])\r\n\treturn chunks\r\n\r\n# return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2, 3, 4, 5, 10, 17, 24, 48, 72]\r\n\r\n# forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# extract the history for the series\r\n\thistory = chunk_train[:, col_ix]\r\n\t# calculate the central tendency\r\n\tvalue = nanmedian(history)\r\n\t# persist the same value for all lead times\r\n\tforecast = [value for _ in range(len(lead_times))]\r\n\treturn forecast\r\n\r\n# forecast for each chunk, returns [chunk][variable][time]\r\ndef forecast_chunks(train_chunks, test_input):\r\n\tlead_times = get_lead_times()\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor i in range(len(train_chunks)):\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(39):\r\n\t\t\tyhat = forecast_variable(train_chunks, train_chunks[i], test_input[i], lead_times, 
j)\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# convert the test dataset in chunks to [chunk][variable][time] format\r\ndef prepare_test_forecasts(test_chunks):\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor rows in test_chunks:\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(3, rows.shape[1]):\r\n\t\t\tyhat = rows[:, j]\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# calculate the error between an actual and predicted value\r\ndef calculate_error(actual, predicted):\r\n\t# give the full actual value if predicted is nan\r\n\tif isnan(predicted):\r\n\t\treturn abs(actual)\r\n\t# calculate abs difference\r\n\treturn abs(actual - predicted)\r\n\r\n# evaluate a forecast in the format [chunk][variable][time]\r\ndef evaluate_forecasts(predictions, testset):\r\n\tlead_times = get_lead_times()\r\n\ttotal_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]\r\n\ttotal_c, times_c = 0, [0 for _ in range(len(lead_times))]\r\n\t# enumerate test chunks\r\n\tfor i in range(len(predictions)):\r\n\t\t# convert to forecasts\r\n\t\tactual = testset[i]\r\n\t\tpredicted = predictions[i]\r\n\t\t# enumerate target variables\r\n\t\tfor j in range(predicted.shape[0]):\r\n\t\t\t# enumerate lead times\r\n\t\t\tfor k in range(len(lead_times)):\r\n\t\t\t\t# skip if actual is nan\r\n\t\t\t\tif isnan(actual[j, k]):\r\n\t\t\t\t\tcontinue\r\n\t\t\t\t# calculate error\r\n\t\t\t\terror = calculate_error(actual[j, k], predicted[j, k])\r\n\t\t\t\t# update statistics\r\n\t\t\t\ttotal_mae += error\r\n\t\t\t\ttimes_mae[k] += error\r\n\t\t\t\ttotal_c += 1\r\n\t\t\t\ttimes_c[k] += 1\r\n\t# normalize summed absolute errors\r\n\ttotal_mae \/= total_c\r\n\ttimes_mae = 
[times_mae[i]\/times_c[i] for i in range(len(times_mae))]\r\n\treturn total_mae, times_mae\r\n\r\n# summarize scores\r\ndef summarize_error(name, total_mae, times_mae):\r\n\t# print summary\r\n\tlead_times = get_lead_times()\r\n\tformatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]\r\n\ts_scores = ', '.join(formatted)\r\n\tprint('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))\r\n\t# plot summary\r\n\tpyplot.plot([str(x) for x in lead_times], times_mae, marker='.')\r\n\tpyplot.show()\r\n\r\n# load dataset\r\ntrain = loadtxt('AirQualityPrediction\/naive_train.csv', delimiter=',')\r\ntest = loadtxt('AirQualityPrediction\/naive_test.csv', delimiter=',')\r\n# group data by chunks\r\ntrain_chunks = to_chunks(train)\r\ntest_chunks = to_chunks(test)\r\n# forecast\r\ntest_input = [rows[:, :3] for rows in test_chunks]\r\nforecast = forecast_chunks(train_chunks, test_input)\r\n# evaluate forecast\r\nactual = prepare_test_forecasts(test_chunks)\r\ntotal_mae, times_mae = evaluate_forecasts(forecast, actual)\r\n# summarize forecast\r\nsummarize_error('Local Median', total_mae, times_mae)<\/pre>\n<p>Running the example summarizes the performance of this naive strategy, showing a MAE of about 0.568, which is worse than the above persistence strategy.<\/p>\n<pre class=\"crayon-plain-tag\">Local Median: [0.568 MAE] +1 0.535, +2 0.542, +3 0.550, +4 0.568, +5 0.568, +10 0.562, +17 0.567, +24 0.605, +48 0.590, +72 0.593<\/pre>\n<p>A line plot of MAE per forecast lead time is also created showing the familiar increasing curve of error per lead time.<\/p>\n<div id=\"attachment_6318\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6318\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median.png\" alt=\"MAE by Forecast Lead Time via Local Median\" width=\"1280\" height=\"960\" 
srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median.png 1280w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-768x576.png 768w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-1024x768.png 1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p class=\"wp-caption-text\">MAE by Forecast Lead Time via Local Median<\/p>\n<\/div>\n<h3>Forecast Average Value for Hour-of-Day per Series<\/h3>\n<p>Finally, we can refine the persistence strategy by using the average value per series for the specific hour of day at each forecast lead time.<\/p>\n<p>This approach was found to be effective for the global strategy. 
It may be effective using only the data from the chunk, although at the risk of using a much smaller data sample.<\/p>\n<p>The <em>forecast_variable()<\/em> function below implements this strategy, first finding all rows with the hour of the forecast lead time, then calculating the median of those rows for the given target variable.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast all lead times for one variable\r\ndef forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\tforecast = list()\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# enumerate lead times\r\n\tfor i in range(len(lead_times)):\r\n\t\t# get the hour for this forecast lead time\r\n\t\thour = chunk_test[i, 2]\r\n\t\t# check for no test data\r\n\t\tif isnan(hour):\r\n\t\t\tforecast.append(nan)\r\n\t\t\tcontinue\r\n\t\t# select rows in chunk with this hour\r\n\t\tselected = chunk_train[chunk_train[:,2]==hour]\r\n\t\t# calculate the central tendency for target\r\n\t\tvalue = nanmedian(selected[:, col_ix])\r\n\t\tforecast.append(value)\r\n\treturn forecast<\/pre>\n<p>The complete example is listed below.<\/p>\n<pre class=\"crayon-plain-tag\"># forecast local median per hour of day\r\nfrom numpy import loadtxt\r\nfrom numpy import nan\r\nfrom numpy import isnan\r\nfrom numpy import unique\r\nfrom numpy import array\r\nfrom numpy import nanmedian\r\nfrom matplotlib import pyplot\r\n\r\n# split the dataset by 'chunkID', return a list of chunks\r\ndef to_chunks(values, chunk_ix=0):\r\n\tchunks = list()\r\n\t# get the unique chunk ids\r\n\tchunk_ids = unique(values[:, chunk_ix])\r\n\t# group rows by chunk id\r\n\tfor chunk_id in chunk_ids:\r\n\t\tselection = values[:, chunk_ix] == chunk_id\r\n\t\tchunks.append(values[selection, :])\r\n\treturn chunks\r\n\r\n# return a list of relative forecast lead times\r\ndef get_lead_times():\r\n\treturn [1, 2, 3, 4, 5, 10, 17, 24, 48, 72]\r\n\r\n# forecast all lead times for one variable\r\ndef 
forecast_variable(train_chunks, chunk_train, chunk_test, lead_times, target_ix):\r\n\tforecast = list()\r\n\t# convert target number into column number\r\n\tcol_ix = 3 + target_ix\r\n\t# enumerate lead times\r\n\tfor i in range(len(lead_times)):\r\n\t\t# get the hour for this forecast lead time\r\n\t\thour = chunk_test[i, 2]\r\n\t\t# check for no test data\r\n\t\tif isnan(hour):\r\n\t\t\tforecast.append(nan)\r\n\t\t\tcontinue\r\n\t\t# select rows in chunk with this hour\r\n\t\tselected = chunk_train[chunk_train[:,2]==hour]\r\n\t\t# calculate the central tendency for target\r\n\t\tvalue = nanmedian(selected[:, col_ix])\r\n\t\tforecast.append(value)\r\n\treturn forecast\r\n\r\n# forecast for each chunk, returns [chunk][variable][time]\r\ndef forecast_chunks(train_chunks, test_input):\r\n\tlead_times = get_lead_times()\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor i in range(len(train_chunks)):\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(39):\r\n\t\t\tyhat = forecast_variable(train_chunks, train_chunks[i], test_input[i], lead_times, j)\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# convert the test dataset in chunks to [chunk][variable][time] format\r\ndef prepare_test_forecasts(test_chunks):\r\n\tpredictions = list()\r\n\t# enumerate chunks to forecast\r\n\tfor rows in test_chunks:\r\n\t\t# enumerate targets for chunk\r\n\t\tchunk_predictions = list()\r\n\t\tfor j in range(3, rows.shape[1]):\r\n\t\t\tyhat = rows[:, j]\r\n\t\t\tchunk_predictions.append(yhat)\r\n\t\tchunk_predictions = array(chunk_predictions)\r\n\t\tpredictions.append(chunk_predictions)\r\n\treturn array(predictions)\r\n\r\n# calculate the error between an actual and predicted value\r\ndef calculate_error(actual, predicted):\r\n\t# give the full actual value if predicted is nan\r\n\tif 
isnan(predicted):\r\n\t\treturn abs(actual)\r\n\t# calculate abs difference\r\n\treturn abs(actual - predicted)\r\n\r\n# evaluate a forecast in the format [chunk][variable][time]\r\ndef evaluate_forecasts(predictions, testset):\r\n\tlead_times = get_lead_times()\r\n\ttotal_mae, times_mae = 0.0, [0.0 for _ in range(len(lead_times))]\r\n\ttotal_c, times_c = 0, [0 for _ in range(len(lead_times))]\r\n\t# enumerate test chunks\r\n\tfor i in range(len(testset)):\r\n\t\t# convert to forecasts\r\n\t\tactual = testset[i]\r\n\t\tpredicted = predictions[i]\r\n\t\t# enumerate target variables\r\n\t\tfor j in range(predicted.shape[0]):\r\n\t\t\t# enumerate lead times\r\n\t\t\tfor k in range(len(lead_times)):\r\n\t\t\t\t# skip if actual is nan\r\n\t\t\t\tif isnan(actual[j, k]):\r\n\t\t\t\t\tcontinue\r\n\t\t\t\t# calculate error\r\n\t\t\t\terror = calculate_error(actual[j, k], predicted[j, k])\r\n\t\t\t\t# update statistics\r\n\t\t\t\ttotal_mae += error\r\n\t\t\t\ttimes_mae[k] += error\r\n\t\t\t\ttotal_c += 1\r\n\t\t\t\ttimes_c[k] += 1\r\n\t# normalize summed absolute errors\r\n\ttotal_mae \/= total_c\r\n\ttimes_mae = [times_mae[i]\/times_c[i] for i in range(len(times_mae))]\r\n\treturn total_mae, times_mae\r\n\r\n# summarize scores\r\ndef summarize_error(name, total_mae, times_mae):\r\n\t# print summary\r\n\tlead_times = get_lead_times()\r\n\tformatted = ['+%d %.3f' % (lead_times[i], times_mae[i]) for i in range(len(lead_times))]\r\n\ts_scores = ', '.join(formatted)\r\n\tprint('%s: [%.3f MAE] %s' % (name, total_mae, s_scores))\r\n\t# plot summary\r\n\tpyplot.plot([str(x) for x in lead_times], times_mae, marker='.')\r\n\tpyplot.show()\r\n\r\n# load dataset\r\ntrain = loadtxt('AirQualityPrediction\/naive_train.csv', delimiter=',')\r\ntest = loadtxt('AirQualityPrediction\/naive_test.csv', delimiter=',')\r\n# group data by chunks\r\ntrain_chunks = to_chunks(train)\r\ntest_chunks = to_chunks(test)\r\n# forecast\r\ntest_input = [rows[:, :3] for rows in test_chunks]\r\nforecast = 
forecast_chunks(train_chunks, test_input)\r\n# evaluate forecast\r\nactual = prepare_test_forecasts(test_chunks)\r\ntotal_mae, times_mae = evaluate_forecasts(forecast, actual)\r\n# summarize forecast\r\nsummarize_error('Local Median by Hour', total_mae, times_mae)<\/pre>\n<p>Running the example prints an overall MAE of about 0.574, which is worse than the global variation of the same strategy.<\/p>\n<p>As suspected, this is likely due to the small sample size; that is, at most five rows of training data contribute to each forecast.<\/p>\n<pre class=\"crayon-plain-tag\">Local Median by Hour: [0.574 MAE] +1 0.561, +2 0.559, +3 0.568, +4 0.577, +5 0.577, +10 0.556, +17 0.551, +24 0.588, +48 0.601, +72 0.608<\/pre>\n<p>A line plot of MAE per forecast lead time is also created showing the familiar increasing curve of error per lead time.<\/p>\n<div id=\"attachment_6319\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6319\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-By-Hour-of-Day.png\" alt=\"MAE by Forecast Lead Time via Local Median By Hour of Day\" width=\"1280\" height=\"960\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-By-Hour-of-Day.png 1280w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-By-Hour-of-Day-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-By-Hour-of-Day-768x576.png 768w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/MAE-by-Forecast-Lead-Time-via-Local-Median-By-Hour-of-Day-1024x768.png 1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p 
class=\"wp-caption-text\">MAE by Forecast Lead Time via Local Median By Hour of Day<\/p>\n<\/div>\n<h2>Summary of Results<\/h2>\n<p>We can summarize the performance of all of the naive forecast methods reviewed in this tutorial.<\/p>\n<p>The example below lists each method using a shorthand of \u2018<em>g<\/em>\u2018 for global, \u2018<em>l<\/em>\u2018 for local, and \u2018<em>h<\/em>\u2018 for the hour-of-day variations. The example creates a bar chart so that we can compare the naive strategies based on their relative performance.<\/p>\n<pre class=\"crayon-plain-tag\"># summary of results\r\nfrom matplotlib import pyplot\r\n# results\r\nresults = {\r\n\t'g-mean':0.634,\r\n\t'g-med':0.598,\r\n\t'g-med-h':0.567,\r\n\t'l-per':0.520,\r\n\t'l-med':0.568,\r\n\t'l-med-h':0.574}\r\n# plot\r\npyplot.bar(results.keys(), results.values())\r\nlocs, labels = pyplot.xticks()\r\npyplot.setp(labels, rotation=30)\r\npyplot.show()<\/pre>\n<p>Running the example creates a bar chart comparing the MAE for each of the six strategies.<\/p>\n<p>We can see that the persistence strategy was better than all of the other methods and that the second-best strategy was the global median per series by hour of day.<\/p>\n<p>Models evaluated on this train\/test separation of the dataset must achieve an overall MAE lower than 0.520 in order to be considered skillful.<\/p>\n<div id=\"attachment_6320\" style=\"width: 1290px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-6320\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2018\/07\/Bar-Chart-with-Summary-of-Naive-Forecast-Methods.png\" alt=\"Bar Chart with Summary of Naive Forecast Methods\" width=\"1280\" height=\"960\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/Bar-Chart-with-Summary-of-Naive-Forecast-Methods.png 1280w, 
http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/Bar-Chart-with-Summary-of-Naive-Forecast-Methods-300x225.png 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/Bar-Chart-with-Summary-of-Naive-Forecast-Methods-768x576.png 768w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2018\/07\/Bar-Chart-with-Summary-of-Naive-Forecast-Methods-1024x768.png 1024w\" sizes=\"(max-width: 1280px) 100vw, 1280px\"><\/p>\n<p class=\"wp-caption-text\">Bar Chart with Summary of Naive Forecast Methods<\/p>\n<\/div>\n<h2>Extensions<\/h2>\n<p>This section lists some ideas for extending the tutorial that you may wish to explore.<\/p>\n<ul>\n<li><strong>Cross-Site Naive Forecast<\/strong>. Develop a naive forecast strategy that uses information about each variable across sites, e.g. the same underlying variable recorded as different target variables at different sites.<\/li>\n<li><strong>Hybrid Approach<\/strong>. Develop a hybrid forecast strategy that combines elements of two or more of the naive forecast strategies described in this tutorial at different lead times.<\/li>\n<li><strong>Ensemble of Naive Methods<\/strong>. 
Develop an ensemble forecast strategy that creates a linear combination of two or more forecast strategies described in this tutorial.<\/li>\n<\/ul>\n<p>If you explore any of these extensions, I\u2019d love to know.<\/p>\n<h2>Further Reading<\/h2>\n<p>This section provides more resources on the topic if you are looking to go deeper.<\/p>\n<h3>Posts<\/h3>\n<ul>\n<li><a href=\"https:\/\/machinelearningmastery.com\/standard-multivariate-multi-step-multi-site-time-series-forecasting-problem\/\">A Standard Multivariate, Multi-Step, and Multi-Site Time Series Forecasting Problem<\/a><\/li>\n<li><a href=\"https:\/\/machinelearningmastery.com\/persistence-time-series-forecasting-with-python\/\">How to Make Baseline Predictions for Time Series Forecasting with Python<\/a><\/li>\n<\/ul>\n<h3>Articles<\/h3>\n<ul>\n<li><a href=\"https:\/\/www.kaggle.com\/c\/dsg-hackathon\/data\">EMC Data Science Global Hackathon (Air Quality Prediction)<\/a><\/li>\n<li><a href=\"http:\/\/blog.kaggle.com\/2012\/05\/01\/chucking-everything-into-a-random-forest-ben-hamner-on-winning-the-air-quality-prediction-hackathon\/\">Chucking everything into a Random Forest: Ben Hamner on Winning The Air Quality Prediction Hackathon<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/benhamner\/Air-Quality-Prediction-Hackathon-Winning-Model\">Winning Code for the EMC Data Science Global Hackathon (Air Quality Prediction)<\/a><\/li>\n<li><a href=\"https:\/\/www.kaggle.com\/c\/dsg-hackathon\/discussion\/1821\">General approaches to partitioning the models?<\/a><\/li>\n<\/ul>\n<h2>Summary<\/h2>\n<p>In this tutorial, you discovered how to develop naive forecasting methods for the multistep multivariate air pollution time series forecasting problem.<\/p>\n<p>Specifically, you learned:<\/p>\n<ul>\n<li>How to develop a test harness for evaluating forecasting strategies for the air pollution dataset.<\/li>\n<li>How to develop global naive forecast strategies that use data from the entire training 
dataset.<\/li>\n<li>How to develop local naive forecast strategies that use data from the specific interval that is being forecasted.<\/li>\n<\/ul>\n<p>Do you have any questions?<br \/>\nAsk your questions in the comments below and I will do my best to answer.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/how-to-develop-baseline-forecasts-for-multi-site-multivariate-air-pollution-time-series-forecasting\/\">How to Develop Baseline Forecasts for Multi-Site Multivariate Air Pollution Time Series Forecasting<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/\">Machine Learning Mastery<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/machinelearningmastery.com\/how-to-develop-baseline-forecasts-for-multi-site-multivariate-air-pollution-time-series-forecasting\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Jason Brownlee Real-world time series forecasting is challenging for a whole host of reasons not limited to problem features such as having multiple input [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2018\/10\/14\/how-to-develop-baseline-forecasts-for-multi-site-multivariate-air-pollution-time-series-forecasting\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":1162,"comment_status":"registered_only","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1161"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=1161"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/1161\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/1162"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=1161"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=1161"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=1161"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}