{"id":3681,"date":"2020-07-19T19:00:08","date_gmt":"2020-07-19T19:00:08","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2020\/07\/19\/add-binary-flags-for-missing-values-for-machine-learning\/"},"modified":"2020-07-19T19:00:08","modified_gmt":"2020-07-19T19:00:08","slug":"add-binary-flags-for-missing-values-for-machine-learning","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2020\/07\/19\/add-binary-flags-for-missing-values-for-machine-learning\/","title":{"rendered":"Add Binary Flags for Missing Values for Machine Learning"},"content":{"rendered":"<p>Author: Jason Brownlee<\/p>\n<div>\n<p>Missing values can cause problems when modeling classification and regression prediction problems with machine learning algorithms.<\/p>\n<p>A common approach is to replace missing values with a calculated statistic, such as the mean of the column. This allows the dataset to be modeled as per normal but gives no indication to the model that the row original contained missing values.<\/p>\n<p>One approach to address this issue is to include additional binary flag input features that indicate whether a row or a column contained a missing value that was imputed. This additional information may or may not be helpful to the model in predicting the target value.<\/p>\n<p>In this tutorial, you will discover how to <strong>add binary flags for missing values<\/strong> for modeling.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>How to load and evaluate models with statistical imputation on a classification dataset with missing values.<\/li>\n<li>How to add a flag that indicates if a row has one more missing values and evaluate models with this new feature.<\/li>\n<li>How to add a flag for each input variable that has missing values and evaluate models with these new features.<\/li>\n<\/ul>\n<p>Discover data cleaning, feature selection, data transforms, dimensionality reduction and much more <a href=\"https:\/\/machinelearningmastery.com\/data-preparation-for-machine-learning\/\">in my new book<\/a>, with 30 step-by-step tutorials and full Python source code.<\/p>\n<p>Let&rsquo;s get started.<\/p>\n<div id=\"attachment_11047\" style=\"width: 810px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-11047\" class=\"size-full wp-image-11047\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/07\/Add-Binary-Flags-for-Missing-Values-for-Machine-Learning.jpg\" alt=\"Add Binary Flags for Missing Values for Machine Learning\" width=\"800\" height=\"489\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2020\/07\/Add-Binary-Flags-for-Missing-Values-for-Machine-Learning.jpg 800w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2020\/07\/Add-Binary-Flags-for-Missing-Values-for-Machine-Learning-300x183.jpg 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2020\/07\/Add-Binary-Flags-for-Missing-Values-for-Machine-Learning-768x469.jpg 768w\" sizes=\"(max-width: 800px) 100vw, 800px\"><\/p>\n<p id=\"caption-attachment-11047\" class=\"wp-caption-text\">Add Binary Flags for Missing Values for Machine Learning<br \/>Photo by <a href=\"https:\/\/www.flickr.com\/photos\/82026782@N05\/7840032194\/\">keith o connell<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2>Tutorial Overview<\/h2>\n<p>This tutorial is divided into three parts; they are:<\/p>\n<ol>\n<li>Imputing 
1. Imputing the Horse Colic Dataset
2. Model With a Binary Flag for Missing Values
3. Model With Indicators of All Missing Values

## Imputing the Horse Colic Dataset

The horse colic dataset describes medical characteristics of horses with colic and whether they lived or died.

There are 300 rows and 28 columns. The dataset is usually framed as predicting 1 if the horse lived and 2 if the horse died, but there are many fields we could select to predict. In this case, we will predict whether the problem was surgical or not (column index 23), making it a binary classification problem with the remaining 27 columns as input variables.

The dataset has numerous [missing values](https://machinelearningmastery.com/statistical-imputation-for-missing-values-in-machine-learning/) in many of the columns, where each missing value is marked with a question mark character ("?").

Below is an example of rows from the dataset with marked missing values.

```
2,1,530101,38.50,66,28,3,3,?,2,5,4,4,?,?,?,3,5,45.00,8.40,?,?,2,2,11300,00000,00000,2
1,1,534817,39.2,88,20,?,?,4,1,3,4,2,?,?,?,4,2,50,85,2,2,3,2,02208,00000,00000,2
2,1,530334,38.30,40,24,1,1,3,1,3,3,1,?,?,?,1,1,33.00,6.70,?,?,1,2,00000,00000,00000,1
1,9,5290409,39.10,164,84,4,1,6,2,2,4,4,1,2,5.00,3,?,48.00,7.20,3,5.30,2,1,02208,00000,00000,1
...
```

You can learn more about the dataset here:

- [Horse Colic Dataset](https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv)
- [Horse Colic Dataset Description](https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.names)

There is no need to download the dataset; it is downloaded automatically in the worked examples.

It is a best practice to mark missing values with a NaN (not a number) value when loading a dataset in Python.

We can load the dataset using the [read_csv() Pandas function](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) and specify the "na_values" argument so that values of '?' are loaded as missing and marked with a NaN value.

The example below downloads the dataset, marks "?" values as NaN (missing), and summarizes the shape of the dataset.

```python
# summarize the horse colic dataset
from pandas import read_csv
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
data = dataframe.values
# split into input and output elements
ix = [i for i in range(data.shape[1]) if i != 23]
X, y = data[:, ix], data[:, 23]
print(X.shape, y.shape)
```

Running the example downloads the dataset and reports the number of rows and columns, matching our expectations.

```
(300, 27) (300,)
```
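As a quick sanity check, we can also summarize how many missing values each column contains. This snippet is not part of the original worked examples, but a minimal sketch using the same loading code might look like this:

```python
# count the number of missing values in each column (a sketch)
from pandas import read_csv
# load dataset, marking '?' values as NaN
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# count NaN values per column and report only the columns that have any
n_miss = dataframe.isnull().sum()
print(n_miss[n_miss > 0])
```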
Next, we can evaluate a model on this dataset.

We can use the [SimpleImputer class](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) to perform statistical imputation and replace the missing values with the mean of each column. We can then fit a random forest model on the dataset.

For more on how to use the SimpleImputer class, see the tutorial:

- [Statistical Imputation for Missing Values in Machine Learning](https://machinelearningmastery.com/statistical-imputation-for-missing-values-in-machine-learning/)

To achieve this, we will define a pipeline that first performs imputation and then fits the model, and we will evaluate this modeling pipeline using repeated stratified k-fold cross-validation with three repeats and 10 folds.

The complete example is listed below.

```python
# evaluate mean imputation and random forest for the horse colic dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
ix = [i for i in range(data.shape[1]) if i != 23]
X, y = data[:, ix], data[:, 23]
# define modeling pipeline
model = RandomForestClassifier()
imputer = SimpleImputer()
pipeline = Pipeline(steps=[('i', imputer), ('m', model)])
# define model evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

Running the example evaluates the random forest with mean statistical imputation on the horse colic dataset.

Your specific results may vary given the stochastic nature of the learning algorithm, the stochastic nature of the evaluation procedure, and differences in precision across machines. Try running the example a few times.

In this case, the pipeline achieved an estimated classification accuracy of about 86.2 percent.

```
Mean Accuracy: 0.862 (0.056)
```
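Although the tutorial focuses on evaluation, the same pipeline can also be fit on all available data and used to make predictions on rows that contain missing values, since the imputation step fills the NaN entries automatically. A minimal sketch, not part of the original tutorial, that reuses the first row of the dataset as a stand-in for new data:

```python
# fit the imputation + random forest pipeline and make a prediction (a sketch)
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
data = dataframe.values
ix = [i for i in range(data.shape[1]) if i != 23]
X, y = data[:, ix], data[:, 23]
# fit the pipeline on the entire dataset
pipeline = Pipeline(steps=[('i', SimpleImputer()), ('m', RandomForestClassifier())])
pipeline.fit(X, y)
# predict the class for one row; its missing values are imputed by the pipeline
yhat = pipeline.predict(X[:1, :])
print('Predicted class: %d' % yhat[0])
```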
Next, let's see if we can improve the performance of the model by providing more information about missing values.

## Model With a Binary Flag for Missing Values

In the previous section, we replaced missing values with a calculated statistic.

The model is unaware that missing values were replaced.

It is possible that knowledge of whether a row contains a missing value will be useful to the model when making a prediction.

One approach to exposing the model to this knowledge is to provide an additional column that is a binary flag indicating whether the row had a missing value or not:

- 0: Row does not contain a missing value.
- 1: Row contains a missing value (which was/will be imputed).

This can be achieved directly on the loaded dataset.
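To see why summing works as a detector, recall that any arithmetic involving NaN produces NaN. A toy sketch, not part of the original tutorial:

```python
# toy demonstration: a row sum is NaN if the row contains any NaN (a sketch)
from numpy import array
from numpy import isnan
from numpy import nan
rows = array([[1.0, 2.0, 3.0],
              [1.0, nan, 3.0]])
# the second sum is nan because the second row contains a nan
sums = rows.sum(axis=1)
print(sums)          # [ 6. nan]
print(isnan(sums))   # [False  True]
```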
First, we can sum the values of each row to create a new column; if a row contains at least one NaN, its sum will be NaN.

We can then mark all values in the new column as 1 if they are NaN, or 0 otherwise.

Finally, we can add this column to the loaded dataset.

Tying this together, the complete example of adding a binary flag to indicate one or more missing values in each row is listed below. Note that the non-NaN sums must be zeroed out first; doing it in the other order would overwrite the NaN markers before they can be set to 1.

```python
# add a binary flag that indicates if a row contains a missing value
from numpy import isnan
from numpy import hstack
from pandas import read_csv
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
ix = [i for i in range(data.shape[1]) if i != 23]
X, y = data[:, ix], data[:, 23]
print(X.shape)
# sum each row; rows with a nan will sum to nan
a = X.sum(axis=1)
# mark all non-nan as 0 (must come before marking the nans)
a[~isnan(a)] = 0
# mark all nan as 1
a[isnan(a)] = 1
a = a.reshape((len(a), 1))
# add to the dataset as another column
X = hstack((X, a))
print(X.shape)
```

Running the example first downloads the dataset and reports the number of rows and columns, as expected.

Then the new binary variable indicating whether a row contains a missing value is created and added to the end of the input variables. The shape of the input data is then reported, confirming the addition of the feature, from 27 to 28 columns.

```
(300, 27)
(300, 28)
```

We can then evaluate the model as we did in the previous section, this time with the additional binary flag, and see whether it impacts model performance.

The complete example is listed below.

```python
# evaluate model performance with a binary flag for missing values and imputed missing
from numpy import isnan
from numpy import hstack
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
ix = [i for i in range(data.shape[1]) if i != 23]
X, y = data[:, ix], data[:, 23]
# sum each row; rows with a nan will sum to nan
a = X.sum(axis=1)
# mark all non-nan as 0 (must come before marking the nans)
a[~isnan(a)] = 0
# mark all nan as 1
a[isnan(a)] = 1
a = a.reshape((len(a), 1))
# add to the dataset as another column
X = hstack((X, a))
# define modeling pipeline
model = RandomForestClassifier()
imputer = SimpleImputer()
pipeline = Pipeline(steps=[('i', imputer), ('m', model)])
# define model evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

Running the example reports the mean and standard deviation classification accuracy on the horse colic dataset with the additional feature and imputation.

Your specific results may vary given the stochastic nature of the learning algorithm, the stochastic nature of the evaluation procedure, and differences in precision across machines. Try running the example a few times.

In this case, we see a modest lift in performance from 86.2 percent to 86.3 percent. The difference is small and may not be statistically significant.

```
Mean Accuracy: 0.863 (0.055)
```

Most rows in this dataset have a missing value; this approach might be more beneficial on datasets with fewer missing values.
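We can check that claim directly by counting the rows that contain at least one NaN. A quick sketch, not part of the original tutorial:

```python
# count the rows that contain at least one missing value (a sketch)
from numpy import isnan
from pandas import read_csv
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
data = dataframe.values
ix = [i for i in range(data.shape[1]) if i != 23]
X = data[:, ix]
# a row is incomplete if any of its input values is NaN
n_incomplete = isnan(X).any(axis=1).sum()
print('%d of %d rows contain at least one missing value' % (n_incomplete, len(X)))
```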
Next, let's see if we can provide even more information about the missing values to the model.

## Model With Indicators of All Missing Values

In the previous section, we added a single column to indicate whether a row contains a missing value or not.

One step further is to indicate whether each individual input value was missing and imputed or not. This effectively adds one additional column for each input variable that contains missing values and may offer benefit to the model.

This can be achieved by setting the "add_indicator" argument to True when defining the [SimpleImputer instance](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html).

```python
...
# impute and mark missing values
X = SimpleImputer(add_indicator=True).fit_transform(X)
```

We can demonstrate this with a worked example.

The example below loads the horse colic dataset as before, then imputes the missing values on the entire dataset and adds indicator variables for each input variable that has missing values.

```python
# impute and add indicators for columns with missing values
from pandas import read_csv
from sklearn.impute import SimpleImputer
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
data = dataframe.values
# split into input and output elements
ix = [i for i in range(data.shape[1]) if i != 23]
X, y = data[:, ix], data[:, 23]
print(X.shape)
# impute and mark missing values
X = SimpleImputer(strategy='mean', add_indicator=True).fit_transform(X)
print(X.shape)
```

Running the example first downloads the dataset and summarizes its shape as expected, then applies the imputation and adds binary (1 and 0 valued) columns indicating whether each row contains a missing value for a given input variable.

We can see that the number of input variables has increased from 27 to 48, indicating the addition of 21 binary input variables, and, in turn, that 21 of the 27 input variables must contain at least one missing value.

```
(300, 27)
(300, 48)
```
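As an aside, scikit-learn also exposes this indicator logic as a standalone MissingIndicator transformer, which produces only the binary columns without performing any imputation. A minimal sketch; the expected output shape follows from the 21 columns with missing values noted above:

```python
# generate only the binary indicator columns with MissingIndicator (a sketch)
from pandas import read_csv
from sklearn.impute import MissingIndicator
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
data = dataframe.values
ix = [i for i in range(data.shape[1]) if i != 23]
X = data[:, ix]
# one boolean column per input variable that contains missing values
indicator = MissingIndicator(features='missing-only')
flags = indicator.fit_transform(X)
print(flags.shape)  # expect (300, 21) given the 21 columns with missing values
```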
Next, we can evaluate the model with this additional information.

The complete example below demonstrates this.

```python
# evaluate imputation with added indicator features on the horse colic dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/horse-colic.csv'
dataframe = read_csv(url, header=None, na_values='?')
# split into input and output elements
data = dataframe.values
ix = [i for i in range(data.shape[1]) if i != 23]
X, y = data[:, ix], data[:, 23]
# define modeling pipeline
model = RandomForestClassifier()
imputer = SimpleImputer(add_indicator=True)
pipeline = Pipeline(steps=[('i', imputer), ('m', model)])
# define model evaluation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

Running the example reports the mean and standard deviation classification accuracy on the horse colic dataset with the additional indicator features and imputation.

Your specific results may vary given the stochastic nature of the learning algorithm, the stochastic nature of the evaluation procedure, and differences in precision across machines. Try running the example a few times.

In this case, we see a further lift in performance from 86.3 percent in the previous section to 86.7 percent.

This suggests that adding one flag per imputed column is a better strategy for this dataset and the chosen model.

```
Mean Accuracy: 0.867 (0.055)
```

## Further Reading

This section provides more resources on the topic if you are looking to go deeper.

### Related Tutorials

- [Best Results for Standard Machine Learning Datasets](https://machinelearningmastery.com/results-for-standard-classification-and-regression-machine-learning-datasets/)
- [Statistical Imputation for Missing Values in Machine Learning](https://machinelearningmastery.com/statistical-imputation-for-missing-values-in-machine-learning/)
- [How to Handle Missing Data with Python](https://machinelearningmastery.com/handle-missing-data-python/)

## Summary

In this tutorial, you discovered how to add binary flags for missing values for modeling.

Specifically, you learned:

- How to load and evaluate models with statistical imputation on a classification dataset with missing values.
- How to add a flag that indicates whether a row has one or more missing values and evaluate models with this new feature.
- How to add a flag for each input variable that has missing values and evaluate models with these new features.

**Do you have any questions?**
Ask your questions in the comments below and I will do my best to answer.