{"id":4112,"date":"2020-11-19T18:00:09","date_gmt":"2020-11-19T18:00:09","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2020\/11\/19\/a-gentle-introduction-to-pycaret-for-machine-learning\/"},"modified":"2020-11-19T18:00:09","modified_gmt":"2020-11-19T18:00:09","slug":"a-gentle-introduction-to-pycaret-for-machine-learning","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2020\/11\/19\/a-gentle-introduction-to-pycaret-for-machine-learning\/","title":{"rendered":"A Gentle Introduction to PyCaret for Machine Learning"},"content":{"rendered":"<p>Author: Jason Brownlee<\/p>\n<div>\n<p><strong>PyCaret<\/strong> is a Python open source machine learning library designed to make performing standard tasks in a machine learning project easy.<\/p>\n<p>It is a Python version of the Caret machine learning package in R, popular because it allows models to be evaluated, compared, and tuned on a given dataset with just a few lines of code.<\/p>\n<p>The PyCaret library provides these features, allowing the machine learning practitioner in Python to spot check a suite of standard machine learning algorithms on a classification or regression dataset with a single function call.<\/p>\n<p>In this tutorial, you will discover the PyCaret Python open source library for machine learning.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>PyCaret is a Python version of the popular and widely used caret machine learning package in R.<\/li>\n<li>How to use PyCaret to easily evaluate and compare standard machine learning models on a dataset.<\/li>\n<li>How to use PyCaret to easily tune the hyperparameters of a well-performing machine learning model.<\/li>\n<\/ul>\n<p>Let&rsquo;s get started.<\/p>\n<div id=\"attachment_11862\" style=\"width: 809px\" class=\"wp-caption aligncenter\"><img decoding=\"async\" aria-describedby=\"caption-attachment-11862\" loading=\"lazy\" class=\"size-full wp-image-11862\" 
src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/03\/A-Gentle-Introduction-to-PyCaret-for-Machine-Learning.jpg\" alt=\"A Gentle Introduction to PyCaret for Machine Learning\" width=\"799\" height=\"383\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2021\/03\/A-Gentle-Introduction-to-PyCaret-for-Machine-Learning.jpg 799w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2021\/03\/A-Gentle-Introduction-to-PyCaret-for-Machine-Learning-300x144.jpg 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2021\/03\/A-Gentle-Introduction-to-PyCaret-for-Machine-Learning-768x368.jpg 768w\" sizes=\"(max-width: 799px) 100vw, 799px\"><\/p>\n<p id=\"caption-attachment-11862\" class=\"wp-caption-text\">A Gentle Introduction to PyCaret for Machine Learning<br \/>Photo by <a href=\"https:\/\/www.flickr.com\/photos\/photommo\/35017076361\/\">Thomas<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2>Tutorial Overview<\/h2>\n<p>This tutorial is divided into four parts; they are:<\/p>\n<ol>\n<li>What Is PyCaret?<\/li>\n<li>Sonar Dataset<\/li>\n<li>Comparing Machine Learning Models<\/li>\n<li>Tuning Machine Learning Models<\/li>\n<\/ol>\n<h2>What Is PyCaret?<\/h2>\n<p><a href=\"https:\/\/pycaret.org\/\">PyCaret<\/a> is an open source Python machine learning library inspired by the <a href=\"https:\/\/topepo.github.io\/caret\/\">caret R package<\/a>.<\/p>\n<p>The goal of the caret package is to automate the major steps for evaluating and comparing machine learning algorithms for classification and regression. The main benefit of the library is that a lot can be achieved with very few lines of code and little manual configuration. The PyCaret library brings these capabilities to Python.<\/p>\n<blockquote>\n<p>PyCaret is an open-source, low-code machine learning library in Python that aims to reduce the cycle time from hypothesis to insights. 
It is well suited for seasoned data scientists who want to increase the productivity of their ML experiments by using PyCaret in their workflows or for citizen data scientists and those new to data science with little or no background in coding.<\/p>\n<\/blockquote>\n<p>&mdash; <a href=\"https:\/\/pycaret.org\/\">PyCaret Homepage<\/a><\/p>\n<p>The PyCaret library automates many steps of a machine learning project, such as:<\/p>\n<ul>\n<li>Defining the data transforms to perform (<em>setup()<\/em>)<\/li>\n<li>Evaluating and comparing standard models (<em>compare_models()<\/em>)<\/li>\n<li>Tuning model hyperparameters (<em>tune_model()<\/em>)<\/li>\n<\/ul>\n<p>It also provides many more features, such as creating ensembles, saving models, and deploying models.<\/p>\n<p>The PyCaret library has a wealth of documentation for using the API; you can get started here:<\/p>\n<ul>\n<li><a href=\"https:\/\/pycaret.org\/\">PyCaret Homepage<\/a><\/li>\n<\/ul>\n<p>We will not explore all of the features of the library in this tutorial; instead, we will focus on simple machine learning model comparison and hyperparameter tuning.<\/p>\n<p>You can install PyCaret using your Python package manager, such as pip. 
For example:<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">pip install pycaret<\/pre>\n<p>Once installed, you can confirm that the library is available in your development environment and is working correctly by printing the installed version.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># check pycaret version\r\nimport pycaret\r\nprint('PyCaret: %s' % pycaret.__version__)<\/pre>\n<p>Running the example will load the PyCaret library and print the installed version number.<\/p>\n<p>Your version number should be the same or higher.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">PyCaret: 2.0.0<\/pre>\n<p>If you need help installing PyCaret for your system, you can see the installation instructions here:<\/p>\n<ul>\n<li><a href=\"https:\/\/pycaret.org\/install\">PyCaret Installation Instructions<\/a><\/li>\n<\/ul>\n<p>Now that we are familiar with what PyCaret is, let&rsquo;s explore how we might use it on a machine learning project.<\/p>\n<h2>Sonar Dataset<\/h2>\n<p>We will use the Sonar standard binary classification dataset. 
You can learn more about it here:<\/p>\n<ul>\n<li><a href=\"https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv\">Sonar Dataset (sonar.csv)<\/a><\/li>\n<li><a href=\"https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.names\">Sonar Dataset Details (sonar.names)<\/a><\/li>\n<\/ul>\n<p>We can download the dataset directly from the URL and load it as a Pandas DataFrame.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# define the location of the dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv'\r\n# load the dataset\r\ndf = read_csv(url, header=None)\r\n# summarize the shape of the dataset\r\nprint(df.shape)<\/pre>\n<p>PyCaret seems to require that a dataset have column names, and our dataset does not have column names, so we can set the column number as the column name directly.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# set column names as the column number\r\nn_cols = df.shape[1]\r\ndf.columns = [str(i) for i in range(n_cols)]<\/pre>\n<p>Finally, we can summarize the first few rows of data.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# summarize the first few rows of data\r\nprint(df.head())<\/pre>\n<p>Tying this together, the complete example of loading and summarizing the Sonar dataset is listed below.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># load the sonar dataset\r\nfrom pandas import read_csv\r\n# define the location of the dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv'\r\n# load the dataset\r\ndf = read_csv(url, header=None)\r\n# summarize the shape of the dataset\r\nprint(df.shape)\r\n# set column names as the column number\r\nn_cols = df.shape[1]\r\ndf.columns = [str(i) for i in range(n_cols)]\r\n# summarize the first few rows of data\r\nprint(df.head())<\/pre>\n<p>Running the example first loads the dataset and reports the shape, showing 
it has 208 rows and 61 columns.<\/p>\n<p>The first five rows are then printed, showing that the input variables are all numeric and the target variable is column &ldquo;60&rdquo; and has string labels.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">(208, 61)\r\n0 1 2 3 4 ... 56 57 58 59 60\r\n0 0.0200 0.0371 0.0428 0.0207 0.0954 ... 0.0180 0.0084 0.0090 0.0032 R\r\n1 0.0453 0.0523 0.0843 0.0689 0.1183 ... 0.0140 0.0049 0.0052 0.0044 R\r\n2 0.0262 0.0582 0.1099 0.1083 0.0974 ... 0.0316 0.0164 0.0095 0.0078 R\r\n3 0.0100 0.0171 0.0623 0.0205 0.0205 ... 0.0050 0.0044 0.0040 0.0117 R\r\n4 0.0762 0.0666 0.0481 0.0394 0.0590 ... 0.0072 0.0048 0.0107 0.0094 R<\/pre>\n<p>Next, we can use PyCaret to evaluate and compare a suite of standard machine learning algorithms to quickly discover what works well on this dataset.<\/p>\n<h2>PyCaret for Comparing Machine Learning Models<\/h2>\n<p>In this section, we will evaluate and compare the performance of standard machine learning models on the Sonar classification dataset.<\/p>\n<p>First, we must set up the dataset with the PyCaret library via the <a href=\"https:\/\/pycaret.org\/classification\/\">setup() function<\/a>. 
This requires that we provide the Pandas DataFrame and specify the name of the column that contains the target variable.<\/p>\n<p>The <em>setup()<\/em> function also allows you to configure simple data preparation, such as scaling, power transforms, missing data handling, and PCA transforms.<\/p>\n<p>We will specify the data and target variable, and turn off HTML output, verbose output, and requests for user feedback.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# setup the dataset\r\ngrid = setup(data=df, target=df.columns[-1], html=False, silent=True, verbose=False)<\/pre>\n<p>Next, we can compare standard machine learning models by calling the <em>compare_models()<\/em> function.<\/p>\n<p>By default, it will evaluate models using 10-fold cross-validation, sort results by classification accuracy, and return the single best model.<\/p>\n<p>These are good defaults, and we don&rsquo;t need to change a thing.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# evaluate models and compare models\r\nbest = compare_models()<\/pre>\n<p>Calling the <em>compare_models()<\/em> function will also report a table of results summarizing all of the models that were evaluated and their performance.<\/p>\n<p>Finally, we can report the best-performing model and its configuration.<\/p>\n<p>Tying this together, the complete example of evaluating a suite of standard models on the Sonar classification dataset is listed below.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># compare machine learning algorithms on the sonar classification dataset\r\nfrom pandas import read_csv\r\nfrom pycaret.classification import setup\r\nfrom pycaret.classification import compare_models\r\n# define the location of the dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv'\r\n# load the dataset\r\ndf = read_csv(url, header=None)\r\n# set column names as the column number\r\nn_cols = df.shape[1]\r\ndf.columns = [str(i) for i 
in range(n_cols)]\r\n# setup the dataset\r\ngrid = setup(data=df, target=df.columns[-1], html=False, silent=True, verbose=False)\r\n# evaluate models and compare models\r\nbest = compare_models()\r\n# report the best model\r\nprint(best)<\/pre>\n<p>Running the example will load the dataset, configure the PyCaret library, evaluate a suite of standard models, and report the best model found for the dataset.<\/p>\n<p><strong>Note<\/strong>: Your <a href=\"https:\/\/machinelearningmastery.com\/different-results-each-time-in-machine-learning\/\">results may vary<\/a> given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.<\/p>\n<p>In this case, we can see that the &ldquo;<em>Extra Trees Classifier<\/em>&rdquo; has the best accuracy on the dataset with a score of about 86.95 percent.<\/p>\n<p>We can then see the configuration of the model that was used, which looks like it used default hyperparameter values.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">Model  Accuracy     AUC  Recall   Prec.      
F1  \r\n0            Extra Trees Classifier    0.8695  0.9497  0.8571  0.8778  0.8631\r\n1               CatBoost Classifier    0.8695  0.9548  0.8143  0.9177  0.8508\r\n2   Light Gradient Boosting Machine    0.8219  0.9096  0.8000  0.8327  0.8012\r\n3      Gradient Boosting Classifier    0.8010  0.8801  0.7690  0.8110  0.7805\r\n4              Ada Boost Classifier    0.8000  0.8474  0.7952  0.8071  0.7890\r\n5            K Neighbors Classifier    0.7995  0.8613  0.7405  0.8276  0.7773\r\n6         Extreme Gradient Boosting    0.7995  0.8934  0.7833  0.8095  0.7802\r\n7          Random Forest Classifier    0.7662  0.8778  0.6976  0.8024  0.7345\r\n8          Decision Tree Classifier    0.7533  0.7524  0.7119  0.7655  0.7213\r\n9                  Ridge Classifier    0.7448  0.0000  0.6952  0.7574  0.7135\r\n10                      Naive Bayes    0.7214  0.8159  0.8286  0.6700  0.7308\r\n11              SVM - Linear Kernel    0.7181  0.0000  0.6286  0.7146  0.6309\r\n12              Logistic Regression    0.7100  0.8104  0.6357  0.7263  0.6634\r\n13     Linear Discriminant Analysis    0.6924  0.7510  0.6667  0.6762  0.6628\r\n14  Quadratic Discriminant Analysis    0.5800  0.6308  0.1095  0.5000  0.1750\r\n\r\n     Kappa     MCC  TT (Sec)\r\n0   0.7383  0.7446    0.1415\r\n1   0.7368  0.7552    1.9930\r\n2   0.6410  0.6581    0.0134\r\n3   0.5989  0.6090    0.1413\r\n4   0.5979  0.6123    0.0726\r\n5   0.5957  0.6038    0.0019\r\n6   0.5970  0.6132    0.0287\r\n7   0.5277  0.5438    0.1107\r\n8   0.5028  0.5192    0.0035\r\n9   0.4870  0.5003    0.0030\r\n10  0.4488  0.4752    0.0019\r\n11  0.4235  0.4609    0.0024\r\n12  0.4143  0.4285    0.0059\r\n13  0.3825  0.3927    0.0034\r\n14  0.1172  0.1792    0.0033\r\nExtraTreesClassifier(bootstrap=False, ccp_alpha=0.0, class_weight=None,\r\n                     criterion='gini', max_depth=None, max_features='auto',\r\n                     max_leaf_nodes=None, max_samples=None,\r\n                     
min_impurity_decrease=0.0, min_impurity_split=None,\r\n                     min_samples_leaf=1, min_samples_split=2,\r\n                     min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=-1,\r\n                     oob_score=False, random_state=2728, verbose=0,\r\n                     warm_start=False)<\/pre>\n<p>We could use this configuration directly and fit a model on the entire dataset and use it to make predictions on new data.<\/p>\n<p>We can also use the table of results to get an idea of the types of models that perform well on the dataset, in this case, ensembles of decision trees.<\/p>\n<p>Now that we are familiar with how to compare machine learning models using PyCaret, let&rsquo;s look at how we might use the library to tune model hyperparameters.<\/p>\n<h2>Tuning Machine Learning Models<\/h2>\n<p>In this section, we will tune the hyperparameters of a machine learning model on the Sonar classification dataset.<\/p>\n<p>We must load and set up the dataset as we did before when comparing models.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# setup the dataset\r\ngrid = setup(data=df, target=df.columns[-1], html=False, silent=True, verbose=False)<\/pre>\n<p>We can tune model hyperparameters using the <em>tune_model()<\/em> function in the PyCaret library.<\/p>\n<p>The function takes an instance of the model to tune as input and knows what hyperparameters to tune automatically. 
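Under the hood this is a random search. As intuition for the idea, here is a minimal stdlib-only sketch; the search space and the score function below are hypothetical stand-ins for a real model's hyperparameters and its cross-validated accuracy, not PyCaret's internals:

```python
import random

def random_search(score, space, n_iter=200, seed=1):
    """Sample n_iter configurations from the space and keep the best scorer."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_iter):
        # draw one value per hyperparameter to form a candidate configuration
        cfg = {name: rng.choice(values) for name, values in space.items()}
        result = score(cfg)
        if result > best_score:
            best_cfg, best_score = cfg, result
    return best_cfg, best_score

# Hypothetical search space, loosely modeled on an extra trees classifier
space = {
    "n_estimators": [50, 100, 150, 200],
    "max_depth": [1, 3, 5, None],
}

# Stand-in objective; a real search would fit and cross-validate a model here
def score(cfg):
    depth = cfg["max_depth"] if cfg["max_depth"] is not None else 10
    return 0.5 + 0.001 * cfg["n_estimators"] + 0.01 * depth

best_cfg, best_score = random_search(score, space, n_iter=50)
print(best_cfg, round(best_score, 3))
```

A real search would replace score() with a k-fold cross-validation of the candidate model; the loop structure, and the role of n_iter, are the same.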
A random search of model hyperparameters is performed, and the total number of evaluations can be controlled via the &ldquo;<em>n_iter<\/em>&rdquo; argument.<\/p>\n<p>By default, the function will optimize the &lsquo;<em>Accuracy<\/em>&rsquo; metric and will evaluate the performance of each configuration using 10-fold cross-validation, although this sensible default configuration can be changed.<\/p>\n<p>We can perform a random search of the extra trees classifier as follows:<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# tune model hyperparameters\r\nbest = tune_model(ExtraTreesClassifier(), n_iter=200, choose_better=True)<\/pre>\n<p>The function will return the best-performing model, which can be used directly or printed to determine the hyperparameters that were selected.<\/p>\n<p>It will also print a table of the results for the best configuration across the number of folds in the k-fold cross-validation (e.g. 10 folds).<\/p>\n<p>Tying this together, the complete example of tuning the hyperparameters of the extra trees classifier on the Sonar dataset is listed below.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># tune model hyperparameters on the sonar classification dataset\r\nfrom pandas import read_csv\r\nfrom sklearn.ensemble import ExtraTreesClassifier\r\nfrom pycaret.classification import setup\r\nfrom pycaret.classification import tune_model\r\n# define the location of the dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv'\r\n# load the dataset\r\ndf = read_csv(url, header=None)\r\n# set column names as the column number\r\nn_cols = df.shape[1]\r\ndf.columns = [str(i) for i in range(n_cols)]\r\n# setup the dataset\r\ngrid = setup(data=df, target=df.columns[-1], html=False, silent=True, verbose=False)\r\n# tune model hyperparameters\r\nbest = tune_model(ExtraTreesClassifier(), n_iter=200, choose_better=True)\r\n# report the best model\r\nprint(best)<\/pre>\n<p>Running the example first loads the dataset and 
configures the PyCaret library.<\/p>\n<p>A random search is then performed, reporting the performance of the best-performing configuration across the 10 folds of cross-validation and the mean accuracy.<\/p>\n<p><strong>Note<\/strong>: Your <a href=\"https:\/\/machinelearningmastery.com\/different-results-each-time-in-machine-learning\/\">results may vary<\/a> given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.<\/p>\n<p>In this case, we can see that the random search found a configuration with an accuracy of about 75.29 percent, which is not better than the default configuration from the previous section that achieved a score of about 86.95 percent.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">Accuracy     AUC  Recall   Prec.      F1   Kappa     MCC\r\n0       0.8667  1.0000  1.0000  0.7778  0.8750  0.7368  0.7638\r\n1       0.6667  0.8393  0.4286  0.7500  0.5455  0.3119  0.3425\r\n2       0.6667  0.8036  0.2857  1.0000  0.4444  0.2991  0.4193\r\n3       0.7333  0.7321  0.4286  1.0000  0.6000  0.4444  0.5345\r\n4       0.6667  0.5714  0.2857  1.0000  0.4444  0.2991  0.4193\r\n5       0.8571  0.8750  0.6667  1.0000  0.8000  0.6957  0.7303\r\n6       0.8571  0.9583  0.6667  1.0000  0.8000  0.6957  0.7303\r\n7       0.7857  0.8776  0.5714  1.0000  0.7273  0.5714  0.6325\r\n8       0.6429  0.7959  0.2857  1.0000  0.4444  0.2857  0.4082\r\n9       0.7857  0.8163  0.5714  1.0000  0.7273  0.5714  0.6325\r\nMean    0.7529  0.8270  0.5190  0.9528  0.6408  0.4911  0.5613\r\nSD      0.0846  0.1132  0.2145  0.0946  0.1571  0.1753  0.1485\r\nExtraTreesClassifier(bootstrap=False, ccp_alpha=0.0, class_weight=None,\r\n                     criterion='gini', max_depth=1, max_features='auto',\r\n                     max_leaf_nodes=None, max_samples=None,\r\n                     min_impurity_decrease=0.0, min_impurity_split=None,\r\n                     min_samples_leaf=4, min_samples_split=2,\r\n                     min_weight_fraction_leaf=0.0, n_estimators=120,\r\n                     n_jobs=None, oob_score=False, random_state=None, verbose=0,\r\n                     warm_start=False)<\/pre>\n<p>We might be able to improve upon the random search by specifying to the <em>tune_model()<\/em> function which hyperparameters to search and what ranges to search.<\/p>\n<h2>Further Reading<\/h2>\n<p>This section provides more resources on the topic if you are looking to go deeper.<\/p>\n<ul>\n<li><a href=\"https:\/\/pycaret.org\/\">PyCaret Homepage<\/a><\/li>\n<li><a href=\"https:\/\/topepo.github.io\/caret\/\">R Caret Package Homepage<\/a><\/li>\n<li><a href=\"https:\/\/pycaret.org\/install\">PyCaret Installation Instructions<\/a><\/li>\n<\/ul>\n<h2>Summary<\/h2>\n<p>In this tutorial, you discovered the PyCaret Python open source library for machine learning.<\/p>\n<p>Specifically, you learned:<\/p>\n<ul>\n<li>PyCaret is a Python version of the popular and widely used caret machine learning package in R.<\/li>\n<li>How to use PyCaret to easily evaluate and compare standard machine learning models on a dataset.<\/li>\n<li>How to use PyCaret to easily tune the hyperparameters of a well-performing machine learning model.<\/li>\n<\/ul>\n<p><strong>Do you have any questions?<\/strong><br \/>\nAsk your questions in the comments below and I will do my best to answer.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/pycaret-for-machine-learning\/\">A Gentle Introduction to PyCaret for Machine Learning<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/\">Machine Learning Mastery<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/machinelearningmastery.com\/pycaret-for-machine-learning\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Jason Brownlee PyCaret is a Python open source machine learning library designed to make 
performing standard tasks in a machine learning project easy. It [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2020\/11\/19\/a-gentle-introduction-to-pycaret-for-machine-learning\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":4113,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/4112"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=4112"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/4112\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/4113"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=4112"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=4112"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=4112"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}