{"id":2449,"date":"2019-08-09T06:31:55","date_gmt":"2019-08-09T06:31:55","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2019\/08\/09\/detecting-bias-with-shap\/"},"modified":"2019-08-09T06:31:55","modified_gmt":"2019-08-09T06:31:55","slug":"detecting-bias-with-shap","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2019\/08\/09\/detecting-bias-with-shap\/","title":{"rendered":"Detecting Bias with SHAP"},"content":{"rendered":"<p>Author: Sean Owen<\/p>\n<div>\n<p><i>Originally published on the <a href=\"https:\/\/databricks.com\/blog\/2019\/06\/17\/detecting-bias-with-shap.html\">Databricks blog<\/a> with <a href=\"https:\/\/pages.databricks.com\/rs\/094-YMS-629\/images\/so_dev_survey_2019.html\">accompanying notebook<\/a>.<\/i><\/p>\n<p>StackOverflow\u2019s annual developer survey concluded earlier this year, and they have graciously published the (anonymized) 2019 results for analysis. They\u2019re a rich view into the experience of software developers around the world \u2014 what\u2019s their favorite editor? how many years of experience? tabs or spaces? and crucially, salary. Software engineers\u2019 salaries are good, and sometimes both eye-watering and news-worthy.<\/p>\n<p>The tech industry is also painfully aware that it does not always live up to its purported meritocratic ideals. Pay isn\u2019t a pure function of merit, and story after story tells us that factors like name-brand school, age, race, and gender have an effect on outcomes like salary.<\/p>\n<p>Can machine learning do more than predict things? Can it explain salaries and so highlight cases where these factors might be undesirably causing pay differences? 
This example will sketch how standard models can be augmented with SHAP (SHapley Additive exPlanations) to detect individual instances whose predictions may be concerning, and then dig deeper into the specific reasons the data leads to those predictions.<\/p>\n<h1>Model Bias or Data (about) Bias?<\/h1>\n<p>While this topic is often characterized as detecting \u201cmodel bias\u201d, a model is merely a mirror of the data it was trained on. If the model is \u2018biased\u2019 then it learned that from the historical facts of the data. Models are not the problem per se; they are an opportunity to analyze data for evidence of bias.<\/p>\n<p>Explaining models isn\u2019t new, and most libraries can assess the relative importance of the inputs to a model. These are aggregate views of inputs\u2019 effects. However, the output of some machine learning models has highly individual effects: is your loan approved? will you receive financial aid? are you a suspicious traveller?<\/p>\n<p>Indeed, StackOverflow offers a handy calculator to estimate one\u2019s expected salary, based on its survey. We can only speculate about how accurate the predictions are overall, but all that a developer particularly cares about is his or her own prospects.<\/p>\n<p>The right question may not be, does the data suggest bias overall? but rather, does the data show individual instances of bias?<\/p>\n<h1>Assessing the Survey Data<\/h1>\n<p>The 2019 data is, thankfully, clean and free of data problems. It contains responses to 85 questions from about 88,000 developers.<\/p>\n<p>This example focuses only on full-time developers. The data set contains plenty of relevant information, like years of experience, education, role, and demographic information. Notably, this data set doesn\u2019t contain information about bonuses and equity, just salary.<\/p>\n<p>It also has responses to wide-ranging questions about attitudes on blockchain, fizz buzz, and the survey itself. 
These are excluded here as unlikely to reflect the experience and skills that presumably <i>should<\/i> determine compensation. Likewise, for simplicity, the analysis also focuses only on US-based developers.<\/p>\n<p>The data needs a little more transformation before modeling. Several questions allow multiple responses, like <i>\u201cWhat are your greatest challenges to productivity as a developer?\u201d<\/i> Each such question yields multiple yes\/no responses, which need to be broken out into separate yes\/no features.<\/p>\n<p>Some multiple-choice questions like <i>\u201cApproximately how many people are employed by the company or organization you work for?\u201d<\/i> afford responses like <i>\u201c2-9 employees\u201d<\/i>. These are effectively binned continuous values, and it may be useful to map them back to inferred continuous values like \u201c2\u201d so that the model may consider their order and relative magnitude. This translation is unfortunately manual and entails some judgment calls.<\/p>\n<p>The Apache Spark code that can accomplish this is in the accompanying notebook, for the interested.<\/p>\n<h1>Model Selection with Apache Spark<\/h1>\n<p>With the data in a more machine-learning-friendly form, the next step is to fit a regression model that predicts salary from these features. The data set itself, after filtering and transformation with Spark, is a mere 4MB, containing 206 features from about 12,600 developers, and could easily fit in memory as a Pandas DataFrame on your wristwatch, let alone a server.<\/p>\n<p><code>xgboost<\/code>, a popular gradient-boosted trees package, can fit a model to this data in minutes on a single machine, without Spark. <code>xgboost<\/code> offers many tunable \u201chyperparameters\u201d that affect the quality of the model: maximum depth, learning rate, regularization, and so on. 
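<\/p>
<p><i>In miniature, and with a hypothetical scoring function standing in for actual model training and evaluation, such a random search over hyperparameter settings looks like this:<\/i><\/p>

```python
from random import choice, seed

# Hypothetical stand-in for training a model and returning validation error;
# the real pipeline fits xgboost and measures RMSE on held-out data.
def score(params):
    max_depth, learning_rate = params
    return abs(max_depth - 5) + abs(learning_rate - 0.05)

seed(0)
# Candidate values per hyperparameter; draw random combinations of them.
max_depths = [3, 4, 5, 6, 7]
learning_rates = [0.01, 0.03, 0.05, 0.08, 0.1]
param_grid = [(choice(max_depths), choice(learning_rates)) for _ in range(64)]

# Keep the combination with the lowest validation error.
best = min(param_grid, key=score)
print(best)
```

<p><i>In the article\u2019s actual search, each trial fits <code>xgboost<\/code> on the broadcast data set, and Spark runs the trials in parallel.<\/i><\/p>
<p>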
Rather than guess, standard practice is to try many settings of these values and pick the combination that results in the most accurate model.<\/p>\n<p>Fortunately, this is where Spark comes back in. It can build hundreds of these models in parallel and collect the results of each. Because the data set is small, it\u2019s simple to broadcast it to the workers, create a bunch of combinations of those hyperparameters to try, and use Spark to apply the same simple non-distributed <code>xgboost<\/code> code that could build a model locally to the data with each combination.<\/p>\n<pre><code>...<br>\ndef train_model(params):<br>\n  (max_depth, learning_rate, reg_alpha, reg_lambda, gamma, min_child_weight) = params<br>\n  xgb_regressor = XGBRegressor(objective='reg:squarederror', max_depth=max_depth,<br>\n    learning_rate=learning_rate, reg_alpha=reg_alpha, reg_lambda=reg_lambda, gamma=gamma,<br>\n    min_child_weight=min_child_weight, n_estimators=3000, base_score=base_score,<br>\n    importance_type='total_gain', random_state=0)<br>\n  xgb_model = xgb_regressor.fit(b_X_train.value, b_y_train.value,<br>\n    eval_set=[(b_X_test.value, b_y_test.value)],<br>\n    eval_metric='rmse', early_stopping_rounds=30)<br>\n  n_estimators = len(xgb_model.evals_result()['validation_0']['rmse'])<br>\n  y_pred = xgb_model.predict(b_X_test.value)<br>\n  mae = mean_absolute_error(y_pred, b_y_test.value)<br>\n  rmse = sqrt(mean_squared_error(y_pred, b_y_test.value))<br>\n  return (params + (n_estimators,), (mae, rmse), xgb_model)<br><br>\n...<br><br>\nmax_depth =        np.unique(np.geomspace(3, 7, num=5, dtype=np.int32)).tolist()<br>\nlearning_rate =    np.unique(np.around(np.geomspace(0.01, 0.1, num=5), decimals=3)).tolist()<br>\nreg_alpha =        [0] + np.unique(np.around(np.geomspace(1, 50, num=5), decimals=3)).tolist()<br>\nreg_lambda =       [0] + np.unique(np.around(np.geomspace(1, 50, num=5), decimals=3)).tolist()<br>\ngamma =            
np.unique(np.around(np.geomspace(5, 20, num=5), decimals=3)).tolist()<br>\nmin_child_weight = np.unique(np.geomspace(5, 30, num=5, dtype=np.int32)).tolist()<br><br>\nparallelism = 128<br>\nparam_grid = [(choice(max_depth), choice(learning_rate), choice(reg_alpha),<br>\n  choice(reg_lambda), choice(gamma), choice(min_child_weight)) for _ in range(parallelism)]<br><br>\nparams_evals_models = sc.parallelize(param_grid, parallelism).map(train_model).collect()<br><\/code><\/pre>\n<p>That will create a lot of models. To track and evaluate the results, <code>mlflow<\/code> can log each one with its metrics and hyperparameters, and they can be reviewed in the notebook\u2019s Experiment. Here, one hyperparameter\u2019s value is plotted against the resulting accuracy (mean absolute error) across many runs:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44798\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-1.png\" alt=\"\" width=\"700\" height=\"313\"><\/p>\n<p>The single model that showed the lowest error on the held-out validation data set is of interest. It yielded a mean absolute error of about $28,000 on salaries that average about $119,000. Not terrible, although we should realize the model can only explain <i>most<\/i> of the variation in salary.<\/p>\n<h1>Interpreting the xgboost Model<\/h1>\n<p>Although the model could be used to predict future salaries, the question here is instead what the model says about the data. What features seem to matter most when predicting salary accurately? 
The <code>xgboost<\/code> model itself computes a notion of feature importance:<\/p>\n<pre><code>import mlflow.sklearn<br>\nbest_run_id = \"...\"<br>\nmodel = mlflow.sklearn.load_model(\"runs:\/\" + best_run_id + \"\/xgboost\")<br>\nsorted(zip(model.feature_importances_, X.columns), reverse=True)[:6]<br><\/code><\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44799\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-2.png\" alt=\"\" width=\"700\" height=\"214\"><\/p>\n<p>Factors like years of coding professionally, organization size, and using Windows are most \u201cimportant\u201d. This is interesting, but hard to interpret. The values reflect relative, not absolute, importance. That is, the effect isn\u2019t measured in dollars. The definition of importance here (total gain) is also specific to how decision trees are built and is hard to map to an intuitive interpretation. The important features don\u2019t even necessarily correlate positively with salary.<\/p>\n<p>More importantly, this is a \u2018global\u2019 view of how much features matter in aggregate. Factors like gender and ethnicity don\u2019t show up on this list until farther along. This doesn\u2019t mean these factors aren\u2019t still significant. For one, features can be correlated, or interact. It\u2019s possible that factors like gender correlate with other features that the trees selected instead, which masks their effect to some degree.<\/p>\n<p>The more interesting question is not so much whether these factors matter overall \u2014 it\u2019s possible that their average effect is relatively small \u2014 but whether they have a significant effect in some individual cases. 
These are the instances where the model is telling us something important about individuals\u2019 experience, and to those individuals, that experience is what matters.<\/p>\n<h1>Applying SHAP for Developer-Level Explanations<\/h1>\n<p>Fortunately, a set of techniques for more theoretically sound model interpretation at the individual prediction level has emerged over the past five years or so. They are collectively \u201c<b>Sh<\/b>apley <b>A<\/b>dditive Ex<b>p<\/b>lanations\u201d, and conveniently, are implemented in the Python package <code>shap<\/code>.<\/p>\n<p>Given any model, this library computes \u201cSHAP values\u201d from the model. These values are readily interpretable, as each value is a feature\u2019s effect on the prediction, in its units. A SHAP value of 1000 here means \u201cexplained +$1,000 of predicted salary\u201d. SHAP values are computed in a way that attempts to isolate the effects of correlation and interaction, as well.<\/p>\n<pre><code>import shap<br>\nexplainer = shap.TreeExplainer(model)<br>\nshap_values = explainer.shap_values(X, y=y.values)<br><\/code><\/pre>\n<p>SHAP values are also computed for every input, not the model as a whole, so these explanations are available for each input individually. The library can also estimate the effect of feature interactions separately from the main effect of each feature, for each prediction.<\/p>\n<h1>Explaining the Features\u2019 Effects Overall<\/h1>\n<p>Developer-level explanations can be aggregated into explanations of the features\u2019 effects on salary over the whole data set by simply averaging their absolute values. SHAP\u2019s assessment of the overall most important features is similar:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44800\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-3.png\" alt=\"\" width=\"702\" height=\"215\"><\/p>\n<p>The SHAP values tell a similar story. 
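<\/p>
<p><i>As an aside, the aggregation behind such a summary is just a per-feature mean of absolute SHAP values. A self-contained sketch with synthetic values and hypothetical feature names:<\/i><\/p>

```python
import numpy as np

# Synthetic stand-in for a (developers x features) matrix of SHAP values,
# in dollars; the real matrix comes from explainer.shap_values(X).
rng = np.random.default_rng(0)
shap_values = rng.normal(scale=[15000, 4000, 800], size=(1000, 3))
feature_names = ['YearsCodePro', 'OrgSize', 'Gender_Man']  # hypothetical

# Global importance: mean absolute SHAP value per feature, in dollars.
mean_abs = np.abs(shap_values).mean(axis=0)
for dollars, name in sorted(zip(mean_abs, feature_names), reverse=True):
    print(f'{name}: ${dollars:,.0f}')
```

<p>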
First, SHAP is able to quantify the effect on salary in dollars, which greatly improves the interpretation of the results. Above is a plot of the <i>absolute<\/i> effect of each feature on predicted salary, averaged across developers. Years of professional coding experience still dominates, explaining on average almost $15,000 of effect on salary.<\/p>\n<h1>Examining the Effects of Gender with SHAP Values<\/h1>\n<p>We came to look specifically at the effects of gender, race, and other factors that presumably should not be predictive per se of salary at all. This example will examine the effect of gender, though this by no means suggests that it\u2019s the only, or most important, type of bias to look for.<\/p>\n<p>Gender is not binary, and the survey recognizes responses of \u201cMan\u201d, \u201cWoman\u201d, and \u201cNon-binary, genderqueer, or gender non-conforming\u201d, as well as \u201cTrans\u201d, separately. (Note that while the survey also separately records responses about sexuality, these are not considered here.) SHAP computes the effect on predicted salary for each of these. For a male developer (identifying only as male), the effect of gender is not just the effect of being male, but of not identifying as female, transgender, and so on.<\/p>\n<p>SHAP values let us read off the sum of these effects for developers identifying as each of the four categories:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44801\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-4.png\" alt=\"\" width=\"700\" height=\"116\"><\/p>\n<p>While male developers\u2019 gender explains a modest -$230 to +$890 with a mean of about $225, for female developers the range is wider, from about -$4,260 to -$690 with a mean of -$1,320. 
The results for transgender and non-binary developers are similar, though slightly less negative.<\/p>\n<p>When evaluating what this means below, it\u2019s important to recall the limitations of the data and model here:<\/p>\n<ul>\n<li>Correlation isn\u2019t causation; \u2018explaining\u2019 predicted salary suggests, but doesn\u2019t prove, that a feature directly caused salary to be higher or lower<\/li>\n<li>The model isn\u2019t perfectly accurate<\/li>\n<li>This is just one year of data, and only from US developers<\/li>\n<li>This reflects only base salary, not bonuses or stock, which can vary more widely<\/li>\n<\/ul>\n<h1>Gender and Interacting Features<\/h1>\n<p>The SHAP library offers interesting visualizations that leverage its ability to isolate the effect of feature interactions. For example, the values above suggest that developers who identify as male are predicted to earn a slightly higher salary than others, but is there more to it? A dependence plot like this one can help:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44802\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-5.png\" alt=\"\" width=\"700\" height=\"491\"><\/p>\n<p>Dots are developers. Developers at the left are those who don\u2019t identify as male; at the right are those who do, predominantly those identifying only as male. (The points are randomly spread horizontally for clarity.) The y-axis is the SHAP value, or what identifying as male or not explains about predicted salary for each developer. As above, those not identifying as male show overall negative SHAP values that vary widely, while the rest consistently show a small positive SHAP value.<\/p>\n<p>What\u2019s behind that variance? 
SHAP can select a second feature whose effect varies most given the value of, here, identifying as male or not. It selects the answer \u201cI work on what seems most important or urgent\u201d to the question \u201cHow structured or planned is your work?\u201d Among developers identifying as male, those who answered this way (red points) appear to have slightly higher SHAP values. Among the rest, the effect is more mixed but generally shows lower SHAP values.<\/p>\n<p>Interpretation is left to the reader, but perhaps: are male developers who feel empowered in this sense also enjoying slightly higher salaries, while for other developers it goes hand in hand with lower-paying roles?<\/p>\n<h1>Exploring Instances with Outsized Gender Effects<\/h1>\n<p>What about investigating the developer whose salary is most negatively affected? Just as it\u2019s possible to look at the effect of gender-related features overall, it\u2019s possible to search for the developer whose gender-related features had the <i>largest<\/i> impact on predicted salary. This person is female, and the effect is negative. 
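<\/p>
<p><i>Locating that developer amounts to summing the gender-related SHAP columns per respondent and taking the most negative total; a sketch over synthetic values, with hypothetical column names:<\/i><\/p>

```python
import numpy as np
import pandas as pd

# Synthetic per-developer SHAP values; rows are developers, columns features.
rng = np.random.default_rng(2)
shap_df = pd.DataFrame({
    'Gender_Man': rng.normal(0, 300, size=8),
    'Gender_Woman': rng.normal(-500, 1500, size=8),
    'Trans': rng.normal(0, 100, size=8),
    'YearsCodePro': rng.normal(0, 9000, size=8),
})
gender_cols = ['Gender_Man', 'Gender_Woman', 'Trans']

# Sum the gender-related effects per developer; the minimum identifies the
# person whose predicted salary is most reduced by gender-related features.
gender_effect = shap_df[gender_cols].sum(axis=1)
worst = gender_effect.idxmin()
print(worst, round(gender_effect.loc[worst]))
```

<p>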
According to the model, she is predicted to earn about $4,260 less per year because of her gender:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44803\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-6.png\" alt=\"\" width=\"700\" height=\"102\"><\/p>\n<p>The predicted salary, just over $157,000, is reasonably accurate in this case, as her actual reported salary is $150,000.<\/p>\n<p>The three most positive and negative features influencing predicted salary are that she:<\/p>\n<ul>\n<li>Has a college degree (only) (+$18,200)<\/li>\n<li>Has 10 years of professional experience (+$9,400)<\/li>\n<li>Identifies as East Asian (+$9,100)<\/li>\n<li>\u2026<\/li>\n<li>Works 40 hours per week (-$4,000)<\/li>\n<li><b>Does not identify as male (-$4,250)<\/b><\/li>\n<li>Works at a medium-sized org of 100-499 employees (-$9,700)<\/li>\n<\/ul>\n<p>Given the magnitude of the effect on predicted salary of not identifying as male, we might stop here and investigate the details of this case offline to gain a better understanding of the context around this developer, and whether her experience, or salary, or both, need a change.<\/p>\n<h1>Explaining Interactions<\/h1>\n<p>There is more detail available within that -$4,260. SHAP can break down the effects of these features into interactions. The total effect of identifying as female on the prediction can be broken down into the effect of identifying as female <i>and<\/i> being an engineering manager, <i>and<\/i> working with Windows, etc.<\/p>\n<p>The effect on predicted salary explained by the gender factors per se only adds up to about -$630. 
Rather, SHAP assigns most of the effects of gender to interactions with other features:<\/p>\n<pre><code>gender_interactions = interaction_values[gender_feature_locs].sum(axis=0)<br>\nmax_c = np.argmax(gender_interactions)<br>\nmin_c = np.argmin(gender_interactions)<br>\nprint(X.columns[max_c])<br>\nprint(gender_interactions[max_c])<br>\nprint(X.columns[min_c])<br>\nprint(gender_interactions[min_c])<br><br>\nDatabaseWorkedWith_PostgreSQL<br>\n110.64005<br>\nEthnicity_East_Asian<br>\n-1372.6714<br><\/code><\/pre>\n<p>Identifying as female and working with PostgreSQL affects predicted salary slightly positively, whereas also identifying as East Asian affects predicted salary more negatively. Interpreting these values at this level of granularity is difficult in this context, but this additional level of explanation is available.<\/p>\n<h1>Applying SHAP with Apache Spark<\/h1>\n<p>SHAP values are computed independently for each row, given the model, and so this could also have been done in parallel with Spark. 
The following example computes SHAP values in parallel and similarly locates developers with outsized gender-related SHAP values:<\/p>\n<pre><code>X_df = pruned_parsed_df.drop(\"ConvertedComp\").repartition(16)<br>\nX_columns = X_df.columns<br>\ndef add_shap(rows):<br>\n  rows_pd = pd.DataFrame(rows, columns=X_columns)<br>\n  shap_values = explainer.shap_values(rows_pd.drop([\"Respondent\"], axis=1))<br>\n  return [Row(*([int(rows_pd[\"Respondent\"][i])] + [float(f) for f in shap_values[i]])) for i in range(len(shap_values))]<br><br>\nshap_df = X_df.rdd.mapPartitions(add_shap).toDF(X_columns)<br><br>\neffects_df = (shap_df.<br>\n  withColumn(\"gender_shap\", col(\"Gender_Woman\") + col(\"Gender_Man\") + col(\"Gender_Non_binary__genderqueer__or_gender_non_conforming\") + col(\"Trans\")).<br>\n  select(\"Respondent\", \"gender_shap\"))<br>\ntop_effects_df = effects_df.filter(abs(col(\"gender_shap\")) >= 2500).orderBy(\"gender_shap\")<br><\/code><\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44804\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-7.png\" alt=\"\" width=\"449\" height=\"110\"><\/p>\n<h1>Clustering SHAP Values<\/h1>\n<p>Applying Spark is advantageous when there are a large number of predictions to assess with SHAP. Given that output, it\u2019s also possible to use Spark to cluster the results with, for example, bisecting k-means:<\/p>\n<pre><code>assembler = VectorAssembler(inputCols=[c for c in to_review_df.columns if c != \"Respondent\"],<br>\n  outputCol=\"features\")<br>\nassembled_df = assembler.transform(shap_df).cache()<br><br>\nclusterer = BisectingKMeans().setFeaturesCol(\"features\").setK(50).setMaxIter(50).setSeed(0)<br>\ncluster_model = clusterer.fit(assembled_df)<br>\ntransformed_df = cluster_model.transform(assembled_df).select(\"Respondent\", \"prediction\")<br><\/code><\/pre>\n<p>The cluster whose total gender-related SHAP effects are most negative might bear some further investigation. 
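<\/p>
<p><i>Which cluster that is can be found by totaling the gender-related SHAP values within each cluster; a pandas sketch over synthetic assignments (in this analysis, that cluster happens to be number 5):<\/i><\/p>

```python
import pandas as pd

# Synthetic per-respondent gender SHAP totals and cluster assignments,
# standing in for effects_df joined with the clustering output.
df = pd.DataFrame({
    'Respondent': [1, 2, 3, 4, 5, 6],
    'gender_shap': [-3000.0, 250.0, -2800.0, 100.0, -50.0, 300.0],
    'prediction': [5, 1, 5, 1, 2, 2],
})

# Total gender-related effect per cluster; the most negative cluster is the
# one to examine further.
totals = df.groupby('prediction')['gender_shap'].sum()
worst_cluster = totals.idxmin()
print(worst_cluster)  # -> 5, the cluster totaling -5800
```

<p>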
What are the SHAP values of those respondents in the cluster? What do the members of the cluster look like with respect to the overall developer population?<\/p>\n<pre><code>min_shap_cluster_df = (transformed_df.filter(\"prediction = 5\").<br>\n  join(effects_df, \"Respondent\").<br>\n  join(X_df, \"Respondent\").<br>\n  select(gender_cols).groupBy(gender_cols).count().orderBy(gender_cols))<br>\nall_shap_df = X_df.select(gender_cols).groupBy(gender_cols).count().orderBy(gender_cols)<br>\nexpected_ratio = transformed_df.filter(\"prediction = 5\").count() \/ X_df.count()<br>\ndisplay(min_shap_cluster_df.join(all_shap_df, on=gender_cols).<br>\n  withColumn(\"ratio\", (min_shap_cluster_df[\"count\"] \/ all_shap_df[\"count\"]) \/ expected_ratio).<br>\n  orderBy(\"ratio\"))<br><\/code><\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-44805\" src=\"https:\/\/databricks.com\/wp-content\/uploads\/2019\/06\/Figure-8.png\" alt=\"\" width=\"700\" height=\"174\"><\/p>\n<p>Developers identifying as female (only) are represented in this cluster at almost 2.8x the rate of the overall developer population, for example. This isn\u2019t surprising given the earlier analysis. This cluster could be further investigated to assess other factors specific to this group that contribute to overall lower predicted salary.<\/p>\n<h1>Conclusion<\/h1>\n<p>This type of analysis with SHAP can be run for any model, and at scale too. As an analytical tool, it turns models into data detectives, surfacing individual instances whose predictions suggest that they deserve more examination. The output of SHAP is easily interpretable and yields intuitive plots that can be assessed case-by-case by business users.<\/p>\n<p>Of course, this analysis isn\u2019t limited to examining questions of gender, age, or race bias. More prosaically, it could be applied to customer churn models. 
There, the question is not just \u201cwill this customer churn?\u201d but \u201cwhy is the customer churning?\u201d A customer who is canceling due to price may be offered a discount, while one canceling due to limited usage might need an upsell.<\/p>\n<p>Finally, this analysis can be run as part of a model validation process. Model validation often focuses on the overall accuracy of a model. It should also focus on the model\u2019s \u2018reasoning\u2019, or what features contributed most to the predictions. SHAP can also help detect when too many individual <i>predictions\u2019<\/i> explanations are at odds with overall feature importance.<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.datasciencecentral.com\/xn\/detail\/6448529:BlogPost:866607\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Sean Owen Originally published on the Databricks blog with accompanying notebook. StackOverflow\u2019s annual developer survey concluded earlier this year, and they have graciously published [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2019\/08\/09\/detecting-bias-with-shap\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":2450,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[26],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2449"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=2449"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/2449\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/2450"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=2449"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=2449"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=2449"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}