{"id":3704,"date":"2020-07-26T19:00:35","date_gmt":"2020-07-26T19:00:35","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2020\/07\/26\/loocv-for-evaluating-machine-learning-algorithms\/"},"modified":"2020-07-26T19:00:35","modified_gmt":"2020-07-26T19:00:35","slug":"loocv-for-evaluating-machine-learning-algorithms","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2020\/07\/26\/loocv-for-evaluating-machine-learning-algorithms\/","title":{"rendered":"LOOCV for Evaluating Machine Learning Algorithms"},"content":{"rendered":"<p>Author: Jason Brownlee<\/p>\n<div>\n<p>The <strong>Leave-One-Out Cross-Validation<\/strong>, or <strong>LOOCV<\/strong>, procedure is used to estimate the performance of machine learning algorithms when they are used to make predictions on data not used to train the model.<\/p>\n<p>It is a computationally expensive procedure to perform, although it results in a reliable and unbiased estimate of model performance. Although simple to use and no configuration to specify, there are times when the procedure should not be used, such as when you have a very large dataset or a computationally expensive model to evaluate.<\/p>\n<p>In this tutorial, you will discover how to evaluate machine learning models using leave-one-out cross-validation.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>The leave-one-out cross-validation procedure is appropriate when you have a small dataset or when an accurate estimate of model performance is more important than the computational cost of the method.<\/li>\n<li>How to use the scikit-learn machine learning library to perform the leave-one-out cross-validation procedure.<\/li>\n<li>How to evaluate machine learning algorithms for classification and regression using leave-one-out cross-validation.<\/li>\n<\/ul>\n<p>Let&rsquo;s get started.<\/p>\n<div id=\"attachment_10614\" style=\"width: 810px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" 
decoding=\"async\" aria-describedby=\"caption-attachment-10614\" class=\"size-full wp-image-10614\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/07\/LOOCV-for-Evaluating-Machine-Learning-Algorithms.jpg\" alt=\"LOOCV for Evaluating Machine Learning Algorithms\" width=\"800\" height=\"600\" srcset=\"http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2020\/07\/LOOCV-for-Evaluating-Machine-Learning-Algorithms.jpg 800w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2020\/07\/LOOCV-for-Evaluating-Machine-Learning-Algorithms-300x225.jpg 300w, http:\/\/3qeqpr26caki16dnhd19sv6by6v.wpengine.netdna-cdn.com\/wp-content\/uploads\/2020\/07\/LOOCV-for-Evaluating-Machine-Learning-Algorithms-768x576.jpg 768w\" sizes=\"(max-width: 800px) 100vw, 800px\"><\/p>\n<p id=\"caption-attachment-10614\" class=\"wp-caption-text\">LOOCV for Evaluating Machine Learning Algorithms<br \/>Photo by <a href=\"https:\/\/flickr.com\/photos\/smilygrl\/4771680957\/\">Heather Harvey<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2>Tutorial Overview<\/h2>\n<p>This tutorial is divided into three parts; they are:<\/p>\n<ol>\n<li>LOOCV Model Evaluation<\/li>\n<li>LOOCV Procedure in Scikit-Learn<\/li>\n<li>LOOCV to Evaluate Machine Learning Models\n<ol>\n<li>LOOCV for Classification<\/li>\n<li>LOOCV for Regression<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<h2>LOOCV Model Evaluation<\/h2>\n<p>Cross-validation, or k-fold cross-validation, is a procedure used to estimate the performance of a machine learning algorithm when making predictions on data not used during the training of the model.<\/p>\n<p>The cross-validation has a single hyperparameter &ldquo;<em>k<\/em>&rdquo; that controls the number of subsets that a dataset is split into. 
Once split, each subset is given the opportunity to be used as a test set while all other subsets together are used as a training dataset.<\/p>\n<p>This means that k-fold cross-validation involves fitting and evaluating <em>k<\/em> models. This, in turn, provides <em>k<\/em> estimates of a model&rsquo;s performance on the dataset, which can be reported using summary statistics such as the mean and standard deviation. This score can then be used to compare and ultimately select a model and configuration to use as the &ldquo;<em>final model<\/em>&rdquo; for a dataset.<\/p>\n<p>Typical values for k are k=3, k=5, and k=10, with 10 representing the most common value. This is because extensive testing has shown that 10-fold cross-validation provides a good balance of low computational cost and low bias in the estimate of model performance compared to other values of <em>k<\/em> and to a single train-test split.<\/p>\n<p>For more on k-fold cross-validation, see the tutorial:<\/p>\n<ul>\n<li><a href=\"https:\/\/machinelearningmastery.com\/k-fold-cross-validation\/\">A Gentle Introduction to k-fold Cross-Validation<\/a><\/li>\n<\/ul>\n<p>Leave-one-out cross-validation, or LOOCV, is a configuration of k-fold cross-validation where <em>k<\/em> is set to the number of examples in the dataset.<\/p>\n<p>LOOCV is an extreme version of k-fold cross-validation that has the maximum computational cost. 
It requires one model to be created and evaluated for each example in the training dataset.<\/p>\n<p>The benefit of fitting and evaluating so many models is a more robust estimate of model performance, as each row of data is given an opportunity to represent the entirety of the test dataset.<\/p>\n<p>Given the computational cost, LOOCV is not appropriate for very large datasets, such as those with more than tens or hundreds of thousands of examples, or for models that are costly to fit, such as neural networks.<\/p>\n<ul>\n<li><strong>Don&rsquo;t Use LOOCV<\/strong>: Large datasets or costly models to fit.<\/li>\n<\/ul>\n<p>Given the improved estimate of model performance, LOOCV is appropriate when an accurate estimate of model performance is critical. This is particularly the case when the dataset is small, such as fewer than thousands of examples, where models can overfit during training and estimates of model performance can be biased.<\/p>\n<p>Further, given that no random sampling of the training dataset is used, this estimation procedure is deterministic, unlike train-test splits and other k-fold cross-validation configurations that provide a stochastic estimate of model performance.<\/p>\n<ul>\n<li><strong>Use LOOCV<\/strong>: Small datasets or when estimated model performance is critical.<\/li>\n<\/ul>\n<p>Once models have been evaluated using LOOCV and a final model and configuration chosen, the final model is then fit on all available data and used to make predictions on new data.<\/p>\n<p>Now that we are familiar with the LOOCV procedure, let&rsquo;s look at how we can use the method in Python.<\/p>\n<h2>LOOCV Procedure in Scikit-Learn<\/h2>\n<p>The scikit-learn Python machine learning library provides an implementation of the LOOCV procedure via the <a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.model_selection.LeaveOneOut.html\">LeaveOneOut class<\/a>.<\/p>\n<p>The method has no configuration; therefore, no arguments are required to create an instance of the 
class.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# create loocv procedure\r\ncv = LeaveOneOut()<\/pre>\n<p>Once created, the <em>split()<\/em> function can be called and provided the dataset to enumerate.<\/p>\n<p>Each iteration will return the row indices that can be used for the train and test sets from the provided dataset.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\nfor train_ix, test_ix in cv.split(X):\r\n\t...<\/pre>\n<p>These indices can be used on the input (<em>X<\/em>) and output (<em>y<\/em>) columns of the dataset array to split the dataset.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# split data\r\nX_train, X_test = X[train_ix, :], X[test_ix, :]\r\ny_train, y_test = y[train_ix], y[test_ix]<\/pre>\n<p>The training set can be used to fit a model and the test set can be used to evaluate it by first making a prediction and calculating a performance metric on the predicted values versus the expected values.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# fit model\r\nmodel = RandomForestClassifier(random_state=1)\r\nmodel.fit(X_train, y_train)\r\n# evaluate model\r\nyhat = model.predict(X_test)<\/pre>\n<p>Scores can be saved from each evaluation and a final mean estimate of model performance can be presented.<\/p>\n<p>We can tie this together and demonstrate how to use LOOCV to evaluate a <a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.ensemble.RandomForestClassifier.html\">RandomForestClassifier<\/a> model for a synthetic binary classification dataset created with the <a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.datasets.make_blobs.html\">make_blobs() function<\/a>.<\/p>\n<p>The complete example is listed below.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># loocv to manually evaluate the performance of a random forest classifier\r\nfrom sklearn.datasets import make_blobs\r\nfrom sklearn.model_selection 
import LeaveOneOut\r\nfrom sklearn.ensemble import RandomForestClassifier\r\nfrom sklearn.metrics import accuracy_score\r\n# create dataset\r\nX, y = make_blobs(n_samples=100, random_state=1)\r\n# create loocv procedure\r\ncv = LeaveOneOut()\r\n# enumerate splits\r\ny_true, y_pred = list(), list()\r\nfor train_ix, test_ix in cv.split(X):\r\n\t# split data\r\n\tX_train, X_test = X[train_ix, :], X[test_ix, :]\r\n\ty_train, y_test = y[train_ix], y[test_ix]\r\n\t# fit model\r\n\tmodel = RandomForestClassifier(random_state=1)\r\n\tmodel.fit(X_train, y_train)\r\n\t# evaluate model\r\n\tyhat = model.predict(X_test)\r\n\t# store\r\n\ty_true.append(y_test[0])\r\n\ty_pred.append(yhat[0])\r\n# calculate accuracy\r\nacc = accuracy_score(y_true, y_pred)\r\nprint('Accuracy: %.3f' % acc)<\/pre>\n<p>Running the example manually estimates the performance of the random forest classifier on the synthetic dataset.<\/p>\n<p>Given that the dataset has 100 examples, it means that 100 train\/test splits of the dataset were created, with each single row of the dataset given an opportunity to be used as the test set. Similarly, 100 models are created and evaluated.<\/p>\n<p>The classification accuracy across all predictions is then reported, in this case as 99 percent.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">Accuracy: 0.990<\/pre>\n<p>A downside of enumerating the folds manually is that it is slow and involves a lot of code that could introduce bugs.<\/p>\n<p>An alternative to evaluating a model using LOOCV is to use the <a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.model_selection.cross_val_score.html\">cross_val_score() function<\/a>.<\/p>\n<p>This function takes the model, the dataset, and the instantiated LOOCV object set via the &ldquo;<em>cv<\/em>&rdquo; argument. 
A sample of accuracy scores is then returned, which can be summarized by calculating the mean and standard deviation.<\/p>\n<p>We can also set the &ldquo;<em>n_jobs<\/em>&rdquo; argument to -1 to use all CPU cores, greatly decreasing the time required to fit and evaluate so many models.<\/p>\n<p>The example below demonstrates evaluating the <em>RandomForestClassifier<\/em> using LOOCV on the same synthetic dataset with the <em>cross_val_score()<\/em> function.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># loocv to automatically evaluate the performance of a random forest classifier\r\nfrom numpy import mean\r\nfrom numpy import std\r\nfrom sklearn.datasets import make_blobs\r\nfrom sklearn.model_selection import LeaveOneOut\r\nfrom sklearn.model_selection import cross_val_score\r\nfrom sklearn.ensemble import RandomForestClassifier\r\n# create dataset\r\nX, y = make_blobs(n_samples=100, random_state=1)\r\n# create loocv procedure\r\ncv = LeaveOneOut()\r\n# create model\r\nmodel = RandomForestClassifier(random_state=1)\r\n# evaluate model\r\nscores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)\r\n# report performance\r\nprint('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))<\/pre>\n<p>Running the example automatically estimates the performance of the random forest classifier on the synthetic dataset.<\/p>\n<p>The mean classification accuracy across all folds matches our earlier manual estimate.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">Accuracy: 0.990 (0.099)<\/pre>\n<p>Now that we are familiar with how to use the LeaveOneOut class, let&rsquo;s look at how we can use it to evaluate a machine learning model on real datasets.<\/p>\n<h2>LOOCV to Evaluate Machine Learning Models<\/h2>\n<p>In this section, we will explore using the LOOCV procedure to evaluate machine learning models on standard classification and regression predictive modeling datasets.<\/p>\n<h3>LOOCV for Classification<\/h3>\n<p>We 
will demonstrate how to use LOOCV to evaluate a random forest algorithm on the sonar dataset.<\/p>\n<p>The sonar dataset is a standard machine learning dataset comprising 208 rows of data with 60 numerical input variables and a target variable with two class values, e.g. binary classification.<\/p>\n<p>The dataset involves predicting whether sonar returns indicate a rock or simulated mine.<\/p>\n<ul>\n<li><a href=\"https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv\">Sonar Dataset (sonar.csv)<\/a><\/li>\n<li><a href=\"https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.names\">Sonar Dataset Description (sonar.names)<\/a><\/li>\n<\/ul>\n<p>No need to download the dataset; we will download it automatically as part of our worked examples.<\/p>\n<p>The example below downloads the dataset and summarizes its shape.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># summarize the sonar dataset\r\nfrom pandas import read_csv\r\n# load dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv'\r\ndataframe = read_csv(url, header=None)\r\n# split into input and output elements\r\ndata = dataframe.values\r\nX, y = data[:, :-1], data[:, -1]\r\nprint(X.shape, y.shape)<\/pre>\n<p>Running the example downloads the dataset and splits it into input and output elements. 
As expected, we can see that there are 208 rows of data with 60 input variables.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">(208, 60) (208,)<\/pre>\n<p>We can now evaluate a model using LOOCV.<\/p>\n<p>First, the loaded dataset must be split into input and output components.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# split into inputs and outputs\r\nX, y = data[:, :-1], data[:, -1]\r\nprint(X.shape, y.shape)<\/pre>\n<p>Next, we define the LOOCV procedure.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# create loocv procedure\r\ncv = LeaveOneOut()<\/pre>\n<p>We can then define the model to evaluate.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# create model\r\nmodel = RandomForestClassifier(random_state=1)<\/pre>\n<p>Then use the <em>cross_val_score()<\/em> function to enumerate the folds, fit models, then make and evaluate predictions. We can then report the mean and standard deviation of model performance.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# evaluate model\r\nscores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)\r\n# report performance\r\nprint('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))<\/pre>\n<p>Tying this together, the complete example is listed below.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># loocv evaluate random forest on the sonar dataset\r\nfrom numpy import mean\r\nfrom numpy import std\r\nfrom pandas import read_csv\r\nfrom sklearn.model_selection import LeaveOneOut\r\nfrom sklearn.model_selection import cross_val_score\r\nfrom sklearn.ensemble import RandomForestClassifier\r\n# load dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/sonar.csv'\r\ndataframe = read_csv(url, header=None)\r\ndata = dataframe.values\r\n# split into inputs and outputs\r\nX, y = data[:, :-1], data[:, -1]\r\nprint(X.shape, y.shape)\r\n# create loocv procedure\r\ncv = LeaveOneOut()\r\n# 
create model\r\nmodel = RandomForestClassifier(random_state=1)\r\n# evaluate model\r\nscores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)\r\n# report performance\r\nprint('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))<\/pre>\n<p>Running the example first loads the dataset and confirms the number of rows in the input and output elements.<\/p>\n<p>The model is then evaluated using LOOCV and the estimated performance when making predictions on new data has an accuracy of about 82.2 percent.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">(208, 60) (208,)\r\nAccuracy: 0.822 (0.382)<\/pre>\n<\/p>\n<h3>LOOCV for Regression<\/h3>\n<p>We will demonstrate how to use LOOCV to evaluate a random forest algorithm on the housing dataset.<\/p>\n<p>The housing dataset is a standard machine learning dataset comprising 506 rows of data with 13 numerical input variables and a numerical target variable.<\/p>\n<p>The dataset involves predicting the house price given details of the house&rsquo;s suburb in the American city of Boston.<\/p>\n<ul>\n<li><a href=\"https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/housing.csv\">Housing Dataset (housing.csv)<\/a><\/li>\n<li><a href=\"https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/housing.names\">Housing Description (housing.names)<\/a><\/li>\n<\/ul>\n<p>No need to download the dataset; we will download it automatically as part of our worked examples.<\/p>\n<p>The example below downloads and loads the dataset as a Pandas DataFrame and summarizes the shape of the dataset.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># load and summarize the housing dataset\r\nfrom pandas import read_csv\r\n# load dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/housing.csv'\r\ndataframe = read_csv(url, header=None)\r\n# summarize shape\r\nprint(dataframe.shape)<\/pre>\n<p>Running the example confirms the 506 rows of data and 13 input 
variables plus a single numeric target variable (14 columns in total).<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">(506, 14)<\/pre>\n<p>We can now evaluate a model using LOOCV.<\/p>\n<p>First, the loaded dataset must be split into input and output components.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# split into inputs and outputs\r\nX, y = data[:, :-1], data[:, -1]\r\nprint(X.shape, y.shape)<\/pre>\n<p>Next, we define the LOOCV procedure.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# create loocv procedure\r\ncv = LeaveOneOut()<\/pre>\n<p>We can then define the model to evaluate.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# create model\r\nmodel = RandomForestRegressor(random_state=1)<\/pre>\n<p>Then use the <em>cross_val_score()<\/em> function to enumerate the folds, fit models, then make and evaluate predictions. We can then report the mean and standard deviation of model performance.<\/p>\n<p>In this case, we use the mean absolute error (MAE), a performance metric appropriate for regression.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# evaluate model\r\nscores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)\r\n# force positive\r\nscores = absolute(scores)\r\n# report performance\r\nprint('MAE: %.3f (%.3f)' % (mean(scores), std(scores)))<\/pre>\n<p>Tying this together, the complete example is listed below.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># loocv evaluate random forest on the housing dataset\r\nfrom numpy import mean\r\nfrom numpy import std\r\nfrom numpy import absolute\r\nfrom pandas import read_csv\r\nfrom sklearn.model_selection import LeaveOneOut\r\nfrom sklearn.model_selection import cross_val_score\r\nfrom sklearn.ensemble import RandomForestRegressor\r\n# load dataset\r\nurl = 'https:\/\/raw.githubusercontent.com\/jbrownlee\/Datasets\/master\/housing.csv'\r\ndataframe = read_csv(url, header=None)\r\ndata = 
dataframe.values\r\n# split into inputs and outputs\r\nX, y = data[:, :-1], data[:, -1]\r\nprint(X.shape, y.shape)\r\n# create loocv procedure\r\ncv = LeaveOneOut()\r\n# create model\r\nmodel = RandomForestRegressor(random_state=1)\r\n# evaluate model\r\nscores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)\r\n# force positive\r\nscores = absolute(scores)\r\n# report performance\r\nprint('MAE: %.3f (%.3f)' % (mean(scores), std(scores)))<\/pre>\n<p>Running the example first loads the dataset and confirms the number of rows in the input and output elements.<\/p>\n<p>The model is evaluated using LOOCV and the performance of the model when making predictions on new data is a mean absolute error of about 2.180 (thousands of dollars).<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">(506, 13) (506,)\r\nMAE: 2.180 (2.346)<\/pre>\n<\/p>\n<h2>Further Reading<\/h2>\n<p>This section provides more resources on the topic if you are looking to go deeper.<\/p>\n<h3>Tutorials<\/h3>\n<ul>\n<li><a href=\"https:\/\/machinelearningmastery.com\/k-fold-cross-validation\/\">A Gentle Introduction to k-fold Cross-Validation<\/a><\/li>\n<\/ul>\n<h3>APIs<\/h3>\n<ul>\n<li><a href=\"https:\/\/scikit-learn.org\/stable\/modules\/cross_validation.html\">Cross-validation: evaluating estimator performance, scikit-learn<\/a>.<\/li>\n<li><a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.model_selection.LeaveOneOut.html\">sklearn.model_selection.LeaveOneOut API<\/a>.<\/li>\n<li><a href=\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.model_selection.cross_val_score.html\">sklearn.model_selection.cross_val_score API<\/a>.<\/li>\n<\/ul>\n<h2>Summary<\/h2>\n<p>In this tutorial, you discovered how to evaluate machine learning models using leave-one-out cross-validation.<\/p>\n<p>Specifically, you learned:<\/p>\n<ul>\n<li>The leave-one-out cross-validation procedure is appropriate when you have a small dataset or when an 
accurate estimate of model performance is more important than the computational cost of the method.<\/li>\n<li>How to use the scikit-learn machine learning library to perform the leave-one-out cross-validation procedure.<\/li>\n<li>How to evaluate machine learning algorithms for classification and regression using leave-one-out cross-validation.<\/li>\n<\/ul>\n<p><strong>Do you have any questions?<\/strong><br \/>\nAsk your questions in the comments below and I will do my best to answer.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/loocv-for-evaluating-machine-learning-algorithms\/\">LOOCV for Evaluating Machine Learning Algorithms<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/\">Machine Learning Mastery<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/machinelearningmastery.com\/loocv-for-evaluating-machine-learning-algorithms\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Jason Brownlee The Leave-One-Out Cross-Validation, or LOOCV, procedure is used to estimate the performance of machine learning algorithms when they are used to make [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2020\/07\/26\/loocv-for-evaluating-machine-learning-algorithms\/\">Read 
More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":3705,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/3704"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=3704"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/3704\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/3705"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=3704"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=3704"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=3704"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}