{"id":4472,"date":"2021-03-11T06:30:06","date_gmt":"2021-03-11T06:30:06","guid":{"rendered":"https:\/\/www.aiproblog.com\/index.php\/2021\/03\/11\/basin-hopping-optimization-in-python\/"},"modified":"2021-03-11T06:30:06","modified_gmt":"2021-03-11T06:30:06","slug":"basin-hopping-optimization-in-python","status":"publish","type":"post","link":"https:\/\/www.aiproblog.com\/index.php\/2021\/03\/11\/basin-hopping-optimization-in-python\/","title":{"rendered":"Basin Hopping Optimization in Python"},"content":{"rendered":"<p>Author: Jason Brownlee<\/p>\n<div>\n<p><strong>Basin hopping<\/strong> is a global optimization algorithm.<\/p>\n<p>It was developed to solve problems in chemical physics, although it is an effective algorithm suited for nonlinear objective functions with multiple optima.<\/p>\n<p>In this tutorial, you will discover the basin hopping global optimization algorithm.<\/p>\n<p>After completing this tutorial, you will know:<\/p>\n<ul>\n<li>Basin hopping optimization is a global optimization algorithm that uses random perturbations to jump basins, and a local search algorithm to optimize each basin.<\/li>\n<li>How to use the basin hopping optimization algorithm API in Python.<\/li>\n<li>Examples of using basin hopping to solve global optimization problems with multiple optima.<\/li>\n<\/ul>\n<p>Let\u2019s get started.<\/p>\n<div id=\"attachment_11704\" style=\"width: 809px\" class=\"wp-caption aligncenter\">\n<img decoding=\"async\" aria-describedby=\"caption-attachment-11704\" loading=\"lazy\" class=\"size-full wp-image-11704\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/01\/Basin-Hopping-Optimization-in-Python.jpg\" alt=\"Basin Hopping Optimization in Python\" width=\"799\" height=\"533\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/01\/Basin-Hopping-Optimization-in-Python.jpg 799w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/01\/Basin-Hopping-Optimization-in-Python-300x200.jpg 300w, 
https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2021\/01\/Basin-Hopping-Optimization-in-Python-768x512.jpg 768w\" sizes=\"(max-width: 799px) 100vw, 799px\"><\/p>\n<p id=\"caption-attachment-11704\" class=\"wp-caption-text\">Basin Hopping Optimization in Python<br \/>Photo by <a href=\"https:\/\/www.flickr.com\/photos\/pedrosz\/33931224155\/\">Pedro Szekely<\/a>, some rights reserved.<\/p>\n<\/div>\n<h2>Tutorial Overview<\/h2>\n<p>This tutorial is divided into three parts; they are:<\/p>\n<ol>\n<li>Basin Hopping Optimization<\/li>\n<li>Basin Hopping API<\/li>\n<li>Basin Hopping Examples\n<ol>\n<li>Multimodal Optimization With Local Optima<\/li>\n<li>Multimodal Optimization With Multiple Global Optima<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<h2>Basin Hopping Optimization<\/h2>\n<p>Basin Hopping is a global optimization algorithm developed for use in the field of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Chemical_physics\">chemical physics<\/a>.<\/p>\n<blockquote>\n<p>Basin-Hopping (BH) or Monte-Carlo Minimization (MCM) is so far the most reliable algorithms in chemical physics to search for the lowest-energy structure of atomic clusters and macromolecular systems.<\/p>\n<\/blockquote>\n<p>\u2014 <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0009261404016082\">Basin Hopping With Occasional Jumping<\/a>, 2004.<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Local_search_(optimization)\">Local optimization<\/a> refers to optimization algorithms intended to locate an optima for an objective function, or to operate in a region where an optima is believed to be present. 
In contrast, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Global_optimization\">global optimization<\/a> algorithms are intended to locate the single global optima among potentially multiple local (non-global) optima.<\/p>\n<p>Basin Hopping was described by David Wales and Jonathan Doye in their 1997 paper titled \u201c<a href=\"https:\/\/pubs.acs.org\/doi\/abs\/10.1021\/jp970984n\">Global Optimization by Basin-Hopping and the Lowest Energy Structures of Lennard-Jones Clusters Containing up to 110 Atoms<\/a>.\u201d<\/p>\n<p>The algorithm involves cycling between two steps: a perturbation of a good candidate solution and the application of a local search to the perturbed solution.<\/p>\n<blockquote>\n<p>[Basin hopping] transforms the complex energy landscape into a collection of basins, and explores them by hopping, which is achieved by random Monte Carlo moves and acceptance\/rejection using the Metropolis criterion.<\/p>\n<\/blockquote>\n<p>\u2014 <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0009261404016082\">Basin Hopping With Occasional Jumping<\/a>, 2004.<\/p>\n<p>The perturbation allows the search algorithm to jump to new regions of the search space and potentially locate a new basin leading to a different optima, hence \u201c<em>basin hopping<\/em>\u201d in the technique\u2019s name.<\/p>\n<p>The local search allows the algorithm to traverse the new basin to the optima.<\/p>\n<p>The new optima may be kept as the basis for new random perturbations; otherwise, it is discarded. The decision to keep the new solution is controlled by a stochastic decision function with a \u201c<em>temperature<\/em>\u201d variable, much like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Simulated_annealing\">simulated annealing<\/a>.<\/p>\n<p>Temperature is adjusted as a function of the number of iterations of the algorithm. 
This allows arbitrary solutions to be accepted early in the run when the temperature is high, and enforces a stricter policy of only accepting better-quality solutions later in the search when the temperature is low.<\/p>\n<p>In this way, the algorithm is much like an iterated local search with different (perturbed) starting points.<\/p>\n<p>The algorithm runs for a specified number of iterations or function evaluations and can be run multiple times to increase confidence that the global optima was located or that a relatively good solution was located.<\/p>\n<p>Now that we are familiar with the basin hopping algorithm from a high level, let\u2019s look at the API for basin hopping in Python.<\/p>\n<h2>Basin Hopping API<\/h2>\n<p>Basin hopping is available in Python via the <a href=\"https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.optimize.basinhopping.html\">basinhopping() SciPy function<\/a>.<\/p>\n<p>The function takes the objective function to be minimized and the initial starting point.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# perform the basin hopping search\r\nresult = basinhopping(objective, pt)<\/pre>\n<p>Another important hyperparameter is the number of iterations to run the search, set via the \u201c<em>niter<\/em>\u201d argument, which defaults to 100.<\/p>\n<p>This can be set to thousands of iterations or more.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# perform the basin hopping search\r\nresult = basinhopping(objective, pt, niter=10000)<\/pre>\n<p>The amount of perturbation applied to the candidate solution can be controlled via the \u201c<em>stepsize<\/em>\u201d argument, which defines the maximum amount of change applied in the context of the bounds of the problem domain. 
By default, this is set to 0.5, but it should be set to something reasonable for the domain that allows the search to find a new basin.<\/p>\n<p>For example, if the reasonable bounds of a search space were -100 to 100, then perhaps a step size of 5.0 or 10.0 units would be appropriate (e.g. 2.5% or 5% of the domain).<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# perform the basin hopping search\r\nresult = basinhopping(objective, pt, stepsize=10.0)<\/pre>\n<p>By default, the local search algorithm used is the \u201c<em>L-BFGS-B<\/em>\u201d algorithm.<\/p>\n<p>This can be changed by setting the \u201c<em>minimizer_kwargs<\/em>\u201d argument to a dictionary with a key of \u201c<em>method<\/em>\u201d and the value as the name of the local search algorithm to use, such as \u201c<em>nelder-mead<\/em>.\u201d Any of the local search algorithms provided by the SciPy library can be used.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# perform the basin hopping search\r\nresult = basinhopping(objective, pt, minimizer_kwargs={'method':'nelder-mead'})<\/pre>\n<p>The result of the search is an <a href=\"https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.optimize.OptimizeResult.html\">OptimizeResult object<\/a> where properties can be accessed like a dictionary. 
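<\/p>\n<p>Before looking at the result keys in detail, the pieces above can be tied together in a minimal sketch. The bowl-shaped objective function below is purely illustrative (it is not one of this tutorial\u2019s test functions), chosen so the expected optimum at [0, 0] is unambiguous:<\/p>

```python
# minimal end-to-end sketch of the basinhopping() API
# note: the simple convex bowl objective here is illustrative only
from scipy.optimize import basinhopping
from numpy.random import rand

# objective function: a convex bowl with a single optimum at [0, 0]
def objective(v):
	x, y = v
	return x**2.0 + y**2.0

# define range for input
r_min, r_max = -5.0, 5.0
# define the starting point as a random sample from the domain
pt = r_min + rand(2) * (r_max - r_min)
# perform the basin hopping search with default settings
result = basinhopping(objective, pt)
# access the OptimizeResult like a dictionary
print('Status : %s' % result['message'])
print('Total Evaluations: %d' % result['nfev'])
print('Solution: %s' % result['x'])
```

<p>Run alone, this prints the termination message, the evaluation count, and the located optimum near [0, 0].<\/p>\n<p>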
The success (or not) of the search can be accessed via the \u2018<em>success<\/em>\u2018 or \u2018<em>message<\/em>\u2018 key.<\/p>\n<p>The total number of function evaluations can be accessed via \u2018<em>nfev<\/em>\u2018 and the optimal input found for the search is accessible via the \u2018<em>x<\/em>\u2018 key.<\/p>\n<p>Now that we are familiar with the basin hopping API in Python, let\u2019s look at some worked examples.<\/p>\n<h2>Basin Hopping Examples<\/h2>\n<p>In this section, we will look at some examples of using the basin hopping algorithm on multi-modal objective functions.<\/p>\n<p>Multimodal objective functions are those that have multiple optima, such as a global optima and many local optima, or multiple global optima with the same objective function output.<\/p>\n<p>We will look at examples of basin hopping on both functions.<\/p>\n<h3>Multimodal Optimization With Local Optima<\/h3>\n<p>The <a href=\"https:\/\/en.wikipedia.org\/wiki\/Ackley_function\">Ackley function<\/a> is an example of an objective function that has a single global optima and multiple local optima in which a local search might get stuck.<\/p>\n<p>As such, a global optimization technique is required. 
It is a two-dimensional objective function that has a global optima at [0,0], which evaluates to 0.0.<\/p>\n<p>The example below implements the Ackley function and creates a three-dimensional surface plot showing the global optima and multiple local optima.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># ackley multimodal function\r\nfrom numpy import arange\r\nfrom numpy import exp\r\nfrom numpy import sqrt\r\nfrom numpy import cos\r\nfrom numpy import e\r\nfrom numpy import pi\r\nfrom numpy import meshgrid\r\nfrom matplotlib import pyplot\r\nfrom mpl_toolkits.mplot3d import Axes3D\r\n\r\n# objective function\r\ndef objective(x, y):\r\n\treturn -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20\r\n\r\n# define range for input\r\nr_min, r_max = -5.0, 5.0\r\n# sample input range uniformly at 0.1 increments\r\nxaxis = arange(r_min, r_max, 0.1)\r\nyaxis = arange(r_min, r_max, 0.1)\r\n# create a mesh from the axis\r\nx, y = meshgrid(xaxis, yaxis)\r\n# compute targets\r\nresults = objective(x, y)\r\n# create a surface plot with the jet color scheme\r\nfigure = pyplot.figure()\r\naxis = figure.add_subplot(111, projection='3d')\r\naxis.plot_surface(x, y, results, cmap='jet')\r\n# show the plot\r\npyplot.show()<\/pre>\n<p>Running the example creates the surface plot of the Ackley function showing the vast number of local optima.<\/p>\n<div id=\"attachment_11702\" style=\"width: 810px\" class=\"wp-caption aligncenter\">\n<img decoding=\"async\" aria-describedby=\"caption-attachment-11702\" loading=\"lazy\" class=\"size-full wp-image-11702\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Ackley-Multimodal-Function-1.png\" alt=\"3D Surface Plot of the Ackley Multimodal Function\" width=\"800\" height=\"600\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Ackley-Multimodal-Function-1.png 800w, 
https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Ackley-Multimodal-Function-1-300x225.png 300w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Ackley-Multimodal-Function-1-768x576.png 768w\" sizes=\"(max-width: 800px) 100vw, 800px\"><\/p>\n<p id=\"caption-attachment-11702\" class=\"wp-caption-text\">3D Surface Plot of the Ackley Multimodal Function<\/p>\n<\/div>\n<p>We can apply the basin hopping algorithm to the Ackley objective function.<\/p>\n<p>In this case, we will start the search using a random point drawn from the input domain between -5 and 5.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# define the starting point as a random sample from the domain\r\npt = r_min + rand(2) * (r_max - r_min)<\/pre>\n<p>We will use a step size of 0.5, 200 iterations, and the default local search algorithm. This configuration was chosen after a little trial and error.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# perform the basin hopping search\r\nresult = basinhopping(objective, pt, stepsize=0.5, niter=200)<\/pre>\n<p>After the search is complete, it will report the status of the search and the number of iterations performed as well as the best result found with its evaluation.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">...\r\n# summarize the result\r\nprint('Status : %s' % result['message'])\r\nprint('Total Evaluations: %d' % result['nfev'])\r\n# evaluate solution\r\nsolution = result['x']\r\nevaluation = objective(solution)\r\nprint('Solution: f(%s) = %.5f' % (solution, evaluation))<\/pre>\n<p>Tying this together, the complete example of applying basin hopping to the Ackley objective function is listed below.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># basin hopping global optimization for the ackley multimodal objective function\r\nfrom scipy.optimize import basinhopping\r\nfrom numpy.random import rand\r\nfrom 
numpy import exp\r\nfrom numpy import sqrt\r\nfrom numpy import cos\r\nfrom numpy import e\r\nfrom numpy import pi\r\n\r\n# objective function\r\ndef objective(v):\r\n\tx, y = v\r\n\treturn -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20\r\n\r\n# define range for input\r\nr_min, r_max = -5.0, 5.0\r\n# define the starting point as a random sample from the domain\r\npt = r_min + rand(2) * (r_max - r_min)\r\n# perform the basin hopping search\r\nresult = basinhopping(objective, pt, stepsize=0.5, niter=200)\r\n# summarize the result\r\nprint('Status : %s' % result['message'])\r\nprint('Total Evaluations: %d' % result['nfev'])\r\n# evaluate solution\r\nsolution = result['x']\r\nevaluation = objective(solution)\r\nprint('Solution: f(%s) = %.5f' % (solution, evaluation))<\/pre>\n<p>Running the example executes the optimization, then reports the results.<\/p>\n<p><strong>Note<\/strong>: Your <a href=\"https:\/\/machinelearningmastery.com\/different-results-each-time-in-machine-learning\/\">results may vary<\/a> given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. 
Consider running the example a few times and comparing the average outcome.<\/p>\n<p>In this case, we can see that the algorithm located the optima with inputs very close to zero and an objective function evaluation that is practically zero.<\/p>\n<p>We can see that 200 iterations of the algorithm resulted in 86,020 function evaluations.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">Status: ['requested number of basinhopping iterations completed successfully']\r\nTotal Evaluations: 86020\r\nSolution: f([ 5.29778873e-10 -2.29022817e-10]) = 0.00000<\/pre>\n<h3>Multimodal Optimization With Multiple Global Optima<\/h3>\n<p>The <a href=\"https:\/\/en.wikipedia.org\/wiki\/Himmelblau%27s_function\">Himmelblau<\/a> function is an example of an objective function that has multiple global optima.<\/p>\n<p>Specifically, it has four optima and each has the same objective function evaluation. It is a two-dimensional objective function that has global optima at [3.0, 2.0], [-2.805118, 3.131312], [-3.779310, -3.283186], [3.584428, -1.848126].<\/p>\n<p>This means each run of a global optimization algorithm may find a different global optima.<\/p>\n<p>The example below implements the Himmelblau function and creates a three-dimensional surface plot to give an intuition for the objective function.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># himmelblau multimodal test function\r\nfrom numpy import arange\r\nfrom numpy import meshgrid\r\nfrom matplotlib import pyplot\r\nfrom mpl_toolkits.mplot3d import Axes3D\r\n\r\n# objective function\r\ndef objective(x, y):\r\n\treturn (x**2 + y - 11)**2 + (x + y**2 -7)**2\r\n\r\n# define range for input\r\nr_min, r_max = -5.0, 5.0\r\n# sample input range uniformly at 0.1 increments\r\nxaxis = arange(r_min, r_max, 0.1)\r\nyaxis = arange(r_min, r_max, 0.1)\r\n# create a mesh from the axis\r\nx, y = meshgrid(xaxis, yaxis)\r\n# compute targets\r\nresults = objective(x, y)\r\n# create a surface plot with the jet color 
scheme\r\nfigure = pyplot.figure()\r\naxis = figure.add_subplot(111, projection='3d')\r\naxis.plot_surface(x, y, results, cmap='jet')\r\n# show the plot\r\npyplot.show()<\/pre>\n<p>Running the example creates the surface plot of the Himmelblau function showing the four global optima as dark blue basins.<\/p>\n<div id=\"attachment_11703\" style=\"width: 810px\" class=\"wp-caption aligncenter\">\n<img decoding=\"async\" aria-describedby=\"caption-attachment-11703\" loading=\"lazy\" class=\"size-full wp-image-11703\" src=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Himmelblau-Multimodal-Function.png\" alt=\"3D Surface Plot of the Himmelblau Multimodal Function\" width=\"800\" height=\"600\" srcset=\"https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Himmelblau-Multimodal-Function.png 800w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Himmelblau-Multimodal-Function-300x225.png 300w, https:\/\/machinelearningmastery.com\/wp-content\/uploads\/2020\/09\/3D-Surface-Plot-of-the-Himmelblau-Multimodal-Function-768x576.png 768w\" sizes=\"(max-width: 800px) 100vw, 800px\"><\/p>\n<p id=\"caption-attachment-11703\" class=\"wp-caption-text\">3D Surface Plot of the Himmelblau Multimodal Function<\/p>\n<\/div>\n<p>We can apply the basin hopping algorithm to the Himmelblau objective function.<\/p>\n<p>As in the previous example, we will start the search using a random point drawn from the input domain between -5 and 5.<\/p>\n<p>We will use a step size of 0.5, 200 iterations, and the default local search algorithm. 
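<\/p>\n<p>As an aside, repeated runs can be made deterministic. The sketch below assumes the \u201cseed\u201d argument of the basinhopping() function (check your installed SciPy version\u2019s documentation to confirm support), which fixes the random perturbations so that two runs take the same sequence of hops:<\/p>

```python
# hedged sketch: seeding basinhopping for reproducible runs
# assumes the 'seed' argument is supported by the installed SciPy version
from scipy.optimize import basinhopping

# himmelblau objective function
def objective(v):
	x, y = v
	return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# fixed starting point for the demonstration
pt = [0.0, 0.0]
# two runs with the same seed follow the same sequence of hops
result1 = basinhopping(objective, pt, stepsize=0.5, niter=200, seed=1)
result2 = basinhopping(objective, pt, stepsize=0.5, niter=200, seed=1)
print(result1['x'])
print(result2['x'])
```

<p>With the seed fixed, both runs report the same optimum; removing it restores the run-to-run variation described below.<\/p>\n<p>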
At the end of the search, we will report the input for the best located optima.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\"># basin hopping global optimization for the himmelblau multimodal objective function\r\nfrom scipy.optimize import basinhopping\r\nfrom numpy.random import rand\r\n\r\n# objective function\r\ndef objective(v):\r\n\tx, y = v\r\n\treturn (x**2 + y - 11)**2 + (x + y**2 -7)**2\r\n\r\n# define range for input\r\nr_min, r_max = -5.0, 5.0\r\n# define the starting point as a random sample from the domain\r\npt = r_min + rand(2) * (r_max - r_min)\r\n# perform the basin hopping search\r\nresult = basinhopping(objective, pt, stepsize=0.5, niter=200)\r\n# summarize the result\r\nprint('Status : %s' % result['message'])\r\nprint('Total Evaluations: %d' % result['nfev'])\r\n# evaluate solution\r\nsolution = result['x']\r\nevaluation = objective(solution)\r\nprint('Solution: f(%s) = %.5f' % (solution, evaluation))<\/pre>\n<p>Running the example executes the optimization, then reports the results.<\/p>\n<p>In this case, we can see that the algorithm located an optima at about [3.0, 2.0].<\/p>\n<p>We can see that 200 iterations of the algorithm resulted in 7,660 function evaluations.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">Status : ['requested number of basinhopping iterations completed successfully']\r\nTotal Evaluations: 7660\r\nSolution: f([3. 1.99999999]) = 0.00000<\/pre>\n<p>If we run the search again, we may expect a different global optima to be located.<\/p>\n<p>For example, below, we can see an optima located at about [-2.805118, 3.131312], different from the previous run.<\/p>\n<pre class=\"urvanov-syntax-highlighter-plain-tag\">Status : ['requested number of basinhopping iterations completed successfully']\r\nTotal Evaluations: 7636\r\nSolution: f([-2.80511809 3.13131252]) = 0.00000<\/pre>\n<h2>Further Reading<\/h2>\n<p>This section provides more resources on the topic if you are looking to go deeper.<\/p>\n<h3>Papers<\/h3>\n<ul>\n<li>\n<a href=\"https:\/\/pubs.acs.org\/doi\/abs\/10.1021\/jp970984n\">Global Optimization by Basin-Hopping and the Lowest Energy Structures of Lennard-Jones Clusters Containing up to 110 Atoms<\/a>, 1997.<\/li>\n<li>\n<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0009261404016082\">Basin Hopping With Occasional Jumping<\/a>, 2004.<\/li>\n<\/ul>\n<h3>Books<\/h3>\n<ul>\n<li>\n<a href=\"https:\/\/amzn.to\/2HeRKLO\">Energy Landscapes: Applications to Clusters, Biomolecules and Glasses<\/a>, 2004.<\/li>\n<\/ul>\n<h3>APIs<\/h3>\n<ul>\n<li>\n<a href=\"https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.optimize.basinhopping.html\">scipy.optimize.basinhopping 
API<\/a>.<\/li>\n<li>\n<a href=\"https:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.optimize.OptimizeResult.html\">scipy.optimize.OptimizeResult API<\/a>.<\/li>\n<\/ul>\n<h3>Articles<\/h3>\n<ul>\n<li>\n<a href=\"https:\/\/en.wikipedia.org\/wiki\/Global_optimization\">Global optimization, Wikipedia<\/a>.<\/li>\n<li>\n<a href=\"https:\/\/en.wikipedia.org\/wiki\/Basin-hopping\">Basin-hopping, Wikipedia<\/a>.<\/li>\n<\/ul>\n<h2>Summary<\/h2>\n<p>In this tutorial, you discovered the basin hopping global optimization algorithm.<\/p>\n<p>Specifically, you learned:<\/p>\n<ul>\n<li>Basin hopping optimization is a global optimization algorithm that uses random perturbations to jump basins, and a local search algorithm to optimize each basin.<\/li>\n<li>How to use the basin hopping optimization algorithm API in Python.<\/li>\n<li>Examples of using basin hopping to solve global optimization problems with multiple optima.<\/li>\n<\/ul>\n<p><strong>Do you have any questions?<\/strong><br \/>\nAsk your questions in the comments below and I will do my best to answer.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/basin-hopping-optimization-in-python\/\">Basin Hopping Optimization in Python<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/machinelearningmastery.com\/\">Machine Learning Mastery<\/a>.<\/p>\n<\/div>\n<p><a href=\"https:\/\/machinelearningmastery.com\/basin-hopping-optimization-in-python\/\">Go to Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Author: Jason Brownlee Basin hopping is a global optimization algorithm. 
It was developed to solve problems in chemical physics, although it is an effective algorithm [&hellip;] <span class=\"read-more-link\"><a class=\"read-more\" href=\"https:\/\/www.aiproblog.com\/index.php\/2021\/03\/11\/basin-hopping-optimization-in-python\/\">Read More<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":4473,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"footnotes":""},"categories":[24],"tags":[],"_links":{"self":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/4472"}],"collection":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/comments?post=4472"}],"version-history":[{"count":0,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/posts\/4472\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media\/4473"}],"wp:attachment":[{"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/media?parent=4472"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/categories?post=4472"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.aiproblog.com\/index.php\/wp-json\/wp\/v2\/tags?post=4472"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}