HGDL

class hgdl.hgdl.HGDL(func, grad, bounds, hess=None, num_epochs=100000, global_optimizer='genetic', local_optimizer='L-BFGS-B', number_of_optima=1000000, local_max_iter=1000, constraints=(), args=())

This is HGDL, a class for asynchronous HPC-capable optimization.

H … Hybrid

G … Global

D … Deflated

L … Local

The algorithm places a number of walkers inside the domain (the number is determined by the dask client), all of which perform local optimizations in parallel, in a distributed way. When the walkers have identified local optima, their positions are communicated back to the host, which removes the found optima by deflation and replaces the fittest walkers via a global optimization step. From there, the next epoch begins with distributed local optimizations of the new walkers. The algorithm results in a sorted list of unique optima (unique only if the optima are of the form f'(x) = 0). The method optimize() returns immediately, and the growing, sorted list of optima can be queried at any time via get_latest(). If a Hessian is provided, those optima are classified as minima, maxima or saddle points.
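
For background: deflation, in the spirit of Farrell et al.'s deflation technique (the exact operator HGDL uses may differ), multiplies the gradient by a factor that diverges at each previously found optimum x_1, ..., x_k, so that local optimizers cannot reconverge to them. A common shifted deflation operator is

\nabla \tilde{f}(x) = \nabla f(x) \prod_{i=1}^{k} \left( \frac{1}{\|x - x_i\|^{2}} + 1 \right),

which blows up near every x_i while leaving the remaining zeros of \nabla f intact.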

Parameters
  • func (Callable) – The function to be MINIMIZED. A callable that accepts an np.ndarray and optional arguments, and returns a scalar.

  • grad (Callable) – The gradient of the function to be MINIMIZED. A callable that accepts an np.ndarray and optional arguments, and returns a vector (np.ndarray) of shape (D), where D is the dimensionality of the space in which the optimization takes place.

  • bounds (np.ndarray) – The bounds of the domain; an np.ndarray of shape (D x 2), where D is the dimensionality of the space in which the optimization takes place.

  • hess (Callable, optional) – The Hessian of the function to be MINIMIZED. A callable that accepts an np.ndarray and optional arguments, and returns an np.ndarray of shape (D x D). The default is None.

  • num_epochs (int, optional) – The number of epochs the algorithm runs through before being terminated. One epoch is the convergence of all local walkers, the deflation of the identified optima, and the global replacement of the walkers. Note that the algorithm runs asynchronously, so a high number of epochs can be chosen without concern; it will not affect the time needed to obtain the optima. Therefore, the default is 100000.

  • global_optimizer (Callable or str, optional) – The function (identified by a string or a Callable) that replaces the fittest walkers after their local convergence. The possible options are genetic (default), random, or a callable that accepts an np.ndarray of shape (U x D) of positions, an np.ndarray of shape (U) of function values, an np.ndarray of shape (D x 2) of bounds, and an integer specifying the number of offspring individuals to be returned. The callable should return the positions of the offspring individuals as an np.ndarray of shape (number_of_offspring x D); see the sketch after this parameter list.

  • local_optimizer (Callable or str, optional) – The local optimizer to be used. The options are L-BFGS-B (default), dNewton, BFGS, CG, Newton-CG, and SLSQP. These methods have been tested, but most others should work as well. Visit the scipy.optimize.minimize docs (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) for specifications and limitations of the local methods. The parameter also accepts a callable of the form func(f, grad, hess, bounds, x0, *args) that returns an object analogous to the return value of the scipy.optimize.minimize methods; see the sketch after this parameter list.

  • number_of_optima (int, optional) – The number of optima that will be stored in the optima list and deflated. The default is 1e6. After that number is reached, worse-performing optima will not be stored or deflated.

  • local_max_iter (int, optional) – The number of iterations before local optimizations are terminated. The default is 1000. It can be lowered when second-order local optimizers are used.

  • args (tuple, optional) – A tuple of arguments that will be communicated to the function, the gradient, and the Hessian callables. Default = ().

  • constraints (object, optional) – An optional n-tuple of constraint objects. The default is no constraints (). Constraints are defined following scipy.optimize.NonlinearConstraint.
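
To make the two callable interfaces above concrete, here is a minimal sketch of a custom global optimizer and a custom local optimizer. The names random_global and scipy_local are hypothetical, and the exact call signatures are assumptions based on the parameter descriptions above:

import numpy as np
from scipy.optimize import minimize

def random_global(positions, func_values, bounds, number_of_offspring):
    #ignore the parent population and propose uniformly random offspring
    #within the bounds; returns an np.ndarray of shape (number_of_offspring x D)
    D = bounds.shape[0]
    return np.random.uniform(bounds[:, 0], bounds[:, 1],
                             size=(number_of_offspring, D))

def scipy_local(func, grad, hess, bounds, x0, *args):
    #delegate to scipy.optimize.minimize; the returned OptimizeResult
    #exposes .x and .fun, analogous to the built-in local methods
    return minimize(func, x0, args=args, jac=grad, hess=hess,
                    bounds=bounds, method="trust-constr")

Both could then be passed via HGDL(func, grad, bounds, global_optimizer=random_global, local_optimizer=scipy_local).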

optima

Contains the attribute optima.list in which the optima are stored. However, the method ‘get_latest()’ should be used to access the optima.

Type

object

optimize(dask_client=None, x0=None, tolerance=1e-08)

Function to start the optimization. Note that this function does not return anything. Use the method hgdl.HGDL.get_latest() (non-blocking) or hgdl.HGDL.get_final() (blocking) to query results.

Parameters
  • dask_client (distributed.client.Client, optional) – The client that will be used for the distributed local optimizations. The default is a local client.

  • x0 (np.ndarray, optional) – An np.ndarray of shape (V x D) of points used as starting positions. If V > the number of walkers (specified by the dask client), the array will be truncated. If V < the number of walkers, random points will be appended. The default is None, meaning only random points will be used. See the example after this parameter list.

  • tolerance (float, optional) – The tolerance used by the local optimizers. The default is 1e-8.
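
For illustration, a minimal sketch of a warm start with two user-provided positions, assuming a 2d problem and an HGDL instance called opt:

x0 = np.array([[ 1.0, 1.0],
               [-2.0, 3.0]])  #shape (V x D) with V = 2 starting positions
opt.optimize(x0 = x0, tolerance = 1e-8)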

get_client_info()

Function to receive information about the dask workers.

get_latest()

Function to request the current result. Takes no inputs.

get_final()

Function to request the final result. CAUTION: This function will block the main thread until all epochs have completed. Takes no inputs.

cancel_tasks()

Function to cancel all tasks and therefore the execution. However, this function does not kill the client.

kill_client()

Function to cancel all tasks and kill the dask client, and therefore stop the execution. If cancel_tasks() was called beforehand, this will raise an error.
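
A minimal usage sketch of the lifecycle described above, assuming scipy's Rosenbrock function as the objective (the sleep merely gives the asynchronous workers a moment to produce results):

import time
import numpy as np
from dask.distributed import Client
from scipy.optimize import rosen, rosen_der, rosen_hess
from hgdl.hgdl import HGDL

bounds = np.array([[-4., 4.], [-4., 4.]])
opt = HGDL(rosen, rosen_der, bounds, hess = rosen_hess, num_epochs = 1000)

opt.optimize(dask_client = Client())  #returns immediately
time.sleep(10)                        #let a few epochs run
for entry in opt.get_latest():        #non-blocking snapshot of the optima list
    print(entry["x"], entry["f(x)"], entry["classifier"])
opt.kill_client()                     #cancel all tasks and shut down the client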

Logging

The HGDL package uses the Loguru library for sophisticated log management. Loguru follows principles similar to those of the vanilla Python logging framework, with additional functionality and performance benefits. You may want to enable logging in interactive use or for debugging purposes.

Configuring logging

To enable logging in HGDL:

from loguru import logger
logger.enable("hgdl")

To configure the logging level:

import sys
logger.add(sys.stderr, filter="hgdl", level="INFO")

See Python’s reference on levels for more info.

To log to a file:

logger.add("file_{time}.log")

Loguru provides many further options for configuration.
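
Putting the pieces together, a minimal sketch of a complete logging setup:

import sys
from loguru import logger

logger.enable("hgdl")                                         #logging is disabled by default
logger.add(sys.stderr, filter="hgdl", level="INFO")           #console sink
logger.add("file_{time}.log", filter="hgdl", level="DEBUG")   #file sink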

HGDL Constrained Optimization of Rosenbrock’s Function

In this script, we show how HGDL is used for constrained optimization. Unconstrained optimization is the simpler case and works the same way; simply omit the constraints argument.

First, some functions to make nice plots

import numpy as np
import plotly.graph_objects as go
def plot(x,y,z,data = None, constr = None):
    fig = go.Figure()
    fig.add_trace(go.Surface(x = x, y = y,z=z))
    if data is not None: 
        fig.add_trace(go.Scatter3d(x=data[:,0], y=data[:,1], z=data[:,2] + 50,
                                   mode='markers'))
    if constr is not None: 
        fig.add_trace(go.Scatter3d(x=constr[:,0], y=constr[:,1], z=constr[:,2],
                                   mode='markers')) 

    fig.update_layout(title='Surface Plot', autosize=True,
                  width=800, height=800, font=dict(
                  family="Courier New, monospace",
                  size=18),
                  margin=dict(l=65, r=50, b=65, t=90))

    fig.show()

def make_plot(data = None, constraint1 = None, constraint2 = None):
    #evaluate the Rosenbrock function on a grid covering the bounds [-4,4]^2
    x1,x2 = np.linspace(-5,5,100),np.linspace(-5,5,100)
    x_pred = np.transpose([np.tile(x1, len(x2)), np.repeat(x2, len(x1))])
    z = np.zeros((10000))
    for i in range(10000):
        z[i] = rosen(x_pred[i])
    x1,x2 = np.meshgrid(x1,x2)
    constr = None
    if constraint1 is not None:
        #the constraint boundary g1(x) = ||x||**2/10 - 2 = 0 is a circle of radius sqrt(20)
        r1 = np.sqrt(20.)
        angles = np.linspace(0,2.*np.pi,100)
        c1,c2 = r1*np.cos(angles),r1*np.sin(angles)
        c3 = np.array([rosen(np.array([c1[i],c2[i]])) for i in range(len(c1))])
        constr = np.column_stack([c1,c2,c3])
    plot(x1,x2,z.reshape(100,100).T, data = data, constr = constr)

Defining the Constraints and some Bounds

Keep in mind that not all local optimizers allow every combination of bounds and constraints. Visit https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html for more information.

bounds = np.array([[-4,4],[-4,4]])
def g1(x):
    #g1(x) <= 0 inside the disk of radius sqrt(20) around the origin
    return (np.linalg.norm(x)**2/10.0) - 2.0
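
As a quick sanity check of the sign convention used with NonlinearConstraint below: g1 is negative inside the feasible disk of radius sqrt(20) ≈ 4.47 and positive outside.

print(g1(np.array([0.0, 0.0])))  # -2.0 -> feasible
print(g1(np.array([5.0, 5.0])))  #  3.0 -> infeasible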

Now we import HGDL and run a constrained optimization

from hgdl.hgdl import HGDL as hgdl
from hgdl.support_functions import *
import time
from scipy.optimize import rosen, rosen_der, rosen_hess


#constraint definitions from scipy
from scipy.optimize import NonlinearConstraint
nlc = NonlinearConstraint(g1, -np.inf, 0)


a = hgdl(rosen, rosen_der, bounds,
            hess = rosen_hess, #if this is None, the Hessian will be approximated if the local optimizer needs it
            global_optimizer = "random", #there are a few options to choose from for the global optimizer
            #global_optimizer = "genetic",
            local_optimizer = "dNewton", #dNewton is an example and will be changed automatically to "SLSQP" because constraints are used
            number_of_optima = 30000, #the number of optima that will be stored and used for deflation
            args = (), num_epochs = 1000, #the number of total epochs; since this is an asynchronous algorithm, this number can be very high
            constraints = (nlc,) #the constraints
            )
    
a.optimize(x0=None)
time.sleep(10)
/home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/hgdl/hgdl.py:134: UserWarning: Warning: dNewton will not adhere to bounds. It is recommended to formulate your objective function such that it is defined on R^N by simple non-linear transformations.
  warnings.warn("Warning: dNewton will not adhere to bounds. It is recommended to formulate your objective function such that it is defined on R^N by simple non-linear transformations.")
/home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/hgdl/hgdl.py:137: UserWarning: Constraints provided, local optimizer changed to 'SLSQP'
  warnings.warn("Constraints provided, local optimizer changed to 'SLSQP'")
/home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/scipy/optimize/_minimize.py:565: RuntimeWarning: Method SLSQP does not use Hessian information (hess).
  warn('Method %s does not use Hessian information (hess).' % method,
res = a.get_latest()
for entry in res: print(entry)
{'x': array([0.99999523, 0.99998985]), 'f(x)': 6.011377014972278e-11, 'classifier': 'minimum', 'Hessian eigvals': array([1.00159320e+03, 3.99412563e-01]), 'df/dx': array([ 0.000235  , -0.00012227]), '|df/dx|': 0.00026490930451375657, 'radius': 2.5036768792125663}
{'x': array([0.99998017, 0.99995505]), 'f(x)': 3.185887715305176e-09, 'classifier': 'degenerate', 'Hessian eigvals': array([0., 0.]), 'df/dx': array([ 0.00207409, -0.0010569 ]), '|df/dx|': 0.002327844809789744, 'radius': 0.0}
res = a.kill_client()
2023-09-29 22:49:33,488 - distributed.worker.state_machine - WARNING - Async instruction for <Task cancelled name='execute(hgdl-b6d02f5aace9cb386f1fb2d05149742e)' coro=<Worker.execute() done, defined at /home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/distributed/worker_state_machine.py:3609>> ended with CancelledError
2023-09-29 22:49:33,493 - distributed.worker.state_machine - WARNING - Async instruction for <Task cancelled name='execute(local_method-e981a73fb19e9d846ff512eb22b3dcac)' coro=<Worker.execute() done, defined at /home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/distributed/worker_state_machine.py:3609>> ended with CancelledError

Making a Plot

You should see the constraint and the found optima. If everything worked, the found points lie inside the constraint circle.

data = [np.append(entry["x"],entry["f(x)"]) for entry in res]
make_plot(data = np.array(data), constraint1 = g1)

HGDL Constrained Optimization of Schwefel’s Function

In this script, we show how HGDL is used for constrained optimization. Unconstrained optimization is the simpler case and works the same way; simply omit the constraints argument.

First, some functions to make nice plots

import numpy as np
import plotly.graph_objects as go
def plot(x,y,z,data = None, constr = None):
    fig = go.Figure()
    fig.add_trace(go.Surface(x = x, y = y,z=z))
    if data is not None: 
        fig.add_trace(go.Scatter3d(x=data[:,0], y=data[:,1], z=data[:,2] + 50,
                                   mode='markers'))
    if constr is not None: 
        fig.add_trace(go.Scatter3d(x=constr[:,0], y=constr[:,1], z=constr[:,2],
                                   mode='markers')) 

    fig.update_layout(title='Surface Plot', autosize=True,
                  width=800, height=800, font=dict(
                  family="Courier New, monospace",
                  size=18),
                  margin=dict(l=65, r=50, b=65, t=90))

    fig.show()

def make_plot(data = None, constraint1 = None, constraint2 = None):
    x1,x2 = np.linspace(-500,500,100),np.linspace(-500,500,100)
    x_pred = np.transpose([np.tile(x1, len(x2)), np.repeat(x2, len(x1))])
    #constraint boundaries: g1 = 0 is a circle of radius 400, g2 = 0 a circle of radius 200
    r1 = np.sqrt(160000.)
    r2 = np.sqrt(40000.)
    c1,c2 = r1*np.cos(np.linspace(0,2.*np.pi,100)),r1*np.sin(np.linspace(0,2.*np.pi,100))
    d1,d2 = r2*np.cos(np.linspace(0,2.*np.pi,100)),r2*np.sin(np.linspace(0,2.*np.pi,100))
    c3 = np.zeros((len(c2)))
    d3 = np.zeros((len(c2)))
    x1,x2 = np.meshgrid(x1,x2)
    z = np.zeros((10000))
    zc1 = np.zeros((10000))
    zc2 = np.zeros((10000))
    for i in range(10000): 
        z[i] = schwefel(x_pred[i], 1, 1)
        if constraint1: zc1[i] = constraint1(x_pred[i])
        if constraint2: zc2[i] = constraint2(x_pred[i])
    for i in range(len(c1)):
        c3[i] = schwefel(np.array([c1[i],c2[i]]))
        d3[i] = schwefel(np.array([d1[i],d2[i]]))
    plot(x1,x2,z.reshape(100,100).T, data = data, 
         constr = np.row_stack([np.column_stack([c1,c2,c3]),np.column_stack([d1,d2,d3])]))

Defining the Constraints and some Bounds

Keep in mind that not all local optimizers allow every combination of bounds and constraints. Visit https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html for more information.

bounds = np.array([[-500,500],[-500,500]])
def g1(x):
    #g1(x) <= 0 inside the circle of radius 400
    return (np.linalg.norm(x)**2/10.0) - 16000.0
def g2(x):
    #g2(x) >= 0 outside the circle of radius 200
    return (np.linalg.norm(x)**2/10.0) - 4000.0
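
Together, g1 <= 0 and g2 >= 0 define an annulus: feasible points satisfy 200 <= ||x|| <= 400. A quick check:

x = np.array([300.0, 0.0])  #||x|| = 300, inside the annulus
print(g1(x), g2(x))         # -7000.0 5000.0 -> both constraints satisfied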

Now we import HGDL and run a constrained optimization

from hgdl.hgdl import HGDL as hgdl
from hgdl.support_functions import *
import time

#example arguments that will be passed to the function, the gradient, and the Hessian
arg1 = 5
arg2 = 6

#constraint definitions from scipy
from scipy.optimize import NonlinearConstraint
nlc1 = NonlinearConstraint(g1, -np.inf, 0)
nlc2 = NonlinearConstraint(g2, 0, np.inf)


a = hgdl(schwefel, schwefel_gradient, bounds,
            hess = None, #if this is None, the Hessian will be approximated if the local optimizer needs it
            global_optimizer = "random", #there are a few options to choose from for the global optimizer
            #global_optimizer = "genetic",
            local_optimizer = "dNewton", #dNewton is an example and will be changed automatically to "SLSQP" because constraints are used
            number_of_optima = 30000, #the number of optima that will be stored and used for deflation
            args = (arg1,arg2), num_epochs = 1000, #the number of total epochs; since this is an asynchronous algorithm, this number can be very high
            constraints = (nlc1, nlc2) #the constraints
            )
    
a.optimize(x0=None)
time.sleep(10)
/home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/hgdl/hgdl.py:134: UserWarning: Warning: dNewton will not adhere to bounds. It is recommended to formulate your objective function such that it is defined on R^N by simple non-linear transformations.
  warnings.warn("Warning: dNewton will not adhere to bounds. It is recommended to formulate your objective function such that it is defined on R^N by simple non-linear transformations.")
/home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/hgdl/hgdl.py:137: UserWarning: Constraints provided, local optimizer changed to 'SLSQP'
  warnings.warn("Constraints provided, local optimizer changed to 'SLSQP'")
/home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/scipy/optimize/_minimize.py:565: RuntimeWarning: Method SLSQP does not use Hessian information (hess).
  warn('Method %s does not use Hessian information (hess).' % method,
/home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/scipy/optimize/_optimize.py:404: RuntimeWarning: Values in x were outside bounds during a minimize step, clipping to bounds
  warnings.warn("Values in x were outside bounds during a "
res = a.get_latest()
for entry in res: print(entry)
{'x': array([420.9688045 , 420.96879718]), 'f(x)': 2.5455884951952612e-05, 'classifier': 'minimum', 'Hessian eigvals': array([0.25236709, 0.25236705]), 'df/dx': array([1.46738533e-05, 1.28251482e-05]), '|df/dx|': 1.948862230944066e-05, 'radius': 3.9624823867511325}
{'x': array([-302.52489718,  420.96876096]), 'f(x)': 118.4383600697845, 'classifier': 'minimum', 'Hessian eigvals': array([0.25328923, 0.25236707]), 'df/dx': array([9.73455887e-06, 3.68404306e-06]), '|df/dx|': 1.0408352875457622e-05, 'radius': 3.962482153163582}
{'x': array([ 420.96869744, -302.5249342 ]), 'f(x)': 118.43836006987272, 'classifier': 'minimum', 'Hessian eigvals': array([0.25236703, 0.25328926]), 'df/dx': array([-1.23460304e-05,  3.56744652e-07]), '|df/dx|': 1.2351183528295221e-05, 'radius': 3.9624827336461275}
{'x': array([420.96874628, 203.81425254]), 'f(x)': 217.13969484570544, 'classifier': 'minimum', 'Hessian eigvals': array([0.25236706, 0.25487078]), 'df/dx': array([-2.0716334e-08, -2.7270938e-08]), '|df/dx|': 3.424719770192841e-08, 'radius': 3.9624822873892604}
{'x': array([203.81410637, 420.96878274]), 'f(x)': 217.13969484859945, 'classifier': 'minimum', 'Hessian eigvals': array([0.25487069, 0.25236708]), 'df/dx': array([-3.72834200e-05,  9.18193322e-06]), '|df/dx|': 3.839741273124853e-05, 'radius': 3.962481950953486}
{'x': array([-302.52500985, -302.52476563]), 'f(x)': 236.87669468836555, 'classifier': 'minimum', 'Hessian eigvals': array([0.25328929, 0.25328921]), 'df/dx': array([-1.88036558e-05,  4.30545595e-05]), '|df/dx|': 4.698161943420848e-05, 'radius': 3.9480559846644785}
{'x': array([-124.82936047,  420.96874565]), 'f(x)': 296.10673921365253, 'classifier': 'minimum', 'Hessian eigvals': array([0.25791668, 0.25236706]), 'df/dx': array([-1.04400268e-06, -1.78898484e-07]), '|df/dx|': 1.0592196475840108e-06, 'radius': 3.962482290875642}
{'x': array([ 420.96874926, -124.82937796]), 'f(x)': 296.1067392137112, 'classifier': 'minimum', 'Hessian eigvals': array([0.2523671, 0.2579167]), 'df/dx': array([ 7.32345392e-07, -5.55555270e-06]), '|df/dx|': 5.603614509580047e-06, 'radius': 3.962481689474944}
{'x': array([-302.5249351 ,  203.81425171]), 'f(x)': 335.57802946014345, 'classifier': 'minimum', 'Hessian eigvals': array([0.25328926, 0.25487079]), 'df/dx': array([ 1.30306860e-07, -2.38569747e-07]), '|df/dx|': 2.718370869940228e-07, 'radius': 3.9480552699594993}
{'x': array([ 203.81421222, -302.52511028]), 'f(x)': 335.57802946421555, 'classifier': 'minimum', 'Hessian eigvals': array([0.25487076, 0.25328931]), 'df/dx': array([-1.03050528e-05, -4.42423220e-05]), '|df/dx|': 4.542661298813133e-05, 'radius': 3.9480544894950356}
{'x': array([ 65.54786623, 420.96874718]), 'f(x)': 355.34793077601205, 'classifier': 'minimum', 'Hessian eigvals': array([0.26492047, 0.25236706]), 'df/dx': array([-1.29745149e-05,  5.93482733e-05]), '|df/dx|': 6.074994303716131e-05, 'radius': 3.9624822769301162}
{'x': array([420.96873511,  65.54786493]), 'f(x)': 355.34793077602785, 'classifier': 'minimum', 'Hessian eigvals': array([0.25236705, 0.26492046]), 'df/dx': array([-2.84001161e-06, -4.17150565e-08]), '|df/dx|': 2.8403179596050964e-06, 'radius': 3.962482390237514}
{'x': array([420.96876043, -25.87738006]), 'f(x)': 394.89995250472214, 'classifier': 'minimum', 'Hessian eigvals': array([0.25236707, 0.28660974]), 'df/dx': array([3.55157366e-06, 1.06865544e-05]), '|df/dx|': 1.1261266413776195e-05, 'radius': 3.9624821566499633}
{'x': array([-302.52493981, -124.82935849]), 'f(x)': 414.545073828091, 'classifier': 'minimum', 'Hessian eigvals': array([0.25328926, 0.25791668]), 'df/dx': array([-1.06379391e-06, -5.32820357e-07]), '|df/dx|': 1.1897709955229697e-06, 'radius': 3.9480552111218126}
{'x': array([-124.82944242, -302.52498916]), 'f(x)': 414.5450738294053, 'classifier': 'minimum', 'Hessian eigvals': array([0.25791675, 0.2532893 ]), 'df/dx': array([-2.21809501e-05, -1.35636077e-05]), '|df/dx|': 2.5999346170416066e-05, 'radius': 3.9480545812125727}
{'x': array([420.96874404,   5.23919723]), 'f(x)': 415.03761110228345, 'classifier': 'minimum', 'Hessian eigvals': array([0.25236706, 0.40385527]), 'df/dx': array([-5.84439113e-07, -8.36403526e-07]), '|df/dx|': 1.0203626489228354e-06, 'radius': 3.9624823048211675}
{'x': array([  5.23928767, 420.96878108]), 'f(x)': 415.03761110401103, 'classifier': 'minimum', 'Hessian eigvals': array([0.40385504, 0.25236708]), 'df/dx': array([3.56901057e-05, 8.76118963e-06]), '|df/dx|': 3.674972231040302e-05, 'radius': 3.962481964899009}
{'x': array([203.81424732, 203.81424402]), 'f(x)': 434.2793642362915, 'classifier': 'minimum', 'Hessian eigvals': array([0.25487079, 0.25487077]), 'df/dx': array([-1.35762804e-06, -2.19987106e-06]), '|df/dx|': 2.585069931509543e-06, 'radius': 3.9235570187928217}
{'x': array([  65.5478512 , -302.52492529]), 'f(x)': 473.7862653904888, 'classifier': 'minimum', 'Hessian eigvals': array([0.26492044, 0.25328925]), 'df/dx': array([-3.68056188e-06,  2.61546709e-06]), '|df/dx|': 4.515219134769428e-06, 'radius': 3.9480553980179995}
{'x': array([-302.52486535,   65.54787699]), 'f(x)': 473.78626539109365, 'classifier': 'minimum', 'Hessian eigvals': array([0.25328923, 0.26492049]), 'df/dx': array([1.77964349e-05, 3.15353196e-06]), '|df/dx|': 1.807367867121428e-05, 'radius': 3.948055674901272}
{'x': array([-124.82936047,  203.81425139]), 'f(x)': 513.2464086042255, 'classifier': 'minimum', 'Hessian eigvals': array([0.25791668, 0.25487079]), 'df/dx': array([-4.46400972e-05, -4.21989504e-05]), '|df/dx|': 6.142873668611826e-05, 'radius': 3.923556685516284}
{'x': array([ 203.8142588 , -124.82936425]), 'f(x)': 513.246408604236, 'classifier': 'minimum', 'Hessian eigvals': array([0.25487079, 0.25791669]), 'df/dx': array([ 1.56655594e-06, -2.01882746e-06]), '|df/dx|': 2.555339863894385e-06, 'radius': 3.9235567453351456}
{'x': array([-302.52490698,  -25.8773891 ]), 'f(x)': 513.3382871191541, 'classifier': 'minimum', 'Hessian eigvals': array([0.25328927, 0.28660977]), 'df/dx': array([7.25108206e-06, 8.09717552e-06]), '|df/dx|': 1.0869334959497892e-05, 'radius': 3.9480551453620474}
{'x': array([ -25.87737682, -302.52481729]), 'f(x)': 513.3382871209441, 'classifier': 'minimum', 'Hessian eigvals': array([0.28660973, 0.2532892 ]), 'df/dx': array([1.16166243e-05, 2.99686052e-05]), '|df/dx|': 3.214130140053623e-05, 'radius': 3.948056290966692}
{'x': array([-302.52493641,    5.2392003 ]), 'f(x)': 533.47594571672, 'classifier': 'minimum', 'Hessian eigvals': array([0.25328926, 0.40385527]), 'df/dx': array([-2.01181781e-07,  4.03835785e-07]), '|df/dx|': 4.511734149196672e-07, 'radius': 3.9480552561153375}
{'x': array([203.81426766,  65.54784779]), 'f(x)': 572.487600166653, 'classifier': 'minimum', 'Hessian eigvals': array([0.25487079, 0.26492044]), 'df/dx': array([ 3.82524740e-06, -4.58336849e-06]), '|df/dx|': 5.969906559009717e-06, 'radius': 3.923556774390022}
{'x': array([-25.87742625, 203.81430399]), 'f(x)': 612.0396218954181, 'classifier': 'minimum', 'Hessian eigvals': array([0.2866099 , 0.25487082]), 'df/dx': array([-2.55225846e-06,  1.30855469e-05]), '|df/dx|': 1.333212510418422e-05, 'radius': 3.923556300966503}
{'x': array([203.81409956, -25.8773771 ]), 'f(x)': 612.0396218982894, 'classifier': 'minimum', 'Hessian eigvals': array([0.2548707 , 0.28660973]), 'df/dx': array([-3.90173301e-05,  1.15352833e-05]), '|df/dx|': 4.0686789127905764e-05, 'radius': 3.923558129715023}
{'x': array([203.81406291,   5.23923766]), 'f(x)': 632.1772804977398, 'classifier': 'minimum', 'Hessian eigvals': array([0.25487068, 0.40385517]), 'df/dx': array([-4.83590742e-05,  1.54930195e-05]), '|df/dx|': 5.078024924682363e-05, 'radius': 3.923558415136675}
res = a.kill_client()
2023-09-29 22:49:52,173 - distributed.worker.state_machine - WARNING - Async instruction for <Task cancelled name='execute(hgdl-b6d02f5aace9cb386f1fb2d05149742e)' coro=<Worker.execute() done, defined at /home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/distributed/worker_state_machine.py:3609>> ended with CancelledError
2023-09-29 22:49:52,181 - distributed.worker.state_machine - WARNING - Async instruction for <Task cancelled name='execute(local_method-167e0d34564572c43a2d60832005bf4d)' coro=<Worker.execute() done, defined at /home/docs/checkouts/readthedocs.org/user_builds/hgdl/envs/stable/lib/python3.9/site-packages/distributed/worker_state_machine.py:3609>> ended with CancelledError

Making a Plot

You should see the constraints and the found optima. If everything worked, the found points lie in the annulus between the two constraint circles.

data = [np.append(entry["x"],entry["f(x)"]) for entry in res]
make_plot(data = np.array(data), constraint1 = g1, constraint2 = g2)

HGDL

Welcome to the documentation of the HGDL API. HGDL is an optimization algorithm specialized in finding not only one but a diverse set of optima, alleviating challenges of non-uniqueness that are common in modern applications such as inversion problems and training of machine learning models.

HGDL is customized for distributed high-performance computing; the workers can be distributed across as many nodes or cores as are available, and all local optimizations are then executed in parallel. As solutions are found, they are deflated, which effectively removes those optima from the function so that they cannot be reidentified by subsequent local searches. For more information, please have a look at the content below.

See Also