SciPy – 20 – optimization – 5

Continuing from here, copying from here.

Univariate function minimizers (minimize_scalar)
Often only the minimum of a univariate function (i.e. a function that takes a scalar as input) is needed. In these circumstances, other optimization techniques have been developed that can work faster. These are accessible from the minimize_scalar function, which offers several algorithms.

Unconstrained minimization (method='brent')
There are actually two methods that can be used to minimize a univariate function: brent and golden, but golden is included only for academic purposes and should rarely be used. These can be selected through the method parameter in minimize_scalar. The brent method uses Brent's algorithm for locating a minimum. Ideally, a bracket (the bracket parameter) containing the desired minimum should be given. A bracket is a triple (a, b, c) such that f(a) > f(b) < f(c) and a < b < c. If this is not given, then two starting points can be chosen instead, and a bracket will be found from these points using a simple marching algorithm. If these two starting points are not provided, 0 and 1 will be used (this may not be the right choice for your function and may result in an unexpected minimum being returned).

Here is an example:
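A minimal sketch of such a call (the quartic test function below is just an illustration; since brent is the default, method='brent' need not be passed explicitly):

from scipy.optimize import minimize_scalar

def f(x):
    return (x - 2) * x * (x + 2)**2

# no bracket given: the marching algorithm starts from 0 and 1
res = minimize_scalar(f)
print(res.x)    # about 1.28

# the same search with an explicit bracket (a, b, c),
# where f(a) > f(b) < f(c) and a < b < c
res = minimize_scalar(f, bracket=(-1, 0.5, 2), method='brent')
print(res.x)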

Bounded minimization (method='bounded')
Very often, there are constraints that can be placed on the solution space before minimization occurs. The bounded method in minimize_scalar is an example of a constrained minimization procedure that provides a rudimentary interval constraint for scalar functions. The interval constraint allows the minimization to occur only between two fixed endpoints, specified using the mandatory bounds parameter.

For example, to find the minimum of J1(x) (the Bessel function of the first kind of order 1) near x=5, minimize_scalar can be called with the interval [4, 7] as a constraint. The result is xmin = 5.3314:
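A sketch of that call, assuming J1 is available as scipy.special.j1:

from scipy.special import j1
from scipy.optimize import minimize_scalar

# restrict the search to the interval [4, 7]
res = minimize_scalar(j1, bounds=(4, 7), method='bounded')
print(res.x)    # about 5.3314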

Custom minimizers
Sometimes, it may be useful to use a custom method as a (multivariate or univariate) minimizer, for example when using some library wrappers of minimize (e.g. basinhopping).

We can achieve that by passing a callable (either a function or an object implementing a __call__ method) as the method parameter, instead of a method name.

Let us consider an (admittedly rather artificial) need to use a trivial custom multivariate minimization method that will just search the neighborhood in each dimension independently with a fixed step size:

The code is rather long, so I put it in cm1.py:

import numpy as np
from scipy.optimize import minimize
from scipy.optimize import OptimizeResult

def rosen(x):
    """The Rosenbrock function"""
    return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
    
def custmin(fun, x0, args=(), maxfev=None, stepsize=0.1,
            maxiter=100, callback=None, **options):
    bestx = x0
    besty = fun(x0)
    funcalls = 1
    niter = 0
    improved = True
    stop = False

    while improved and not stop and niter < maxiter:
        improved = False
        niter += 1
        # probe one step down and one step up along each coordinate
        for dim in range(np.size(x0)):
            for s in [bestx[dim] - stepsize, bestx[dim] + stepsize]:
                testx = np.copy(bestx)
                testx[dim] = s
                testy = fun(testx, *args)
                funcalls += 1
                if testy < besty:
                    besty = testy
                    bestx = testx
                    improved = True
                if callback is not None:
                    callback(bestx)
                if maxfev is not None and funcalls >= maxfev:
                    stop = True
                    break

    return OptimizeResult(fun=besty, x=bestx, nit=niter,
                          nfev=funcalls, success=(niter > 1))

x0 = [1.35, 0.9, 0.8, 1.1, 1.2]
res = minimize(rosen, x0, method=custmin, options=dict(stepsize=0.05))
print(res.x)
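Since every starting coordinate in x0 differs from 1 by an integer multiple of the 0.05 step, this coordinate search should land at (or extremely close to) the known Rosenbrock minimum [1. 1. 1. 1. 1.].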

This will work just as well in the case of univariate optimization (cm2.py):

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.optimize import OptimizeResult

def custmin(fun, bracket, args=(), maxfev=None, stepsize=0.1,
            maxiter=100, callback=None, **options):
    # start from the midpoint of the bracketing interval
    bestx = (bracket[1] + bracket[0]) / 2.0
    besty = fun(bestx)
    funcalls = 1
    niter = 0
    improved = True
    stop = False

    while improved and not stop and niter < maxiter:
        improved = False
        niter += 1
        for testx in [bestx - stepsize, bestx + stepsize]:
            testy = fun(testx, *args)
            funcalls += 1
            if testy < besty:
                besty = testy
                bestx = testx
                improved = True
            if callback is not None:
                callback(bestx)
            if maxfev is not None and funcalls >= maxfev:
                stop = True
                break

    return OptimizeResult(fun=besty, x=bestx, nit=niter,
                          nfev=funcalls, success=(niter > 1))
                          
def f(x):
    return (x - 2)**2 * (x + 2)**2

res = minimize_scalar(f, bracket=(-3.5, 0), method=custmin,
                      options=dict(stepsize=0.05))
print(res.x)
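Starting from the bracket midpoint -1.75, steps of 0.05 reach the minimum of f at x = -2 (the symmetric minimum at x = +2 is too far away for this local search to find), so this should print -2.0, up to floating-point rounding.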

:mrgreen:
