API documentation

Heuristic optimization algorithms

Classic optimization methods

jbopt.classic.classical(transform, loglikelihood, parameter_names, prior, start=0.5, ftol=0.1, disp=0, nsteps=40000, method='neldermead', **args)[source]

Classic optimization methods

Parameters:
  • start – start position vector (before transform)
  • ftol – accuracy required to stop at optimum
  • disp – verbosity
  • nsteps – number of steps
  • method – optimization method, as a string: neldermead or cobyla (via scipy.optimize); bobyqa, ralg, algencan, ipopt, mma, auglag and many others from the OpenOpt framework (via openopt.NLP); minuit (via PyMinuit)
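
The sketch below shows one possible call based on the signature above. The toy likelihood, the parameter ranges, the assumption that transform maps a unit-cube vector onto physical values, and the assumption that prior returns a log-prior term are illustrative, not part of the documented API.

    import numpy
    from jbopt.classic import classical

    parameter_names = ['a', 'b']

    def transform(cube):
        # assumed convention: map the unit cube onto a in [-5, 5], b in [0, 10]
        return [cube[0] * 10 - 5, cube[1] * 10]

    def loglikelihood(params):
        a, b = params
        # toy Gaussian log-likelihood peaking at a=1, b=3
        return -0.5 * ((a - 1.0)**2 + (b - 3.0)**2)

    def prior(params):
        return 0.0  # flat (log-)prior; the exact convention is an assumption

    best = classical(transform, loglikelihood, parameter_names, prior,
                     method='neldermead', ftol=0.01, disp=1)
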
jbopt.classic.onebyone(transform, loglikelihood, parameter_names, prior, start=0.5, ftol=0.1, disp=0, nsteps=40000, parallel=False, find_uncertainties=False, **args)[source]

Convex optimization based on Brent’s method

A strict assumption of one optimum between the parameter limits is used. The bounds are narrowed until it is found, i.e. the likelihood function is flat within the bounds.
  • If the optimum lies outside the bracket, the bracket is expanded until it is contained.
  • Thus guaranteed to return a local optimum.
  • Supports parallelization (multiple parameters are treated independently).
  • Supports finding ML uncertainties (Delta-Chi^2 = 1).

Very useful for 1-3d problems. Otherwise a useful, reproducible/deterministic algorithm for finding the minimum in well-behaved likelihoods where the parameters are mostly independent, or for finding a good starting point. Optimizes each parameter in order, assuming they are largely independent.

For the 1-dimensional algorithm used, see jbopt.opt_grid().

Parameters:
  • ftol – difference in values at which the function can be considered flat
  • compute_errors – compute standard deviation of gaussian around optimum
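
A minimal sketch of calling onebyone() on a toy problem; the callbacks, parameter ranges and the unit-cube convention for transform are illustrative assumptions, not part of the documented API.

    from jbopt.classic import onebyone

    parameter_names = ['a', 'b']

    def transform(cube):
        # assumed convention: map the unit cube onto a in [-5, 5], b in [0, 10]
        return [cube[0] * 10 - 5, cube[1] * 10]

    def loglikelihood(params):
        return -0.5 * ((params[0] - 1.0)**2 + (params[1] - 3.0)**2)

    def prior(params):
        return 0.0  # flat (log-)prior; the exact convention is an assumption

    best = onebyone(transform, loglikelihood, parameter_names, prior,
                    ftol=0.01, disp=1, parallel=False, find_uncertainties=True)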

Differential evolution

Differential evolution

jbopt.de.de(output_basename, parameter_names, transform, loglikelihood, prior, nsteps=40000, vizfunc=None, printfunc=None, **problem)[source]

Differential evolution

via inspyred

Specially tuned: steady-state replacement, n-point crossover,
population size 20, Gaussian mutation noise of 0.01 and 1e-6.

Stores intermediate results (which can be used to resume; see the seeds parameter).

Parameters:
  • start – start point
  • seeds – list of start points
  • vizfunc – callback to do visualization of current best solution
  • printfunc – callback to summarize current best solution
  • seed – RNG initialization (if set)
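
A possible invocation, following the signature above; requires inspyred. The output basename 'de_example' and the toy problem are placeholders, and the structure of the returned object is not documented here.

    from jbopt.de import de

    parameter_names = ['a', 'b']

    def transform(cube):
        # assumed convention: map the unit cube onto a in [-5, 5], b in [0, 10]
        return [cube[0] * 10 - 5, cube[1] * 10]

    def loglikelihood(params):
        return -0.5 * ((params[0] - 1.0)**2 + (params[1] - 3.0)**2)

    def prior(params):
        return 0.0  # flat (log-)prior; the exact convention is an assumption

    # intermediate results are written to files starting with this prefix
    best = de('de_example', parameter_names, transform, loglikelihood, prior,
              nsteps=5000)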

Markov Chain Monte Carlo methods

MCMC

jbopt.mcmc.ensemble(transform, loglikelihood, parameter_names, nsteps=40000, nburn=400, start=0.5, **problem)[source]

Ensemble MCMC

via emcee
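
A minimal sketch, assuming the same calling conventions as above (transform from the unit cube, toy log-likelihood); requires emcee. The burn-in length and the shape of the returned result are illustrative assumptions.

    from jbopt.mcmc import ensemble

    parameter_names = ['a', 'b']

    def transform(cube):
        # assumed convention: map the unit cube onto a in [-5, 5], b in [0, 10]
        return [cube[0] * 10 - 5, cube[1] * 10]

    def loglikelihood(params):
        return -0.5 * ((params[0] - 1.0)**2 + (params[1] - 3.0)**2)

    result = ensemble(transform, loglikelihood, parameter_names,
                      nsteps=20000, nburn=1000, start=0.5)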

jbopt.mcmc.mcmc(transform, loglikelihood, parameter_names, nsteps=40000, nburn=400, stdevs=0.1, start=0.5, **problem)[source]

Metropolis Hastings MCMC

with automatic step width adaptation. The burn-in period is also used to guess the step widths.

Parameters:
  • nburn – number of burnin steps
  • stdevs – step widths to start with
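
A hedged sketch of calling mcmc() on a toy problem; the unit-cube convention for transform, the initial step widths and the burn-in length are illustrative assumptions.

    from jbopt.mcmc import mcmc

    parameter_names = ['a', 'b']

    def transform(cube):
        # assumed convention: map the unit cube onto a in [-5, 5], b in [0, 10]
        return [cube[0] * 10 - 5, cube[1] * 10]

    def loglikelihood(params):
        return -0.5 * ((params[0] - 1.0)**2 + (params[1] - 3.0)**2)

    result = mcmc(transform, loglikelihood, parameter_names,
                  nsteps=20000, nburn=2000, stdevs=0.05, start=0.5)
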
jbopt.mcmc.mcmc_advance(start, stdevs, logp, nsteps=1e+300, adapt=True, callback=None)[source]

Generic Metropolis MCMC. Advances the chain by nsteps. Called by mcmc()

Parameters:
  • adapt – enables adaptive step width alteration (converges)
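
A sketch of calling mcmc_advance() directly. The assumption that logp, start and stdevs all live in the same coordinates (here the unit cube) is illustrative, and the layout of the returned chain is not documented here.

    import numpy
    from jbopt.mcmc import mcmc_advance

    def logp(u):
        # toy log-probability, peaked at u = 0.3 in each coordinate
        return -0.5 * (((u - 0.3) / 0.05)**2).sum()

    chain = mcmc_advance(numpy.array([0.5, 0.5]), numpy.array([0.05, 0.05]),
                         logp, nsteps=1000, adapt=True)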

Nested Sampling

MultiNest

jbopt.mn.multinest(parameter_names, transform, loglikelihood, output_basename, **problem)[source]

MultiNest Nested Sampling

via PyMultiNest.

Parameters:
  • parameter_names – names of the parameters; not used directly here, but by the multinest_marginal.py plotting tool
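
A minimal sketch; requires PyMultiNest and the MultiNest library. The toy problem, the unit-cube convention for transform and the output prefix are placeholders, and the structure of the returned result is not documented here.

    from jbopt.mn import multinest

    parameter_names = ['a', 'b']

    def transform(cube):
        # assumed convention: map the unit cube onto a in [-5, 5], b in [0, 10]
        return [cube[0] * 10 - 5, cube[1] * 10]

    def loglikelihood(params):
        return -0.5 * ((params[0] - 1.0)**2 + (params[1] - 3.0)**2)

    # MultiNest output files (chains, stats, ...) are written with this prefix
    result = multinest(parameter_names, transform, loglikelihood, 'mn_example_')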

Simple, custom 1d methods

Two custom algorithms are implemented:

  • jbopt.independent.opt_normalizations()

  • jbopt.optimize1d.optimize() (used by jbopt.classic.onebyone())

    1D optimization, which should be robust against local flatness in the target function, and which also estimates errors on the final parameter.

    The method assumes that there is only one minimum, and that it lies far from the constraints.

    jbopt.optimize1d.optimize(function, x0, cons=[], ftol=0.2, disp=0, plot=False)[source]

    Optimization method based on Brent’s method

    First, a bracket (a, b, c) is sought that contains the minimum (the function value at b is smaller than at both a and c).

    The bracket is then recursively halved. Here we apply some modifications to ensure our suggested point is not too close to either a or c, because that could be problematic with the local approximation. Also, if the bracket does not seem to include the minimum, it is expanded generously in the right direction until it covers it.

    Thus, this function is fail-safe and will always find a local minimum.
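
    A minimal sketch of a direct call, with the default (empty) constraints; the test function is illustrative, and the return value is assumed here to be the location of the minimum.

        from jbopt.optimize1d import optimize

        def f(x):
            # single minimum at x = 2, smooth and far from any constraint
            return (x - 2.0)**2

        xbest = optimize(f, 0.5, ftol=0.01, disp=1)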

    Custom minimization algorithms in jbopt

    Minimization routines that assume the parameters to be mostly independent of each other, optimizing each parameter in turn.

    jbopt.independent.opt_grid(params, func, limits, ftol=0.01, disp=0, compute_errors=True)[source]

    See optimize1d.optimize(); considers each parameter in order.

    Parameters:
    • ftol – difference in values at which the function can be considered flat
    • compute_errors – compute standard deviation of gaussian around optimum
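
    A sketch of a direct call, assuming params is the starting parameter vector; the paraboloid and the limits are illustrative, and the structure of the return value is not documented here.

        import numpy
        from jbopt.independent import opt_grid

        def func(p):
            # paraboloid with its minimum at (1, -2)
            return (p[0] - 1.0)**2 + (p[1] + 2.0)**2

        start = numpy.array([0.0, 0.0])
        limits = [(-5.0, 5.0), (-5.0, 5.0)]
        result = opt_grid(start, func, limits, ftol=0.01, disp=1, compute_errors=True)
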
    jbopt.independent.opt_grid_parallel(params, func, limits, ftol=0.01, disp=0, compute_errors=True)[source]

    parallelized version of opt_grid()

    jbopt.independent.opt_normalizations(params, func, limits, abandon_threshold=100, noimprovement_threshold=0.001, disp=0)[source]

    optimization algorithm for scale variables (positive value of unknown magnitude)

    Each parameter is a normalization of a feature, and its value is sought. The parameters are handled in order (assumed to be independent), but a second round can be run. Various magnitudes of the normalization are tried. If the normalization converges to zero, the largest value yielding a comparable function value is used.

    Optimizes each normalization parameter in rough steps, using multiples of 3 of the start point, to find reasonable starting values for another algorithm.

    Positional arguments: parameters, minimization function, and parameter space definition [(lo, hi) for i in params].

    Parameters:
    • abandon_threshold – if in one direction the function increases by this much over the best value, abort search in this direction
    • noimprovement_threshold – when decreasing the normalization, if the function increases by less than this amount, abort search in this direction
    • disp – verbosity
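
    A sketch of a call on a toy problem where the two normalizations prefer scales of about 1 and 100; the function, limits and the assumption that params is the starting vector are illustrative.

        import numpy
        from jbopt.independent import opt_normalizations

        def func(norms):
            # to be minimized; prefers norms of about 1 and 100, smooth in log-space
            logn = numpy.log10(numpy.asarray(norms, dtype=float))
            return ((logn - numpy.array([0.0, 2.0]))**2).sum()

        start = numpy.array([1.0, 1.0])
        limits = [(1e-10, 1e10), (1e-10, 1e10)]
        result = opt_normalizations(start, func, limits, disp=1)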
