Classic optimization methods
Convex optimization based on Brent’s method
A strict assumption of one optimum between the parameter limits is used. The bounds are narrowed until the optimum is found, i.e. until the likelihood function is flat within the bounds.
- If the optimum lies outside the bracket, the bracket is expanded until it contains it.
- Thus guaranteed to return a local optimum.
- Supports parallelization (multiple parameters are treated independently).
- Supports finding ML uncertainties (Delta-Chi^2 = 1).
Very useful for 1-3d problems. Otherwise, a useful, reproducible/deterministic algorithm for finding the minimum in well-behaved likelihoods where the parameters are only weakly dependent on each other, or for finding a good starting point. Optimizes each parameter in order, assuming they are largely independent.
For the 1-dimensional algorithm used, see jbopt.opt_grid()
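To illustrate the one-parameter-at-a-time strategy, here is a minimal sketch in which scipy's bounded scalar minimizer (a Brent-type method) stands in for jbopt's own 1D routine; optimize_one_by_one, nrounds, and the test function are illustrative names, not part of jbopt:

    import scipy.optimize

    def optimize_one_by_one(func, limits, nrounds=2):
        # optimize each parameter in turn, holding the others fixed;
        # assumes the parameters are largely independent
        params = [(lo + hi) / 2.0 for lo, hi in limits]  # start at midpoints
        for _ in range(nrounds):
            for i, (lo, hi) in enumerate(limits):
                def along_axis(x, i=i):
                    trial = list(params)
                    trial[i] = x
                    return func(trial)
                res = scipy.optimize.minimize_scalar(along_axis,
                    bounds=(lo, hi), method='bounded')
                params[i] = res.x
        return params

    # separable quadratic with minimum at (1, -2)
    best = optimize_one_by_one(lambda p: (p[0] - 1)**2 + (p[1] + 2)**2,
                               [(-5, 5), (-5, 5)])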
Differential evolution
via inspyred
Stores intermediate results (can be used to resume; see the seeds parameter).
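For orientation, a hedged sketch of plain differential evolution with inspyred; the generator, evaluator, and toy target below are illustrative, not jbopt's actual wrapper code:

    from random import Random
    import inspyred

    limits = [(-5.0, 5.0), (-5.0, 5.0)]

    def generate(random, args):
        # random starting candidate inside the limits
        return [random.uniform(lo, hi) for lo, hi in limits]

    def evaluate(candidates, args):
        # fitness of each candidate; evolve() below is set to minimize
        return [(c[0] - 1)**2 + (c[1] + 2)**2 for c in candidates]

    lower = [lo for lo, hi in limits]
    upper = [hi for lo, hi in limits]
    ea = inspyred.ec.DEA(Random())
    ea.terminator = inspyred.ec.terminators.evaluation_termination
    final_pop = ea.evolve(generator=generate, evaluator=evaluate,
                          pop_size=50, maximize=False,
                          bounder=inspyred.ec.Bounder(lower, upper),
                          max_evaluations=5000)
    best = max(final_pop)  # Individual comparison respects maximize=False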
MCMC
Ensemble MCMC
via emcee
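A minimal sketch of ensemble MCMC using emcee's current (v3) interface; the toy log-probability and walker setup are illustrative, and jbopt's wrapper handles such details itself:

    import numpy as np
    import emcee

    def log_prob(theta):
        # toy log-likelihood: standard normal in 2 dimensions
        return -0.5 * np.sum(theta**2)

    ndim, nwalkers = 2, 32
    p0 = np.random.randn(nwalkers, ndim)   # initial walker positions
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)
    samples = sampler.get_chain(discard=500, flat=True)  # drop burn-in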
Metropolis-Hastings MCMC
with automatic step-width adaptation. The burn-in period is also used to guess the step widths.
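The adaptation idea can be sketched as follows; the rule below, which rescales step widths toward a target acceptance rate during burn-in, is a common heuristic and not necessarily the exact scheme used here:

    import numpy as np

    def adaptive_mh(log_prob, x0, nburn=2000, nsteps=10000, target=0.234):
        x = np.asarray(x0, dtype=float)
        lp = log_prob(x)
        step = np.ones_like(x)          # per-parameter step widths
        chain, accepted = [], 0
        for i in range(nburn + nsteps):
            proposal = x + step * np.random.randn(len(x))
            lp_new = log_prob(proposal)
            if np.log(np.random.rand()) < lp_new - lp:
                x, lp = proposal, lp_new
                accepted += 1
            if i < nburn and (i + 1) % 100 == 0:
                # widen steps when accepting too often, shrink when too rarely
                step *= np.exp(accepted / (i + 1.0) - target)
            if i >= nburn:
                chain.append(x.copy())
        return np.array(chain)

    chain = adaptive_mh(lambda t: -0.5 * np.sum(t**2), [0.0, 0.0])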
MultiNest
MultiNest Nested Sampling
via PyMultiNest.
Parameters:
- parameter_names – names of the parameters; not used directly here, but by the multinest_marginal.py plotting tool
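A hedged sketch of the underlying call in PyMultiNest's classic interface; the prior transform, likelihood, and output basename below are illustrative:

    import pymultinest

    parameter_names = ['norm', 'slope']   # as passed to the wrapper above

    def prior(cube, ndim, nparams):
        # map the unit cube onto the parameter space, in place
        cube[0] = cube[0] * 10.0          # norm in [0, 10]
        cube[1] = cube[1] * 4.0 - 2.0     # slope in [-2, 2]

    def loglike(cube, ndim, nparams):
        # toy Gaussian log-likelihood
        return -0.5 * ((cube[0] - 3.0)**2 + cube[1]**2)

    # the chains/ output directory must exist beforehand
    pymultinest.run(loglike, prior, len(parameter_names),
                    outputfiles_basename='chains/example_',
                    resume=False, verbose=True)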
Two custom algorithms are implemented:
jbopt.optimize1d.optimize() (used by jbopt.classic.onebyone())
1D optimization that is robust against local flatness in the target function and also estimates errors on the final parameter.
It is assumed that there is only one minimum, and that it lies far from the constraints.
- jbopt.optimize1d.optimize(function, x0, cons=[], ftol=0.2, disp=0, plot=False)
An optimization method based on Brent's method.
First, a bracket (a, b, c) is sought that contains the minimum (the function value at b is smaller than at both a and c).
The bracket is then recursively halved. Some modifications are applied here to ensure the suggested point is not too close to either a or c, because that could be problematic for the local approximation. Also, if the bracket does not appear to include the minimum, it is expanded generously in the right direction until it covers it.
Thus, this function is fail-safe and will always find a local minimum.
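To make the bracketing step concrete, here is a toy sketch of the expansion logic just described; the actual jbopt.optimize1d.optimize() adds the recursive halving, flatness handling, and further safeguards not shown here:

    def find_bracket(f, a, b, c, max_expand=50):
        # search for (a, b, c) with f(b) below both f(a) and f(c),
        # expanding the bracket generously when the minimum lies outside
        fa, fb, fc = f(a), f(b), f(c)
        for _ in range(max_expand):
            if fb <= fa and fb <= fc:
                return a, b, c            # minimum is contained
            if fa < fb:                   # still decreasing to the left
                a, b, c = a - 2 * (b - a), a, b
                fa, fb, fc = f(a), fa, fb
            else:                         # still decreasing to the right
                a, b, c = b, c, c + 2 * (c - b)
                fa, fb, fc = fb, fc, f(c)
        raise RuntimeError('could not bracket a minimum')

    a, b, c = find_bracket(lambda x: (x - 7.0)**2, 0.0, 1.0, 2.0)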
Custom minimization algorithms in jbopt
Minimization routines that assume the parameters are mostly independent of each other, optimizing each parameter in turn.
- jbopt.independent.opt_grid(params, func, limits, ftol=0.01, disp=0, compute_errors=True)
See optimize1d.optimize(); considers each parameter in order. A usage sketch follows the parameter list below.
Parameters:
- ftol – difference in function values below which the function is considered flat
- compute_errors – compute the standard deviation of a Gaussian around the optimum
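A usage sketch with the signature documented above, assuming the params argument takes the starting values (the target function and limits are made up for illustration):

    from jbopt.independent import opt_grid

    limits = [(0.0, 10.0), (-5.0, 5.0)]

    def func(params):
        # chi^2-like target with its minimum at (3, -1)
        return (params[0] - 3.0)**2 + (params[1] + 1.0)**2

    start = [5.0, 0.0]   # assumed meaning of the params argument
    best = opt_grid(start, func, limits, ftol=0.01, disp=1,
                    compute_errors=True)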
- jbopt.independent.opt_grid_parallel(params, func, limits, ftol=0.01, disp=0, compute_errors=True)
Parallelized version of opt_grid().
- jbopt.independent.opt_normalizations(params, func, limits, abandon_threshold=100, noimprovement_threshold=0.001, disp=0)
Optimization algorithm for scale variables (positive values of unknown magnitude).
Each parameter is a normalization of a feature, and its value is sought. The parameters are handled in order (assumed to be independent), but a second round can be run. Various magnitudes of the normalization are tried. If the normalization converges to zero, the largest value yielding a comparable function value is used.
Each normalization parameter is optimized in rough steps, using multiples of 3 of the starting point, to find reasonable starting values for another algorithm (see the usage sketch after the parameter list).
Parameters:
- params – parameters
- func – minimization function
- limits – parameter space definition [(lo, hi) for i in params]
- abandon_threshold – if in one direction the function increases by this much over the best value, abort search in this direction
- noimprovement_threshold – when decreasing the normalization, if the function increases by less than this amount, abort search in this direction
- disp – verbosity
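A usage sketch with the documented signature, again assuming the params argument takes the starting values; the toy target has its optimum at normalizations 2.0 and 0.5:

    import numpy as np
    from jbopt.independent import opt_normalizations

    limits = [(1e-3, 1e3), (1e-3, 1e3)]

    def func(norms):
        # toy target over two normalizations of unknown magnitude
        return (np.log10(norms[0] / 2.0))**2 + (np.log10(norms[1] / 0.5))**2

    start = [1.0, 1.0]   # assumed meaning of the params argument
    best = opt_normalizations(start, func, limits,
                              abandon_threshold=100,
                              noimprovement_threshold=0.001, disp=1)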