ultranest package
Submodules
ultranest.calibrator module
Calibration of step sampler
- class ultranest.calibrator.ReactiveNestedCalibrator(param_names, loglike, transform=None, **kwargs)[source]
Bases:
object
Calibrator for the number of steps in step samplers.
The number of steps in a step sampler needs to be chosen. A recommended calibration procedure (e.g. https://ui.adsabs.harvard.edu/abs/2019MNRAS.483.2044H) is to run a sequence of nested sampling runs with an increasing number of steps, and stop when log(Z) converges.
This class automates this procedure. See ReactiveNestedCalibrator.run() for details.
Usage
Usage is designed to be a drop-in replacement for ReactiveNestedSampler.
- If your code was::
    sampler = ReactiveNestedSampler(my_param_names, my_loglike, my_transform)
    sampler.stepsampler = SliceSampler(nsteps=10, generate_direction=region_oriented_direction)
    sampler.run(min_num_live_points=400)
- You would change it to::
    sampler = ReactiveNestedCalibrator(my_param_names, my_loglike, my_transform)
    sampler.stepsampler = SliceSampler(nsteps=10, generate_direction=region_oriented_direction)
    sampler.run(min_num_live_points=400)
The run() command will print the number of slice sampler steps that appear safe for the inference task.
The initial value for nsteps (e.g. in SliceSampler(nsteps=…)) is overwritten by this class.
Initialise nested sampler calibrator.
- param param_names:
Names of the parameters. Length gives dimensionality of the sampling problem.
- type param_names:
list of str
- param loglike:
log-likelihood function.
- type loglike:
function
- param transform:
parameter transform from unit cube to physical parameters.
- type transform:
function
- param kwargs:
further arguments passed to ReactiveNestedSampler
- type kwargs:
dict
If log_dir is set, then the suffix "-nsteps%d" is added for each run, where %d is replaced with the number of steps (2, 4, 8, etc.).
- run(**kwargs)[source]
Run a sequence of ReactiveNestedSampler runs until convergence.
The first run is made with the number of steps set to the number of parameters. Each subsequent run doubles the number of steps. Runs are made until convergence is reached. Then this generator stops yielding results.
Convergence is defined as three consecutive runs whose log(Z) results 1) are not monotonically ordered and 2) have mutually overlapping error bars.
- Parameters:
**kwargs (dict) – All arguments are passed to ReactiveNestedSampler.run().
- Yields:
nsteps (int) – number of steps for the current run
result (dict) – return value of
ReactiveNestedSampler.run()
for the current run
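Putting this together, a minimal sketch of the calibration loop, assuming a hypothetical 10-dimensional Gaussian toy problem and iterating run() as the generator documented above (generate_region_oriented_direction is assumed to be one of the direction helpers in ultranest.stepsampler)::

    import numpy as np
    from ultranest.calibrator import ReactiveNestedCalibrator
    from ultranest.stepsampler import SliceSampler, generate_region_oriented_direction

    # hypothetical toy problem: 10-d Gaussian centred in the unit cube
    param_names = ['p%d' % i for i in range(10)]

    def loglike(theta):
        return -0.5 * np.sum(((theta - 0.5) / 0.1)**2)

    def transform(cube):
        return cube.copy()

    calibrator = ReactiveNestedCalibrator(param_names, loglike, transform)
    calibrator.stepsampler = SliceSampler(
        nsteps=10,  # initial value; the calibrator overrides it for each run
        generate_direction=generate_region_oriented_direction)

    # each run doubles nsteps until the log(Z) estimates agree
    for nsteps, result in calibrator.run(min_num_live_points=400):
        print(nsteps, result['logz'], result['logzerr'])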
ultranest.dychmc module
Constrained Hamiltonian Monte Carlo step sampling.
Uses gradient to reflect at nested sampling boundaries.
- ultranest.dychmc.stop_criterion(thetaminus, thetaplus, rminus, rplus)[source]
Compute the stop condition in the main loop
computes: dot(dtheta, rminus) >= 0 & dot(dtheta, rplus) >= 0
- Parameters:
thetaminus (ndarray[float, ndim=1]) – under position
thetaplus (ndarray[float, ndim=1]) – above position
rminus (ndarray[float, ndim=1]) – under momentum
rplus (ndarray[float, ndim=1]) – above momentum
- Returns:
criterion – whether the condition is valid
- Return type:
bool
- ultranest.dychmc.step_or_reflect(theta, v, epsilon, transform, loglike, gradient, Lmin)[source]
Make a step from theta towards v with stepsize epsilon.
- ultranest.dychmc.build_tree(theta, v, direction, j, epsilon, transform, loglike, gradient, Lmin)[source]
The main recursion.
- ultranest.dychmc.tree_sample(theta, p, logL, v, epsilon, transform, loglike, gradient, Lmin, maxheight=inf)[source]
Build NUTS-like tree of sampling path from theta towards p with stepsize epsilon.
- ultranest.dychmc.generate_uniform_direction(d, massmatrix)[source]
Draw unit direction vector according to mass matrix.
- class ultranest.dychmc.DynamicCHMCSampler(scale, nsteps, adaptive_nsteps=False, delta=0.9, nudge=1.04)[source]
Bases:
object
Dynamic Constrained Hamiltonian/Hybrid Monte Carlo technique
Runs a billiard ball inside the likelihood constraint. The ball reflects off the constraint boundary.
The trajectory is explored in steps of stepsize epsilon. A No-U-turn criterion and randomized doubling of forward or backward steps is used to avoid repeating circular trajectories. Because of this, the number of steps is dynamic.
- Parameters:
nsteps (int) – number of accepted steps until the sample is considered independent.
adaptive_nsteps (False, 'proposal-distance', 'move-distance') – if not false, allow earlier termination than nsteps. The ‘proposal-distance’ strategy stops when the sum of all proposed vectors exceeds the mean distance between pairs of live points. As distance, the Mahalanobis distance is used. The ‘move-distance’ strategy stops when the distance between start point and current position exceeds the mean distance between pairs of live points.
delta (float) – step size
nudge (float) – change in step size, must be >1.
- move(ui, pi, Li, region, Lmin, ndraw=1, plot=False)[source]
Move from position ui, pi, Li with an HMC trajectory.
- Returns:
unew (vector) – new position in cube space
pnew (vector) – new position in physical parameter space
Lnew (float) – new likelihood
nc (int) – number of likelihood evaluations
alpha (float) – acceptance rate of trajectory
treeheight (int) – height of NUTS tree
ultranest.dyhmc module
Experimental constrained Hamiltonian Monte Carlo step sampling
Contrary to CHMC, this uses the likelihood gradients throughout the path. A helper surface is created using the live points.
- ultranest.dyhmc.stop_criterion(thetaminus, thetaplus, rminus, rplus)[source]
Compute the stop condition in the main loop: dot(dtheta, rminus) >= 0 & dot(dtheta, rplus) >= 0
- Parameters:
thetaminus (ndarray[float, ndim=1]) – under position
thetaplus (ndarray[float, ndim=1]) – above position
rminus (ndarray[float, ndim=1]) – under momentum
rplus (ndarray[float, ndim=1]) – above momentum
- Returns:
criterion – whether the condition is valid
- Return type:
bool
- ultranest.dyhmc.leapfrog(theta, r, grad, epsilon, invmassmatrix, f)[source]
Leapfrog step from theta with momentum r and stepsize epsilon. The local gradient grad is updated with function f.
- ultranest.dyhmc.build_tree(theta, r, grad, v, j, epsilon, invmassmatrix, f, joint0)[source]
The main recursion.
- ultranest.dyhmc.tree_sample(theta, logp, r0, grad, extra, epsilon, invmassmatrix, f, joint, maxheight=inf)[source]
Build NUTS-like tree of sampling path from theta towards p with stepsize epsilon.
- ultranest.dyhmc.find_beta_params_static(d, u10)[source]
Define auxiliary distribution following naive intuition. Make 50% quantile to be at u=0.1, and very flat at high u.
- ultranest.dyhmc.find_beta_params_dynamic(d, u10)[source]
Define auxiliary distribution taking into account kinetic energy of a d-dimensional HMC. Make exp(-d/2) quantile to be at u=0.1, and 95% quantile at u=0.5.
- ultranest.dyhmc.generate_momentum_normal(d, massmatrix)[source]
Draw direction vector according to mass matrix.
- ultranest.dyhmc.generate_momentum(d, massmatrix, alpha, beta)[source]
Draw momentum from a circle, with amplitude following the beta distribution.
- ultranest.dyhmc.generate_momentum_circle(d, massmatrix)[source]
Draw from a circle, with a little noise in amplitude.
- ultranest.dyhmc.generate_momentum_flattened(d, massmatrix)[source]
Like a normal distribution, but make the momenta distributed like a single Gaussian. This is the variant being used.
- class ultranest.dyhmc.FlattenedProblem(d, Ls, function, layer)[source]
Bases:
object
Creates a suitable auxiliary distribution from samples of likelihood values
The distribution is the CDF of a beta distribution, with 0 -> logLmin, 0.5 -> 10% quantile of the logLs, 1 -> 90% quantile of the logLs.
.modify_Lgrad() returns the conversion from logL, grad to the equivalents on the auxiliary distribution.
.__call__(x) returns logL, grad on the auxiliary distribution.
- class ultranest.dyhmc.DynamicHMCSampler(ndim, nsteps, transform_loglike_gradient, delta=0.9, nudge=1.04)[source]
Bases:
object
Dynamic Hamiltonian/Hybrid Monte Carlo technique
Typically, HMC operates on the posterior. It has the benefit of producing “orbit” trajectories that can follow the guidance of gradients.
In nested sampling, we need to sample the prior subject to the likelihood constraint. This means an HMC would, most of the time, go in straight lines until it steps outside the boundary. Techniques such as Constrained HMC and Galilean MC use the gradient outside to select the reflection direction.
However, it would be beneficial to be repelled by the likelihood boundary, and to take advantage of gradient guidance. This implements a new technique that does this.
The trick is to define an auxiliary distribution from the likelihood, generate HMC trajectories from it, and draw points from the trajectory with probability inverse to the auxiliary distribution, so as to sample from the prior. Thus, the auxiliary distribution should be mostly flat, and go to zero at the boundaries to repel the HMC.
Given Lmin and Lmax from the live points, use a beta approximation of the log-likelihood::

    p = 1 if L > Lmin
    u = (L - Lmin) / (Lmax - Lmin)
    p = Beta_PDF(u; alpha, beta)

then define::

    d log(p) / dx = dlog(p_orig)/dlog(p) * dlog(p_orig)/dx
    new gradient = conversion * original gradient

with conversion::

    dlogp/du = 0 if u > 1; otherwise:
    dlogp/du = u**(1-alpha) * (1-u)**(1-beta) / Ic(u; alpha, beta) / Beta_PDF(u; alpha, beta)
    du/dL = 1 / (Lmax - Lmin)
The beta distribution achieves:
- a flattening of the log-likelihood to avoid seeing only “walls”
- using the gradient to identify how to orbit the likelihood contour
- at higher, unseen likelihoods, the exploration is in straight lines
- trajectories do not have the energy to go below Lmin
- the alpha and beta parameters allow a flexible choice of “contour avoidance”
Run the HMC trajectory on p; this draws samples proportional to p. Modify the multinomial acceptance by 1/p to obtain uniform samples, and reject points with porig < p_1.
The remaining choices for HMC are how long the trajectories should run (number of steps) and the step size. The former is solved by No-U-Turn Sampler or dynamic HMC, which randomly build forward and backward paths until the trajectory turns around. Then, a random point from the trajectory is chosen.
The step size is chosen by targeting an acceptance rate of delta~0.95, decreasing (increasing) it every time the region is rebuilt if the acceptance rate is below (above) the target.
Initialise sampler.
- Parameters:
nsteps (int) – number of accepted steps until the sample is considered independent.
transform_loglike_gradient (function) – called with unit cube position vector u, returns transformed parameter vector p, loglikelihood and gradient (dlogL/du, not just dlogL/dp)
- move(ui, pi, Li, gradi, region, ndraw=1, Lflat=None, gradflat=None, plot=False)[source]
Move from position ui, pi, Li, gradi with an HMC trajectory.
- Returns:
unew (vector) – new position in cube space
pnew (vector) – new position in physical parameter space
Lnew (float) – new likelihood
gradnew (vector) – new gradient
Lflat (float) – new likelihood on auxiliary distribution
gradflat (vector) – new gradient on auxiliary distribution
nc (int) – number of likelihood evaluations
alpha (float) – acceptance rate of HMC trajectory
beta (float) – acceptance rate of inverse-beta-biased HMC trajectory
ultranest.flatnuts module
FLATNUTS is an implementation of a No-U-Turn sampler for nested sampling, assuming a flat prior space (hyper-cube u-space).
This is highly experimental. It is similar to NoGUTS and suffers from the same stability problems.
Directional sampling within regions.
Works in unit cube space and assumes a step size.
starting from a live point
choose a random direction based on whitened space metric
for forward and backward direction:
find distance where leaving spheres (surely outside)
bisect the step that leads out of the likelihood threshold
can we scatter forward?
if we stepped outside the unit cube, use normal to the parameter(s) we stepped out from
if gradient available, use it at first outside point
for each sphere that contains the last inside point:
resize so that first outside point is on the surface, get tangential vector there (this vector is just the difference between sphere center and last inside point)
compute reflection of direction vector with tangential plane
choose a forward reflection at random (if any)
test if the next point is inside again. If yes, continue NUTS
- NUTS:
alternatingly double the number of steps to the forward or backward side
build a tree; terminate when start and end directions are not forward any more
choose an end point at random out of the sequence
If the number of steps on any straight line is <10, make the step size smaller. If the number of steps on any straight line is >100, make the step size slightly bigger.
- param Number of NUTS tracks:
has to be user-tuned to ensure sufficiently independent samples; starting from 1, check when Z does not change anymore
- param Step size:
self-adjusting
- Benefits of this algorithm:
insensitive to step size
insensitive to dimensionality (sqrt scaling), better than slice sampling
takes advantage of region information, can accelerate low-d problems as well
- Drawbacks:
inaccurate reflections degrade dimensionality scaling
more complex to implement than slice sampling
- class ultranest.flatnuts.SingleJumper(stepsampler, nsteps=0)[source]
Bases:
object
Jump one step at a time. If unsuccessful, reverse direction.
- class ultranest.flatnuts.DirectJumper(stepsampler, nsteps, log=False)[source]
Bases:
object
Jump n steps immediately. If unsuccessful, take the remaining steps in the other direction.
- class ultranest.flatnuts.IntervalJumper(stepsampler, nsteps)[source]
Bases:
object
Use an interval to choose the final point randomly.
- class ultranest.flatnuts.ClockedSimpleStepSampler(contourpath, plot=False, log=False)[source]
Bases:
object
Find a new point with a series of small steps
Starts a sampling track from x in direction v. is_inside is a function that returns true when a given point is inside the volume
epsilon gives the step size in direction v. samples, if given, helps choose the gradient (to be removed). plot: if set to true, make some debug plots.
- reverse(reflpoint, v, plot=False)[source]
Reflect off the surface at reflpoint going in direction v; returns the new direction.
- class ultranest.flatnuts.ClockedStepSampler(contourpath, plot=False, log=False)[source]
Bases:
ClockedSimpleStepSampler
Find a new point with a series of small steps
Starts a sampling track from x in direction v. is_inside is a function that returns true when a given point is inside the volume
epsilon gives the step size in direction v. samples, if given, helps choose the gradient (to be removed). plot: if set to true, make some debug plots.
- class ultranest.flatnuts.ClockedBisectSampler(contourpath, plot=False, log=False)[source]
Bases:
ClockedStepSampler
Step sampler that does not require each step to be evaluated
Starts a sampling track from x in direction v. is_inside is a function that returns true when a given point is inside the volume
epsilon gives the step size in direction v. samples, if given, helps choose the gradient (to be removed). plot: if set to true, make some debug plots.
- class ultranest.flatnuts.ClockedNUTSSampler(contourpath, plot=False, log=False)[source]
Bases:
ClockedBisectSampler
No-U-turn sampler (NUTS) on flat surfaces.
Starts a sampling track from x in direction v. is_inside is a function that returns true when a given point is inside the volume
epsilon gives the step size in direction v. samples, if given, helps choose the gradient (to be removed). plot: if set to true, make some debug plots.
- next(Llast=None)[source]
Alternatingly doubles the number of steps to forward and backward direction (which may include reflections, see StepSampler and BisectSampler). When track returns (start and end of tree point toward each other), terminates and returns a random point on that track.
ultranest.hotstart module
Warm start
Helper functions for deforming the parameter space to enable more efficient sampling.
Based on ideas from Petrosyan & Handley (2022, https://arxiv.org/abs/2212.01760).
- ultranest.hotstart.get_auxiliary_problem(loglike, transform, ctr, invcov, enlargement_factor, df=1)[source]
Return a new loglike and transform based on an auxiliary distribution.
Given a likelihood and prior transform, and information about the (expected) posterior peak, generates an auxiliary likelihood and prior transform that is identical but requires fewer nested sampling iterations.
This is achieved by deforming the prior space, and undoing that transformation by correction weights in the likelihood.
The auxiliary distribution used for transformation/weighting is a d-dimensional Student-t distribution.
Usage:
    aux_loglikelihood, aux_aftertransform = get_auxiliary_problem(
        loglike, transform, ctr, invcov, enlargement_factor, df=1)
    aux_sampler = ReactiveNestedSampler(parameters, aux_loglikelihood)
    aux_results = aux_sampler.run()
    posterior_samples = [aux_aftertransform(sample) for sample in aux_results['samples']]
- Parameters:
loglike (function) – original likelihood function
transform (function) – original prior transform function
ctr (array) – Posterior center (in u-space).
invcov (array) – Covariance of the posterior (in u-space).
enlargement_factor (float) – Factor by which the scale of the auxiliary distribution is enlarged in all dimensions. For Gaussian-like posteriors, sqrt(ndim) seems to work. Heavier-tailed or non-elliptical distributions may need larger factors.
df (float) – Number of degrees of freedom of the auxiliary Student-t distribution. The default is recommended. For truly Gaussian posteriors, the Student-t can be made more Gaussian (by df>=30) for acceleration.
- Returns:
aux_loglike (function) – auxiliary loglikelihood function.
aux_aftertransform (function) – auxiliary transform function. Takes d u-space coordinates, and returns d + 1 p-space parameters. The first d return coordinates are identical to what
transform
would return. The final coordinate is the correction weight.
- ultranest.hotstart.get_extended_auxiliary_problem(loglike, transform, ctr, invcov, enlargement_factor, df=1)[source]
Return a new loglike and transform based on an auxiliary distribution.
Given a likelihood and prior transform, and information about the (expected) posterior peak, generates an auxiliary likelihood and prior transform that is identical but requires fewer nested sampling iterations.
This is achieved by deforming the prior space, and undoing that transformation by correction weights in the likelihood.
The auxiliary distribution used for transformation/weighting is a d-dimensional Student-t distribution.
- Parameters:
loglike (function) – original likelihood function
transform (function) – original prior transform function
ctr (array) – Posterior center (in u-space).
invcov (array) – Covariance of the posterior (in u-space).
enlargement_factor (float) –
Factor by which the scale of the auxiliary distribution is enlarged in all dimensions.
For Gaussian-like posteriors, sqrt(ndim) seems to work. Heavier-tailed or non-elliptical distributions may need larger factors.
df (float) – Number of degrees of freedom of the auxiliary Student-t distribution. The default is recommended. For truly Gaussian posteriors, the Student-t can be made more Gaussian (by df>=30) for acceleration.
- Returns:
aux_loglike (function) – auxiliary loglikelihood function. Takes d + 1 parameters (see below). The likelihood is the same as loglike, but adds weights.
aux_transform (function) – auxiliary transform function. Takes d u-space coordinates, and returns d + 1 p-space parameters. The first d return coordinates are identical to what
transform
would return. The final coordinate is the correction weight.
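A hedged usage sketch, mirroring the usage shown for the other auxiliary-problem helpers in this module; loglike, transform and the posterior guess (ctr, invcov) below are hypothetical placeholders::

    import numpy as np
    from ultranest import ReactiveNestedSampler
    from ultranest.hotstart import get_extended_auxiliary_problem

    # hypothetical original problem (2 parameters, uniform prior on the unit cube)
    def loglike(theta):
        return -0.5 * np.sum(((theta - np.array([0.6, 0.4])) / 0.01)**2)

    def transform(cube):
        return cube.copy()

    # hypothetical guess of the posterior peak, in u-space
    ctr = np.array([0.6, 0.4])
    invcov = np.linalg.inv(np.diag([0.01**2, 0.01**2]))

    aux_loglike, aux_transform = get_extended_auxiliary_problem(
        loglike, transform, ctr, invcov, enlargement_factor=len(ctr)**0.5, df=1)
    aux_sampler = ReactiveNestedSampler(['a', 'b'], aux_loglike,
        transform=aux_transform, derived_param_names=['aux_logweight'])
    aux_results = aux_sampler.run()
    weights = aux_results['samples'][:, -1]  # last column: the correction weight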
- ultranest.hotstart.get_extended_auxiliary_independent_problem(loglike, transform, ctr, err, df=1)[source]
Return a new loglike and transform based on an auxiliary distribution.
Given a likelihood and prior transform, and information about the (expected) posterior peak, generates an auxiliary likelihood and prior transform that is identical but requires fewer nested sampling iterations.
This is achieved by deforming the prior space, and undoing that transformation by correction weights in the likelihood.
The auxiliary distribution used for transformation/weighting is an independent Student-t distribution for each parameter.
Usage:
    aux_loglikelihood, aux_transform = get_extended_auxiliary_independent_problem(
        loglike, transform, ctr, err, df=1)
    aux_sampler = ReactiveNestedSampler(parameters, aux_loglikelihood,
        transform=aux_transform, derived_param_names=['logweight'])
    aux_results = aux_sampler.run()
    posterior_samples = aux_results['samples'][:,-1]
- Parameters:
loglike (function) – original likelihood function
transform (function) – original prior transform function
ctr (array) – Posterior center (in u-space).
err (array) – Standard deviation around the posterior center (in u-space).
df (float) – Number of degrees of freedom of the auxiliary Student-t distribution. The default is recommended. For truly Gaussian posteriors, the Student-t can be made more Gaussian (by df>=30) for acceleration.
- Returns:
aux_loglike (function) – auxiliary loglikelihood function.
aux_transform (function) – auxiliary transform function. Takes d u-space coordinates, and returns d + 1 p-space parameters. The first d return coordinates are identical to what
transform
would return. The final coordinate is the log of the correction weight.
- ultranest.hotstart.compute_quantile_intervals(steps, upoints, uweights)[source]
Compute lower and upper axis quantiles.
- Parameters:
steps (array) – list of quantiles q to compute.
upoints (array) – samples, with dimensions (N, d)
uweights (array) – sample weights
- Returns:
ulo (array) – list of lower quantiles (at q), one entry for each dimension d.
uhi (array) – list of upper quantiles (at 1-q), one entry for each dimension d.
- ultranest.hotstart.compute_quantile_intervals_refined(steps, upoints, uweights, logsteps_max=20)[source]
Compute lower and upper axis quantiles.
- Parameters:
steps (array) – list of quantiles q to compute.
upoints (array) – samples, with dimensions (N, d)
uweights (array) – sample weights. N entries.
logsteps_max (int) – number of intermediate steps to inject between the largest quantile interval and the full unit cube
- Returns:
ulo (array) – list of lower quantiles (at q), of shape (M, d), one entry per quantile and dimension d.
uhi (array) – list of upper quantiles (at 1-q), of shape (M, d), one entry per quantile and dimension d.
uinterpspace (array) – list of steps (length of steps plus logsteps_max long)
- ultranest.hotstart.get_auxiliary_contbox_parameterization(param_names, loglike, transform, upoints, uweights, vectorized=False)[source]
Return a new loglike and transform based on an auxiliary distribution.
Given a likelihood and prior transform, and information about the (expected) posterior peak, generates an auxiliary likelihood and prior transform that is identical but requires fewer nested sampling iterations.
This is achieved by deforming the prior space, and undoing that transformation by correction weights in the likelihood. An additional parameter, "aux_logweight", is added at the end, which contains the correction weight. You can ignore it.
The auxiliary distribution used for transformation/weighting is factorized. Each axis considers the ECDF of the auxiliary samples, and segments it into quantile segments. Within each segment, the parameter edges in u-space are linearly interpolated. To see the interpolation quantiles for each axis, use:
    steps = 10**-(1.0 * np.arange(1, 8, 2))
    ulos, uhis, uinterpspace = compute_quantile_intervals_refined(steps, upoints, uweights)
- Parameters:
param_names (list) – parameter names
loglike (function) – original likelihood function
transform (function) – original prior transform function
upoints (array) – Posterior samples (in u-space).
uweights (array) – Weights of the samples (need to sum to 1)
vectorized (bool) – whether the loglike & transform functions are vectorized
- Returns:
aux_param_names (list) – new parameter names (param_names) plus additional ‘aux_logweight’
aux_loglike (function) – auxiliary loglikelihood function.
aux_transform (function) – auxiliary transform function. Takes d u-space coordinates, and returns d + 1 p-space parameters. The first d return coordinates are identical to what
transform
would return. The final coordinate is the log of the correction weight.
vectorized (bool) – whether the returned functions are vectorized
Usage::

    aux_param_names, aux_loglikelihood, aux_transform, vectorized = get_auxiliary_contbox_parameterization(
        param_names, loglike, transform, upoints, uweights)
    aux_sampler = ReactiveNestedSampler(aux_param_names, aux_loglikelihood,
        transform=aux_transform, vectorized=vectorized)
    aux_results = aux_sampler.run()
    posterior_samples = aux_results['samples'][:,-1]
- ultranest.hotstart.reuse_samples(param_names, loglike, points, logl, logw=None, logz=0.0, logzerr=0.0, upoints=None, batchsize=128, vectorized=False, log_weight_threshold=-10, **kwargs)[source]
Reweight existing nested sampling run onto a new loglikelihood.
- Parameters:
param_names (list of strings) – Names of the parameters
loglike (function) – New likelihood function
points (np.array of shape (npoints, ndim)) – Equally weighted (unless logw is passed) posterior points
logl (np.array(npoints)) – Likelihood values of the points from the previous run
logw (np.array(npoints)) – Log-weights of existing points.
logz (float) – Previous evidence / marginal likelihood value.
logzerr (float) – Previous evidence / marginal likelihood uncertainty.
upoints (np.array of shape (npoints, ndim)) – Posterior points before transformation.
vectorized (bool) – Whether loglike function is vectorized
batchsize (int) – Number of points simultaneously passed to vectorized loglike function
log_weight_threshold (float) – Lowest log-weight to consider
- Returns:
results – All information of the run. Important keys: Number of nested sampling iterations (niter), Evidence estimate (logz), Effective Sample Size (ess), weighted samples (weighted_samples), equally weighted samples (samples), best-fit point information (maximum_likelihood), posterior summaries (posterior).
- Return type:
dict
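A hedged sketch of reweighting: the "previous run" below is faked with hypothetical arrays, and the new likelihood is a slightly shifted Gaussian::

    import numpy as np
    from ultranest.hotstart import reuse_samples

    # hypothetical equally weighted posterior points from a previous, similar fit
    rng = np.random.default_rng(1)
    prev_points = rng.normal(0.5, 0.1, size=(1000, 2))
    prev_logl = np.array([-0.5 * np.sum(((p - 0.5) / 0.1)**2) for p in prev_points])

    # new, slightly shifted likelihood to reweight onto
    def new_loglike(theta):
        return -0.5 * np.sum(((theta - 0.45) / 0.1)**2)

    results = reuse_samples(['a', 'b'], new_loglike, prev_points, prev_logl,
                            logz=0.0, logzerr=0.1)
    print(results['logz'], results['ess'])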
ultranest.integrator module
Nested sampling integrators
This module provides the high-level class ReactiveNestedSampler, for calculating the Bayesian evidence and posterior samples of arbitrary models.
- class ultranest.integrator.ReactiveNestedSampler(param_names, loglike, transform=None, derived_param_names=[], wrapped_params=None, resume='subfolder', run_num=None, log_dir=None, num_test_samples=2, draw_multiple=True, num_bootstraps=30, vectorized=False, ndraw_min=128, ndraw_max=65536, storage_backend='hdf5', warmstart_max_tau=-1)[source]
Bases:
object
Nested sampler with reactive exploration strategy.
Storage & resume capable, optionally MPI parallelised.
Initialise nested sampler.
- Parameters:
param_names (list of str, names of the parameters.) – Length gives dimensionality of the sampling problem.
loglike (function) – log-likelihood function. Receives multiple parameter vectors, returns vector of likelihood.
transform (function) – parameter transform from unit cube to physical parameters. Receives multiple cube vectors, returns multiple parameter vectors.
derived_param_names (list of str) – Additional derived parameters created by transform. (empty by default)
log_dir (str) – where to store output files
resume ('resume', 'resume-similar', 'overwrite' or 'subfolder') –
if ‘overwrite’, overwrite previous data.
if ‘subfolder’, create a fresh subdirectory in log_dir.
if ‘resume’ or True, continue previous run if available. Only works when dimensionality, transform or likelihood are consistent.
if ‘resume-similar’, continue previous run if available. Only works when dimensionality and transform are consistent. If a likelihood difference is detected, the existing likelihoods are updated until the live point order differs. Otherwise, behaves like resume.
run_num (int or None) – If resume==’subfolder’, this is the subfolder number. Automatically increments if set to None.
wrapped_params (list of bools) – indicating whether this parameter wraps around (circular parameter).
num_test_samples (int) – test transform and likelihood with this number of random points for errors first. Useful to catch bugs.
vectorized (bool) – If true, loglike and transform function can receive arrays of points.
draw_multiple (bool) – If efficiency goes down, dynamically draw more points from the region between ndraw_min and ndraw_max. If set to False, few points are sampled at once.
ndraw_min (int) – Minimum number of points to simultaneously propose. Increase this if your likelihood makes vectorization very cheap.
ndraw_max (int) – Maximum number of points to simultaneously propose. Increase this if your likelihood makes vectorization very cheap. Memory allocation may be slow for extremely high values.
num_bootstraps (int) – number of logZ estimators and MLFriends region bootstrap rounds.
storage_backend (str or class) – Class to use for storing the evaluated points (see ultranest.store) ‘hdf5’ is strongly recommended. ‘tsv’ and ‘csv’ are also possible.
warmstart_max_tau (float) – Maximum disorder to accept when resume=’resume-similar’; Live points are reused as long as the live point order is below this normalised Kendall tau distance. Values from 0 (highly conservative) to 1 (extremely negligent).
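For orientation, a minimal setup sketch with a hypothetical two-parameter toy problem (parameter names and values are illustrative only)::

    import numpy as np
    from ultranest import ReactiveNestedSampler

    param_names = ['amplitude', 'offset']   # hypothetical parameters

    def loglike(theta):
        # narrow Gaussian centred at (0.6, 0.4)
        return -0.5 * np.sum(((theta - np.array([0.6, 0.4])) / 0.05)**2)

    def transform(cube):
        # uniform priors on [0, 1] for both parameters
        return cube.copy()

    sampler = ReactiveNestedSampler(param_names, loglike, transform=transform)
    # optionally persist to disk and allow resuming:
    # sampler = ReactiveNestedSampler(param_names, loglike, transform=transform,
    #                                 log_dir='logs/toy', resume='subfolder')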
- run(update_interval_volume_fraction=0.8, update_interval_ncall=None, log_interval=None, show_status=True, viz_callback='auto', dlogz=0.5, dKL=0.5, frac_remain=0.01, Lepsilon=0.001, min_ess=400, max_iters=None, max_ncalls=None, max_num_improvement_loops=-1, min_num_live_points=400, cluster_num_live_points=40, insertion_test_zscore_threshold=4, insertion_test_window=10, region_class=<class 'ultranest.mlfriends.MLFriends'>, widen_before_initial_plateau_num_warn=10000, widen_before_initial_plateau_num_max=50000)[source]
Run until target convergence criteria are fulfilled.
- Parameters:
update_interval_volume_fraction (float) – Update the region when the volume has shrunk by this amount.
update_interval_ncall (int) – Update region after update_interval_ncall likelihood calls (not used).
log_interval (int) – Update stdout status line every log_interval iterations
show_status (bool) – show integration progress as a status line. If no output desired, set to False.
viz_callback (function) – callback function when the region was rebuilt. Allows showing the current state of the live points. See nicelogger() or LivePointsWidget. If no output desired, set to False.
dlogz (float) – Target evidence uncertainty. This is the std between bootstrapped logz integrators.
dKL (float) – Target posterior uncertainty. This is the Kullback-Leibler divergence in nat between bootstrapped integrators.
frac_remain (float) – Integrate until this fraction of the integral is left in the remainder. Set to a low number (1e-2 … 1e-5) to make sure peaks are discovered. Set to a higher number (0.5) if you know the posterior is simple.
Lepsilon (float) – Terminate when live point likelihoods are all the same, within Lepsilon tolerance. Increase this when your likelihood function is inaccurate, to avoid unnecessary search.
min_ess (int) – Target number of effective posterior samples.
max_iters (int) – maximum number of integration iterations.
max_ncalls (int) – stop after this many likelihood evaluations.
max_num_improvement_loops (int) – run() tries to assess iteratively where more samples are needed. This number limits the number of improvement loops.
min_num_live_points (int) – minimum number of live points throughout the run
cluster_num_live_points (int) – require at least this many live points per detected cluster
insertion_test_zscore_threshold (float) – z-score used as a threshold for the insertion order test. Set to infinity to disable.
insertion_test_window (int) – Number of iterations after which the insertion order test is reset.
region_class (
MLFriends
orRobustEllipsoidRegion
orSimpleRegion
) – Whether to use the MLFriends+ellipsoidal+tellipsoidal region (better for multi-modal problems), just ellipsoidal sampling (faster for high-dimensional, Gaussian-like problems), or an axis-aligned ellipsoid (fastest, to be combined with slice sampling).
widen_before_initial_plateau_num_warn (int) – If a likelihood plateau is encountered, increase the number of initial live points so that once the plateau is traversed, min_num_live_points live points remain. If the number exceeds widen_before_initial_plateau_num_warn, a warning is raised.
widen_before_initial_plateau_num_max (int) – If a likelihood plateau is encountered, increase the number of initial live points so that once the plateau is traversed, min_num_live_points live points remain, but not more than widen_before_initial_plateau_num_warn.
- Returns:
results (dict) –
samples (ndarray): re-weighted posterior samples: distributed according to \(p(\theta | d)\) - these points are not sorted, and can be assumed to have been randomly shuffled. See ultranest.utils.resample_equal() for more details.
logz (float64): natural logarithm of the evidence \(\log Z = \log \int p(d|\theta) p(\theta) \text{d}\theta\)
logzerr (float64): global estimate of the \(1\sigma\) error on \(\log Z\) (can be safely assumed to be Gaussian); obtained as the quadratic sum of logz_bs and logz_tail. Users are advised to use logz \(\pm\) logzerr as the best estimate for the evidence and its error.
niter (int): number of sampler iterations
ncall (int): total number of likelihood evaluations (accepted and not)
logz_bs (float64): estimate of \(\log Z\) from bootstrapping - for details, see the ultranest paper
logzerr_bs (float64): estimate of the error on \(\log Z\) from bootstrapping
logz_single (float64): estimate of \(\log Z\) from a single sampler
logzerr_single (float64): estimate of the error \(\log Z\) from a single sampler, obtained as \(\sqrt{H / n_{\text{live}}}\)
logzerr_tail (float64): contribution of the tail (i.e. the terminal leaves of the tree) to the error on \(\log Z\) (?)
ess (float64): effective sample size, i.e. number of samples divided by the estimated correlation length, estimated as \(N / (1 + N^{-1} \sum_i (N w_i - 1)^2)\) where \(w_i\) are the sample weights while \(N\) is the number of samples
H (float64): information gained
Herr (float64): (Gaussian) \(1\sigma\) error on \(H\)
posterior (dict): summary information on the posterior marginal distributions for each parameter - a dictionary of lists each with as many items as the fit parameters, indexed as \(\theta_i\) in the following:
mean (list): expectation value of \(\theta_i\)
stdev (list): standard deviation of \(\theta_i\)
median (list): median of \(\theta_i\)
errlo (list): one-sigma lower quantile of the marginal for \(\theta_i\), i.e. 15.8655% quantile
errup (list): one-sigma upper quantile of the marginal for \(\theta_i\), i.e. 84.1345% quantile
information_gain_bits (list): information gain from the marginal prior on \(\theta_i\) to the posterior
weighted_samples (dict): weighted samples from the posterior, as computed during sampling, sorted by their log-likelihood value
upoints (ndarray): sample locations in the unit cube \([0, 1]^{d}\), where \(d\) is the number of parameters - the shape is n_iter by \(d\)
points (ndarray): sample locations in the physical, user-provided space (same shape as upoints)
weights (ndarray): sample weights - shape n_iter, they sum to 1
logw (ndarray): logs of the sample weights (?)
bootstrapped_weights (ndarray): bootstrapped estimate of the sample weights
logl (ndarray): log-likelihood values at the sample points
maximum_likelihood (dict): summary information on the maximum likelihood value \(\theta_{ML}\) found by the posterior exploration
logl (float64): value of the log-likelihood at this point: \(\log p(d | \theta_{ML})\)
point (list): coordinates of \(\theta_{ML}\) in the physical space
point_untransformed (list): coordinates of \(\theta_{ML}\) in the unit cube \([0, 1]^{d}\)
paramnames (list): input parameter names
insertion_order_MWW_test (dict): results for the Mann-Whitney U-test; for more information, see the ultranest.netiter.MultiCounter class or section 4.5.2 of Buchner (2023)
independent_iterations (float): shortest insertion order test run length
converged (bool): whether the run is converged according to the MWW test, at the given threshold
- Return type:
Results dictionary, with the following entries:
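Continuing the hypothetical toy setup above, a short sketch of running the sampler and reading a few of the documented result entries::

    result = sampler.run(min_num_live_points=400)

    print('logZ = %.2f +- %.2f' % (result['logz'], result['logzerr']))
    print('posterior means:', result['posterior']['mean'])
    samples = result['samples']          # equally weighted posterior samples
    best = result['maximum_likelihood']  # best-fit point information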
- run_iter(update_interval_volume_fraction=0.8, update_interval_ncall=None, log_interval=None, dlogz=0.5, dKL=0.5, frac_remain=0.01, Lepsilon=0.001, min_ess=400, max_iters=None, max_ncalls=None, max_num_improvement_loops=-1, min_num_live_points=400, cluster_num_live_points=40, show_status=True, viz_callback='auto', insertion_test_window=10000, insertion_test_zscore_threshold=2, region_class=<class 'ultranest.mlfriends.MLFriends'>, widen_before_initial_plateau_num_warn=10000, widen_before_initial_plateau_num_max=50000)[source]
Iterate towards convergence.
Use as an iterator like so::

    for result in sampler.run_iter(...):
        print('lnZ = %(logz).2f +- %(logzerr).2f' % result)
Parameters as described in run() method.
- Yields:
results (dict) – Results dictionary computed at the current iteration, with the same keys as discussed in the
run()
method.
- print_results(use_unicode=True)[source]
Give summary of marginal likelihood and parameter posteriors.
- Parameters:
use_unicode (bool) – Whether to print a unicode plot of the posterior distributions
- plot_corner()[source]
Make corner plot.
Writes corner plot to plots/ directory if log directory was specified, otherwise show interactively.
This does essentially::

    from ultranest.plot import cornerplot
    cornerplot(results)
- class ultranest.integrator.NestedSampler(param_names, loglike, transform=None, derived_param_names=[], resume='subfolder', run_num=None, log_dir='logs/test', num_live_points=1000, vectorized=False, wrapped_params=[])[source]
Bases:
object
Simple Nested sampler for reference.
Set up nested sampler.
- Parameters:
param_names (list of str, names of the parameters.) – Length gives dimensionality of the sampling problem.
loglike (function) – log-likelihood function. Receives multiple parameter vectors, returns vector of likelihood.
transform (function) – parameter transform from unit cube to physical parameters. Receives multiple cube vectors, returns multiple parameter vectors.
derived_param_names (list of str) – Additional derived parameters created by transform. (empty by default)
log_dir (str) – where to store output files
resume ('resume', 'overwrite' or 'subfolder') – if ‘overwrite’, overwrite previous data. if ‘subfolder’, create a fresh subdirectory in log_dir. if ‘resume’ or True, continue previous run if available.
wrapped_params (list of bools) – indicating whether this parameter wraps around (circular parameter).
num_live_points (int) – Number of live points
vectorized (bool) – If true, loglike and transform function can receive arrays of points.
run_num (int) – unique run number. If None, will be automatically incremented.
- run(update_interval_iter=None, update_interval_ncall=None, log_interval=None, dlogz=0.001, max_iters=None)[source]
Explore parameter space.
- Parameters:
update_interval_iter (None | int) – Update region after this many iterations.
update_interval_ncall (None | int) – Update region after update_interval_ncall likelihood calls.
log_interval (None | int) – Update stdout status line every log_interval iterations
dlogz (float) – Target evidence uncertainty.
max_iters (None | int) – maximum number of integration iterations.
- Returns:
results – dictionary with posterior samples and original weighted_samples, number of likelihood calls ncall, number of nested sampling iterations niter, evidence estimate logz and uncertainty logzerr.
- Return type:
dict
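A minimal sketch, reusing the hypothetical loglike/transform from the ReactiveNestedSampler example above; note that this simple sampler writes its output below log_dir ('logs/test' by default)::

    from ultranest.integrator import NestedSampler

    sampler = NestedSampler(param_names, loglike, transform=transform,
                            num_live_points=400)
    results = sampler.run(dlogz=0.5)
    print(results['logz'], '+-', results['logzerr'])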
- ultranest.integrator.read_file(log_dir, x_dim, num_bootstraps=20, random=True, verbose=False, check_insertion_order=True)[source]
Read the output HDF5 file of UltraNest.
- Parameters:
log_dir (str) – Folder containing results
x_dim (int) – number of dimensions
num_bootstraps (int) – number of bootstraps to use for estimating logZ.
random (bool) – use randomization for volume estimation.
verbose (bool) – show progress
check_insertion_order (bool) – whether to perform MWW insertion order test for assessing convergence
- Returns:
sequence (dict) –
contains arrays storing for each iteration estimates of:
logz: log evidence estimate
logzerr: log evidence uncertainty estimate
logvol: log volume estimate
samples_n: number of live points
logwt: log weight
logl: log likelihood
final (dict) – same as ReactiveNestedSampler.results and ReactiveNestedSampler.run return values
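A hedged sketch of reading back a finished run; the folder name and dimensionality are hypothetical, and the two returned dictionaries follow the structure described above::

    from ultranest.integrator import read_file

    sequence, final = read_file('logs/toy/run1', x_dim=2)

    print(final['logz'], '+-', final['logzerr'])
    print(sequence['logz'][-1])   # evidence estimate at the last recorded iteration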
- ultranest.integrator.warmstart_from_similar_file(usample_filename, param_names, loglike, transform, vectorized=False, min_num_samples=50)[source]
Warmstart from a previous run.
Usage:
    aux_paramnames, aux_log_likelihood, aux_prior_transform, vectorized = warmstart_from_similar_file(
        'model1/chains/weighted_post_untransformed.txt',
        parameters, log_likelihood_with_background, prior_transform)
    aux_sampler = ReactiveNestedSampler(aux_paramnames, aux_log_likelihood,
        transform=aux_prior_transform, vectorized=vectorized)
    aux_results = aux_sampler.run()
    posterior_samples = aux_results['samples'][:,-1]
See ultranest.hotstart.get_auxiliary_contbox_parameterization() for more information. The remaining parameters have the same meaning as in ReactiveNestedSampler.
- Parameters:
usample_filename (str) – ‘directory/chains/weighted_post_untransformed.txt’ contains posteriors in u-space (untransformed) of a previous run. Columns are weight, logl, param1, param2, …
min_num_samples (int) – minimum number of samples in the usample_filename file required. Too few samples will give a poor approximation.
param_names (list)
loglike (function)
transform (function)
vectorized (bool)
- Returns:
aux_param_names (list) – new parameter list
aux_loglikelihood (function) – new loglikelihood function
aux_transform (function) – new prior transform function
vectorized (bool) – whether the new functions are vectorized
ultranest.mlfriends module
Region construction methods
Construct and sample from regions of neighbourhoods around the live points. Includes:
- an efficient implementation of MLFriends, with transformation layers and clustering:
  - RadFriends: Buchner (2014) https://arxiv.org/abs/1407.5459
  - MLFriends: Buchner (2019) https://arxiv.org/abs/1707.04476
- a single-ellipsoid region (Mukherjee et al., 2006, https://arxiv.org/abs/astro-ph/0508461)
- a very fast single-ellipsoid, axis-aligned region, for use with step-samplers in high dimensions
- class ultranest.mlfriends.AffineLayer(ctr=0, T=1, invT=1, nclusters=1, wrapped_dims=[], clusterids=None)
Bases:
ScalingLayer
Affine whitening transformation.
Learns the covariance of points.
For learning the next layer’s covariance, the clustering is considered: the sample covariance is computed after subtracting the cluster mean. This co-centers all clusters and gets the average cluster shape, avoiding learning a covariance dominated by the distance between clusters.
Initialise layer.
The parameters are optional and can be learned from points with
optimize()
- Parameters:
ctr (vector) – Center of points
T (matrix) – Transformation matrix. This matrix whitens the points to a unit Gaussian.
invT (matrix) – Inverse transformation matrix. For transforming a unit Gaussian into something with the sample cov.
nclusters (int) – number of clusters
wrapped_dims (array of bools) – indicates which parameter axes are circular.
clusterids (array of int) – cluster id for each point
- create_new(upoints, maxradiussq, minvol=0.0)
Learn next layer from this optimized layer’s clustering.
- Parameters:
upoints (array) – points to use for optimize (in u-space)
maxradiussq (float) – square of the MLFriends radius
minvol (float) – Minimum volume to regularize sample covariance
- Return type:
A new, optimized AffineLayer.
- optimize(points, centered_points, clusterids=None, minvol=0.0)
Optimize layer.
Estimates the covariance of centered_points. minvol sets the smallest allowed size of the covariance to avoid numerical collapse.
- Parameters:
points (array) – points to use for optimize (in u-space)
centered_points (array) – points with their cluster center subtracted
clusterids (array of ints) – for each point, which cluster they belong to
minvol (float) – Minimum volume to regularize sample covariance
- transform(u)
Transform points from cube space to a whitened space.
- untransform(ww)
Transform points from whitened space back to cube space.
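A small round-trip sketch of the whitening transform, using hypothetical correlated points inside the unit cube::

    import numpy as np
    from ultranest.mlfriends import AffineLayer

    rng = np.random.default_rng(42)
    points = 0.5 + rng.multivariate_normal(
        [0.0, 0.0], [[0.010, 0.008], [0.008, 0.010]], size=400)

    layer = AffineLayer(wrapped_dims=[])
    layer.optimize(points, points - points.mean(axis=0))

    w = layer.transform(points)   # whitened coordinates
    u = layer.untransform(w)      # back to cube space
    assert np.allclose(u, points)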
- class ultranest.mlfriends.LocalAffineLayer(ctr=0, T=1, invT=1, nclusters=1, wrapped_dims=[], clusterids=None)
Bases:
AffineLayer
Affine whitening transformation.
For learning the next layer’s covariance, the points within the MLradius are co-centered. This should give a more “local” covariance.
Initialise layer.
The parameters are optional and can be learned from points with
optimize()
- Parameters:
ctr (vector) – Center of points
T (matrix) – Transformation matrix. This matrix whitens the points to a unit Gaussian.
invT (matrix) – Inverse transformation matrix. For transforming a unit Gaussian into something with the sample cov.
nclusters (int) – number of clusters
wrapped_dims (array of bools) – indicates which parameter axes are circular.
clusterids (array of int) – cluster id for each point
- create_new(upoints, maxradiussq, minvol=0.0)
Learn next layer from this optimized layer’s clustering.
- Parameters:
upoints (array) – points to use for optimize (in u-space)
maxradiussq (float) – square of the MLFriends radius
minvol (float) – Minimum volume to regularize sample covariance
- Return type:
A new, optimized LocalAffineLayer.
- class ultranest.mlfriends.MLFriends(u, transformLayer)
Bases:
object
MLFriends region.
Defines a region around nested sampling live points for
checking whether a proposed point likely also fulfills the likelihood constraints
proposing new points.
Learns geometry of region from existing live points.
Initialise region.
- Parameters:
u (array of vectors) – live points
transformLayer (ScalingLayer or AffineLayer) – whitening layer
- compute_enlargement(nbootstraps=50, minvol=0.0, rng=numpy.random)
Return MLFriends radius and ellipsoid enlargement using bootstrapping.
The wrapping ellipsoid covariance is determined in each bootstrap round.
- Parameters:
nbootstraps (int) – number of bootstrapping rounds
minvol (float) – minimum volume to enforce to wrapping ellipsoid
rng – random number generator
- Returns:
max_distance (float) – square radius of MLFriends algorithm
max_radius (float) – square radius of enclosing ellipsoid.
- compute_maxradiussq(nbootstraps=50)
Run MLFriends bootstrapping
- Parameters:
nbootstraps (int) – number of bootstrapping rounds
- Return type:
square radius that safely encloses all live points.
- compute_mean_pair_distance()
- create_ellipsoid(minvol=0.0)
Create wrapping ellipsoid and store its center and covariance.
- Parameters:
minvol (float) – If positive, make sure ellipsoid has at least this volume.
- estimate_volume()
Estimate the order of magnitude of the volume around a single point given the current transformLayer.
Does not account for:
- the number of live points
- their overlap
- the intersection with the unit cube borders
- Returns:
volume – Volume
- Return type:
float
- inside(pts)
Check if inside region.
- Parameters:
pts (array of vectors) – Points to check
- Returns:
is_inside – True if inside MLFriends region and wrapping ellipsoid, for each point in pts.
- Return type:
array of bools
- inside_ellipsoid(u)
Check if inside wrapping ellipsoid.
- Parameters:
u (array of vectors) – Points to check
- Returns:
is_inside – True if inside wrapping ellipsoid, for each point in pts.
- Return type:
array of bools
- sample(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Switches automatically between the sampling_methods (attribute).
- Parameters:
nsamples (int) – number of samples to draw
- Returns:
samples (array of shape (nsamples, dimension)) – samples drawn
idx (array of integers (nsamples)) – index of a point nearby (MLFriends.u)
- sample_from_boundingbox(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Draws uniformly from bounding box around region.
Parameters as described in sample().
- sample_from_points(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Chooses randomly from points and their ellipsoids.
Parameters as described in sample().
- sample_from_transformed_boundingbox(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Draws uniformly from bounding box around region (in whitened space).
Parameters as described in sample().
- sample_from_wrapping_ellipsoid(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Draws uniformly from wrapping ellipsoid and filters with region.
Parameters as described in sample().
- set_transformLayer(transformLayer)
Update transformation layer. Invalidates attribute maxradius.
- Parameters:
transformLayer (ScalingLayer or AffineLayer) – t-space transformation layer
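A rough sketch of the typical construction pattern, with hypothetical 2-d live points; the radius and enlargement handling is simplified compared to what ReactiveNestedSampler does internally::

    import numpy as np
    from ultranest.mlfriends import ScalingLayer, MLFriends

    # hypothetical live points concentrated in part of the unit cube
    rng = np.random.default_rng(0)
    upoints = rng.uniform(0.4, 0.6, size=(400, 2))

    layer = ScalingLayer(wrapped_dims=[])
    layer.optimize(upoints, upoints)
    region = MLFriends(upoints, layer)

    # bootstrap the MLFriends radius and ellipsoid enlargement, then build the ellipsoid
    region.maxradiussq, region.enlarge = region.compute_enlargement(nbootstraps=30)
    region.create_ellipsoid()

    samples, idx = region.sample(nsamples=100)
    inside = region.inside(samples)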
- class ultranest.mlfriends.MaxPrincipleGapAffineLayer(ctr=0, T=1, invT=1, nclusters=1, wrapped_dims=[], clusterids=None)
Bases:
AffineLayer
Affine whitening transformation.
For learning the next layer’s covariance, the clustering and principal axis are considered: the sample covariance is computed after subtracting the cluster mean. All points are projected onto the line defined by the principal axis vector starting from the origin. Then, on the sorted line positions, the largest gap is identified. All points before the gap are mean-subtracted, and all points after the gap are mean-subtracted. Then, the final sample covariance is computed. This should give a more “local” covariance, even in the case where clusters could not yet be clearly identified.
Initialise layer.
The parameters are optional and can be learned from points with
optimize()
- Parameters:
ctr (vector) – Center of points
T (matrix) – Transformation matrix. This matrix whitens the points to a unit Gaussian.
invT (matrix) – Inverse transformation matrix. For transforming a unit Gaussian into something with the sample cov.
nclusters (int) – number of clusters
wrapped_dims (array of bools) – indicates which parameter axes are circular.
clusterids (array of int) – cluster id for each point
- create_new(upoints, maxradiussq, minvol=0.0)
Learn next layer from this optimized layer’s clustering.
- Parameters:
upoints (array) – points to use for optimize (in u-space)
maxradiussq (float) – square of the MLFriends radius
minvol (float) – Minimum volume to regularize sample covariance
- Return type:
A new, optimized MaxPrincipleGapAffineLayer.
- class ultranest.mlfriends.RobustEllipsoidRegion(u, transformLayer)
Bases:
MLFriends
Ellipsoidal region.
Defines a region around nested sampling live points for
checking whether a proposed point likely also fulfills the likelihood constraints
proposing new points.
Learns geometry of region from existing live points.
Initialise region.
- Parameters:
u (array of vectors) – live points
transformLayer (ScalingLayer or AffineLayer) – whitening layer
- compute_enlargement(nbootstraps=50, minvol=0.0, rng=numpy.random)
Return MLFriends radius and ellipsoid enlargement using bootstrapping.
The wrapping ellipsoid covariance is determined in each bootstrap round.
- Parameters:
nbootstraps (int) – number of bootstrapping rounds
minvol (float) – minimum volume to enforce to wrapping ellipsoid
rng – random number generator
- Returns:
max_distance (float) – square radius of MLFriends algorithm
max_radius (float) – square radius of enclosing ellipsoid.
- estimate_volume()
Estimate the volume of the ellipsoid.
Does not account for the intersection with the unit cube borders.
- Returns:
logvolume – logarithm of the volume.
- Return type:
float
- inside(pts)
Check if inside region.
- Parameters:
pts (array of vectors) – Points to check
- Returns:
is_inside – True if inside MLFriends region and wrapping ellipsoid, for each point in pts.
- Return type:
array of bools
- sample(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Switches automatically between the sampling_methods (attribute).
- Parameters:
nsamples (int) – number of samples to draw
- Returns:
samples (array of shape (nsamples, dimension)) – samples drawn
idx (array of integers (nsamples)) – index of a point nearby (MLFriends.u)
- sample_from_boundingbox(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Draws uniformly from bounding box around region.
Parameters as described in sample().
- sample_from_transformed_boundingbox(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Draws uniformly from bounding box around region (in whitened space).
Parameters as described in sample().
- sample_from_wrapping_ellipsoid(nsamples=100)
Draw uniformly sampled points from MLFriends region.
Draws uniformly from wrapping ellipsoid and filters with region.
Parameters as described in sample().
- class ultranest.mlfriends.ScalingLayer(mean=0, std=1, nclusters=1, wrapped_dims=[], clusterids=None)
Bases:
object
Simple transformation layer that only shifts and scales each axis.
Initialise layer.
- create_new(upoints, maxradiussq, minvol=0.0)
Learn next layer from this optimized layer’s clustering.
- Parameters:
upoints (array) – points to use for optimize (in u-space)
maxradiussq (float) – square of the MLFriends radius
minvol (float) – Minimum volume to regularize sample covariance
- Return type:
A new, optimized ScalingLayer.
- optimize(points, centered_points, clusterids=None, minvol=0.0)
Optimize layer.
Estimates mean and std of points.
- Parameters:
points (array) – points to use for optimize (in u-space)
centered_points (array) – points with their cluster center subtracted
clusterids (array of ints) – for each point, which cluster they belong to
minvol – ignored
- optimize_wrap(points)
Optimization for wrapped/circular parameters. Does nothing if there are no wrapped/circular parameters.
- Parameters:
points (array) – points to use for optimization (in u-space)
Find the largest gap in the live points and wrap the parameter space there. For example, if a wrapped axis has::

    |*****     ****|

it would identify the middle of the gap and subtract it, so that the points are no longer split across the wrap boundary in the new space.
- set_clusterids(clusterids=None, npoints=None)
Updates the cluster id assigned to each point.
- transform(u)
Transform points from cube space to a whitened space.
- untransform(ww)
Transform points from whitened space back to cube space.
- unwrap(wpoints)
Undo wrapping for circular parameters.
- wrap(points)
Wrap points for circular parameters.
- class ultranest.mlfriends.SimpleRegion(u, transformLayer)
Bases:
RobustEllipsoidRegion
Axis-aligned ellipsoidal region.
Defines a region around nested sampling live points for
checking whether a proposed point likely also fulfills the likelihood constraints
proposing new points.
Learns geometry of region from existing live points.
Initialise region.
- Parameters:
u (array of vectors) – live points
transformLayer (ScalingLayer or AffineLayer) – whitening layer
- compute_enlargement(nbootstraps=50, minvol=0.0, rng=numpy.random)
Return MLFriends radius and ellipsoid enlargement using bootstrapping.
The wrapping ellipsoid covariance is determined in each bootstrap round.
- Parameters:
nbootstraps (int) – number of bootstrapping rounds
minvol (float) – minimum volume to enforce to wrapping ellipsoid
rng – random number generator
- Returns:
max_distance (float) – square radius of MLFriends algorithm
max_radius (float) – square radius of enclosing ellipsoid.
- create_ellipsoid(minvol=0.0)
Create wrapping ellipsoid and store its center and covariance.
- Parameters:
minvol (float) – If positive, make sure ellipsoid has at least this volume.
- class ultranest.mlfriends.WrappingEllipsoid(u)
Bases:
object
Ellipsoid which safely wraps points.
Initialise region.
- Parameters:
u (array of vectors) – live points
- compute_enlargement(nbootstraps=50, rng=numpy.random)
Return ellipsoid enlargement after nbootstraps bootstrapping rounds.
The wrapping ellipsoid covariance is determined in each bootstrap round.
- create_ellipsoid(minvol=0.0)
Create wrapping ellipsoid and store its center and covariance.
- inside(u)
Check if inside wrapping ellipsoid.
- Parameters:
u (array of vectors) – Points to check
- Returns:
is_inside – True if inside wrapping ellipsoid, for each point in u.
- Return type:
array of bools
- update_center(ctr)
Update ellipsoid center, considering fixed dimensions.
- Parameters:
ctr (vector) – new center
- ultranest.mlfriends.bounding_ellipsoid(x, minvol=0.0)
Calculate bounding ellipsoid containing a set of points x.
- Parameters:
x ((npoints, ndim) ndarray) – Coordinates of uniformly sampled points.
minvol (float, optional) – Minimum bound to enforce on the ellipsoid volume.
- Return type:
mean and covariance of points
- ultranest.mlfriends.compute_mean_pair_distance(pts, clusterids)
Compute the average distance between pairs of points. Pairs from different clusters are excluded in the computation.
- Parameters:
pts (array) – points
clusterids (array of ints or None) – for each point, index of the associated cluster.
- Return type:
mean distance between point pairs.
- ultranest.mlfriends.find_nearby(apts, bpts, radiussq, nnearby)
Gets the index of a point in apts within square radius radiussq, for each point b in bpts.
The index is written to nnearby (of the same length as bpts). If none is found, -1 is written.
- Parameters:
apts (array) – points
bpts (array) – points
radiussq (float) – square of the MLFriends radius
nnearby (array of ints) – The result will be written here.
- ultranest.mlfriends.make_eigvals_positive(a, targetprod)
For the symmetric square matrix a, increase any zero eigenvalues to fulfill a target product of eigenvalues.
- Parameters:
a (array) – covariance matrix
targetprod (array) – target product of eigenvalues
- Return type:
covariance matrix
- ultranest.mlfriends.subtract_nearby(upoints, maxradiussq)
Subtract from each point in upoints the mean of the nearby points (those within square radius maxradiussq).
- Parameters:
upoints (array) – points
maxradiussq (float) – square of the MLFriends radius
- Returns:
upoints with the nearby centers subtracted.
- Return type:
overlapped_points
- ultranest.mlfriends.update_clusters(upoints, tpoints, maxradiussq, clusterids=None)
Clusters upoints, so that clusters are distinct if no member pair is within a radius of sqrt(maxradiussq).
- Parameters:
upoints (array) – points (in u-space)
tpoints (array) – points (in t-space)
maxradiussq (float) – square of the MLFriends radius
clusterids (array of ints or None) – for each point, index of the associated cluster.
- Returns:
nclusters (int) – the number of clusters found, which is also clusterids.max()
new_clusterids (array of int) – the new clusterids for each point
overlapped_points – upoints with their cluster centers subtracted.
The existing cluster ids are re-used when assigning new clusters,
if possible.
Clustering is performed on a transformed coordinate space (tpoints).
Returned values are based on upoints.
- ultranest.mlfriends.vol_prefactor(n)
Volume constant for an n-dimensional sphere.
For n even: $(2\pi)^{n/2} / (2 \cdot 4 \cdots n)$
For n odd: $2 \, (2\pi)^{(n-1)/2} / (1 \cdot 3 \cdots n)$
- Parameters:
n (int) – dimensionality
- Return type:
volume (float)
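To make the even/odd formulas concrete, here is a small standalone re-implementation (not the library's own code) that reproduces the familiar low-dimensional values:

import math

def vol_prefactor_sketch(n):
    # volume of the unit n-sphere, following the even/odd formulas above
    if n % 2 == 0:
        f = 1.0
        for i in range(2, n + 1, 2):
            f *= 2 * math.pi / i
    else:
        f = 2.0
        for i in range(3, n + 1, 2):
            f *= 2 * math.pi / i
    return f

print(vol_prefactor_sketch(2))  # pi, the area of the unit circle
print(vol_prefactor_sketch(3))  # 4*pi/3, the volume of the unit sphere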
ultranest.netiter module
Graph-based nested sampling
A formulation of nested sampling exploration as a tree, presented in section 3.4 of Buchner (2023, https://arxiv.org/abs/2101.09675).
The root represents the prior volume, branches and sub-branches split the volume. The leaves of the tree are the integration tail.
Nested sampling proceeds as a breadth-first graph search, with active nodes sorted by likelihood value. The number of live points is the number of parallel edges (active nodes still to process).
Most functions receive the argument “roots”, which are the children of the tree root (main branches).
The exploration is bootstrap-capable without requiring additional computational effort: The roots are indexed, and the bootstrap explorer can ignore the rootids it does not know about.
- class ultranest.netiter.TreeNode(value=None, id=None, children=None)[source]
Bases:
object
Simple tree node.
Initialise.
- Parameters:
value (float) – value used to order nodes (typically log-likelihood)
id (int) – identifier, refers to the order of discovery and storage (PointPile)
children (list) – children nodes, should be TreeNode objects. If None, an empty list is used.
- class ultranest.netiter.BreadthFirstIterator(roots)[source]
Bases:
object
Generator exploring the tree.
Nodes are ordered by value and expanded in order. The edges passing the node “in parallel” are the “active” nodes.
Start with initial set of nodes roots.
- next_node()[source]
Get next node in order.
Does not remove the node from active set.
- Returns:
None if done. rootid, node, (active_nodes, active_root_ids, active_node_values, active_node_ids) otherwise
- Return type:
tuple or None
- expand_children_of(rootid, node)[source]
Replace the current node with its children in the iterators list of active nodes.
- Parameters:
rootid (int) – index of the root returned by the most recent call to BreadthFirstIterator.next_node()
node (TreeNode) – node returned by the most recent call to BreadthFirstIterator.next_node()
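A toy traversal sketch, assuming the return-tuple layout documented for next_node() above (node values and ids are made up for illustration):

from ultranest.netiter import TreeNode, BreadthFirstIterator, count_tree, print_tree

# two root branches with increasing (made-up) loglikelihood values
roots = [
    TreeNode(value=-10.0, id=0, children=[TreeNode(value=-5.0, id=2)]),
    TreeNode(value=-8.0, id=1, children=[TreeNode(value=-3.0, id=3)]),
]
print_tree(roots, title='toy tree:')
print(count_tree(roots))  # (total number of nodes, maximum parallel width)

explorer = BreadthFirstIterator(roots)
while True:
    item = explorer.next_node()
    if item is None:
        break
    rootid, node, (active_nodes, active_rootids, active_values, active_nodeids) = item
    print('visiting logL=%.1f with %d active nodes' % (node.value, len(active_nodes)))
    # replace the visited node by its children before continuing
    explorer.expand_children_of(rootid, node)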
- ultranest.netiter.print_tree(roots, title='Tree:')[source]
Print a pretty yet compact graphic of the tree.
- Parameters:
roots (list) – list of TreeNode specifying the roots of the tree.
title (str) – Print this string first.
- ultranest.netiter.dump_tree(filename, roots, pointpile)[source]
Write a copy of the tree to a HDF5 file.
- ultranest.netiter.count_tree(roots)[source]
Return the total number of nodes and maximum number of parallel edges.
- Parameters:
roots (list) – list of TreeNode specifying the roots of the tree.
- Returns:
count (int) – total number of nodes
maxwidth (int) – maximum number of active/parallel nodes encountered
- ultranest.netiter.count_tree_between(roots, lo, hi)[source]
Compute basic statistics about a tree.
Return the total number of nodes and maximum number of parallel edges, but only considering an interval of the tree.
- Parameters:
roots (list) – list of TreeNode specifying the roots of the tree.
lo (float) – lower value threshold
hi (float) – upper value threshold
- Returns:
nnodes (int) – total number of nodes in the value interval lo .. hi (inclusive).
maxwidth (int) – maximum number of parallel edges
- ultranest.netiter.find_nodes_before(root, value)[source]
Identify all nodes that have children above value.
If a root child is above the value, its parent (root) is the leaf returned.
- Parameters:
root (TreeNode) – tree
value (float) – selection threshold
- Returns:
list_of_parents (list of nodes) – parents
list_of_nforks (list of floats) – The list of the number of forks experienced: 1 if a direct descendant of one of the root node’s children, where no node had more than one child; 12 if the root child had 4 children, one of which had 3 children.
- class ultranest.netiter.PointPile(udim, pdim, chunksize=1000)[source]
Bases:
object
An in-memory, linearized storage of point coordinates.
TreeNode objects only store the logL value and id, which is the index into the point pile. The point pile stores the point coordinates in u-space and p-space (unit cube and transformed physical parameters).
Set up point pile.
- Parameters:
udim (int) – number of parameters, dimension of unit cube points
pdim (int) – number of physical (and derived) parameters
chunksize (int) – the point pile grows as needed, in these intervals.
- class ultranest.netiter.SingleCounter(random=False)[source]
Bases:
object
Evidence log(Z) and posterior weight summation for a Nested Sampling tree.
Initialise.
- Parameters:
random (bool) – if False, use the mean estimator for volume shrinkage; if True, draw a random sample
- property logZremain
Estimate conservatively the logZ of the current tail (un-opened nodes).
- class ultranest.netiter.MultiCounter(nroots, nbootstraps=10, random=False, check_insertion_order=False)[source]
Bases:
object
Like SingleCounter, but bootstrap capable.
Attributes:
logZ, logZerr, logVolremaining: main estimator. logZerr is probably not reliable, because it needs nlive to convert H to logZerr.
Lmax: highest loglikelihood currently known.
logZ_bs, logZerr_bs: bootstrapped logZ estimate.
logZremain, remainder_ratio: weight and fraction of the unexplored remainder.
Each of the following has as many entries as number of iterations:
all_H, all_logZ, all_logVolremaining, logweights: information for all instances; the first entry is the main estimator, i.e., not bootstrapped.
istail: whether that node was a leaf.
nlive: number of parallel arcs (“live points”).
Initialise counter.
- Parameters:
nroots (int) – number of children the tree root has
nbootstraps (int) – number of bootstrap rounds
random (bool) – if False, use the mean estimator for volume shrinkage; if True, draw a random sample
check_insertion_order (bool) – whether to run insertion order rank U test
- reset(nentries)[source]
Reset counters/integrator.
- Parameters:
nentries (int) – number of iterators
- property logZ_bs
Estimate logZ from the bootstrap ensemble.
- property logZerr_bs
Estimate logZ error from the bootstrap ensemble.
- property insertion_order_runlength
Get shortest insertion order test run.
- Returns:
shortest_run_length (int) – Shortest insertion order test run length.
The MWW (U-test) statistic is considered at each iteration.
When it exceeds a threshold (4 sigma by default, insertion_order_threshold),
the statistic is reset. The run length is recorded.
This property returns the shortest run length of all recorded
so far, or infinity otherwise.
At 4 sigma, run lengths no shorter than 10^5.5 are expected
in unbiased runs.
- property insertion_order_converged
Check convergence.
- Returns:
converged (bool) – Whether the run is unbiased according to a U-test.
The MWW (U-test) statistic is considered at each iteration.
When it exceeds a threshold (4 sigma by default, insertion_order_threshold),
the statistic is reset. The run length is recorded.
This property returns the shortest run length of all recorded
so far, or infinity otherwise.
At 4 sigma, run lengths no shorter than 10^5.5 are expected
in unbiased runs. If the number of runs exceeds the number
of iterations divided by 10^5.5, the run is likely biased
and not converged.
If not converged, the step sampler may need to use more steps,
or the problem needs to be reparametrized.
- ultranest.netiter.combine_results(saved_logl, saved_nodeids, pointpile, main_iterator, mpi_comm=None)[source]
Combine a sequence of likelihoods and nodes into a summary dictionary.
- Parameters:
saved_logl (list of floats) – loglikelihoods of dead points
saved_nodeids (list of ints) – indices of dead points
pointpile (PointPile) – Point pile.
main_iterator (BreadthFirstIterator) – iterator used
mpi_comm (None | object) – MPI communicator object, or None if MPI is not used.
- Returns:
results – All information of the run. Important keys: Number of nested sampling iterations (niter), Evidence estimate (logz), Effective Sample Size (ess), H (information gain), weighted samples (weighted_samples), equally weighted samples (samples), best-fit point information (maximum_likelihood), posterior summaries (posterior). The rank order test score (insertion_order_MWW_test) is included if the iterator has it.
- Return type:
dict
- ultranest.netiter.logz_sequence(root, pointpile, nbootstraps=12, random=True, onNode=None, verbose=False, check_insertion_order=True)[source]
Run MultiCounter through tree root.
Keeps track of, and returns
(logz, logzerr, logv, nlive)
.- Parameters:
root (TreeNode) – Tree
pointpile (PointPile) – Point pile
nbootstraps (int) – Number of independent iterators
random (bool) – Whether to randomly draw volume estimates
onNode (function) – Function to call for every node; receives the current node and the iterator
verbose (bool) – Whether to show a progress indicator on stderr
check_insertion_order (bool) – Whether to perform a rolling insertion order rank test
- Returns:
results (dict) – Run information, see combine_results()
sequence (dict) – Each entry of the dictionary is results[‘niter’] long, and contains the state of information at that iteration. Important keys are: Iteration number (niter), Volume estimate (logvol), loglikelihood (logl), absolute logarithmic weight (logwt), Relative weight (weights), point (samples), Number of live points (nlive), Evidence estimate (logz) and its uncertainty (logzerr), Rank test score (insert_order).
ultranest.ordertest module
U test for a uniform distribution of integers
A test for biased nested sampling, presented in section 4.5.2 of Buchner (2023, https://arxiv.org/abs/2101.09675).
This implements the same idea as https://arxiv.org/abs/2006.03371 except their KS test is problematic because the variable (insertion order) is not continuous. Instead, this implements a Mann-Whitney-Wilcoxon U test, which also is in practice more sensitive than the KS test. A highly efficient implementation is achieved by keeping only a histogram of the insertion orders and comparing those to expectations from a uniform distribution.
To quantify the convergence of a run, one route is to apply this test at the end of the run. Another approach is to reset the counters every time the test exceeds a z-score of 3 sigma, and report the run lengths, which quantify how many iterations nested sampling was able to proceed without detection of an insertion order problem.
- ultranest.ordertest.infinite_U_zscore(sample, B)[source]
Compute Mann-Whitney-Wilcoxon U test for a sample of integers to be uniformly distributed between 0 and B.
- Parameters:
sample (array of integers) – values between 0 and B (inclusive).
B (int) – maximum rank allowed.
- Returns:
zscore
- Return type:
float
- class ultranest.ordertest.UniformOrderAccumulator[source]
Bases:
object
U test accumulator.
Stores rank orders (1 to N), for comparison with a uniform order.
See section 4.5.2 of Buchner (2023, https://arxiv.org/abs/2101.09675), based on the Mann-Whitney-Wilcoxon U test against a uniform integer distribution.
Initiate empty accumulator.
- add(order, N)[source]
Accumulate rank order (0 to N).
- Parameters:
order (int) – rank between 0 and N (inclusive).
N (int) – maximum rank allowed.
- property zscore
z-score of the null hypothesis (uniform distribution) probability.
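A brief usage sketch: feed insertion ranks into the accumulator and watch the z-score; here the ranks are drawn uniformly, so the statistic should stay small (the toy numbers are illustrative only):

import numpy as np
from ultranest.ordertest import UniformOrderAccumulator, infinite_U_zscore

nlive = 400
rng = np.random.default_rng(42)

acc = UniformOrderAccumulator()
for _ in range(10000):
    # an unbiased nested sampling run inserts new points at uniform ranks 0..nlive
    acc.add(int(rng.integers(0, nlive + 1)), nlive)
print(acc.zscore)  # stays within a few sigma for uniform ranks

# one-shot variant on a full array of insertion ranks
ranks = rng.integers(0, nlive + 1, size=10000)
print(infinite_U_zscore(ranks, nlive))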
ultranest.pathsampler module
MCMC-like step sampling on a trajectory.
These features are experimental.
- class ultranest.pathsampler.SamplingPathSliceSampler(nsteps)[source]
Bases:
StepSampler
Slice sampler, respecting the region, on the sampling path.
This first builds up a complete trajectory, respecting reflections. Then, from the trajectory a new point is drawn with slice sampling.
The trajectory is built by doubling the length to each side and checking if the point is still inside. If not, reflection is attempted with the gradient (either provided or region-based estimate).
Initialise sampler.
- Parameters:
nsteps (int) – number of accepted steps until the sample is considered independent.
- generate_direction(ui, region, scale=1)[source]
Choose new initial direction according to region.transformLayer axes.
- class ultranest.pathsampler.SamplingPathStepSampler(nresets, nsteps, scale=1.0, balance=0.01, nudge=1.1, log=False)[source]
Bases:
StepSampler
Step sampler on a sampling path.
Initialise sampler.
- Parameters:
nresets (int) – after this many iterations, select a new direction
nsteps (int) – how many steps to make in total
scale (float) – initial step size
balance (float) – acceptance rate to target; if below, scale is increased, if above, scale is decreased
nudge (float) – factor for increasing scale (must be >=1); nudge=1 implies no step size adaptation.
- adjust_accept(accepted, unew, pnew, Lnew, nc)[source]
Adjust proposal given that we have been accepted at a new point after nc calls.
- class ultranest.pathsampler.OtherSamplerProxy(nnewdirections, sampler='steps', nsteps=0, balance=0.9, scale=0.1, nudge=1.1, log=False)[source]
Bases:
object
Proxy for ClockedSamplers.
Initialise sampler.
- Parameters:
nnewdirections (int) – number of accepted steps until the sample is considered independent.
sampler (str) – which sampler to use
nsteps – number of steps in sampler
balance – acceptance rate to target
scale – initial proposal scale
nudge – adjustment factor for scale when acceptance rate is too low or high. must be >=1.
ultranest.plot module
Plotting utilities
- ultranest.plot.runplot(results, span=None, logplot=False, kde=True, nkde=1000, color='blue', plot_kwargs=None, label_kwargs=None, lnz_error=True, lnz_truth=None, truth_color='red', truth_kwargs=None, max_x_ticks=8, max_y_ticks=3, use_math_text=True, mark_final_live=True, fig=None)[source]
Plot live points, ln(likelihood), ln(weight), and ln(evidence) vs. ln(prior volume).
- Parameters:
results (dynesty.results.Results instance) – dynesty.results.Results instance from a nested sampling run.
span (iterable with shape (4,), optional) –
A list where each element is either a length-2 tuple containing lower and upper bounds or a float from (0., 1.] giving the fraction below the maximum. If a fraction is provided, the bounds are chosen to be equal-tailed. An example would be:
span = [(0., 10.), 0.001, 0.2, (5., 6.)]
Default is (0., 1.05 * max(data)) for each element.
logplot (bool, optional) – Whether to plot the evidence on a log scale. Default is False.
kde (bool, optional) – Whether to use kernel density estimation to estimate and plot the PDF of the importance weights as a function of log-volume (as opposed to the importance weights themselves). Default is True.
nkde (int, optional) – The number of grid points used when plotting the kernel density estimate. Default is 1000.
color (str or iterable with shape (4,), optional) – A ~matplotlib-style color (either a single color or a different value for each subplot) used when plotting the lines in each subplot. Default is ‘blue’.
plot_kwargs (dict, optional) – Extra keyword arguments that will be passed to plot.
label_kwargs (dict, optional) – Extra keyword arguments that will be sent to the ~matplotlib.axes.Axes.set_xlabel and ~matplotlib.axes.Axes.set_ylabel methods.
lnz_error (bool, optional) – Whether to plot the 1, 2, and 3-sigma approximate error bars derived from the ln(evidence) error approximation over the course of the run. Default is True.
lnz_truth (float, optional) – A reference value for the evidence that will be overplotted on the evidence subplot if provided.
truth_color (str or iterable with shape (ndim,), optional) – A ~matplotlib-style color used when plotting lnz_truth. Default is ‘red’.
truth_kwargs (dict, optional) – Extra keyword arguments that will be used for plotting lnz_truth.
max_x_ticks (int, optional) – Maximum number of ticks allowed for the x axis. Default is 8.
max_y_ticks (int, optional) – Maximum number of ticks allowed for the y axis. Default is 3.
use_math_text (bool, optional) – Whether the axis tick labels for very large/small exponents should be displayed as powers of 10 rather than using e. Default is True.
mark_final_live (bool, optional) – Whether to indicate the final addition of recycled live points (if they were added to the resulting samples) using a dashed vertical line. Default is True.
fig ((~matplotlib.figure.Figure, ~matplotlib.axes.Axes), optional) – If provided, overplot the run onto the provided figure. Otherwise, by default an internal figure is generated.
- Returns:
runplot – Output summary plot.
- Return type:
(~matplotlib.figure.Figure, ~matplotlib.axes.Axes)
- ultranest.plot.cornerplot(results, min_weight=0.0001, with_legend=True, logger=None, levels=[0.9973, 0.9545, 0.6827, 0.3934], plot_datapoints=False, plot_density=False, show_titles=True, quiet=True, contour_kwargs={'colors': ['navy', 'navy', 'navy', 'purple'], 'linestyles': ['-', '-.', ':', '--']}, color='purple', quantiles=[0.15866, 0.5, 0.8413], **corner_kwargs)[source]
Make a healthy corner plot with corner.
Essentially does:
paramnames = results['paramnames']
data = results['weighted_samples']['points']
weights = results['weighted_samples']['weights']
return corner.corner(
    results['weighted_samples']['points'],
    weights=results['weighted_samples']['weights'],
    labels=results['paramnames'])
- Parameters:
results (dict) – data dictionary
min_weight (float) – cut off low-weight posterior points. Avoids meaningless stragglers when plot_datapoints is True.
with_legend (bool) – whether to add a legend to show meaning of the lines.
logger (None | object) – where to log
levels (list) – list of credible interval levels
plot_datapoints (bool) – Draw individual data points.
plot_density (bool) – Draw the density colormap.
show_titles (bool) – Displays a title above each 1-D histogram showing the 0.5 quantile with the upper and lower errors supplied by the quantiles argument.
quiet (bool) – If true, suppress warnings for small datasets.
contour_kwargs (dict) – Any additional keyword arguments to pass to the contour method.
color (str) – matplotlib style color for all histograms.
quantiles (list) – fractional quantiles to show on the 1-D histograms as vertical dashed lines.
**corner_kwargs (dict) – Any remaining keyword arguments are sent to corner.corner().
- Returns:
fig – The matplotlib figure instance for the corner plot.
- Return type:
~matplotlib.figure.Figure
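For instance, a minimal end-to-end sketch (the toy likelihood and transform are assumptions, not part of cornerplot itself), feeding the run result into the plot:

import numpy as np
from ultranest import ReactiveNestedSampler
from ultranest.plot import cornerplot

# toy 2-d Gaussian fit whose result dictionary feeds the corner plot
def loglike(theta):
    return -0.5 * np.sum(((theta - 0.5) / 0.1)**2)

def transform(cube):
    return cube

sampler = ReactiveNestedSampler(['a', 'b'], loglike, transform)
result = sampler.run()
fig = cornerplot(result)
fig.savefig('corner.png', bbox_inches='tight')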
- ultranest.plot.traceplot(results, span=None, quantiles=[0.025, 0.5, 0.975], smooth=0.02, post_color='blue', post_kwargs=None, kde=True, nkde=1000, trace_cmap='plasma', trace_color=None, trace_kwargs=None, connect=False, connect_highlight=10, connect_color='red', connect_kwargs=None, max_n_ticks=5, use_math_text=False, labels=None, label_kwargs=None, show_titles=False, title_fmt='.2f', title_kwargs=None, truths=None, truth_color='red', truth_kwargs=None, verbose=False, fig=None)[source]
Plot traces and marginalized posteriors for each parameter.
- Parameters:
results (~dynesty.results.Results instance) – A ~dynesty.results.Results instance from a nested sampling run. Compatible with results derived from nestle.
span (iterable with shape (ndim,), optional) –
A list where each element is either a length-2 tuple containing lower and upper bounds or a float from (0., 1.] giving the fraction of (weighted) samples to include. If a fraction is provided, the bounds are chosen to be equal-tailed. An example would be:
span = [(0., 10.), 0.95, (5., 6.)]
Default is 0.999999426697 (5-sigma credible interval) for each parameter.
quantiles (iterable, optional) – A list of fractional quantiles to overplot on the 1-D marginalized posteriors as vertical dashed lines. Default is [0.025, 0.5, 0.975] (the 95%/2-sigma credible interval).
smooth (float or iterable with shape (ndim,), optional) – The standard deviation (either a single value or a different value for each subplot) for the Gaussian kernel used to smooth the 1-D marginalized posteriors, expressed as a fraction of the span. Default is 0.02 (2% smoothing). If an integer is provided instead, this will instead default to a simple (weighted) histogram with bins=smooth.
post_color (str or iterable with shape (ndim,), optional) – A ~matplotlib-style color (either a single color or a different value for each subplot) used when plotting the histograms. Default is ‘blue’.
post_kwargs (dict, optional) – Extra keyword arguments that will be used for plotting the marginalized 1-D posteriors.
kde (bool, optional) – Whether to use kernel density estimation to estimate and plot the PDF of the importance weights as a function of log-volume (as opposed to the importance weights themselves). Default is True.
nkde (int, optional) – The number of grid points used when plotting the kernel density estimate. Default is 1000.
trace_cmap (str or iterable with shape (ndim,), optional) – A ~matplotlib-style colormap (either a single colormap or a different colormap for each subplot) used when plotting the traces, where each point is colored according to its weight. Default is ‘plasma’.
trace_color (str or iterable with shape (ndim,), optional) – A ~matplotlib-style color (either a single color or a different color for each subplot) used when plotting the traces. This overrides the trace_cmap option by giving all points the same color. Default is None (not used).
trace_kwargs (dict, optional) – Extra keyword arguments that will be used for plotting the traces.
connect (bool, optional) – Whether to draw lines connecting the paths of unique particles. Default is False.
connect_highlight (int or iterable, optional) – If connect=True, highlights the paths of a specific set of particles. If an integer is passed, connect_highlight random particle paths will be highlighted. If an iterable is passed, then the particle paths corresponding to the provided indices will be highlighted.
connect_color (str, optional) – The color of the highlighted particle paths. Default is ‘red’.
connect_kwargs (dict, optional) – Extra keyword arguments used for plotting particle paths.
max_n_ticks (int, optional) – Maximum number of ticks allowed. Default is 5.
use_math_text (bool, optional) – Whether the axis tick labels for very large/small exponents should be displayed as powers of 10 rather than using e. Default is False.
labels (iterable with shape (ndim,), optional) – A list of names for each parameter. If not provided, the default name used when plotting will follow \(x_i\) style.
label_kwargs (dict, optional) – Extra keyword arguments that will be sent to the ~matplotlib.axes.Axes.set_xlabel and ~matplotlib.axes.Axes.set_ylabel methods.
show_titles (bool, optional) – Whether to display a title above each 1-D marginalized posterior showing the 0.5 quantile along with the upper/lower bounds associated with the 0.025 and 0.975 (95%/2-sigma credible interval) quantiles. Default is False.
title_fmt (str, optional) – The format string for the quantiles provided in the title. Default is ‘.2f’.
title_kwargs (dict, optional) – Extra keyword arguments that will be sent to the ~matplotlib.axes.Axes.set_title command.
truths (iterable with shape (ndim,), optional) – A list of reference values that will be overplotted on the traces and marginalized 1-D posteriors as solid horizontal/vertical lines. Individual values can be exempt using None. Default is None.
truth_color (str or iterable with shape (ndim,), optional) – A ~matplotlib-style color (either a single color or a different value for each subplot) used when plotting truths. Default is ‘red’.
truth_kwargs (dict, optional) – Extra keyword arguments that will be used for plotting the vertical and horizontal lines with truths.
verbose (bool, optional) – Whether to print the values of the computed quantiles associated with each parameter. Default is False.
fig ((~matplotlib.figure.Figure, ~matplotlib.axes.Axes), optional) – If provided, overplot the traces and marginalized 1-D posteriors onto the provided figure. Otherwise, by default an internal figure is generated.
- Returns:
traceplot – Output trace plot.
- Return type:
(~matplotlib.figure.Figure, ~matplotlib.axes.Axes)
- class ultranest.plot.PredictionBand(x, shadeargs={}, lineargs={})[source]
Bases:
object
Plot bands of model predictions as calculated from a chain.
call add(y) to add predictions from each chain point
x = numpy.linspace(0, 1, 100)
band = PredictionBand(x)
for c in chain:
    band.add(c[0] * x + c[1])
# add median line. As an option a matplotlib ax can be given.
band.line(color='k')
# add 1 sigma quantile
band.shade(color='k', alpha=0.3)
# add wider quantile
band.shade(q=0.01, color='gray', alpha=0.1)
plt.show()
To plot onto a specific axis, use band.line(…, ax=myaxis).
Initialise.
- Parameters:
x (array) – Independent variable.
shadeargs (dict) – default arguments for shade function.
lineargs (dict) – default arguments for line function.
ultranest.popstepsampler module
Vectorized step samplers
GPU-based likelihoods (model emulators based on neural networks, or JAX implementations) can evaluate hundreds of points as efficiently as a single point. The implementations in this module leverage this power by providing random walks of populations of walkers.
- ultranest.popstepsampler.generate_cube_oriented_direction(ui, region, scale=1)
Draw a unit direction vector in direction of a random unit cube axes.
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region – not used
scale (float) – length of returned vector
- Returns:
v – Random axis vectors of length scale, one for each starting point.
- Return type:
np.array((npoints, ndim), dtype=float)
- ultranest.popstepsampler.generate_cube_oriented_direction_scaled(ui, region, scale=1)
Draw a unit direction vector in direction of a random unit cube axes. Scale by the live point min-max range.
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region – not used
scale (float) – length of returned vector
- Returns:
v – Random axis vectors of length scale, one for each starting point.
- Return type:
np.array((npoints, ndim), dtype=float)
- ultranest.popstepsampler.generate_random_direction(ui, region, scale=1)
Draw uniform direction vector in unit cube space of length scale.
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region (MLFriends object) – current region (not used)
scale (float) – length of direction vector
- Returns:
v – new direction vector
- Return type:
array
- ultranest.popstepsampler.generate_region_oriented_direction(ui, region, scale=1)
Draw a random direction vector in direction of one of the region axes.
The vector length is scale (in the transformed, whitened space).
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region (MLFriends object) – current region
scale (float) – length of direction vector in t-space
- Returns:
v – new direction vector (in u-space)
- Return type:
array
- ultranest.popstepsampler.generate_region_random_direction(ui, region, scale=1)
Draw a direction vector in a random direction of the region.
The vector length is scale (in unit cube space).
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region (MLFriends object) – current region
scale (float) – length of direction vector (in t-space)
- Returns:
v – new direction vector
- Return type:
array
- class ultranest.popstepsampler.PopulationRandomWalkSampler(popsize, nsteps, generate_direction, scale, scale_adapt_factor=0.9, scale_min=1e-20, scale_max=20, log=False, logfile=None)[source]
Bases:
GenericPopulationSampler
Vectorized Gaussian Random Walk sampler.
Initialise.
- Parameters:
popsize (int) – number of walkers to maintain. This should be fairly large (~100); if too large, you may run into memory issues. Also, some results have to be discarded as the likelihood threshold increases. Observe the nested sampling efficiency.
nsteps (int) – number of steps to take until the found point is accepted as independent. To find the right value, see
ultranest.calibrator.ReactiveNestedCalibrator
generate_direction (function) – Function that gives proposal kernel shape, one of:
ultranest.popstepsampler.generate_cube_oriented_direction()
ultranest.popstepsampler.generate_cube_oriented_direction_scaled()
ultranest.popstepsampler.generate_random_direction()
ultranest.popstepsampler.generate_region_oriented_direction()
ultranest.popstepsampler.generate_region_random_direction()
scale (float) – initial guess for the proposal scaling factor
scale_adapt_factor (float) – if 1, no adapting is done. if <1, the scale is increased if the acceptance rate is below 23.4%, or decreased if it is above, by scale_adapt_factor.
scale_min (float) – lowest value allowed for scale, do not adapt down further
scale_max (float) – highest value allowed for scale, do not adapt up further
logfile (file) – where to print the current scaling factor and acceptance rate
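An illustrative setup sketch (the toy vectorized likelihood, the parameter names, and the vectorized=True option are assumptions, not part of this class); the sampler plugs into ReactiveNestedSampler through the stepsampler attribute:

import numpy as np
from ultranest import ReactiveNestedSampler
from ultranest.popstepsampler import PopulationRandomWalkSampler, generate_cube_oriented_direction

# toy vectorized problem: loglike and transform receive (npoints, ndim) arrays
param_names = ['p%d' % i for i in range(10)]

def loglike(theta):
    return -0.5 * np.sum(((theta - 0.5) / 0.05)**2, axis=1)

def transform(cube):
    return cube

sampler = ReactiveNestedSampler(param_names, loglike, transform, vectorized=True)
sampler.stepsampler = PopulationRandomWalkSampler(
    popsize=100, nsteps=20,
    generate_direction=generate_cube_oriented_direction, scale=0.1)
result = sampler.run()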
- class ultranest.popstepsampler.PopulationSliceSampler(popsize, nsteps, generate_direction, scale=1.0, scale_adapt_factor=0.9, log=False, logfile=None)[source]
Bases:
GenericPopulationSampler
Vectorized slice/HARM sampler.
Can revert until all previous steps have likelihoods allL above Lmin. Updates currentt, generation and allL, in-place.
Initialise.
- Parameters:
popsize (int) – number of walkers to maintain
nsteps (int) – number of steps to take until the found point is accepted as independent. To find the right value, see
ultranest.calibrator.ReactiveNestedCalibrator
generate_direction (function (u, region, scale) -> v) – function such as generate_unit_directions, which generates a random slice direction.
scale (float) – initial guess scale for the length of the slice
scale_adapt_factor (float) – smoothing factor for updating scale. If near 1, scale is barely updated; if near 0, the last slice length is used as an initial guess for the next.
- setup_start(us, Ls, starting)[source]
Initialize walker starting points.
For iteration zero, randomly selects a live point as starting point.
- Parameters:
us (np.array((nlive, ndim))) – live points
Ls (np.array(nlive)) – loglikelihoods of the live points
starting (np.array(nwalkers, dtype=bool)) – which walkers to initialize.
- property status
Return compact string representation of the current status.
- setup_brackets(mask_starting, region)[source]
Pick starting direction and range for slice.
- Parameters:
mask_starting (np.array(nwalkers, dtype=bool)) – which walkers to set up.
region (MLFriends object) – Region
- advance(transform, loglike, Lmin, region)[source]
Advance the walker population.
- Parameters:
transform (function) – prior transform function
loglike (function) – loglikelihood function
Lmin (float) – current log-likelihood threshold
region (MLFriends object) – Region
- Returns:
nc – Number of likelihood function calls
- Return type:
int
- class ultranest.popstepsampler.PopulationSimpleSliceSampler(popsize, nsteps, generate_direction, scale_adapt_factor=1.0, adapt_slice_scale_target=2.0, scale=1.0, scale_jitter_func=None, slice_limit=<function slice_limit_to_unitcube>, max_it=100, shrink_factor=1.0)[source]
Bases:
GenericPopulationSampler
Vectorized Slice sampler without stepping out procedure for quick look fits.
Unlike :py:class:PopulationSliceSampler, in :py:class:PopulationSimpleSliceSampler, the likelihood is always called with the same number of points.
Slices are defined by the :py:func:generate_direction function on an interval around the current point. The centred interval has the width of the scale parameter, i.e., there is no stepping-out procedure as in :py:class:PopulationSliceSampler. Slices are then shrunk towards the current point until a point is found with a likelihood above the threshold.
In the default case, i.e. scale=None, the slice width is defined by the intersection of the slice with the unit cube. To improve the efficiency of the sampler, the slice can be reduced to an interval of size 2*scale centred on the point. scale can be adapted with the scale_adapt_factor parameter, based on the median distance between the current and the next point in a chain, among all the chains. If the median distance is above scale/adapt_slice_scale_target, the scale is increased by scale_adapt_factor, and decreased otherwise. The scale parameter can also be jittered by a user-supplied function :py:func:scale_jitter_func to counterbalance the effect of a strong adaptation.
In the case scale!=None, detailed balance is not guaranteed, so this sampler should be used with caution.
Multiple (popsize) slice sampling chains are run independently and in parallel. Points are read as if they had been selected one after the other. For a point to update the slice, it needs to still lie in the part of the slice searched after the earlier points have been read; in that case the update proceeds as normal, otherwise the point is discarded.
Initialise.
- Parameters:
popsize (int) – number of walkers to maintain.
nsteps (int) – number of steps to take until the found point is accepted as independent. To calibrate, try several runs with increasing nsteps (doubling). The ln(Z) should become stable at some value.
generate_direction (function) – Function that gives proposal kernel shape, one of:
ultranest.popstepsampler.generate_random_direction()
ultranest.popstepsampler.generate_region_oriented_direction()
ultranest.popstepsampler.generate_region_random_direction()
ultranest.popstepsampler.generate_differential_direction()
ultranest.popstepsampler.generate_mixture_random_direction()
ultranest.popstepsampler.generate_cube_oriented_direction() -> no adaptation in that case
ultranest.popstepsampler.generate_cube_oriented_direction_scaled() -> no adaptation in that case
scale (float) – initial guess for the slice width.
scale_jitter_func (function) – User supplied function to multiply the scale by a random factor. For example,
lambda : scipy.stats.truncnorm.rvs(-0.5, 5., loc=0, scale=1) + 1
scale_adapt_factor (float) – adaptation of scale. If 1: no adaptation. if <1, the scale is increased/decreased by this factor if the final slice length is shorter/longer than the adapt_slice_scale_target*scale.
adapt_slice_scale_target (float) – Targeted ratio of the median distance between slice mid and final point among all chains of scale. Default: 2.0. Higher values are more conservative, lower values are faster.
slice_limit (function) – Function setting the initial slice upper and lower bound. The default is :py:func:slice_limit_to_unitcube which defines the slice limit as the intersection between the slice and the unit cube. An alternative when the scale is used is :py:func:slice_limit_to_scale which defines the slice limit as an interval of size 2*scale. This function should either return a copy of the tleft and tright arguments or new arrays of the same shape.
max_it (int) – maximum number of iterations to find a point on the slice. If the maximum number of iterations is reached, the current point is returned as the next one.
shrink_factor (float) – For standard slice sampling shrinking, shrink_factor=1, the slice bound is updated to the last rejected point. Setting shrink_factor>1 aggressively accelerates the shrinkage, by updating the new slice bound to 1/shrink_factor of the distance between the current point and rejected point.
ultranest.samplingpath module
Sparsely sampled, virtual sampling path.
Supports reflections at unit cube boundaries, and regions.
- ultranest.samplingpath.nearest_box_intersection_line(ray_origin, ray_direction, fwd=True)[source]
Compute intersection of a line (ray) and a unit box (0:1 in all axes).
Based on http://www.iquilezles.org/www/articles/intersectors/intersectors.htm
To continue forward traversing at the reflection point use:
while True:
    # update current point x
    x, _, i = box_line_intersection(x, v)
    # change direction
    v[i] *= -1
- Parameters:
ray_origin (vector) – starting point of line
ray_direction (vector) – line direction vector
- Returns:
p (vector) – intersection point
t (float) – intersection point distance from ray_origin in units in ray_direction
i (int) – axis which changes direction at p
- ultranest.samplingpath.box_line_intersection(ray_origin, ray_direction)[source]
Find intersections of a line with the unit cube, in both sides.
- Parameters:
ray_origin (vector) – starting point of line
ray_direction (vector) – line direction vector
- Returns:
left (nearest_box_intersection_line return value) – from negative direction
right (nearest_box_intersection_line return value) – from positive direction
- ultranest.samplingpath.linear_steps_with_reflection(ray_origin, ray_direction, t, wrapped_dims=None)[source]
Go t steps in direction ray_direction from ray_origin.
Reflect off the unit cube if encountered, respecting wrapped dimensions. In any case, the distance should be
t * ray_direction
.- Returns:
new_point (vector) – end point
new_direction (vector) – new direction.
- ultranest.samplingpath.get_sphere_tangent(sphere_center, edge_point)[source]
Compute tangent at sphere surface point.
Assume a sphere centered at sphere_center with radius so that edge_point is on the surface. At edge_point, in which direction does the normal vector point?
- Parameters:
sphere_center (vector) – center of sphere
edge_point (vector) – point at the surface
- Returns:
tangent – vector pointing to the sphere center.
- Return type:
vector
- ultranest.samplingpath.get_sphere_tangents(sphere_center, edge_point)[source]
Compute tangent at sphere surface point.
Assume a sphere centered at sphere_center with radius so that edge_point is on the surface. At edge_point, in which direction does the normal vector point?
This function is vectorized and handles arrays of arguments.
- Parameters:
sphere_center (array) – centers of spheres
edge_point (array) – points at the surface
- Returns:
tangent – vectors pointing to the sphere center.
- Return type:
array
- ultranest.samplingpath.reflect(v, normal)[source]
Reflect vector v off a normal vector, return new direction vector.
- ultranest.samplingpath.distances(direction, center, r=1)[source]
Compute sphere-line intersection.
- Parameters:
direction (vector) – direction vector (line starts at 0)
center (vector) – center of sphere (coordinate vector)
r (float) – radius of sphere
- Returns:
tpos, tneg – the positive and negative coordinates along the direction vector where the sphere of radius r is intersected. If there is no intersection, an AssertionError is raised.
- Return type:
floats
- ultranest.samplingpath.angle(a, b)[source]
Compute dot product between vectors a and b.
The arccos of the return value would give an actual angle.
- ultranest.samplingpath.extrapolate_ahead(dj, xj, vj, contourpath=None)[source]
Make dj steps of size vj from xj.
Reflect off unit cube if necessary.
- ultranest.samplingpath.interpolate(i, points, fwd_possible, rwd_possible, contourpath=None)[source]
Interpolate a point on the path indicated by points.
Given a sparsely sampled track (stored in .points), potentially encountering reflections, extract the coordinates of the point with index i. That point may not have been evaluated yet.
- Parameters:
i (int) – position on track to return.
points (list of tuples (index, coordinate, direction, loglike)) – points on the path
fwd_possible (bool) – whether the path could be extended in the positive direction.
rwd_possible (bool) – whether the path could be extended in the negative direction.
contourpath (ContourPath) – Use region to reflect. Not used at the moment.
- class ultranest.samplingpath.SamplingPath(x0, v0, L0)[source]
Bases:
object
Path described by a (potentially sparse) sequence of points.
Convention of the stored point tuple (i, x, v, L): i: index (0 is the starting point), x: point, v: direction, L: loglikelihood value.
Initialise with path starting point.
Starting point (x0), direction (v0) and loglikelihood value (L0) of the path. Is given index 0.
- class ultranest.samplingpath.ContourSamplingPath(samplingpath, region)[source]
Bases:
object
Region-aware form of the sampling path.
Uses region points to guess a likelihood contour gradient.
Initialise with samplingpath and region.
- extrapolate(i)[source]
Advance beyond the current path, extrapolate from the end point.
- Parameters:
i (int) – index on path.
- Returns:
coords – coordinates of the new point.
- Return type:
vector
- gradient(reflpoint, plot=False)[source]
Compute gradient approximation.
Finds spheres enclosing the reflpoint, and chooses their mean as the direction to go towards. If no spheres enclose the reflpoint, use nearest sphere.
v is not used, because that would break detailed balance.
- Considerations:
in low-d, we want to focus on nearby live point spheres. The border traced out is fairly accurate, at least in the normal direction away from the inside.
in high-d, reflpoint is contained by all live points, and none of them point to the center well. Because the sampling is poor, the “region center” position will be very stochastic.
- Parameters:
reflpoint (vector) – point outside the likelihood contour, reflect there
v (vector) – previous direction vector
- Returns:
gradient – normal of ellipsoid
- Return type:
vector
ultranest.solvecompat module
Drop-in replacement for pymultinest.solve.
Example:
from ultranest.solvecompat import pymultinest_solve_compat as solve
# is a drop-in replacement for
from pymultinest.solve import solve
- ultranest.solvecompat.pymultinest_solve_compat(LogLikelihood, Prior, n_dims, paramnames=None, outputfiles_basename=None, resume=False, n_live_points=400, evidence_tolerance=0.5, seed=-1, max_iter=0, wrapped_params=None, verbose=True, speed='safe', **kwargs)[source]
Run nested sampling analysis.
Disadvantages compared to using ReactiveNestedSampler directly: cannot resume easily, cannot plot interactively. Limited results.
It is recommended that you directly use:
sampler = ReactiveNestedSampler(paramnames, LogLikelihood, transform=Prior)
sampler.run()
following the UltraNest documentation and manuals, as this gives you more control on resuming and sampler options.
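A minimal sketch of a pymultinest-style call (the toy prior and likelihood are assumptions; the returned keys are expected to follow the pymultinest solve convention of 'logZ', 'logZerr' and 'samples', as implied by the drop-in claim):

import numpy as np
from ultranest.solvecompat import pymultinest_solve_compat as solve

def Prior(cube):
    # map the unit cube to the physical range [-5, 5] in each dimension
    return cube * 10 - 5

def LogLikelihood(params):
    # toy Gaussian centered at zero
    return -0.5 * np.sum((params / 0.5)**2)

result = solve(LogLikelihood, Prior, n_dims=2, paramnames=['a', 'b'],
               n_live_points=400, verbose=False)
print(result['logZ'], result['logZerr'])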
ultranest.stepfuncs module
Efficient helper functions for vectorized step-samplers
- ultranest.stepfuncs.evolve(transform, loglike, Lmin, currentu, currentL, currentt, currentv, current_left, current_right, searching_left, searching_right)
Evolve each slice sampling walker.
- Parameters:
transform (function) – prior transform function
loglike (function) – loglikelihood function
Lmin (float) – current log-likelihood threshold
currentu (np.array((nwalkers, ndim))) – slice starting point (where currentt=0)
currentL (np.array(nwalkers)) – current loglikelihood
currentt (np.array(nwalkers)) – proposed coordinate on the slice
currentv (np.array((nwalkers, ndim))) – slice direction vector
current_left (np.array(nwalkers)) – current slice negative end
current_right (np.array(nwalkers)) – current slice positive end
searching_left (np.array(nwalkers, dtype=bool)) – whether stepping out in the negative direction
searching_right (np.array(nwalkers, dtype=bool)) – whether stepping out in the positive direction
- Returns:
currentt (np.array(nwalkers)) – as above
currentv (np.array((nwalkers, ndim))) – as above
current_left (np.array(nwalkers)) – as above
current_right (np.array(nwalkers)) – as above
searching_left (np.array(nwalkers, dtype=bool)) – as above
searching_right (np.array(nwalkers, dtype=bool)) – as above
success (np.array(nwalkers, dtype=bool)) – whether the walker accepts the point.
unew (np.array((success.sum(), ndim))) – coordinates of accepted points
pnew (np.array((success.sum(), nparams))) – transformed coordinates of accepted points
Lnew (np.array(success.sum())) – log-likelihoods of accepted points
nc (int) – number of points for which the log-likelihood function was called.
This function writes in-place to
currentt, currentv, current_left, current_right, searching_left,
searching_right and currentu, but also returns these.
- ultranest.stepfuncs.evolve_prepare(searching_left, searching_right)
Get auxiliary slice sampler state selectors.
Vectorized computation for multiple (nwalkers) walkers.
- Parameters:
searching_left (np.array(nwalkers, dtype=bool)) – whether stepping out in the negative direction
searching_right (np.array(nwalkers, dtype=bool)) – whether stepping out in the positive direction
- Returns:
search_right (np.array(nwalkers, dtype=bool):) – if searching right and not left
bisecting (np.array(nwalkers, dtype=bool):) – if not searching right nor left any more
- ultranest.stepfuncs.evolve_update(acceptable, Lnew, Lmin, search_right, bisecting, currentt, current_left, current_right, searching_left, searching_right, success)
Update the state of each walker.
This uses the robust logic of slice sampling, with stepping out by doubling.
- Parameters:
acceptable (np.array(nwalkers, dtype=bool)) – whether a likelihood evaluation was made. If false, rejected because out of contour.
Lnew (np.array(acceptable.sum())) – likelihood values of the proposed points
Lmin (float) – current log-likelihood threshold
search_right (np.array(nwalkers, dtype=bool)) – whether stepping out in the positive direction
bisecting (np.array(nwalkers, dtype=bool)) – whether bisecting. If neither search_right nor bisecting, the walker is stepping out to the left.
currentt (np.array(nwalkers)) – proposed coordinate on the slice
current_left (np.array(nwalkers)) – current slice negative end
current_right (np.array(nwalkers)) – current slice positive end
searching_left (np.array(nwalkers, dtype=bool)) – whether stepping out in the negative direction
searching_right (np.array(nwalkers, dtype=bool)) – whether stepping out in the positive direction
success (np.array(nwalkers, dtype=bool)) – whether the walker accepts the point.
Notes
Writes to currentt, current_left, current_right, searching_left, searching_right, success.
- ultranest.stepfuncs.generate_cube_oriented_direction(ui, region, scale=1)
Draw a unit direction vector in direction of a random unit cube axes.
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region – not used
scale (float) – length of returned vector
- Returns:
v – Random axis vectors of length scale, one for each starting point.
- Return type:
np.array((npoints, ndim), dtype=float)
- ultranest.stepfuncs.generate_cube_oriented_direction_scaled(ui, region, scale=1)
Draw a unit direction vector in direction of a random unit cube axes. Scale by the live point min-max range.
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region – not used
scale (float) – length of returned vector
- Returns:
v – Random axis vectors of length scale, one for each starting point.
- Return type:
np.array((npoints, ndim), dtype=float)
- ultranest.stepfuncs.generate_differential_direction(ui, region, scale=1)
Sample a vector using the difference between two randomly selected live points.
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting point
region (MLFriends object) – current region
scale (float) – length of direction vector (in t-space)
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepfuncs.generate_mixture_random_direction(ui, region, scale=1)
Sample randomly uniformly from two proposals.
Randomly applies either generate_differential_direction(), which transports far, or generate_region_oriented_direction(), which is stiffer.
Best method according to https://arxiv.org/abs/2211.09426
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting point
region (MLFriends object) – current region
scale (float) – length of direction vector (in t-space)
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepfuncs.generate_random_direction(ui, region, scale=1)
Draw uniform direction vector in unit cube space of length scale.
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region (MLFriends object) – current region (not used)
scale (float) – length of direction vector
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepfuncs.generate_region_oriented_direction(ui, region, scale=1)
Draw a random direction vector in direction of one of the region axes.
The vector length is scale (in the transformed, whitened space).
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region (MLFriends object) – current region
scale (float) – length of direction vector in t-space
- Returns:
v – new direction vector (in u-space)
- Return type:
array
- ultranest.stepfuncs.generate_region_random_direction(ui, region, scale=1)
Draw a direction vector in a random direction of the region.
The vector length is scale (in unit cube space).
- Parameters:
ui (np.array((npoints, ndim), dtype=float)) – starting points (not used)
region (MLFriends object) – current region
scale (float) – length of direction vector (in t-space)
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepfuncs.step_back(Lmin, allL, generation, currentt, log=False)
Revert walkers which have wandered astray.
Revert until all previous steps have likelihoods allL above Lmin. Updates currentt, generation and allL, in-place.
- Parameters:
Lmin (float) – current loglikelihood threshold
allL (np.array((nwalkers, ngenerations))) – loglikelihoods of the chain. NaN where not evaluated yet.
generation (np.array(nwalkers, dtype=int)) – how many iterations each walker has completed.
currentt (np.array(nwalkers)) – current slice coordinate
log (bool) – whether to print when steps are reverted
- ultranest.stepfuncs.update_vectorised_slice_sampler(t, tleft, tright, proposed_L, proposed_u, proposed_p, worker_running, status, Likelihood_threshold, shrink_factor, allu, allL, allp, popsize)
Update the slice sampler state of each walker in the populations.
- Parameters:
t (array) – proposed slice coordinate
tleft (array) – current slice negative end
tright (array) – current slice positive end
proposed_L (array) – log-likelihood of proposed point
proposed_u (array) – proposed point in unit cube space
proposed_p (array) – proposed point in transformed space
worker_running (array) – index of the point associated with each worker
status (array) – integer status of the point
Likelihood_threshold (float) – current log-likelihood threshold
shrink_factor (float) – factor by which to shrink the slice
allu (array) – Accepted points in unit cube space
allL (array) – log-likelihoods of accepted points
allp (array) – Accepted points in transformed space
popsize (int) – number of points
- Returns:
tleft (array) – updated current slice negative end
tright (array) – updated current slice positive end
worker_running (array) – updated index of the point associated with each worker
status (array) – updated integer status of the point
allu (array) – updated accepted points in unit cube space
allL (array) – updated log-likelihoods of accepted points
allp (array) – updated accepted points in transformed space
discarded (int) – Number of points where the likelihood was evaluated but not taken into account.
- ultranest.stepfuncs.within_unit_cube(u)
whether all fields are between 0 and 1, for each row
- Parameters:
u (np.array((npoints, ndim), dtype=float):) – points
- Returns:
within – for each point, whether it is within the unit cube
- Return type:
np.array(npoints, dtype=bool):
ultranest.stepsampler module
MCMC-like step sampling
The classes implemented here are generators that, in each iteration, only make one likelihood call. This allows running in parallel a population of samplers that have the same execution time per call, even if they do not terminate at the same number of iterations.
- ultranest.stepsampler.generate_random_direction(ui, region, scale=1)[source]
Sample uniform direction vector in unit cube space of length scale.
Samples a direction from a unit multi-variate Gaussian.
- Parameters:
ui (array) – starting point
region (MLFriends object) – current region (not used)
scale (float) – length of direction vector
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepsampler.generate_cube_oriented_direction(ui, region, scale=1)[source]
Sample a unit direction vector in direction of a random unit cube axes.
Chooses one parameter, randomly uniformly, upon which the slice will be defined.
- Parameters:
ui (array) – starting point
region (MLFriends object) – current region (not used)
scale (float) – factor to multiply the vector
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepsampler.generate_cube_oriented_differential_direction(ui, region, scale=1)[source]
Sample a direction vector along a randomly chosen parameter, based on two randomly selected live points.
Chooses one parameter, randomly uniformly, upon which the slice will be defined. Guess the length from the difference of two points in that axis.
- Parameters:
ui (array) – starting point
region (MLFriends object) – current region
scale (float) – factor to multiply the vector
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepsampler.generate_differential_direction(ui, region, scale=1)[source]
Sample a vector using the difference between two randomly selected live points.
- Parameters:
ui (array) – starting point
region (MLFriends object) – current region
scale (float) – factor to multiply the vector
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepsampler.generate_partial_differential_direction(ui, region, scale=1)[source]
Sample a vector using the difference between two randomly selected live points.
Only 10% of parameters are allowed to vary at a time.
- Parameters:
ui (array) – starting point
region (MLFriends object) – current region
scale (float) – factor to multiply the vector
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepsampler.generate_region_oriented_direction(ui, region, scale=1)[source]
Sample a vector along one region principal axis, chosen at random.
The region transformLayer axes are considered (AffineLayer or ScalingLayer). One axis is chosen at random.
- Parameters:
ui (array) – starting point
region (MLFriends object) – current region
scale (float) – factor to multiply the vector
- Returns:
v – new direction vector (in u-space)
- Return type:
array
- ultranest.stepsampler.generate_region_random_direction(ui, region, scale=1)[source]
Sample a direction vector based on the region covariance.
The region transformLayer axes are considered (AffineLayer or ScalingLayer). With this covariance matrix, a random direction is generated. Generating proceeds by transforming a unit multi-variate Gaussian.
- Parameters:
ui (array) – starting point
region (MLFriends object) – current region
scale (float) – length of direction vector (in t-space)
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepsampler.generate_mixture_random_direction(ui, region, scale=1)[source]
Sample randomly uniformly from two proposals.
Randomly applies either generate_differential_direction(), which transports far, or generate_region_oriented_direction(), which is stiffer.
Best method according to https://arxiv.org/abs/2211.09426
- Parameters:
region (MLFriends) – region
ui (array) – vector of starting point
scale (float) – length of the vector.
- Returns:
v – new direction vector
- Return type:
array
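Conceptually, a sketch that flips a coin between the two proposals named above (whether the packaged function mixes them with equal probability is an assumption here):
import numpy as np
from ultranest.stepsampler import (generate_differential_direction,
                                   generate_region_oriented_direction)

def mixture_direction_sketch(ui, region, scale=1.0):
    # randomly pick either the far-transporting or the stiffer proposal
    if np.random.uniform() < 0.5:
        return generate_differential_direction(ui, region, scale=scale)
    return generate_region_oriented_direction(ui, region, scale=scale)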
- ultranest.stepsampler.generate_region_sample_direction(ui, region, scale=1)[source]
Sample a point directly from the region, and return the difference vector to the current point.
- Parameters:
region (MLFriends) – region
ui (array) – vector of starting point
scale (float) – length of the vector.
- Returns:
v – new direction vector
- Return type:
array
- ultranest.stepsampler.inside_region(region, unew, uold)[source]
Check if unew is inside region.
- Parameters:
region (MLFriends object) – current region
unew (array) – point to check
uold (array) – not used
- Returns:
v – boolean whether point is inside the region
- Return type:
array
- ultranest.stepsampler.adapt_proposal_total_distances(region, history, mean_pair_distance, ndim)[source]
Check jump distance (deprecated function).
- ultranest.stepsampler.adapt_proposal_total_distances_NN(region, history, mean_pair_distance, ndim)[source]
Check jump distance (deprecated function).
- ultranest.stepsampler.adapt_proposal_summed_distances(region, history, mean_pair_distance, ndim)[source]
Check jump distance (deprecated function).
- ultranest.stepsampler.adapt_proposal_summed_distances_NN(region, history, mean_pair_distance, ndim)[source]
Check jump distance (deprecated function).
- ultranest.stepsampler.adapt_proposal_move_distances(region, history, mean_pair_distance, ndim)[source]
Compare random walk travel distance to MLFriends radius.
Compares in whitened space (t-space), the L2 norm between final point and starting point to the MLFriends bootstrapped radius.
- Parameters:
region (MLFriends) – built region
history (list) – list of tuples, containing visited point and likelihood.
mean_pair_distance (float) – not used
ndim (int) – dimensionality
- Returns:
far_enough (bool) – whether the distance is larger than the radius
info (tuple) – distance and radius (both float)
- ultranest.stepsampler.adapt_proposal_move_distances_midway(region, history, mean_pair_distance, ndim)[source]
Compare first half of the travel distance to MLFriends radius.
Compares in whitened space (t-space), the L2 norm between the middle point of the walk and the starting point, to the MLFriends bootstrapped radius.
- Parameters:
region (MLFriends) – built region
history (list) – list of tuples, containing visited point and likelihood.
mean_pair_distance (float) – not used
ndim (int) – dimensionality
- Returns:
far_enough (bool) – whether the distance is larger than the radius
info (tuple) – distance and radius (both float)
- ultranest.stepsampler.select_random_livepoint(us, Ls, Lmin)[source]
Select random live point as chain starting point.
- Parameters:
us (array) – positions of live points
Ls (array) – likelihood of live points
Lmin (float) – current log-likelihood threshold
- Returns:
i – index of live point selected
- Return type:
int
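A sketch of this selector: all live points already satisfy the threshold, so any index is a valid chain start (Ls and Lmin are therefore unused here):
import numpy as np

def select_random_livepoint_sketch(us, Ls, Lmin):
    # every live point is an equally likely chain starting point
    return np.random.randint(len(us))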
- class ultranest.stepsampler.IslandPopulationRandomLivepointSelector(island_size, exchange_probability=0)[source]
Bases:
object
Mutually isolated live point subsets.
To replace dead points, chains are only started from the same island as the dead point. Islands are chunks of live point indices (0, 1, 2, 3, ... as stored, not sorted). Each chunk has size island_size.
If island_size is large, for example the total number of live points, clumping can occur more easily. The observed behaviour is that a limited random walk run from one live point yields two similar points; the next dead point replacement is then likely started from one of these, giving more and more similar live points. This run-away process leads to clumps of similar, highly correlated points.
If island_size is small, for example 1, then each dead point is replaced by a chain started from it. This is a problem because modes can never die out, and nested sampling then cannot complete.
In a multi-modal run, for a given number of live points, the number of live points per mode is proportional to the mode’s prior volume, but can fluctuate. If the number of live points is small, a fluctuation can lead to mode die-out, which cannot be reversed. Therefore, the number of island members should be large enough to represent each mode.
Set up multiple isolated islands.
- Parameters:
island_size (int) – maximum number of members on each isolated live point population.
exchange_probability (float) – Probability that a member from a random island will be picked.
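For example, a hedged usage sketch attaching this selector to a slice sampler (param_names, loglike and transform stand for the user-supplied problem definition; the values of nsteps, island_size and exchange_probability are illustrative only):
import ultranest
from ultranest import stepsampler

sampler = ultranest.ReactiveNestedSampler(param_names, loglike, transform)
sampler.stepsampler = stepsampler.SliceSampler(
    nsteps=2 * len(param_names),
    generate_direction=stepsampler.generate_mixture_random_direction,
    starting_point_selector=stepsampler.IslandPopulationRandomLivepointSelector(
        island_size=100, exchange_probability=0.01),
)
result = sampler.run()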
- class ultranest.stepsampler.StepSampler(nsteps, generate_direction, scale=1.0, check_nsteps='move-distance', adaptive_nsteps=False, max_nsteps=1000, region_filter=False, log=False, starting_point_selector=<function select_random_livepoint>)[source]
Bases:
object
Base class for a simple step sampler, staggering around.
Scales proposal towards a 50% acceptance rate.
Initialise sampler.
- Parameters:
scale (float) – initial proposal size
nsteps (int) –
number of accepted steps until the sample is considered independent.
To find the right value, see
ultranest.calibrator.ReactiveNestedCalibrator
generate_direction (function) –
direction proposal function.
Available are:
generate_cube_oriented_direction()
(slice sampling, picking one random parameter to vary)
generate_random_direction()
(hit-and-run sampling, picking a random direction varying all parameters)
generate_differential_direction()
(differential evolution direction proposal)
generate_region_oriented_direction()
(slice sampling, but in the whitened parameter space)
generate_region_random_direction()
(hit-and-run sampling, but in the whitened parameter space)
SequentialDirectionGenerator
(sequential slice sampling, i.e., iterate deterministically through the parameters)
SequentialRegionDirectionGenerator
(sequential slice sampling in the whitened parameter space, i.e., iterate deterministically through the principal axes)
generate_cube_oriented_differential_direction()
(like generate_differential_direction, but along only one randomly chosen parameter)
generate_partial_differential_direction()
(differential evolution slice proposal on only 10% of the parameters)
generate_mixture_random_direction()
(combined proposal)
Additionally, OrthogonalDirectionGenerator can be applied to any generate_direction function.
When in doubt, try generate_mixture_random_direction(). It combines efficient moves along the live point distribution with robustness against collapse to a subspace. generate_cube_oriented_direction() works well too.
adaptive_nsteps (False or str) –
Strategy to adapt the number of steps. The possible values are the same as for check_nsteps.
Adapting can give usable results. However, strictly speaking, detailed balance is not maintained, so the results can be biased. You can use the stepsampler.logstat property to find out the nsteps learned from one run (third column), and use the largest value for nsteps for a fresh run. The fourth column is the jump distance, the fifth column is the reference distance.
check_nsteps (False or str) –
Method to diagnose the step sampler walks. The options are:
False: no checking
'move-distance' (recommended): distance between start point and final position exceeds the mean distance between pairs of live points.
'move-distance-midway': distance between start point and position in the middle of the chain exceeds the mean distance between pairs of live points.
'proposal-total-distances': mean square distance of proposed vectors exceeds the mean distance between pairs of live points.
'proposal-total-distances-NN': mean distance of chain points from starting point exceeds mean distance between pairs of live points.
'proposal-summed-distances-NN': summed distances between chain points exceeds mean distance between pairs of live points.
'proposal-summed-distances-min-NN': smallest distance between chain points exceeds mean distance between pairs of live points.
Each step sampler walk adds one row to stepsampler.logstat. The jump distance (fourth column) should be compared to the reference distance (fifth column).
max_nsteps (int) – Maximum number of steps the adaptive_nsteps can reach.
region_filter (bool) – if True, use region to check if a proposed point can be inside before calling likelihood.
log (file) – log file for sampler statistics, such as acceptance rate, proposal scale, number of steps, jump distance and distance between live points
starting_point_selector (func) – function which given the live point positions us, their log-likelihoods Ls and the current log-likelihood threshold Lmin, returns the index i of the selected live point to start a new chain from. Examples:
select_random_livepoint()
, which has always been the default behaviour, or an instance ofIslandPopulationRandomLivepointSelector
.
- plot(filename)[source]
Plot sampler statistics.
- Parameters:
filename (str) – Stores plot into filename and data into filename + ".txt.gz".
- property mean_jump_distance
Geometric mean jump distance.
- property far_enough_fraction
Fraction of jumps exceeding reference distance.
- get_info_dict()[source]
Return diagnostics of the step sampler performance.
- Returns:
v –
- the keys are:
num_logs: number of log entries being summarized
rejection_rate: fraction of steps rejected
mean_scale: average value of scale
mean_nsteps: average nsteps
mean_distance: mean jump distance (see Buchner+24)
frac_far_enough: fraction of jumps with sufficient distance (see Buchner+24)
last_logstat: content of the last log entry
- Return type:
dict
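For example, after a run with a step sampler attached (as in the earlier sketch), the diagnostics can be inspected; a usage sketch using the keys listed above:
info = sampler.stepsampler.get_info_dict()
print('rejection rate:', info['rejection_rate'])
print('mean nsteps:', info['mean_nsteps'])
print('fraction of far-enough jumps:', info['frac_far_enough'])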
- move(ui, region, ndraw=1, plot=False)[source]
Move around point ui. Stub to be implemented by child classes.
- adjust_accept(accepted, unew, pnew, Lnew, nc)[source]
Adjust proposal, given that a new point was found after nc calls.
- Parameters:
accepted (bool) – Whether the most recent proposal was accepted
unew (array) – new point (in u-space)
pnew (array) – new point (in p-space)
Lnew (float) – loglikelihood of new point
nc (int) – number of likelihood function calls used.
- adapt_nsteps(region)[source]
Adapt the number of steps.
- Parameters:
region (MLFriends object) – current region
- class ultranest.stepsampler.MHSampler(nsteps, generate_direction, scale=1.0, check_nsteps='move-distance', adaptive_nsteps=False, max_nsteps=1000, region_filter=False, log=False, starting_point_selector=<function select_random_livepoint>)[source]
Bases:
StepSampler
Gaussian Random Walk.
Initialise sampler.
- Parameters:
scale (float) – initial proposal size
nsteps (int) –
number of accepted steps until the sample is considered independent.
To find the right value, see
ultranest.calibrator.ReactiveNestedCalibrator
generate_direction (function) –
direction proposal function.
Available are:
generate_cube_oriented_direction()
(slice sampling, picking one random parameter to vary)
generate_random_direction()
(hit-and-run sampling, picking a random direction varying all parameters)
generate_differential_direction()
(differential evolution direction proposal)
generate_region_oriented_direction()
(slice sampling, but in the whitened parameter space)
generate_region_random_direction()
(hit-and-run sampling, but in the whitened parameter space)
SequentialDirectionGenerator
(sequential slice sampling, i.e., iterate deterministically through the parameters)
SequentialRegionDirectionGenerator
(sequential slice sampling in the whitened parameter space, i.e., iterate deterministically through the principal axes)
generate_cube_oriented_differential_direction()
(like generate_differential_direction, but along only one randomly chosen parameter)
generate_partial_differential_direction()
(differential evolution slice proposal on only 10% of the parameters)
generate_mixture_random_direction()
(combined proposal)
Additionally, OrthogonalDirectionGenerator can be applied to any generate_direction function.
When in doubt, try generate_mixture_random_direction(). It combines efficient moves along the live point distribution with robustness against collapse to a subspace. generate_cube_oriented_direction() works well too.
adaptive_nsteps (False or str) –
Strategy to adapt the number of steps. The possible values are the same as for check_nsteps.
Adapting can give usable results. However, strictly speaking, detailed balance is not maintained, so the results can be biased. You can use the stepsampler.logstat property to find out the nsteps learned from one run (third column), and use the largest value for nsteps for a fresh run. The fourth column is the jump distance, the fifth column is the reference distance.
check_nsteps (False or str) –
Method to diagnose the step sampler walks. The options are:
False: no checking
'move-distance' (recommended): distance between start point and final position exceeds the mean distance between pairs of live points.
'move-distance-midway': distance between start point and position in the middle of the chain exceeds the mean distance between pairs of live points.
'proposal-total-distances': mean square distance of proposed vectors exceeds the mean distance between pairs of live points.
'proposal-total-distances-NN': mean distance of chain points from starting point exceeds mean distance between pairs of live points.
'proposal-summed-distances-NN': summed distances between chain points exceeds mean distance between pairs of live points.
'proposal-summed-distances-min-NN': smallest distance between chain points exceeds mean distance between pairs of live points.
Each step sampler walk adds one row to stepsampler.logstat. The jump distance (fourth column) should be compared to the reference distance (fifth column).
max_nsteps (int) – Maximum number of steps the adaptive_nsteps can reach.
region_filter (bool) – if True, use region to check if a proposed point can be inside before calling likelihood.
log (file) – log file for sampler statistics, such as acceptance rate, proposal scale, number of steps, jump distance and distance between live points
starting_point_selector (func) – function which given the live point positions us, their log-likelihoods Ls and the current log-likelihood threshold Lmin, returns the index i of the selected live point to start a new chain from. Examples:
select_random_livepoint()
, which has always been the default behaviour, or an instance ofIslandPopulationRandomLivepointSelector
.
- ultranest.stepsampler.CubeMHSampler(*args, **kwargs)[source]
Gaussian Metropolis-Hastings sampler, using unit cube.
- ultranest.stepsampler.RegionMHSampler(*args, **kwargs)[source]
Gaussian Metropolis-Hastings sampler, using region.
- class ultranest.stepsampler.SliceSampler(nsteps, generate_direction, scale=1.0, check_nsteps='move-distance', adaptive_nsteps=False, max_nsteps=1000, region_filter=False, log=False, starting_point_selector=<function select_random_livepoint>)[source]
Bases:
StepSampler
Slice sampler, respecting the region.
Initialise sampler.
- Parameters:
scale (float) – initial proposal size
nsteps (int) –
number of accepted steps until the sample is considered independent.
To find the right value, see
ultranest.calibrator.ReactiveNestedCalibrator
generate_direction (function) –
direction proposal function.
Available are:
generate_cube_oriented_direction()
(slice sampling, picking one random parameter to vary)
generate_random_direction()
(hit-and-run sampling, picking a random direction varying all parameters)
generate_differential_direction()
(differential evolution direction proposal)
generate_region_oriented_direction()
(slice sampling, but in the whitened parameter space)
generate_region_random_direction()
(hit-and-run sampling, but in the whitened parameter space)
SequentialDirectionGenerator
(sequential slice sampling, i.e., iterate deterministically through the parameters)
SequentialRegionDirectionGenerator
(sequential slice sampling in the whitened parameter space, i.e., iterate deterministically through the principal axes)
generate_cube_oriented_differential_direction()
(like generate_differential_direction, but along only one randomly chosen parameter)
generate_partial_differential_direction()
(differential evolution slice proposal on only 10% of the parameters)
generate_mixture_random_direction()
(combined proposal)
Additionally, OrthogonalDirectionGenerator can be applied to any generate_direction function.
When in doubt, try generate_mixture_random_direction(). It combines efficient moves along the live point distribution with robustness against collapse to a subspace. generate_cube_oriented_direction() works well too.
adaptive_nsteps (False or str) –
Strategy to adapt the number of steps. The possible values are the same as for check_nsteps.
Adapting can give usable results. However, strictly speaking, detailed balance is not maintained, so the results can be biased. You can use the stepsampler.logstat property to find out the nsteps learned from one run (third column), and use the largest value for nsteps for a fresh run. The fourth column is the jump distance, the fifth column is the reference distance.
check_nsteps (False or str) –
Method to diagnose the step sampler walks. The options are:
False: no checking
'move-distance' (recommended): distance between start point and final position exceeds the mean distance between pairs of live points.
'move-distance-midway': distance between start point and position in the middle of the chain exceeds the mean distance between pairs of live points.
'proposal-total-distances': mean square distance of proposed vectors exceeds the mean distance between pairs of live points.
'proposal-total-distances-NN': mean distance of chain points from starting point exceeds mean distance between pairs of live points.
'proposal-summed-distances-NN': summed distances between chain points exceeds mean distance between pairs of live points.
'proposal-summed-distances-min-NN': smallest distance between chain points exceeds mean distance between pairs of live points.
Each step sampler walk adds one row to stepsampler.logstat. The jump distance (fourth column) should be compared to the reference distance (fifth column).
max_nsteps (int) – Maximum number of steps the adaptive_nsteps can reach.
region_filter (bool) – if True, use region to check if a proposed point can be inside before calling likelihood.
log (file) – log file for sampler statistics, such as acceptance rate, proposal scale, number of steps, jump distance and distance between live points
starting_point_selector (func) – function which given the live point positions us, their log-likelihoods Ls and the current log-likelihood threshold Lmin, returns the index i of the selected live point to start a new chain from. Examples:
select_random_livepoint()
, which has always been the default behaviour, or an instance ofIslandPopulationRandomLivepointSelector
.
- move(ui, region, ndraw=1, plot=False)[source]
Advance the slice sampling move. See StepSampler.move().
- ultranest.stepsampler.CubeSliceSampler(*args, **kwargs)[source]
Slice sampler, randomly picking unit cube axes.
- ultranest.stepsampler.RegionSliceSampler(*args, **kwargs)[source]
Slice sampler, randomly picking region axes.
- ultranest.stepsampler.BallSliceSampler(*args, **kwargs)[source]
Hit & run sampler. Choose random directions in space.
- ultranest.stepsampler.RegionBallSliceSampler(*args, **kwargs)[source]
Hit & run sampler. Choose random directions according to region.
- class ultranest.stepsampler.SequentialDirectionGenerator[source]
Bases:
object
Sequentially proposes one parameter after the next.
Initialise.
- class ultranest.stepsampler.SequentialRegionDirectionGenerator[source]
Bases:
object
Sequentially proposes one region axis after the next.
Initialise.
- ultranest.stepsampler.RegionSequentialSliceSampler(*args, **kwargs)[source]
Slice sampler, sequentially iterating region axes.
- class ultranest.stepsampler.OrthogonalDirectionGenerator(generate_direction)[source]
Bases:
object
Orthogonalizes proposal vectors.
Samples N proposed vectors by a provided method, then orthogonalizes them with Gram-Schmidt (QR decomposition).
Initialise.
- Parameters:
generate_direction (function) – direction proposal to orthogonalize
- class ultranest.stepsampler.SpeedVariableGenerator(step_matrix, generate_direction=<function generate_region_random_direction>)[source]
Bases:
object
Propose directions with only some parameters variable.
Propose in region direction, but only include some dimensions at a time. Completely configurable.
Initialise sampler.
- Parameters:
step_matrix (matrix or list of slices) –
if a bool matrix of shape (n_steps, n_dims):
Each row of the matrix indicates which parameters should be updated.
Example:
[[True, True], [False, True], [False, True]]
This would update the first parameter in 1/3 of the steps, and the second parameter in every step. Three steps are made until the point is considered independent.
For a full update in every step, use:
np.ones((n_steps, n_dims), dtype=bool)
if a list of slices:
Each entry indicates which parameters should be updated.
Example:
[Ellipsis, slice(2,10), slice(5,10)]
This would update the first two parameters in 1/3 of the steps, parameters 2-4 in 2/3 of the steps, and parameters 5-9 in every step. Three steps are made until the point is considered independent.
generate_direction (function) – direction proposal function.
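For example, a hedged sketch of a step_matrix in which the first two (slow) parameters vary in only one of four sub-steps while the remaining parameters vary every time; the resulting object is intended to be used wherever a generate_direction proposal is expected (SpeedVariableRegionSliceSampler packages this pattern):
import numpy as np
from ultranest.stepsampler import SpeedVariableGenerator

n_steps, n_dims = 4, 6
step_matrix = np.ones((n_steps, n_dims), dtype=bool)
step_matrix[1:, :2] = False  # parameters 0 and 1 only vary in the first sub-step
generate_direction = SpeedVariableGenerator(step_matrix)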
- ultranest.stepsampler.SpeedVariableRegionSliceSampler(step_matrix, *args, **kwargs)[source]
Slice sampler, in region axes.
Updates only some dimensions at a time, completely user-definable.
- ultranest.stepsampler.ellipsoid_bracket(ui, v, ellipsoid_center, ellipsoid_inv_axes, ellipsoid_radius_square)[source]
Find line-ellipsoid intersection points.
For a line from ui in direction v through an ellipsoid centered at ellipsoid_center with axes matrix ellipsoid_inv_axes, return the lower and upper intersection parameter.
- Parameters:
ui (array) – current point (in u-space)
v (array) – direction vector
ellipsoid_center (array) – center of the ellipsoid
ellipsoid_inv_axes (array) – ellipsoid axes matrix, as computed by
WrappingEllipsoid
ellipsoid_radius_square (float) – square of the ellipsoid radius
- Returns:
left (float) – distance to go until ellipsoid is intersected (non-positive)
right (float) – distance to go until ellipsoid is intersected (non-negative)
- ultranest.stepsampler.crop_bracket_at_unit_cube(ui, v, left, right, epsilon=1e-06)[source]
Find line-cube intersection points.
A line segment from ui in direction v, parameterized by t with left <= 0 <= right, is truncated by the unit cube. Returns the new bracket and whether cropping was applied.
- Parameters:
ui (array) – current point (in u-space)
v (array) – direction vector
left (float) – bracket lower end (non-positive)
right (float) – bracket upper end (non-negative)
epsilon (float) – small number to allow for numerical effects
- Returns:
left (float) – new left
right (float) – new right
cropped_left (bool) – whether left was changed
cropped_right (bool) – whether right was changed
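A minimal sketch of the underlying geometry (illustrative only, ignoring the epsilon tolerance of the packaged function):
import numpy as np

def crop_bracket_sketch(ui, v, left, right):
    # along each axis, ui + t*v stays inside [0, 1] for t in [t_min, t_max]
    with np.errstate(divide='ignore', invalid='ignore'):
        t0 = (0.0 - ui) / v
        t1 = (1.0 - ui) / v
    t_min = np.where(v != 0, np.minimum(t0, t1), -np.inf)
    t_max = np.where(v != 0, np.maximum(t0, t1), np.inf)
    new_left = max(left, t_min.max())
    new_right = min(right, t_max.min())
    return new_left, new_right, new_left > left, new_right < right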
ultranest.store module
Storage for nested sampling points
The information stored is a table with:
- the likelihood threshold the point was drawn from
- the likelihood, prior volume coordinates and physical coordinates of the point
- class ultranest.store.FilePointStore[source]
Bases:
object
Base class for storing points in a file.
- class ultranest.store.TextPointStore(filepath, ncols)[source]
Bases:
FilePointStore
Storage in a text file.
Stores previously drawn points above some likelihood contour, so that they can be reused in another run.
The format is a tab separated text file. Through the fmt and delimiter attributes the output can be altered.
Load and append to storage at filepath.
The file should contain ncols columns (Lmin, L, and others).
- class ultranest.store.HDF5PointStore(filepath, ncols, **h5_file_args)[source]
Bases:
FilePointStore
Storage in an HDF5 file.
Stores previously drawn points above some likelihood contour, so that they can be reused in another run.
The format is an HDF5 file, which grows as needed.
Load and append to storage at filepath.
The file contains ncols columns in the 'points' dataset (Lmin, L, and others). h5_file_args are passed on to h5py.File.
- FILES_OPENED = []
ultranest.utils module
Utility functions for logging and statistics
- ultranest.utils.create_logger(module_name, log_dir=None, level=20)[source]
Set up the logging channel module_name.
Append to debug.log in log_dir (if not None). Write to stdout with output level level.
If logging handlers are already registered for this module, no new handlers are registered.
- Parameters:
module_name (str) – logger module
log_dir (str) – directory to write debug.log file into
level (logging level) – which level (and above) to log to stdout.
- Returns:
logger instance
- Return type:
logger
- ultranest.utils.make_run_dir(log_dir, run_num=None, append_run_num=True, max_run_num=10000)[source]
Generate a new numbered directory for this run to store output.
- Parameters:
log_dir (str) – base path
run_num (int) – folder to add to path, such as prefix/run1/
append_run_num (bool) – If true, set run_num to next unused number
max_run_num (int) – Maximum number of automatic run subfolders
- Returns:
folderpath – dictionary of folder paths for different purposes. Keys are “run_dir” (the path), “info”, “results”, “chains”, “plots”.
- Return type:
dict
- ultranest.utils.resample_equal(samples, weights, rstate=None)[source]
Resample the samples so that the final samples all have equal weight.
Each input sample appears in the output array either floor(weights[i] * N) or ceil(weights[i] * N) times, with floor or ceil randomly selected (weighted by proximity).
- Parameters:
samples (~numpy.ndarray) – Unequally weight samples returned by the nested sampling algorithm. Shape is (N, …), with N the number of samples.
weights (~numpy.ndarray) – Weight of each sample. Shape is (N,).
rstate (~numpy.random.RandomState) – random number generator. If not provided, numpy.random is used.
- Returns:
equal_weight_samples – Samples with equal weights, same shape as input samples.
- Return type:
~numpy.ndarray
Examples
>>> x = np.array([[1., 1.], [2., 2.], [3., 3.], [4., 4.]])
>>> w = np.array([0.6, 0.2, 0.15, 0.05])
>>> resample_equal(x, w)
array([[ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 3.,  3.]])
Notes
Implements the systematic resampling method described in this PDF. Another way to sample according to weights would be:
N = len(weights)
new_samples = samples[np.random.choice(N, size=N, p=weights)]
However, the method used in this function is less “noisy”.
- ultranest.utils.listify(*args)[source]
Concatenate args, which are (made to be) lists.
- Parameters:
*args (iterable) – Lists to concatenate.
- Returns:
Concatenation of the lists in args.
- Return type:
list
- ultranest.utils.quantile(x, q, weights=None)[source]
Compute (weighted) quantiles from an input set of samples.
- Parameters:
x (~numpy.ndarray with shape (nsamps,)) – Input samples.
q (~numpy.ndarray with shape (nquantiles,)) – The list of quantiles to compute from [0., 1.].
weights (~numpy.ndarray with shape (nsamps,), optional) – The associated weight from each sample.
- Returns:
quantiles – The weighted sample quantiles computed at q.
- Return type:
~numpy.ndarray with shape (nquantiles,)
- ultranest.utils.vol_prefactor(n)[source]
Volume constant for an n-dimensional sphere.
for n even: $$(2\pi)^{n/2} / (2 \cdot 4 \cdot \ldots \cdot n)$$
for n odd: $$2 \cdot (2\pi)^{(n-1)/2} / (1 \cdot 3 \cdot \ldots \cdot n)$$
- Parameters:
n (int) – Dimensionality
- Returns:
Volume
- Return type:
float
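Spelled out as a sketch of the product forms above (not the packaged implementation):
import math

def vol_prefactor_sketch(n):
    # unit n-ball volume, using the even/odd product forms quoted above
    if n % 2 == 0:
        f = 1.0
        for i in range(2, n + 1, 2):
            f *= 2 * math.pi / i
    else:
        f = 2.0
        for i in range(3, n + 1, 2):
            f *= 2 * math.pi / i
    return f

print(vol_prefactor_sketch(3))  # 4/3 * pi, about 4.18879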
- ultranest.utils.is_affine_transform(a, b)[source]
Check whether the points a and b are related by an affine transform.
The implementation currently returns False for rotations.
- Parameters:
a (array) – transformed points
b (array) – original points
- Returns:
is_affine
- Return type:
bool
- ultranest.utils.normalised_kendall_tau_distance(values1, values2, i=None, j=None)[source]
Normalised Kendall tau distance between two equally sized arrays.
see https://en.wikipedia.org/wiki/Kendall_tau_distance
You can optionally pass precomputed indices:
i, j = np.meshgrid(np.arange(N), np.arange(N))
- Parameters:
values1 (array of ints) – ranks
values2 (array of ints) – other ranks (same length as values1)
i (array of ints) – 2d indices selecting values1
j (array of ints) – 2d indices selecting values2
- Returns:
distance
- Return type:
float
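A compact sketch of the definition (the fraction of pair orderings on which the two rankings disagree), also showing the optional precomputed indices; how ties are handled in the packaged function may differ:
import numpy as np

def kendall_tau_distance_sketch(values1, values2, i=None, j=None):
    values1 = np.asarray(values1)
    values2 = np.asarray(values2)
    n = len(values1)
    if i is None or j is None:
        i, j = np.meshgrid(np.arange(n), np.arange(n))
    # pairs ordered differently in the two rankings are discordant
    discordant = np.sign(values1[i] - values1[j]) != np.sign(values2[i] - values2[j])
    return discordant.sum() / (n * (n - 1))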
- ultranest.utils.verify_gradient(ndim, transform, loglike, gradient, verbose=False, combination=False)[source]
Check with numerical differentiation if gradient function is plausibly correct.
Raises AssertionError if not fulfilled. All functions are vectorized.
- Parameters:
ndim (int) – dimensionality
transform (function) – transform unit cube parameters to physical parameters, vectorized
loglike (function) – loglikelihood function, vectorized
gradient (function) – computes gradient of loglike to unit cube parameters. Takes a single point and returns a single vector.
verbose (bool) – whether to show intermediate test results
combination (bool) – if true, the gradient function should return a tuple of: (transformed parameters, loglikelihood, gradient) for a given unit cube point.
- ultranest.utils.distributed_work_chunk_size(num_total_tasks, mpi_rank, mpi_size)[source]
Divide tasks uniformly.
Computes the number of tasks for process number mpi_rank, so that num_total_tasks tasks are spread uniformly among mpi_size processes.
- Parameters:
num_total_tasks (int) – total number of tasks to be split
mpi_rank (int) – process id
mpi_size (int) – total number of processes
- Returns:
chunk_size – number of tasks for process number mpi_rank
- Return type:
int
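One plausible uniform split, as a sketch (how the packaged function places the remainder tasks is an assumption here):
def chunk_size_sketch(num_total_tasks, mpi_rank, mpi_size):
    # give one extra task each to the lowest-ranked processes
    base, remainder = divmod(num_total_tasks, mpi_size)
    return base + (1 if mpi_rank < remainder else 0)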
- ultranest.utils.submasks(mask, *masks)[source]
Get indices for a submasked array.
Returns indices, so that a[indices] is equivalent to a[mask][mask1][mask2].
- Parameters:
mask (np.array(dtype=bool)) – selection of some array
*masks (list of np.array(dtype=bool)) – each further mask is a subselection
- Returns:
indices – indices which select the subselection in the original array
- Return type:
np.array(dtype=np.int_)
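For example, a usage sketch of the documented equivalence:
import numpy as np
from ultranest.utils import submasks

a = np.arange(10)
mask = a % 2 == 0      # select the even entries
mask1 = a[mask] > 2    # then select those above 2
indices = submasks(mask, mask1)
assert np.all(a[indices] == a[mask][mask1])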
ultranest.viz module
Live point visualisations
Gives a live impression of current exploration. This is powerful because the user can abort partial runs if the fit converges to unreasonable values.
- ultranest.viz.round_parameterlimits(plo, phi, paramlimitguess=None)[source]
Guess the current parameter range.
- Parameters:
plo (array of floats) – for each parameter, current minimum value
phi (array of floats) – for each parameter, current maximum value
paramlimitguess (array of float tuples) – for each parameter, guess of parameter range if available
- Returns:
plo_rounded (array of floats) – for each parameter, rounded minimum value
phi_rounded (array of floats) – for each parameter, rounded maximum value
formats (array of strings) – for each parameter, string format for representing it.
- ultranest.viz.nicelogger(points, info, region, transformLayer, region_fresh=False)[source]
Log current live points and integration progress to stdout.
- Parameters:
points (dict with keys "u", "p", "logl") – live points (u: cube coordinates, p: transformed coordinates, logl: loglikelihood values)
info (dict) –
integration information. Keys are:
paramlims (optional): parameter ranges
logvol: expected volume at this iteration
region (MLFriends) – Current region.
transformLayer (ScaleLayer or AffineLayer or MaxPrincipleGapAffineLayer) – Current transformLayer (for clustering information).
region_fresh (bool) – Whether the region was just updated.
Module contents
UltraNest performs nested sampling to calculate the Bayesian evidence and posterior samples.