Define your background model and source model as usual in sherpa. Freeze the parameters you do not want to investigate. Make sure you set the parameter minimum and maximum values to appropriate (a priori reasonable) values.
# you can use automatic priors (uniform priors everywhere, within the parameter ranges)
# for this, all scale parameters (those with 'ampl' or 'norm' in their name)
# are converted to log-parameters
parameters = auto_reparametrize()
prior = bxa.create_prior_function(parameters)
Alternatively (advanced), define priors manually:
# get parameters
parameters = [param1, param2, param3]
# or just get all that are not linked or frozen
parameters = [p for p in get_model().pars if not p.frozen and p.link is None]
priors = []
priors += [bxa.create_jeffreys_prior_for(param1)]
priors += [bxa.create_uniform_prior_for(param2)]
priors += [lambda x: x**2] # custom prior transformation (rarely desired)
priorfunction = bxa.create_prior_function(priors = priors)
Use for scale variables (order of magnitude). The Jeffreys prior gives equal weight to each order of magnitude between the minimum and maximum value, i.e. it is flat in logarithmic scale.
Parameters:  parameter – Parameter to create a prior for. E.g. xspowerlaw.mypowerlaw.norm 
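For intuition, a log-uniform transform of this kind can be sketched in plain Python. The function below (the name is illustrative, not BXA's internal API) maps a unit-interval coordinate u to a value distributed flat in log10 between the minimum and maximum:

```python
import numpy as np

def jeffreys_transform(u, lo, hi):
    """Map u in [0, 1] to [lo, hi] with equal weight per decade (flat in log10)."""
    return 10 ** (np.log10(lo) + u * (np.log10(hi) - np.log10(lo)))

# half of the unit interval covers half of the decades, so u=0.5
# lands on the geometric mean of the bounds:
mid = jeffreys_transform(0.5, 1e-8, 1e-2)  # ~1e-5
```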

It is usually easier to create an ancillary parameter, and link the actual parameter, like so:
from sherpa.models.parameter import Parameter
lognorm = Parameter(modelname='mycomponent', name='lognorm', val=-5, min=-8, max=0)
powerlaw.norm = 10**lognorm
Use for location variables (position). The uniform prior gives equal weight in non-logarithmic (linear) scale.
Parameters:  parameter – Parameter to create a prior for. E.g. xspowerlaw.mypowerlaw.PhoIndex 

Combine the prior transformations into a single function for pymultinest.
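Conceptually, the combined function receives the unit-hypercube coordinates from MultiNest and overwrites them in place with physical parameter values, applying one transformation per parameter. A minimal sketch (make_prior_function is an illustrative name; see BXA's source for the actual implementation):

```python
def make_prior_function(transforms):
    """Combine per-parameter transforms into one pymultinest-style prior.

    pymultinest calls prior(cube, ndim, nparams) with cube holding
    unit-cube coordinates; the function overwrites them with physical values.
    """
    def prior(cube, ndim, nparams):
        for i in range(ndim):
            cube[i] = transforms[i](cube[i])
    return prior

# e.g. a uniform prior on [1, 3] and a log-uniform prior on [1e-8, 1e-2]:
transforms = [
    lambda u: 1 + u * (3 - 1),      # uniform
    lambda u: 10 ** (-8 + u * 6),   # log-uniform
]
prior = make_prior_function(transforms)
cube = [0.5, 0.5]
prior(cube, 2, 2)
# cube now holds the physical values [2.0, 1e-05]
```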
You need to specify a prefix, called outputfiles_basename, where the files are stored.
# see the pymultinest documentation for all options
bxa.nested_run(prior = priorfunction, parameters = parameters,
resume = True, verbose = True,
outputfiles_basename = "testbxa_")
Run the Bayesian analysis with specified parameters+transformations.
If prior is None, uniform priors are used on the passed parameters. If parameters is also None, all thawed parameters are used.
The remaining arguments are passed to MultiNest (see the PyMultiNest and MultiNest documentation); n_live_points = 400 is often enough.
For quick results, use sampling_efficiency = 0.8, n_live_points = 50, evidence_tolerance = 5. Final results, however, must be estimated with sampling_efficiency = 0.3; otherwise the evidence is not reliable.
Plot and analyse the results. PyMultiNest comes in handy at this point to produce a number of plots and summaries.
On the shell, run:
$ multinest_marginals.py "testbxa_"
The multinest_marginals.py utility is installed with PyMultiNest, for instance into ~/.local/bin/.
pymultinest.Analyzer.get_equal_weighted_posterior() provides access to the posterior samples (similar to a Markov Chain). Use these to propagate errors:
This preserves the structure of the uncertainty (multiple modes, degeneracies, etc.)
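For example, to propagate uncertainties to a derived quantity, evaluate it for every posterior row and summarize with percentiles. The chain below is synthetic, standing in for the array returned by analyzer.get_equal_weighted_posterior():

```python
import numpy as np

# synthetic stand-in for the posterior: columns (PhoIndex, log10(norm))
rng = np.random.default_rng(42)
chain = np.column_stack([rng.normal(1.9, 0.1, 1000),
                         rng.normal(-4.0, 0.2, 1000)])

# derived quantity, computed per sample -- here simply the linear norm
norm = 10 ** chain[:, 1]

# 1-sigma-equivalent interval from percentiles; unlike a symmetric error
# bar, this keeps skew and multiple modes intact
lo, med, hi = np.percentile(norm, [15.87, 50, 84.13])
```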
You can also access the output directly and compute other quantities:
import pymultinest
analyzer = pymultinest.analyse.Analyzer(n_params = len(parameters),
outputfiles_basename = 'testbxa_')
chain = analyzer.get_equal_weighted_posterior()
print(chain)
BXA also allows you to compute the fluxes corresponding to the parameter estimation, giving the correct probability distribution on the flux. With distance information (fixed value or distribution), you can later infer the correct luminosity distribution.
import numpy
dist = bxa.get_distribution_with_fluxes(lo=2, hi=10,
    parameters = parameters, outputfiles_basename = 'testbxa_')
numpy.savetxt(out + prefix + "dist.txt", dist)
Returns an array of equally weighted posterior samples (parameter values) with two additional columns: the photon fluxes and the energy fluxes.
The values will be correctly distributed according to the analysis run before.
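For example, converting the energy-flux samples to luminosities only requires multiplying each sample by 4 pi d^2; if the distance is uncertain, draw it per sample as well. All numbers below are made up for illustration:

```python
import numpy as np

# hypothetical energy-flux posterior samples in erg/s/cm^2, standing in
# for the last column of the array from get_distribution_with_fluxes
rng = np.random.default_rng(1)
flux = 10 ** rng.normal(-12.0, 0.1, 2000)

# distance: either a fixed value or, as here, samples from its own
# uncertainty distribution (in cm; roughly 20 Mpc)
dist_cm = rng.normal(6.2e25, 0.3e25, 2000)

# luminosity samples: L = 4 * pi * d^2 * F, combining both uncertainties
lum = 4 * np.pi * dist_cm**2 * flux
summary = np.percentile(lum, [15.87, 50, 84.13])
```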
examples/model_compare.py shows an example of model selection. Keep in mind what model prior you would like to use.
Example output:
jbuchner@ds42 $ python model_compare.py absorbed line simplest
Model comparison
****************
model simplest : log10(Z) = 1632.7 XXX ruled out
model absorbed : log10(Z) = 7.5 XXX ruled out
model line : log10(Z) = 0.0 < GOOD
The last, most likely model was used as normalization.
Uniform model priors are assumed, with a cut of log10(30) to rule out models.
jbuchner@ds42 $
Here, the second-best model, “absorbed”, is \(10^{7.5}\) times less likely than the model “line”. As this exceeds our threshold (by a lot!), we can claim the detection of an iron line!
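The comparison above boils down to differencing log-evidences. With hypothetical ln Z values (chosen here only to resemble the example output), a table like the one above can be reproduced as:

```python
import numpy as np

# hypothetical ln-evidences, as MultiNest would report for each model
lnZ = {'simplest': -7560.1, 'absorbed': -3818.6, 'line': -3801.3}

best = max(lnZ, key=lnZ.get)
threshold = np.log10(30)  # rule out models more than log10(30) below the best

for name, value in lnZ.items():
    dlog10 = (lnZ[best] - value) / np.log(10)  # log10 Bayes factor vs. best
    verdict = 'XXX ruled out' if dlog10 > threshold else '< GOOD'
    print('model %-9s: log10(Z) = %8.1f %s' % (name, -dlog10, verdict))
```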
We want to evaluate whether a planned experiment can detect features or constrain parameters, i.e. determine the discriminatory power of future configurations/surveys/missions.
For this, simulate a few spectra using the appropriate response.
Is the model the right one? Is there more in the data? These questions cannot be answered in a statistical way, but what we can do is (1) check whether the model reproduces the data well and (2) compare it against alternative models.
For the first point, Quantile-Quantile (QQ) plots provide an unbinned, less noisy alternative to residual plots.
QQ plot example (left), with the corresponding spectrum for comparison (right).
In these plots, for each energy, the number of counts observed below that energy is plotted on one axis, while the number predicted by the model is on the other. If model and data agreed perfectly, this would be a straight line. Deviations indicate possible misfits.
This example is almost a perfect fit! You can see an offset growing at 6-7 keV, which remains at higher energies. This indicates that the data have more counts than the model there.
As the growth has an S-shape, it is probably a Gaussian line (compare the shape of its cumulative distribution function).
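The construction just described can be sketched from binned counts alone: cumulatively sum the observed and predicted counts and plot one against the other. The spectrum below is synthetic, for illustration only:

```python
import numpy as np

# hypothetical binned spectrum: model-predicted counts per energy bin,
# and a Poisson realization as the "observed" data
rng = np.random.default_rng(7)
model_counts = 100 * np.exp(-np.linspace(0, 3, 200))
obs_counts = rng.poisson(model_counts)

# cumulative counts below each energy -- one point per bin on the QQ plot
qq_model = np.cumsum(model_counts)
qq_obs = np.cumsum(obs_counts)

# a perfect fit lies on the diagonal qq_obs == qq_model; a systematic,
# persistent departure indicates a missing or excess model component
max_deviation = np.max(np.abs(qq_obs - qq_model))
```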
Refer to the appendix of the accompanying paper for more examples.
The qq function in the qq module allows you to create such plots easily by exporting the cumulative functions into a file.
Export QQ plot into a file for plotting.
Example:
qq.qq_export('bg', outfile='my_bg_qq', elow=0.2, ehigh=10)
Refer to the accompanying paper, which gives an introduction and a detailed discussion of the methodology.