The benchmark runs the implemented algorithms against a number of test problems and, after evaluating 10 runs with different seeds, ranks them by:
Based on the benchmark, one may choose the right algorithm for a problem according to its characteristics, such as:
The evaluation of various algorithms can be found at https://johannesbuchner.github.io/UltraNest/testsuite/
All algorithms implemented in the Python package are available. To add your own algorithm, add an if-clause in testsuite/algorithms/nest.py.
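The contents of testsuite/algorithms/nest.py are not reproduced here; as a hypothetical sketch, the kind of if-clause dispatch you would extend might look like this (all names below are illustrative, not the actual file's API):

```python
# Hypothetical sketch of an if-clause dispatch as one might extend in
# testsuite/algorithms/nest.py -- names are illustrative, not the real file.

class MySampler:
    """Stand-in for a user-supplied nested sampling implementation."""
    def __init__(self, loglikelihood, prior_transform, ndim):
        self.loglikelihood = loglikelihood
        self.prior_transform = prior_transform
        self.ndim = ndim

def create_sampler(algorithm_name, loglikelihood, prior_transform, ndim):
    # each supported algorithm gets its own if-clause;
    # add yours alongside the existing ones
    if algorithm_name == 'myalgorithm':
        return MySampler(loglikelihood, prior_transform, ndim)
    raise ValueError('unknown algorithm: %r' % algorithm_name)
```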
Additionally, the following algorithms are evaluated in the benchmark:
To run all the algorithms against all the problems, run:
$ cd testsuite
$ mkdir -p output && cd output
$ PYTHONPATH=../../ python ../testbase.py
This ensures you are using the current source of the nested sampling Python package.
Parallel execution using multiple threads can be enabled by setting the environment variable PARALLEL=1.
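How such an environment flag is typically interpreted can be sketched as follows; this is an illustration of the convention described above (enabled when the variable is set to 1), not the test suite's actual code:

```python
import os

def parallel_enabled(environ=os.environ):
    # Sketch: the PARALLEL flag is considered enabled only when the
    # environment variable is set to the exact string "1".
    return environ.get('PARALLEL') == '1'
```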
Create an algorithm exclusion file named skip_algorithms, containing one regular expression per line matching the names of algorithms to skip. Example:
cuba.*
multinest.*
svm
Create a problem exclusion file named skip_problems, containing one regular expression per line matching the names of problems to skip. Example:
gauss
tilted
loggamma[^_]
.*10.*
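A sketch of how such exclusion patterns could filter names is shown below; this is illustrative only, and assumes prefix matching via re.match (whether the suite uses match, search, or fullmatch is not specified here):

```python
import re

def load_skip_patterns(lines):
    # one regular expression per line; blank lines are ignored
    return [re.compile(line.strip()) for line in lines if line.strip()]

def is_skipped(name, patterns):
    # a name is skipped when any pattern matches at its start
    # (re.match semantics -- an assumption, not the suite's documented rule)
    return any(p.match(name) for p in patterns)
```

For example, with the skip_problems patterns above, loggamma[^_] skips names like loggamma2 but not loggamma_something, and .*10.* skips any name containing "10".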
As before, run the test suite, but with partialtestbase.py:
$ cd testsuite
$ mkdir -p output && cd output
$ PYTHONPATH=../../ python ../partialtestbase.py