This document will help you get comfortable using the WDmodel package.
This is the TL;DR version to get you up and running.
1. Get the data:
Instructions will be available here when the paper is accepted. In the meantime, there's a single test object in the spectroscopy directory. If you want more, write your own HST proposal! :-P
2. Run a fit single threaded:
fit_WDmodel --specfile data/spectroscopy/yourfavorite.flm
This option is single threaded and slow, but useful for testing or quick exploratory analysis.
A more reasonable way to run things fast is to use MPI.
3. Run a fit as an MPI process:
mpirun -np 8 fit_WDmodel --mpi --specfile=file.flm [--ignorephot]
--mpi MUST be specified in the options to fit_WDmodel, and you must start the process with mpirun.
Useful runtime options
There are a large number of command-line options to the fitter, and most of its aspects can be configured. Some options make sense in concert with others; here's a short summary of use cases.
The spectrum can be trimmed prior to fitting with the trim option. You can also blotch over gaps and cosmic rays if your reduction was sloppy and you just need a quick fit, but it's better to do this cleanup manually.
If there is no photometric data for the object, the fitter will barf unless
--ignorephot is specified explicitly, so you know that the
parameters are only constrained by the spectroscopy.
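The explicit --ignorephot requirement is a deliberate fail-early design: the fitter stops rather than silently dropping a constraint. A minimal sketch of that pattern (the function name and internals here are hypothetical, not WDmodel's actual code):

```python
# Hypothetical sketch of the fail-early behaviour described above.
def load_photometry(objname, ignorephot=False):
    phot = None  # stand-in for a lookup that found no photometry
    if phot is None and not ignorephot:
        # Refuse to proceed unless the user explicitly opted out.
        raise RuntimeError(
            "No photometry for %s; pass --ignorephot to fit "
            "the spectrum alone" % objname)
    return phot
```

The point of the explicit flag is that a spectroscopy-only fit is a different (weaker) constraint, and the user should have to acknowledge that.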
The fitter runs an MCMC to explore the posterior distribution of the model
parameters given the data. If you are running with the above two options,
chances are you are at the telescope, getting spectra, and doing quick look
reductions, and you just want a rough idea of temperature and surface gravity
to decide if you should get more signal, and eventually get HST photometry. The
MCMC is overkill for this purpose, so you can skip it with --skipmcmc, in which case
you'll get results using minuit. They'll be biased, and the errors will
probably be too small, but they give you a ballpark estimate.
If you do want to use the MCMC anyway, you might like it to be faster. You can
choose to use only every nth point in computing the log likelihood with
--everyn; this is only intended for testing purposes, and should probably
not be used for any final analysis. Note that the uncertainties increase as
you’d expect with fewer points.
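The growth of the uncertainties is the usual 1/sqrt(N) behaviour of an estimate as the number of points shrinks. A stdlib-only toy illustration (this is not WDmodel's likelihood, just the scaling):

```python
import math
import random
import statistics

# Toy data: 1000 noisy measurements of a single true value.
random.seed(42)
data = [10.0 + random.gauss(0.0, 1.0) for _ in range(1000)]

def stderr(sample):
    """Standard error of the mean: stdev / sqrt(N)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

full = stderr(data)         # uncertainty using every point
everyn = stderr(data[::5])  # keep every 5th point, like --everyn 5
# everyn is roughly sqrt(5) ~ 2.2 times larger than full
```

So an --everyn 5 fit trades roughly a factor of two in parameter uncertainty for a faster likelihood evaluation.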
Setting the initial state
The fitter first runs minuit to refine the supplied initial guesses for the
parameters. Every now and then, the guess prior to running minuit is so far off
that you get rubbish out of minuit. This can be fixed by explicitly supplying a
better initial guess. Of course, if you do that, you might wonder why you'd even
bother with minuit, and may wish to skip it entirely. This can be done with the
--skipminuit option. If --skipminuit is used, a dl guess MUST be specified.
All of the parameters can be supplied via a JSON parameter file passed with the
--param_file option, or using individual parameter
options. An example parameter file is available in the module directory.
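A parameter file is just a JSON dictionary. The exact schema (parameter names, keys such as value/fixed/bounds) is defined by the WDmodel package and shown in the example file it ships, so treat the structure below as an illustrative guess rather than the documented format:

```python
import json

# Hypothetical parameter-file contents; the real schema is defined by the
# example file in the WDmodel module directory.
params = {
    "teff": {"value": 40000.0, "fixed": False, "bounds": [16000.0, 90000.0]},
    "logg": {"value": 7.9, "fixed": False, "bounds": [7.0, 9.5]},
}

with open("params.json", "w") as f:
    json.dump(params, f, indent=2)
```

You would then point the fitter at the file with --param_file params.json.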
Configuring the sampler
You can change the sampler type (--samptype), number of chain temperatures
(--ntemps), number of walkers (--nwalkers), burn-in steps (--nburnin),
production steps (--nprod), and proposal scale for the MCMC (--ascale).
You can also thin the chain (--thin) and discard some fraction of samples
from the start (--discard). The default sampler is the ensemble sampler
from the emcee package. For a more conservative approach, we recommend
the ptsampler with nprod=5000 (or more).
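Thinning and discarding are simple post-processing operations on the stored chain. A stdlib sketch of their effect, assuming --discard takes a fraction of the initial samples as described above:

```python
# Toy chain of 100 samples; each element stands in for one posterior sample.
chain = list(range(100))

discard_frac = 0.25  # like --discard: drop the first 25% of samples
thin = 5             # like --thin: keep every 5th remaining sample

start = int(len(chain) * discard_frac)
kept = chain[start::thin]
print(len(kept))  # prints 15
```

Discarding removes burn-in transients from the start of the chain; thinning reduces autocorrelation between the surviving samples.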
Resuming the fit
If the sampling needs to be interrupted, or crashes for whatever reason, the
state is saved every 100 steps, and the sampling can be restarted with
--resume. Note that you must have run at least the burn in and 100 steps for
it to be possible to resume, and the state of the data, parameters, or chain
configuration should not be changed externally (if they need to be changed, use
--redo and rerun the fit). You can increase the length of the chain, and
change the visualization options when you --resume, but the state of
everything else is restored.
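The save-every-100-steps checkpoint pattern can be sketched with the standard library. The actual on-disk state WDmodel writes is internal to the package, so this is only an analogy:

```python
import pickle

STATE_FILE = "chain_state.pkl"

def run(total_steps, state=None):
    """Stand-in for the sampler loop: persists state every 100 steps."""
    step = state["step"] if state else 0
    while step < total_steps:
        step += 1  # one fake sampler step
        if step % 100 == 0:
            with open(STATE_FILE, "wb") as f:
                pickle.dump({"step": step}, f)
    return step

run(250)  # pretend this run was interrupted after step 250
with open(STATE_FILE, "rb") as f:
    saved = pickle.load(f)  # last checkpoint was written at step 200
resumed = run(400, state=saved)  # like --resume: continue from the checkpoint
```

This also shows why at least 100 post-burn-in steps are needed before resuming is possible: until the first checkpoint is written, there is no saved state to restore.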
You can get a summary of all available options with --help.
There are a few useful routines included in the
WDmodel package. Using
WDmodel itself will do the same thing as
fit_WDmodel. If you need to
look at results from a large number of fits,
print_WDmodel_residual_table will print out tables of results and
make_WDmodel_slurm_batch_scripts provides an example script to
generate batch scripts for the SLURM system used on Harvard’s Odyssey cluster.
Adapt this for use with other job queue systems or clusters.