Pysep Usage Examples


very_basic

This example shows the simplest way to run a DSEP model with pysep.

The function get_basic_stellar_model returns a pysep stellarModel object. This is the most basic construct in pysep. stellarModel objects hide the creation of unit files, the linking of inputs and outputs, and the reading of output data behind just a few methods.

The evolve method will make sure everything is linked and will then call the dsepX executable. Once dsep has run, it will clean up the unit files.

The stash method will save all input data used to generate the model to one json file. The default file name is model.dsep.

If data is passed as True to model.stash, then all the output data is also stored in the json file.

Note that, regardless of whether you call stash, the output data is still written to the individual files dsep always produces. stash simply provides a single file which can easily be read back in by pysep later.

The pysep json format also allows models to be sent (semi)transparently from one computer to another without having to update any paths in input namelist files, as pysep keeps track of where input files should be on a per-computer basis.
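
For example, once a model has been stashed, the resulting file can be loaded back on any machine in a single line (load_model, and the temporary directory it returns, are covered in the Output Examples below):

>>> # minimal sketch: load a previously stashed model on another computer
>>> from pysep.api import load_model
>>> model, tdir = load_model("model.dsep")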

>>> from pysep.api.starting import get_basic_stellar_model
>>> model = get_basic_stellar_model("output/simpleSolar/very_basic")
>>> model.evolve()
>>> model.stash(data=True)

change_endage

This is an example of how you can change the input parameters of a stellarModel.

Any parameter in the control or physics namelist file can be changed before evolve is called.

The way pysep handles namelist files is that there are a few “default” namelist files included as package data (solar, lowmass, highmass, rgb). These (or any other namelist file you have) can be read in as “cnml” (control) or “pnml” (physics) objects. Each of these objects is what the stellarModel actually stores, and each can be modified. See the docs for how cnml and pnml store data internally (it is essentially a set of linked dictionaries). When evolve() is called, the stellarModel uses f90nml to write both the pnml and cnml objects out to disk and then links them to the appropriate unit files.
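
As a quick sketch of that structure (using the solar default and the access pattern described in the rest of this example):

>>> from pysep.io.nml.control.defaults import solar
>>> # 1st level variables are accessed like dictionary keys on the cnml object
>>> print(solar['lpulse'])
>>> # 2nd level variables are indexed first by run / kindcard, then by name
>>> print(solar[0]['rsclz'])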

Below I have shown an example of how to change the endage of a model. The principle is basically the same for any other parameter.

>>> model = get_basic_stellar_model("output/simpleSolar/change_endage")
>>> model.control[1]['endage'] = 4.56e9
>>> model.evolve()
>>> model.stash(data=True)

Note how I first access the control member of model. That is the cnml object stored by the stellarModel. Then I access the 2nd element of control. cnml objects have 1st level and 2nd level variables. 1st level variables are things like ltrack, lpulse, and opecalex: variables that are not associated with a run / kindcard. 2nd level variables are things like rsclz, endage, and nmodls: things which are associated with a run / kindcard.

All 1st level variables are accessed as you would access a dictionary on the cnml object. For example, if you wanted to set lpulse to True:

>>> model.control['lpulse'] = True

Whereas all 2nd level variables are accessed first through a list, where each element is one of the kindcards / runs dsep will perform. For example, to change rsclz in the first, presumably rescaling, run:

>>> model.control[0]['rsclz'] = 0.03

So here we are changing the endage of the second run, which is presumably an evolutionary run. We could of course check the run type:

>>> print(model.control[1]['kindrn'])

Complex Examples

use_custom_cnml

This example shows that you are not limited to the default input files included as package data in pysep.

Here we load a custom control namelist file from the local directory using the load (aliased to load_cnml) function. Then, when calling get_basic_stellar_model, we override the default cnml object with the custom one.

get_basic_stellar_model allows 4 of the inputs to be overridden: the control namelist, the physics namelist, the high temperature opacity table, and the pre-main sequence model. However, once you start mixing and matching inputs, it probably makes sense to build your own stellarModel object from scratch.

>>> from pysep.io.nml.control import load as load_cnml
>>> from pysep.api.starting import get_basic_stellar_model
>>> from pysep.dsep import stellarModel as sm
>>> from pysep.io.nml.control.defaults import solar
>>> from pysep.io.nml.physics.defaults import phys1
>>> from pysep.prems.defaults import m100_GS98
>>> from pysep.opac.opal.defaults import GS98hz
>>> from pysep.dm.filetypes import premsf_file
>>> customCnml = load_cnml("customInputFiles/customControl.nml")
>>> model = get_basic_stellar_model("output/customNamelist/use_custom_cnml", control=customCnml)
>>> model.evolve()
>>> model.stash(data=True)

If you also wanted to use a physics namelist file stored on disk

>>> from pysep.io.nml.physics import load as load_pnml
>>> customPnml = load_pnml("customPhysics.nml")
>>> model = get_basic_stellar_model(".", control=customCnml, physics=customPnml)

Sometimes you want more control over the stellar model object. There are basically two routes to go: load in all of your own inputs using pysep’s datamodel (covered in a later example), or use the input files packaged along with pysep (or some mix of the two). Here we show an example where your control namelist file is loaded from disk but the rest of the files are taken from pysep’s packaged defaults.

The packaged default control and physics namelists pysep contains are quite limited. However, pysep contains a very extensive library of high temperature opacity tables and pre-main sequence models. A full list of all the packaged defaults can be found in the documentation.

Note that phys1, m100_GS98, and GS98hz were imported at the top of this file. This is the general way to bring in packaged defaults from pysep: they are importable as objects from an appropriately placed “defaults” directory.

>>> control = load_cnml("customInputFiles/customControl.nml")
>>> model = sm("output/customNamelist/stellarModel_from_scratch", control, phys1, GS98hz, m100_GS98)
>>> model.evolve()
>>> model.stash(data=True)

If you wanted to evolve a low mass star with only the packaged defaults, you could consider the following example:

>>> from pysep.io.nml.control.defaults import controlLow
>>> from pysep.io.nml.physics.defaults import phys1Low
>>> from pysep.prems.defaults import m030_GS98
>>> from pysep.opac.opal.defaults import GS98hz
>>> from pysep.dsep import stellarModel as sm
>>> model = sm(".", controlLow, phys1Low, GS98hz, m030_GS98)
>>> model.evolve()
>>> model.stash(data=True)

The above examples illustrate how you can use namelist files which are stored on your computer. However, namelist files are a special case within pysep due to how often they may need to be modified at runtime. All of the other input files are part of what is known as the pysep datamodel.

The pysep datamodel is a series of classes which all inherit from pysep.dm.generic. Essentially, they are just pointers to files on disk rather than actual containers of data. Some of them can read in the data, but even for those the idea is that you may want to read it in to visualize it, not to modify it.

In the example below we read in a pre-main sequence model, m100.GS98, and use it in the model initialization instead of the m100_GS98 object which we imported.

You could do a very similar thing with opacity files, conductive opacity files, and atmospheric boundary condition files.

>>> premsf = premsf_file("customInputFiles/m100.GS98")
>>> model = sm("output/customNamelist/custom_non_namelist_files", solar, phys1, GS98hz, premsf)
>>> # set this just so the models don't spend too much time evolving
>>> model.control[1]['endage'] = 4.57e9
>>> model.evolve()
>>> model.stash(data=True)

Output Examples

pysep was originally conceived to allow smoother transitions from model evolution to model analysis. To this end it includes a suite of parsers for the output files of dsep.

Moreover, pysep allows models which have been stashed with data to be easily loaded and analyzed from any computer with only one line, and with only one file sent from computer to computer.

In the example below we load a model which has been previously evolved (this is a copy of the evolved model from the use_custom_cnml test) and then plot a log(g) vs. Teff diagram. We do this with both the data stored in the iso file and the data stored in the track file.

>>> from pysep.api import load_model
>>> from pysep.api.starting import get_basic_stellar_model
>>> import matplotlib.pyplot as plt
>>> model, tdir = load_model("customInputFiles/model.dsep")
>>> fig, ax = plt.subplots(1, 1, figsize=(10, 7))
>>> ax.plot(10**model['iso']['Log_T'], model['iso']['Log_g'])
>>> ax.invert_yaxis()
>>> ax.invert_xaxis()
>>> fig.savefig("Figures/readOutput/read_model_from_disk/ExampleLogGTeff_ISO.pdf", bbox_inches="tight")
>>> fig, ax = plt.subplots(1, 1, figsize=(10, 7))
>>> ax.plot(10**model['track']['log_Teff'], model['track']['log_g'])
>>> ax.invert_yaxis()
>>> ax.invert_xaxis()
>>> fig.savefig("Figures/readOutput/read_model_from_disk/ExampleLogGTeff_TRACK.pdf", bbox_inches="tight")

You will note that when we read in the model with load_model we get back both a model and tdir. The second returned value is a temporary directory where files are loaded. This allows transparent use of one json file across multiple computers without worrying about file/directory structure.

You can access the iso file, the track file, the short file and — if you have a binary model output file — the binary model output file. These are accessed with the keys iso, track, short, and binmod (trk, log, and mod can also be used in place of track, short, and binmod). Additionally, these can be accessed functionally with model.iso(), model.trk(), model.log(), and model.mod(). Note that the functional access just calls __getitem__, so it is equivalent to accessing with the key.
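
To make the aliases concrete, the following lines all retrieve the same track DataFrame:

>>> track = model['track']   # canonical key
>>> track = model['trk']     # alias key
>>> track = model.trk()      # functional access; just calls __getitem__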

Both iso and track are returned as pandas DataFrames. They have slightly different naming conventions for their columns (a short usage sketch follows the two column lists). The columns of iso are…

  • Age

  • Log_T

  • Log_g

  • Log_L

  • Log_R

  • Y_core

  • Z_core

  • (Z/X)_surf

  • L_H

  • L_He

  • M_He_core

  • M_CO_core

Whereas the columns of the track dataframe are

  • Model_#

  • shells

  • AGE

  • log_L

  • log_R

  • log_g

  • log_Teff

  • Mconv_core

  • Mconv_env

  • Rconv_env

  • M_He_core

  • Xenv

  • Zenv

  • L_ppI

  • L_ppII

  • L_ppIII

  • L_CNO

  • L_triple-alpha

  • L_He-C

  • L_gravity

  • L_neutrinos_old

  • L_%_Grav_eng

  • L_Itot

  • C_log_T

  • C_log_RHO

  • C_log_P

  • C_BETA

  • C_ETA

  • C_X

  • C_Z

  • C_

  • C_shell_midpoint

  • C_H_shell_mass

  • C_T_at_base_of_cz

  • C_rho_at_base_of_cz

  • CA_He3

  • CA_C12

  • CA_C13

  • CA_N14

  • CA_N15

  • CA_O16

  • CA_O17

  • CA_O18

  • SA_He3

  • SA_C12

  • SA_C13

  • SA_N14

  • SA_N15

  • SA_O16

  • SA_O17

  • SA_O18

  • N_pp

  • N_pep

  • N_hep

  • N_Be7

  • N_B8

  • N_N13

  • N_O15

  • N_F17

  • Cl37_flux

  • Ga71_flux
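
Since both outputs are plain pandas DataFrames, the columns above can be used directly. A minimal sketch using two columns from the track list:

>>> # inspect how luminosity evolves with age
>>> track = model['track']
>>> print(track[['AGE', 'log_L']].head())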

You can read in a model after it's been written to disk; however, you can also just read its output directly after it's been evolved.

Here we evolve a basic model just as we do in the other examples in this suite, but then, instead of stashing the output, we use it directly.

>>> model = get_basic_stellar_model("output/readOutput/model_output_after_evolution")
>>> model.control[1]['endage'] = 4.57e9 # to save time
>>> model.evolve()
>>> fig, ax = plt.subplots(1, 1, figsize=(10, 7))
>>> ax.plot(10**model['iso']['Log_T'], model['iso']['Log_g'])
>>> ax.invert_yaxis()
>>> ax.invert_xaxis()
>>> fig.savefig("Figures/readOutput/model_output_after_evolution/ExampleLogGTeff_ISO.pdf", bbox_inches="tight")
>>> fig, ax = plt.subplots(1, 1, figsize=(10, 7))
>>> ax.plot(10**model['track']['log_Teff'], model['track']['log_g'])
>>> ax.invert_yaxis()
>>> ax.invert_xaxis()
>>> fig.savefig("Figures/readOutput/model_output_after_evolution/ExampleLogGTeff_TRACK.pdf", bbox_inches="tight")

Parallel Examples

pysep has a built-in job distributor that will run as many instances of dsep at once as you have threads on your computer. It’s super easy to use.

In the following example we evolve a set of models with a variety of initial metallicities. Note, this is not a scientifically appropriate way to vary metallicity, as it does not properly scale for known BBN abundances, but it illustrates how pysep works.

>>> import os
>>> import shutil
>>> import numpy as np
>>> from pysep.io.nml.control.defaults import solar
>>> from pysep.api.starting import get_basic_stellar_model
>>> from pysep.api.parallel import pStellarModel
>>> Zrange = np.linspace(0.01, 0.03, 8)
>>> X = 0.75
>>> basePath = "output/parallelEvolution/parallel_evolution"
>>> with pStellarModel(name="ZrangeTest") as psm:
>>>     for Z in Zrange:
>>>         newOutput = os.path.join(basePath, f"Model_Z={Z}")
>>>         if os.path.exists(newOutput):
>>>             shutil.rmtree(newOutput)
>>>         os.mkdir(newOutput)
>>>         newControl = solar.copy()
>>>         model = get_basic_stellar_model(newOutput, control=newControl)
>>>         model.control[1]['endage'] = 4.57e9
>>>         model.update_composition(Z=Z, X=X)
>>>         psm.append(model)
>>>     psm.pEvolve(autoStash=True)

This is a very basic example, but it shows how, as long as you can form a stellarModel object, you can evolve it in parallel. Just form all your objects and, as you form them, append them to the psm object. At the end, call pEvolve once.

Note how the user is still responsible for setting up each output directory for each model. Pysep will work if you don’t, but you may run into name collisions between model.dsep files (unless you set those names up manually). All the actual input and output files of dsep will have unique names though (prepended with pn, where n is the job number).

autoStash means that models will stash themselves as they finish. If you don’t want them to do that, you can call pStash() to stash them all at the end. autoStash will assume you want data=True on stash; pStash will not make that assumption.
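
A minimal sketch of the manual route (assuming pStash accepts the same data flag that stash does):

>>> psm.pEvolve()
>>> # stash every finished model at the end, explicitly asking for the data
>>> psm.pStash(data=True)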

As an aside, you also don’t have to use the job distributor with the context manager as I do in these examples. If you have a simple python list of models, say modelList, then the snippet below will also work. The context manager is just a little cleaner in my eyes.

>>> psm = pStellarModel(modelList)
>>> psm.pEvolve(autoStash=True)

See parallelEvolution for a more detailed description of the parallel part; here we will talk about enrolling models in a pysep database.

pysep includes a sqlite3 database model for keeping track of large ensembles of models. This is super helpful when you are evolving more than 10s of models.

You can either enroll models manually (not covered by an example, but see the docs) or use pStellarModel to autoEnroll models.

Note that for very large sets of models (100s of thousands) this enrolling process can be non-trivial, taking minutes. If it starts taking 10s of minutes, something is probably wrong.

Also note that the reason I am using solar.copy() is to prevent underlying references to the same object. This can be tricky when updating parameters, so always use the copy operator liberally when building your models.
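
To see why, here is a minimal sketch of the aliasing problem that copy() avoids (this is standard Python reference semantics, not anything pysep-specific):

>>> shared = solar             # no copy: both names point at one cnml object
>>> shared[1]['endage'] = 1e9
>>> print(solar[1]['endage'])  # the "default" object has been mutated too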

Note how, because we are using a managed database, we no longer have to worry about the file path beyond “what is the root where these models will all live”. The DB manager will assign each model a unique path within that root. These paths will not be human-readable, as they are random 20-character alphanumeric strings. If you use a database, then you have to use the database to retrieve files.

>>> Zrange = np.linspace(0.01, 0.03, 8)
>>> X = 0.75
>>> basePath = "output/parallelEvolution/parallel_evolution_with_database"
>>> with pStellarModel(name="ZrangeTest") as psm:
>>>     for Z in Zrange:
>>>         newControl = solar.copy()
>>>         model = get_basic_stellar_model(basePath, control=newControl)
>>>         model.control[1]['endage'] = 4.57e9
>>>         model.update_composition(Z=Z, X=X)
>>>         psm.append(model)
>>>     DB = psm.swap_into_DB(basePath, pbar=True)
>>>     psm.pEvolve(autoStash=True)

Database Examples

Note this example will only work if you have run the second test (ID=1) from parallelEvolution and have the ZrangeTest.db database in the current working directory. Running that test should generate the file.

Once you have a database you might want to use it! Note that more advanced database query options will come in the future, but for now this is workable.

Below we read the already generated database and point it to the root of where the data is stored. We then use the iter_models method, which simply returns one model entry at a time.

Note that these are model entries insofar as they are the model objects defined in the database schema; these are not the same models which we have manipulated in the other examples here. That is why the first thing we do is load the model, assuming it has been stashed (as is the case in the other examples).

Then the code is the same as we would use to read output files anytime. See readOutput for more details on that.

>>> import os
>>> import matplotlib.pyplot as plt
>>> from pysep.misc.runDB import model_DB
>>> from pysep.dsep import load_model
>>> DB = model_DB("ZrangeTest.db", "output/parallelEvolution/parallel_evolution_with_database")
>>> for modelEntry in DB.iter_models():
>>>     model, tdir = load_model(os.path.join(modelEntry.path, "model.dsep"))
>>>     fig, ax = plt.subplots(1, 1, figsize=(10, 7))
>>>     ax.plot(10**model['iso']['Log_T'], model['iso']['Log_g'])
>>>     ax.invert_yaxis()
>>>     ax.invert_xaxis()
>>>     Z = model.control[0]['rsclz']
>>>     fig.savefig(f"Figures/readDatabase/generate_logg_teff_from_models_in_database/HR_Z{Z}.pdf", bbox_inches="tight")