HARM
harm and utilities
docs/general_plotting_guide.txt File Reference

Detailed Description

There are a couple of plotting tools that go beyond vis5d+'s
capabilities for producing more publishable plots.
===============================================================
HARM data:
1) In general, in the harm git, see docs/tousedata.txt for the format
of the fieldline, gdump, and maybe dump files. All the plotting starts
from that data, so it's good to know what one is starting with!
2) Don't forget about http://globusonline.org/ for copying data
to/from various computers. Don't use scp or bbcp or any other method;
any other method is highly unreliable and slow.
Most of XSEDE has this set up, and virtually any computer you have
access to can have a globusconnect running (even without you being
logged in, by using "nohup").
3) One can do most analysis/plotting on supercomputers, although for
just learning one can use a local computer.
===============================================================
SuperMongo (SM):
SM is good for basic 2D plots that can quickly be made to look high
quality, with latex (math) mode for axis labels, titles, etc.
1) It requires text dump and gdump files. So one either needs to use
"bin2txt" to convert the binary files to text, or one needs to ensure
the files are written as text during the simulation (not good for
supercomputers). For the latter, set the text-output option in your
setup, and also set USEROMIO to 0 or 1 in mympi.definit.h (it needs to
be 0 for text output).
2) SM Project and Repo (SVN based):
https://harm.unfuddle.com/svn/harm_harmsmmacros/
Quick start guide (updated):
https://harm.unfuddle.com/a#/projects/2/notebooks/3/pages/6/latest
Use guide:
https://harm.unfuddle.com/a#/projects/1/notebooks/1/pages/45/latest
3) One should use "jrdp3duentropy" to read in the latest dump files
(one must have text dump files) and "grid3d" to read in the gdump file
(again, text only).
E.g.,
A) install as per guide
B) cd to the "run" directory (i.e. not dumps or images directory)
C) run as per guide: jsm
D) gogrmhd
E) jrdpcf3duentropy dump0000
F) plc 0 lrho # This will plot the log of density
G) help jrdp3duentropy # to see what this macro/script does and what
else you can plot. See all the other macros in ~/sm/ that you
installed during the quick start guide.
H) pl 0 r lrho # 1D plot of all data for radius vs. log(rho0)
===============================================================
Python Scripting:
1) Using SM is easy for simple high-quality 2D plots, but it is not as
robust or general as python.
Python will read in the binary fieldline file and gdump file, but it's
not as easy to make a quick high-quality 2D BW plot. But for color
plots of >1D (and often >2D) data or more complicated analysis using
various packages, python is best.
2) Repo (git based) is part of the HARM project on unfuddle:
git@harm.unfuddle.com:harm/pythontools.git
Switch to branch "jon" and don't use "master": git checkout jon
It should tell you that it's tracking the remote branch from origin.
If not, something's wrong.
3) After getting the repo and the directory called "pythontools" (say
downloaded into your home directory), *move* (do not link, so that
scripts below behave normally) to your home directory as:
mv ~/pythontools ~/py/
4a) Make sure you have installed python2.7 (not 2.6 or 3.0 or any
other version). Install all the related packages like matplotlib,
numpy, and scipy. It's also best to install ipython. For some help on
that, see ~/py/docs/pythonfullinstall.sh . However, that is for
installing on an arbitrary system, not Ubuntu. Try using the package
manager to install all the items mentioned in that install script, and
only if one has problems should one install python directly. This
means looking for the Ubuntu packages: python, python2.7-setuptools,
python-dateutil, yasm, ipython, python-nose, python2.7-scipy,
python2.7-numpy, python-matplotlib, python-matplotlib-data, dvipng
Maybe also install medibuntu: http://www.medibuntu.org/repository.php
Some things may already be installed (e.g.): python --version # should
be 2.7.x
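For a quick sanity check that python and the packages are importable, one can run a tiny throwaway script (a minimal check; "checkinstall.py" is just an example name, nothing HARM-specific):
import sys
import numpy, scipy, matplotlib
print(sys.version)  # should start with "2.7", and no ImportError should occur
# run it with: python checkinstall.py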
4b) Ensure PYTHONPATH is set. Put in your .bashrc the following:
source ~/.bashrc.jon
#source /opt/intel/bin/compilervars.sh intel64
export PYTHONPATH=$HOME/py/:$PYTHONPATH
export LESS="-R"
export EDITOR=emacs
alias ipython='ipython --pylab --colors=LightBG'
#export PYTHONPATH=$HOME/lib/python/:$HOME/py/:$PYTHONPATH
PATH=/home/$USER/mpich-install/bin:$PATH ; export PATH
export HOSTNAME
if [ -z "$LD_LIBRARY_PATH" ]; then
LD_LIBRARY_PATH=/home/$USER/lib/
else
LD_LIBRARY_PATH=/home/$USER/lib:${LD_LIBRARY_PATH}
fi
#for user libs
export LD_LIBRARY_PATH
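After re-sourcing your .bashrc, one can quickly confirm that ~/py/ actually made it onto the python path:
python -c "import sys; print(sys.path)"
# the printed list should include your $HOME/py/ entry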
5) The primary script for HARM analysis is: ~/py/mread/__init__.py .
There are lots of bash script wrappers for python stuff in
~/py/scripts/ .
The __init__.py file is a monster master file used for the tons of
calculations in the MCAF paper.
Along with those bash scripts (the *links* that make a single place of
links for the multiple supercomputer runs, makemovie.sh and
makeallmovie.sh for doing batch jobs of analysis over all files for
each supercomputer run, etc.), this is how one can do large-scale
analysis on supercomputers. Indeed, those scripts show exactly
how that __init__.py file is really used in many cases.
I mostly tuned __init__.py to be command-line driven; it doesn't
focus on popping up pictures.
Note that this __init__.py is quite advanced. It deals with many
issues related to what computer one is on, quirks of various
computers, etc. But python is fairly modular (although many global
variables have been used), so it's not too hard to ignore the advanced
bits and add your own analysis calls or use the existing analysis.
6) You should learn python or get a book. I used a lot of google
searching after starting with someone else's template file.
A lot of results will come up from stackoverflow answers (very good
stuff) or websites directly about python (e.g. http://docs.python.org/2.7/
or http://matplotlib.org/ or http://www.scipy.org/ ).
Basically, python is a scripting language, meaning the user doesn't
have to compile the code. One just runs something like:
python <scriptname>
Python scripts are run sequentially, top to bottom, as the file
appears. Many times all that's in the script are function definitions
and no commands, but at least sometimes commands appear, like
"import <package>" at the top, so that the packages one needs are
already loaded before running any functions.
Python scripts typically have a "def main()" type function that is the
only function run when the script is executed. Many times, however,
this doesn't do anything itself unless one passes it arguments.
That's how this __init__.py script is set up.
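For example, the general pattern looks like the following minimal sketch (not the actual __init__.py, which has many more runtypes and arguments):
import sys

def main():
    # no runtype given on the command line: do nothing and exit,
    # which is effectively what __init__.py does without arguments
    if len(sys.argv) < 2:
        return
    runtype = int(sys.argv[1])
    print("runtype=%d" % runtype)

if __name__ == "__main__":
    main()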
7) Here's an example use of python with my script, using it to compute various
stages of the calculation used for the MCAF paper. This is based upon how
the script makeallmovie.sh uses makemovie.sh.
preA1) Ensure the right files exist in run/dumps and run/
# if you created dump0000 and gdump but not the .bin versions, it's best to convert them:
# e.g., for a 128x64 simulation, do (run head -1 on the original files; the last number in the header is the numcolumns to use below after the "d"):
bin2txt 2 1 0 -1 2 128 64 1 1 dump0000 dump0000.bin d 81
bin2txt 2 1 0 -1 2 128 64 1 1 gdump gdump.bin d 126
NOTE: If you tried compiling bin2txt in other instructions but it wouldn't work, you can try editing bin2txt.c and setting HDF and V5D to 0 in the #define's at the top. Then edit the makefile, search for v5d, and comment out the hdf and v5d stuff by putting a # in front of it, like: #-lmfhdf -ldf -ljpeg -lz -lv5d
Then do: make superclean ; make prepbin2txt ; make bin2txt ; cp bin2txt ~/bin
Then ensure gdump is converted to gdump.bin as in the bin2txt command above.
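The "head -1" step can also be done from python (a small sketch; dump0000 is the example file from above, and per the note above the last number on the header line is numcolumns):
with open("dump0000") as fp:
    header = fp.readline().split()
print(header[-1])  # last number in the header = numcolumns to pass to bin2txt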
preA2) Add your run name (e.g. "run" below, where "run" could be any modelname like "thickdisk7", etc.) to ~/py/mread/__init__.py as another modelname entry, in case the defaults aren't good enough (e.g. defaultfti and defaultftf need to at least cover the range where any data exists, else the avg merge will fail because NOTUSING will be applied to all files).
A) Run the default torus problem or your problem with harm, so now you
have a "run" directory with "dumps" and "images" directories. Enter
the "run" directory, whatever it's called. I'll assume harm master
has been downloaded to ~/harmgit/ and the python stuff has been linked
to ~/py/
cd run/
cd ../ #really? Yes!
cp ~/py/scripts/makeallmovie.sh .
# open that makeallmovie.sh and set the following (and remove or comment-out any previous such settings):
# "run" here is name of your "model" according to the bash and python scripts.
dircollect="run" # models (here just 1) to collect latex tables over
dirruns="run" # models (here just 1) to run over
numkeep=2 # number of files to really use out of all of them (2 for now, to iterate quickly while sorting out any confusion)
# NOTE: change numkeep to very large number like 500000 to use all fieldline files.
# then edit ~/py/scripts/makemovie.sh :
1) change for "$system -eq 3" part:
numcorespernode=?? # choose ?? as the number of cores on your system. Try ??=1 at first, to avoid the confusion of multiple stderr and stdout output files in case something goes wrong. Make it 8 or 12 or 16 for nodes with that many cores available.
2) change itemspergroup=$(( 20 )) and replace 20 with 1 unless one has more than 20 files.
# Note that in general one should set up makeallmovie.sh and makemovie.sh to have your own "system" sections *and* "modelname" sections. That way you can control by default which files are actually kept (the default is the upper half of the fieldline files).
# now get ready to run by setting arguments to pass to the script
moviename="test1" # name of directory your analysis will be done in
docleanexist=1 # whether to clean-out old analysis
dolinks=1 # whether to make links to dumps as required
dofiles=1 # whether to copy __init__.py, makemovie.sh, and other files needed
make1d=1 # Compute 1D stuff vs. radius results
makemerge=1 # Merge across all core's 1D calculations
makeplot=1 # Make all plots and latex output and final output for 1D calculations
makemontage=0 # Make montage of plots (can be quite slow for large data)
makepowervsmplots=0 # Make Power vs. m calculations and plots (needed if doing fft or spec). Only makes sense in 3D.
makespacetimeplots=0 # space (r and theta) vs. time plots
makefftplot=0 # FFT specific to MCAF paper (watch out for non-constant spacing in time for dumping or missing files)
makespecplot=0 # Spectrogram specific to MCAF paper
makeinitfinalplot=0 # initial and final 2D plots
makethradfinalplot=0 # theta vs. radius 2D plots
makeframes=1 # make movie frames
makemovie=1 # make movie (or just run ffmpeg yourself)
makeavg=0 # compute 2D time-averaged data (need to do this first, before the make1d stuff, for Qnlm)
makeavgmerge=0 # merge avg
makeavgplot=0 # make plots or tables related to averages
collect=0 # collect results from latex info from all models
# run it
rm -rf run/test1/ # in case already exists
chmod a+rx ./makeallmovie.sh
bash ./makeallmovie.sh ${moviename} $docleanexist $dolinks $dofiles $make1d $makemerge $makeplot $makemontage $makepowervsmplots $makespacetimeplots $makefftplot $makespecplot $makeinitfinalplot $makethradfinalplot $makeframes $makemovie $makeavg $makeavgmerge $makeavgplot $collect
# One can do each stage separately to see what's going on.
# e.g., to clean-up the directory, setup links, and copy files, do just:
# rm -rf run/test1/ ; bash ./makeallmovie.sh ${moviename} $docleanexist $dolinks 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
# MOST generally, one has to run the avg stuff first so it's available for all calculations, by doing 2 runs
# This is because the proper Fourier transforms need the time-averaged value subtracted off.
# Do:
rm -rf run/test1/
chmod a+rx ./makeallmovie.sh
bash ./makeallmovie.sh ${moviename} 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0
bash ./makeallmovie.sh ${moviename} 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0
# NOTE: I avoided doing makeavgplot until the second time, since only the merged avg2d.npy file is required.
# NOTE: One can of course skip certain things that are not needed, but some items require others (e.g. make1d must come before makemerge that must come before makeplot). Indeed, makemontage through makethradfinalplot all require one to do "makeplot"
# E.g., if one already has the 1d and avg files done, one can fixup your plots and redo the plotting stuff like:
bash ./makeallmovie.sh ${moviename} 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 1 0
# Many things are computed and outputted. E.g.
# FROM make1d/makemerge/makeplot:
# 1) python.plot.full.out : All things from "1D" calculations
# 2) fitplot* : Some fitting plots
# 3) aphi.png : vector potential averaged over dumps
# 4) datavs??.txt : Many computations that can be read into SM using existing SM scripts I used to make MCAF paper plots. Those SM scripts are in "~/sm/thickdisk.m"
# FROM makepowervsmplots:
# 1) power*.png power*.txt rbar*.txt # again, see SM scripts in thickdisk.m for looking at this data.
# FROM makefftplot:
# 1) fft1.pdf fft1.png data*fft*.txt
# FROM makeframes:
# 1) lrho*.png
# FROM makemovie:
# 1) *.avi *.mov
# FROM makeavgplot:
# 1) fig2.png aphizoom_avg.png aphizoom_avgfield.png aphialllog_avgfield_avg.png fig4_0.png fig4_1.png dataavgvs*.txt dataavg*.txt python.plot.avg.full.out
# NOTE: Most of the plots won't work for 2D harm data. Indeed, some plots may hang (e.g. makethradfinalplot's velocity mkstreamplot1).
# Sometimes if you kill a job, you'll have stray python processes still going. Assuming python is all you are running as a user, do "killall -s 9 python" to get rid of them.
## To get the movie files to always be created even if numkeep is less than the number of files in makeallmovie.sh (so some files are skipped), run:
cd ${moviename} # or whatever to get to directory, like cd run/test1/$moviename
~/py/scripts/makelinkimages.sh
# this will produce links so ffmpeg or avconv or other programs can see a sequential list of files that they need to act upon.
### NOTE: At this point, one is "done" with the python scripts called from bash scripts to make the automated output. The remaining "steps" are alternative ways to run python, or issues related to supercomputers.
# NOTE: If you want to run with more cores on a multi-core node, you can change numcorespernode=1 to (e.g.) numcorespernode=8 in ~/py/scripts/makemovie.sh . Then for the make1d/makemerge qty2.npy generation phase and the makemovie phase, it will run 8X faster. The makeplot phase where various plots are created won't take any longer or shorter with more fieldline files or more cores -- the "plotting" stage only takes longer if the simulation is higher resolution.
# If you want to do that, then you should probably kill the previously running python in case it's still going. Do "killall -s 9 python" to kill all your python calls. Then you can try with more cores.
# You can check "top" to see if it's running python. Or look at the latest *err* or *.out files in the movie directory to see what's going on.
8) One can run the makemovie script directly once one has already at
least run makeallmovie with the first "1 1 1" options, so that the
test1-like directory, files, etc. are all there. As in those scripts,
here's the full raw command, assuming one sets these parameters on
the command line with typical choices for a few of them:
cd run/ #i.e. actually be in that run's directory
#
thedir=`pwd`
system=3
parallel=0
make1d=1
makemerge=1
makeplot=1
makemontage=0
makepowervsmplots=1
makespacetimeplots=1
makefftplot=1
makespecplot=1
makeinitfinalplot=1
makethradfinalplot=0
makeframes=1
makemovie=1
makeavg=1
makeavgmerge=1
makeavgplot=1
cmdraw="sh makemovielocal.sh ${thedir} $make1d $makemerge $makeplot $makemontage $makepowervsmplots $makespacetimeplots $makefftplot $makespecplot $makeinitfinalplot $makethradfinalplot $makeframes $makemovie $makeavg $makeavgmerge $makeavgplot ${system} ${parallel}"
# If one is not running on a batch system or in parallel, just run it
# as that command:
$cmdraw
# Otherwise, it's best to do the following (to avoid output to the
# screen and to avoid the job dying if one gets logged out):
rm -rf makemovielocal_${thedir}.stderr.out
rm -rf makemovielocal_${thedir}.out
rm -rf makemovielocal_${thedir}.full.out
echo "((nohup $cmdraw 2>&1 1>&3 | tee makemovielocal_${thedir}.stderr.out) 3>&1 1>&2 | tee makemovielocal_${thedir}.out) > makemovielocal_${thedir}.full.out 2>&1" > batch_makemovielocal_${thedir}.sh
chmod a+x ./batch_makemovielocal_${thedir}.sh
nohup ./batch_makemovielocal_${thedir}.sh &
8b) Additional notes on running on Kraken:
One can use parallel=2 for optimal speed. This uses the compiled code makemoviec instead of the bash script makemovie.sh. One still uses the same makeallmovie.sh script/calls, but some edits may be required:
a) makeallmovie.sh : Change dircollect and dirruns to "rad1" or the name of your run/runs.
b) makeallmovie.sh : Change numkeep to number of files want to keep, e.g., 2200
c) makeallmovie.sh : Ensure in system=5 kraken section have set parallel=2 .
d) makeallmovie.sh : Change default "factor" and keepfilesstart and keepfilesend if desired. Or add entry for your model name ("$thedir" section in this bash script).
e) Ensure makemoviec can compile when doing makeallmovie.sh:
i) cd ~/py/scripts
ii) set #define USEPYTHON (0) in jon_makemovie_programstartc.c
iii) make
f) Edit the kraken section of ~/py/scripts/makemovie.sh to set how long the run should be, how many cores, etc. One can add a "$modelname" entry for jobsuffix, needed if you have multiple jobs.
g) The default is currently set so that numtasks equals the total number of fieldline files, so the total number of cores requested equals the total number of fieldline files: the fastest possible run. However, one can set numtasks directly as the number of MPI processes to run, and numcorespernode directly as needed for memory issues (i.e. ensure the total memory on a node is sufficient for however many cores you run on). The rest should be handled by the makemovie.sh script as far as setting up the chunk list passed to the makemoviec program.
i) At least edit the defaults for the kraken sections. Set timetot, numcorespernode (which will be the total cores), and memtot (estimating how much memory per core is needed, plus overhead); then in kraken's parallel=2 section, one only has to set the plot part.
j) Edit ~/py/mread/__init__.py :
i) Set USENAUTILUS 0
ii) Set use2dglobal=False near the end of the file in main()
iii) Probably need OLDQTYMEMMEM=0 in the same location in main()
iv) Anywhere modelname exists, check whether a new entry is needed for your model.
k) Now can run makeallmovie as above.
l) I noticed Kraken used to have latex and gs installed, but now doesn't. Only Nautilus has it, and Nautilus can't be used anymore. One has to install latex as a user, e.g. from http://www.tug.org/texlive/quickinstall.html . Use wget to get the tarball for unix, follow the instructions about untarring (i.e. tar xvzf ...), and run the install program in the directory created by the tarball (./install-tl). One just changes the installation directory to /lustre/scratch/$USER/... instead of /usr/local/... and changes the other installation paths to ~/bin ~/share/man ~/share/lib . Like:
wget http://mirror.ctan.org/systems/texlive/tlnet/install-tl-unx.tar.gz
tar xvzf install-tl-unx.tar.gz
./install-tl
# choose paths as user paths as mentioned above.
# *must* choose /lustre/scratch/$USER/... type path, so can access on compute nodes that have no access to $HOME
# wait 1.5-2 hours to complete download/install process.
m) I also noticed gs is not available. Get it from http://www.ghostscript.com/download/gsdnld.html and follow installation instructions at (e.g.) http://ghostscript.com/doc/7.07/Install.htm , like:
#http://www.ghostscript.com/download/gsdnld.html
wget http://downloads.ghostscript.com/public/binaries/ghostscript-9.10-linux-x86_64.tgz
tar xvzf ghostscript-9.10-linux-x86_64.tgz
cd ghostscript-9.10-linux-x86_64/
# run it and/or link it to your home directory
#or for a general system:
wget http://downloads.ghostscript.com/public/ghostscript-9.10.tar.gz
tar xvzf ghostscript-9.10.tar.gz
cd ghostscript-9.10/
./configure --prefix=$HOME/
make
make install
# for gs, copy binary to /lustre/scratch/$USER/... type path, so can access on compute nodes that have no access to $HOME .
Ensure environmentfiles/kraken/setuppython27 has PATH and PYTHONPATH including the paths where the "latex" and "gs" binaries are located.
It's a good idea to tgz-ball the latex and gs stuff and copy it to ranch so it's not purged.
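A quick python check that the environment can actually find them (distutils.spawn is in the python standard library):
from distutils.spawn import find_executable
for prog in ("latex", "gs"):
    print("%s: %s" % (prog, find_executable(prog)))  # None means PATH lacks it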
9) Going even simpler (but with less obvious ways to get output),
one can use python or ipython directly.
cd run/
ipython ~/py/mread/__init__.py
# This does nothing except run through the script from top to bottom as well as run main()
# No command line parameters were given, so "runtype" was not set as viewed by the script,
# so main() catches no conditionals and the script exits.
# To get behavior like the makemovie.sh script, one manually runs python and the script
# with arguments, just as done by makemovie.sh (NOT RECOMMENDED -- just use makeallmovie.sh):
cd run/
ipython
runtype=3 # see main() or makemovie.sh for runtype and what it means and what other parameters need to be passed.
run ~/py/mread/__init__.py $runtype $modelname $makepowervsmplots $makespacetimeplots $makefftplot $makespecplot $makeinitfinalplot $makethradfinalplot
# To use the script in an interactive mode, instead run it without
# arguments. The functions that run globally during the script
# processing (including "runglobalsetup()" and "main()") still set
# things up. But the python script relies upon command-line arguments
# explicitly, so this mode is used to do stuff *NOT* done by
# makeallmovie.sh or makemovie.sh, like calling the script's more
# internal functions that don't use argv.
# E.g.:
cd run/
ipython
run ~/py/mread/__init__.py
# now run a tutorial:
tutorial1()
# see how lrho is plotted.
# See also the short tutorial at the end of main() as runtype==8.
# NOTE: One can't run certain functions on the command line because
# variables lose scope immediately, even if chosen to be global in the
# function itself. So it's best to create a separate script file
# (i.e. not inside the base __init__.py) that contains the same type
# of lines as tutorial1(); then "run" your script file and call your
# function. So the command sequence would look like:
cd run/
ipython
run ~/py/mread/__init__.py
run ~/py/myscript.py
mytut1()
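For concreteness, a hypothetical ~/py/myscript.py along those lines (rfd(), grid3d(), and the globals t, nx, ny, nz, a are those listed in section 11; the filenames are just examples):
# ~/py/myscript.py (hypothetical)
def mytut1():
    # these names are visible because __init__.py was "run" first in ipython
    grid3d("gdump")            # load grid positions, metric, etc.
    rfd("fieldline0000.bin")   # load one fieldline dump
    print("t=%g nx=%d ny=%d nz=%d a=%g" % (t, nx, ny, nz, a))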
10) To modify the python script:
There are four modes you can operate in.
Mode 1) insert code during a run of the normal scripts
Mode 2) command-line use of ipython
Mode 3) hijack an existing t-r integrated diagnostic
Mode 4) add a new t-r integrated (or time-averaged) diagnostic
The advantage of #1 is you can eventually compute lots of things and
output whatever you want. The advantage of #2 is you can more easily
play around and you don't have to have all the other script machinery
going on. I recommend Mode 2 for now, until you know exactly what you
are doing and what kind of diagnostic you want.
Modes 3 and 4 are advanced versions of Mode 1, for when you know
exactly what you want to have as a diagnostic.
===
For Mode 1: The best place to insert such calculations is in function
getqtyvstime() near the top (it's a long function) and just after:
======
ts[qindex]=t
#################################
#
# Begin quantities
#
##################################
#
#<INSERT FUNCTION CALL HERE>
#e.g.
sgralla1()
======
Then you create a function (say at the bottom of the __init__.py file) that has:
def sgralla1():
    #1) verify that B_z=const asymptotically (my definition of Wald problem)
    #2) compute the energy and angular momentum flux
    #3) understand where the current sheet forms and what supports it numerically
    #4) compute B_phi/E_r asymptotically, which I believe will be 1
    #5) fit for psi where E_r = partial_r psi
======
e.g.
# Kerr-Schild (KS) auxiliary quantities for spin a at radius r, theta h:
sigmaks=r**2+(a*np.cos(h))**2
deltaks=r**2-2*r+a**2
Aks=(r**2+a**2)**2-a**2*deltaks*(np.sin(h))**2
#
# diagonal KS metric components, used to normalize the vector components:
gvks00=-(1.0-2.0*r/sigmaks)
gvks11=(1.0+2.0*r/sigmaks)
gvks22=sigmaks
gvks33=(np.sin(h))**2*(sigmaks + a**2*(1.0 + 2.0*r/sigmaks)*(np.sin(h))**2)
#
# convert u^\mu, B^i, and b^\mu from internal (x1,x2,x3) coordinates to
# orthonormal KS components using the dxdxp Jacobian:
myuu0=uu[0]*dxdxp[0,0]
myuu1=(uu[1]*dxdxp[1,1] + uu[2]*dxdxp[1,2])*np.sqrt(gvks11)/myuu0
myuu2=(uu[1]*dxdxp[2,1] + uu[2]*dxdxp[2,2])*np.sqrt(gvks22)/myuu0
myuu3=(uu[3]*dxdxp[3,3])*np.sqrt(gvks33)/myuu0
myuurot=np.sqrt(myuu2**2+myuu3**2)
myB1=(B[1]*dxdxp[1,1] + B[2]*dxdxp[1,2])*np.sqrt(gvks11)
myB2=(B[1]*dxdxp[2,1] + B[2]*dxdxp[2,2])*np.sqrt(gvks22)
myB3=(B[3]*dxdxp[3,3])*np.sqrt(gvks33)
mybu1=(bu[1]*dxdxp[1,1] + bu[2]*dxdxp[1,2])*np.sqrt(gvks11)
mybu2=(bu[1]*dxdxp[2,1] + bu[2]*dxdxp[2,2])*np.sqrt(gvks22)
mybu3=(bu[3]*dxdxp[3,3])*np.sqrt(gvks33)
#
# radial field along the equator (j=ny/2) at the first phi cell:
sgrallaB1=myB1[:,ny/2,0]
print("sgrallaB1")
print(sgrallaB1)
#
======
For Mode 2:
Just look at some of the harmradplot?() functions at the bottom of the
script, e.g. harmradplot1(). You can also look at the tutorial?() and
tutorial1alt() calls.
Create your own function sg1() that looks similar and sits at the
bottom of the same file (see the sketch after the commands below).
Then follow the template and compute whatever you want; you can plot
something or save a figure or anything like that. To run it on the
command line, do:
ipython
run ~/py/mread/__init__.py
sg1()
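For instance, a minimal sg1() might look like this (a sketch, not a template from the repo; it assumes a gdump and a fieldline file exist, uses the names listed in section 11, and np/plt come from ipython's --pylab mode):
def sg1():
    grid3d("gdump")            # grid positions (section 11)
    rfd("fieldline0000.bin")   # one fieldline dump (example name)
    # 1D equatorial profile: log10(rho) vs. radius at j=ny/2, k=0
    plt.plot(r[:,ny/2,0], np.log10(rho[:,ny/2,0]))
    plt.xlabel("r")
    plt.ylabel("log10(rho)")
    plt.savefig("sg1.png")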
To change something in Mode 2, you can stay in ipython. Just edit the
script and your function in a different terminal, then rerun by doing:
run ~/py/mread/__init__.py
sg1()
The "run" line just loads the script again.
========
For Mode 3:
Follow Mode 1, but hijack an existing quantity stored in hoverr[qindex],
or another <quantity>[qindex] that you think you don't need.
========
For Mode 4:
To add a new t-r integrated quantity, do:
1) add an entry to getqtymem() like the one below, at the end of the function before the "return(i)" call:
global newvar
newvar=qtymem[i];i+=1
2) inside getqtyvstime(), add your calculation before the "if dobob==1:" line (the one with lots of qtymem's inside the conditional).
3) add 1 to the getnonbobnqty() setting of "value" before "return(value)".
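Putting those three edits together, e.g. for a phi- and theta-integrated density vs. radius (a sketch: "newvar" is hypothetical, and gdet and _dx2 are assumed to exist by analogy with the _dx3 usage in the time-averaged example below):
# (1) in getqtymem(), before "return(i)":
global newvar
newvar=qtymem[i];i+=1
# (2) in getqtyvstime(), before the "if dobob==1:" line:
newvar[qindex]=(gdet*rho*_dx2*_dx3).sum(-1).sum(-1)
# (3) in getnonbobnqty(): add 1 to "value" before "return(value)"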
To add a new time-averaged quantity, do:
1) add an item to assignavg2dvars() after the last avgmem assignment, e.g. (for a scalar):
avg_newvar=avgmem[i,:,:,None];i+=1 # i=1
(for a tensor):
n=16
avg_newtensor=avgmem[i:i+n,:,:,None].reshape((4,4,nx,ny,1));i+=n
2) add avg_newvar or avg_newtensor to the global declaration at the top of assignavg2dvars()
3) add avg_newvar or avg_newtensor to the global declaration at the top of get2davgone()
4) in the get2davgone() loop, after the last avg_<var> += assignment and before the "itert=itert+1", do e.g. (for a scalar):
n=1
newvar = newvarface()
avg_newvar += ((_dx3*newvar.sum(-1))**2)[:,:,None]*localdt[itert]
5) add 1 to the getnqtyavg() value before the "return(value)".
11) Some notes about the functions used in the tutorial?() or harmradplot?()
functions (see the script for the arguments to each function):
rfd(): loads field line file
grid3d(): loads grid positions, metric, connection coefficients, etc.
getrhouclean(): Get clean (floor removed) versions of densities in case wanted.
cvel(): compute some velocity-field related things
Tcalcud(): Compute various stress-energy tensor terms/components/etc.
faraday(): compute fdd and fuu, the F_{\mu\nu} and F^{\mu\nu} faraday tensors, as well as the omegaf's, like the generalized omegaf1b that works for any poloidal field orientation.
fieldcalc(): computes A_\phi
iofreq(): get i for a given radius along the equator
iofrpole(): get i for a given radius along the polar axis
jofh(): get j for a given h = \theta
kofph(): get k for a given ph = \phi
common things loaded/computed by rfd() or grid3d():
r: radius
h: \theta
ph: \phi
rho: rest-mass density
ug: internal energy density
uu[0-4]: fluid frame: u^\mu
ud[0-4]: fluid frame: u_\mu
B[1-3]: *F^{it}
bu[0-4]: comoving field: b^\mu
bd[0-4]: comoving field: b_\mu
t: time
nx,ny,nz: number of cells in the radius,\theta,\phi directions
a: spin
rhor: horizon radius
Rin: Inner grid radius
Rout: Outer grid radius
THETAROT: BH tilt
ti,tj,tk: index position of grid cell in radius,\theta,\phi directions
x1,x2,x3: internal cell positions
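As a tiny cross-check tying some of those names together: after loading a dump, the spin and horizon radius should satisfy the standard Kerr relation (G=c=M=1 units):
print(rhor, 1.0 + np.sqrt(1.0 - a**2))  # outer horizon radius: these should agree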
========
Remember that python is whitespace-based (instead of curly-bracket
based like C), and code that matches in indentation is at the same
level/scope.

Definition in file general_plotting_guide.txt.