This area holds Singularity recipes for producing containers which provide run-time or development environments for Wire-Cell Toolkit in some different contexts.
Commands below are prefixed to show where they run: n$ means the native (host) environment and c$ means inside the container.

n$ echo "this command runs in native environment"
c$ echo "this command runs in container environment"
Each Singularity.* file builds an image. Also offered are binary images built at some point in the past.
Download this relatively svelte (717 MB) image: https://www.phy.bnl.gov/~bviren/simg/wctdev.simg
Run this:
n$ singularity exec wctdev.simg /bin/bash --rcfile wctrun.rc
c$ wire-cell --help
This image provides:
- minimal Ubuntu 18.04 image
- build environment and run time dependencies needed by WCT as Ubuntu packages
- Jsonnet built from source.
- ROOT binaries from CERN.
- Files from the WCT data and cfg packages
- A copy of WCT source from master at the time of the image build
- A built version of the above source in /usr/local/opt/wct/
n$ wget https://root.cern.ch/download/root_v6.14.02.Linux-ubuntu18-x86_64-gcc7.3.tar.gz
n$ sudo singularity build wctdev.simg Singularity.wctdev
Notes/caveats on the build:
- Produces a 717 MB image in about 300 seconds.
- Pre-download the required ROOT binary: the download can be very slow at times (and fast at others), and waiting gets annoying quickly if you must rebuild multiple times due to problems.
Use the same image for the build and run environment, but ignore the version of WCT provided and build it yourself.
n$ git clone --recursive [email protected]:WireCell/wire-cell-build.git wct
n$ singularity exec wctdev.simg /bin/bash --rcfile wctdev.rc
c$ cd wct/
c$ ./wcb configure --prefix=`pwd`/install --with-jsonnet=/usr/local --with-eigen-include=/usr/include/eigen3
c$ ./wcb -p --notests install
Notes:
- Your SSH agent authentication may not be visible from inside the container, so do any git actions from your native shell.
- Unlike when running the pre-built wire-cell, do not set the PATH variables as above, in order to avoid any potential version shear.
In order to run this build, or to run the build tests, the PATH variables need to be set to point at the configured installation location.
c$ export LD_LIBRARY_PATH=`pwd`/install/lib:$LD_LIBRARY_PATH
c$ export PATH=`pwd`/install/bin:$PATH
c$ ./wcb -p --alltests
In principle all tests should pass, but there may be one or two edge cases. If test_units.py fails, one probably needs to source /usr/local/bin/thisroot.sh.
WC/LS = Wire-Cell Toolkit + art and LArSoft. It has its own image, binding script and rc helper file.
Download this relatively bloated (4.4 GB) image: https://www.phy.bnl.gov/~bviren/simg/wclsdev.simg
n$ singularity exec wclsdev.simg /bin/bash --rcfile wclsrun.rc
c$ wire-cell --help
c$ art --help
This provides:
- minimal Ubuntu 16.04 image with a few extra system packages
- larsoft as UPS binaries, including larwirecell
- latest production WCT as the wirecell UPS product
- a copy of latest WCT source from GitHub
- a build of this source against the above as a dev version of the wirecell UPS product
Well, good fscking luck. There are numerous problems getting it to work, mostly revolving around the insane setup script needed before any further UPS failure can happen. In the end I could only automate so much of it.
n$ sudo singularity build --sandbox wclsdev-sandbox Singularity.wcls.dev
n$ sudo singularity shell --writable wclsdev-sandbox
c$ /usr/local/src/wcls.sh
c$ exit
n$ sudo singularity build wclsdev.simg wclsdev-sandbox
Notes/caveats:
- Ubuntu 16.04 is used because FNAL does not yet support 18.04 for UPS binaries.
- The Singularity build will take FOREVER due to downloading a gajillion jigglebytes from FNAL’s SciSoft server.
The container is read-only, and writable storage is needed to hold any UPS products that want to be built. In principle, you can just use the native file system as above. Instead, however, we bind some native directory to a well-known location to make the helper scripts simpler.
n$ git clone --recursive [email protected]:WireCell/wire-cell-build.git wct
n$ ./bind-wcls.sh wclsdev.simg ups-dev-products wclsdev.rc
c$ wclsdev-ups-declare
c$ cd wct
c$ wclsdev-wct-configure
c$ ./wcb -p --notests install
Read ./wclsdev.rc.
Build a newer wirecell UPS product as above and then:
n$ ./bind-wcls.sh wclsdev.simg wct-ups-dev-install wclsdev.rc
c$ wclsdev-init lsdev
c$ wclsdev-srcs
c$ wclsdev-fsck-ups
c$ wclsdev-setup lsdev
c$ cd build_u16.x86_64/
c$ mrb build
Subsequent fresh sessions can be set up with:
n$ ./bind-wcls.sh wclsdev.simg wct-ups-dev-install wclsdev.rc
c$ wclsdev-setup lsdev
Read ./wclsdev.rc.
There are no special instructions here, other than to use the bind mount on /usr/local/ups-dev in which to install any additional UPS products.
Leading up to WCT 0.8.0 a new version of Jsonnet is needed. Here is what was done.
c$ cd $wclsdev_upsdev  # <-- defined in the wclsdev.rc file
c$ git clone ssh://[email protected]/cvs/projects/build-framework-jsonnet-ssi-build jsonnet-ssi-build
c$ cd jsonnet-ssi-build
c$ ./bootstrap.sh $wclsdev_upsdev
c$ cd $wclsdev_upsdev/jsonnet/v0_9_3_c/
c$ ./build_jsonnet.sh $wclsdev_upsdev e15 prof
c$ ./Linux64bit+4.4-2.23-e15-prof/bin/jsonnet --version
Jsonnet commandline interpreter v0.9.3
That just rebuilt the last version. To update, hack the scripts in the shim package. If you are lucky you’ll just need to tweak the UPS spelling of the version in the variables:
origpkgver=v0_11_2
pkgver=${origpkgver}
in files build_jsonnet.sh
and bootstrap.sh
. Repeating the two
scripts as above should hopefully succeed.
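That tweak can be scripted. Below is a hypothetical sketch, not part of the shim package: the helper function name is invented, the file names are the two shim scripts named above, and v0_11_2 is the example target version from the text.

```shell
# Hypothetical helper: bump the origpkgver UPS version string in the two
# shim scripts.  Usage: bump_jsonnet_version v0_11_2
bump_jsonnet_version () {
    newver="$1"
    for f in build_jsonnet.sh bootstrap.sh; do
        # Rewrite only the origpkgver assignment; pkgver derives from it.
        sed -i "s/^origpkgver=.*/origpkgver=${newver}/" "$f"
    done
}
```

Run it from inside the jsonnet-ssi-build clone before repeating the bootstrap/build steps.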
c$ ./Linux64bit+4.4-2.23-e15-prof/bin/jsonnet --version
Jsonnet commandline interpreter v0.11.2
You must now find mentions of the old version in various places, in particular:
n$ emacs $wclsdev_upsdev/wirecell/wclsdev/ups/wirecell.table
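To locate any other lingering mentions, a recursive grep can help. This is a sketch; the helper name is invented, and the search root would typically be $wclsdev_upsdev as defined in wclsdev.rc.

```shell
# List files under a directory tree that still mention an old version
# string, so each can be edited by hand.
find_old_version () {
    # $1 = old version string (e.g. v0_9_3), $2 = directory to search
    grep -rl "$1" "$2" 2>/dev/null
}
```

For example: `find_old_version v0_9_3 $wclsdev_upsdev`.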
Strictly speaking you just need to modify the stanza matching the
quals to be used. If you’ve already set up your environment for that
version of wirecell
you can cycle:
c$ unsetup wirecell
c$ setup wirecell wclsdev -q e15:prof
c$ which jsonnet
/usr/local/ups-dev/jsonnet/v0_11_2/Linux64bit+4.4-2.23-e15-prof/bin/jsonnet
You’ll likely now need to rebuild your wct
and then lsdev
areas.
Besides singularity you will need to provide local installations of:
- debootstrap - to build Ubuntu images
- yum - to build Scientific Linux images
Substantial disk space is required (especially for the SL/UPS images).
If you are short on disk space for /tmp
, you may want to do something
like:
n$ sudo mkdir -p /srv/singularity/tmp
n$ sudo chmod 777 /srv/singularity/tmp
n$ export SINGULARITY_TMPDIR=/srv/singularity/tmp
n$ export SINGULARITY_LOCALCACHEDIR=/srv/singularity/tmp/$USER
The wclsdev image requires manual intervention because of problems getting the UPS setup command to work inside Singularity's build. I can't find out why, but it may be due to /bin/sh not being bash.
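One quick check of that hypothesis, from a shell inside the container:

```shell
# On Ubuntu, /bin/sh typically resolves to dash rather than bash, and
# setup scripts relying on bash-isms can fail under it.
readlink -f /bin/sh
```

If the output names dash, any UPS script sourced via `sh` (rather than `bash`) is a suspect.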
Downloading from SciSoft takes forever. There’s a hack to pre-seed the UPS tarballs. Inquire if interested.
A ./Singularity.sl7wclsdev script is also provided to build a WC/LS image based on Scientific Linux. The resulting image is quite a bit larger, so it is not distributed. See below for an SL7 base image which is.
This is somewhat involved.
Pick a version of larsoft (here I use v07_03_00) and get all the binary tarballs.
n$ mkdir -p /srv/bviren/tmp/ups-sl7-ls703/products
n$ cd /srv/bviren/tmp/ups-sl7-ls703
n$ wget http://scisoft.fnal.gov/scisoft/bundles/tools/pullProducts
n$ chmod +x pullProducts
n$ ./pullProducts products slf7 larsoft-v07_03_00 s70-e17 prof
# optional:
n$ rm -f products/wirecell/v0_7_0a/Linux64bit+3.10-2.17-e17-prof/wirecell-0.7.0/build/
n$ tar -cf ups-products-slf7-ls703.tar products
Notes:
- The rm there gets rid of intermediate build products which should not have been added to the wirecell UPS product. It removes 1.1 GB.
- The pullProducts takes a horribly long time (45 minutes here), which is why we don't even try to put it inside the Singularity script.
After making this tar file it is okay to remove the products/
directory and all the .tar.gz
files left by pullProducts
. You
will need to set the location of ups-products-slf7-ls703.tar
in your
copy of the ./Singularity.sl7wclsdev script.
n$ sudo singularity build sl7wclsdev.simg Singularity.sl7wclsdev
A more sustainable approach is to use a minimal SL7 image that lacks any UPS products and then leave it up to you, dear reader, to do with it as you wish. This is ./Singularity.sl7 which produces a 370 MB image which is available at https://www.phy.bnl.gov/~bviren/simg/sl7.simg.
You can then maintain your own UPS products area(s) in a native
directory of your choosing. To do the initial set up you can
essentially follow the pullProducts
instructions above and then
assure the resulting products/
directory is visible from inside the
image, either by having it in native home directory or binding its
directory into the image.
To match the pattern used in the wclsdev.simg
image the script
provided requires two native directories. One to bind to
/usr/local/ups
and one to /usr/local/ups-dev
. In principle these
could be the same area. Tweak as desired.
n$ ./bind-wcls-sl7.sh sl7.simg \
     /srv/bviren/tmp/ups-sl7-ls703/products \
     /srv/bviren/tmp/ups-sl7-dev \
     wclsdev.rc
c$ ups list -aK+ larsoft
"larsoft" "v07_03_00" "Linux64bit+3.10-2.17" "e17:prof" ""
c$ wclsdev-ups-version larsoft
v07_03_00
c$ wclsdev-ups-version wirecell
v0_7_0a
c$ wclsdev-<TAB><TAB>
wclsdev-fsck-ups    wclsdev-setup        wclsdev-ups-quals
wclsdev-init        wclsdev-srcs         wclsdev-ups-version
wclsdev-path        wclsdev-ups-declare  wclsdev-wct-configure
The wclsdev-* functions were written with defaults matching some release. To use them with an arbitrary release they will need additional arguments beyond what was shown in their use above. When in doubt, use the shell command type wclsdev-<command> to see what a command does.
What follows is how to repeat the initial setup for this newer
version.
c$ wclsdev-ups-declare
c$ cd wct
c$ wclsdev-wct-configure
c$ ./wcb -p --notests install
This works as above, but Kerberos SSH seems busted, so do any git operations outside the container for now. Because the SL7 image is newer you will need to adjust qualifiers.
c$ wclsdev-ups-quals jsonnet e17:prof
c$ ./build_jsonnet.sh $wclsdev_upsdev e17 prof
c$ setup gcc v7_3_0
c$ ./Linux64bit+4.4-2.17-e17-prof/bin/jsonnet --version
Jsonnet commandline interpreter v0.9.3
Note the setup of UPS product gcc
. Without it, trying to run the
just-built jsonnet
program will fail as it will try to link against
the system libstdc++
which is of the wrong ABI.
After changing the version strings as above, the latest version can likewise be built.
c$ ./Linux64bit+4.4-2.17-e17-prof/bin/jsonnet --version
Jsonnet commandline interpreter v0.11.2
This progresses just like the Jsonnet example above but requires an upstream WCT release as well as updating the wirecell.table file to point to the new release of Jsonnet (or whatever).
t.b.c.
This also proceeds like in wclsdev.simg
but here I make use of the
fact that /usr/local/ups-dev
was bound to my local disk area.
c$ wclsdev-init /usr/local/ups-dev/lsdev
c$ wclsdev-srcs
c$ wclsdev-fsck-ups
c$ wclsdev-setup lsdev
c$ cd build_slf7.x86_64
c$ mrb build
Here we update for WCT release 0.8.0 (to build the UPS product wirecell with UPS version string v0_8_0). Start the image as above and go to where the build shim is cloned:
n$ ./bind-wcls-sl7.sh ~/public_html/simg/sl7.simg /srv/bviren/tmp/ups-sl7-ls703/products /srv/bviren/tmp/ups-sl7-dev wclsdev.rc
c$ cd /usr/local/ups-dev/wirecell-ssi-build/
Update the WCT version string in bootstrap.sh
and
build_wirecell.sh
and any new versions for external dependencies in
ups/wirecell.table
.
Also, take the opportunity to fix build_wirecell.sh so it doesn't include the temporary build products, saving a bit more than 1 GB.
Then exercise the shim:
c$ ./bootstrap.sh $wclsdev_upsdev
c$ cd $wclsdev_upsdev/wirecell/v0_8_0
c$ wclsdev-ups-quals e17:prof
c$ ./build_wirecell.sh $wclsdev_upsdev e17 prof
This fails with:
building wirecell for sl7-x86_64-x86_64-e17-prof (flavor Linux64bit+4.4-2.17)
Declaring wirecell v0_8_0 for Linux64bit+4.4-2.17 and e17:prof in /usr/local/ups-dev/wirecell/v0_8_0/fakedb
+ ups declare wirecell v0_8_0 -r /usr/local/ups-dev/wirecell/v0_8_0 -f Linux64bit+4.4-2.17 -m wirecell.table -q +e17:+prof -z /usr/local/ups-dev/wirecell/v0_8_0/fakedb
+ set +x
Error encountered when setting up product: wirecell
ERROR: Product 'wirecell' (with qualifiers 'e17:prof'), has no v0_8_0 version (or may not exist)
ERROR: fake setup failed
Dunno what’s up with that.
Instead of bundling immediately out-of-date binaries into the image a more flexible way is to provide a base OS and then delegate to CVMFS to provide UPS products or other external software dependencies. A Singularity image may be created to handle CVMFS service but here it’s assumed provided by the host OS.
- CVMFS download instructions
- CVMFS quickstart (highlights given below)
After downloading the RPM/DEB that sets up the CVMFS package repository and installing the packages by following the links above, do the following on your host OS:
$ sudo emacs /etc/cvmfs/default.local
Add:
# required
CVMFS_REPOSITORIES=larsoft.opensciencegrid.org,uboone.opensciencegrid.org,dune.opensciencegrid.org
CVMFS_HTTP_PROXY="http://my.local.proxy:1234;DIRECT"

# optional but recommended:
# limit how much cache is used on your computer, units in MB
CVMFS_QUOTA_LIMIT=25000
# Some default is chosen (/var/lib/cvmfs/shared).
# Best to put the cache on your fastest disk which has enough space.
CVMFS_CACHE_BASE=/mnt/ssd/cvmfs
If you have a local CVMFS proxy, it is best to put it in front of DIRECT.
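For reference, CVMFS separates fail-over proxy groups with semicolons and load-balanced proxies within a group with vertical bars. Two local squids with a DIRECT fallback might be configured like this (the host names are hypothetical):

```shell
CVMFS_HTTP_PROXY="http://squid1.example.org:3128|http://squid2.example.org:3128;DIRECT"
```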
Check if things work:
# cvmfs_config probe
Probing /cvmfs/larsoft.opensciencegrid.org... OK
Probing /cvmfs/uboone.opensciencegrid.org... OK
Probing /cvmfs/dune.opensciencegrid.org... OK
You can then use a generic SL7 image with the host OS’s /cvmfs
bind-mounted.
n$ wcsing.sh sl7.simg bash.rc /cvmfs
c$ source /cvmfs/larsoft.opensciencegrid.org/products/setup
c$ ups list -aK+ wirecell
c$ setup wirecell v0_9_1 -q c2:prof
Notes:
- Todo: update ./wclsdev.rc to work in this mode. Above we use the more generic ./bash.rc.
- The first time any command touches parts of /cvmfs it will take a long time as files are downloaded. Subsequent commands will be faster.
After setting up some recent wirecell
one can build WCT from source
by making use of the existing script that configures the build using
information from UPS environment variables.
c$ cd wire-cell-build/
c$ ./waftools/wct-configure-for-ups.sh install-ups-gcc
or, to use Clang
c$ CC=clang CXX=clang++ ./waftools/wct-configure-for-ups.sh install-ups-clang
Then:
c$ ./wcb -p --notests install
That install directory will need to be added to your LD_LIBRARY_PATH in order to use the build. After that, this should let the tests pass:
c$ ./wcb --alltests
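The environment additions needed beforehand mirror those used for the wctdev build earlier. A sketch, assuming the install-ups-gcc prefix from the configure step above:

```shell
# Expose the freshly installed WCT build to the runtime environment.
# install-ups-gcc is the prefix passed to wct-configure-for-ups.sh above.
export LD_LIBRARY_PATH="$(pwd)/install-ups-gcc/lib:$LD_LIBRARY_PATH"
export PATH="$(pwd)/install-ups-gcc/bin:$PATH"
```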
- wct
  - end-user running of provided WCT binaries and all external dependencies
  - wire-cell and WIRECELL_PATH set ready to go as soon as container is entered
  - externals provided by container
- wctdev
  - wct + develop WCT (modify source and build/install/run)
  - WCT src area must be in bind-mount list
  - externals provided by container
- wcls
  - end-user running of provided WCT + larsoft binaries and all dependencies
- wclsdev
  - wcls + develop WCT and LS (modify either source and build/install/run)
- wct install directory
- wct install type (UPS or local)
- wct src directory
- wct externals type (UPS or local, determines ./wcb configure and runtime environment setup)
- mrb area or version/packages
- /wcdo/src/wc/{build,gen,sigproc}/
- /wcdo/src/ls/{build_*,srcs}/
- /wcdo/{bin,lib,include,src,share,ups}/
- end-user running of stand-alone WCT (wire-cell)
- developing of stand-alone WCT (wire-cell) on multiple platforms
- end-user running of WC/LS
- end-user running of WC/LS and experiment
- developing WC/LS on multiple platforms
- CI WCT on multiple platforms
- CI WC/LS on multiple platforms
Want a way to encode intention in native and container scripts so I don't have to remember all the tedium. E.g., want:

n$ <ncmd> <activityname>

and immediately be ready to do “<activity>”. This needs encoding the intention in some user-provided init script.
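A minimal sketch of what such an init script could encode; the function name, activity names, and echoed actions are all invented for illustration:

```shell
# Hypothetical dispatch: map an activity name to the container setup it
# implies.  A real version would exec singularity with the right image,
# rc file and bind mounts instead of just echoing.
wc_activity () {
    case "$1" in
        wctdev)  echo "bind WCT source; enter wctdev.simg" ;;
        wclsdev) echo "bind UPS areas; enter wclsdev.simg" ;;
        *)       echo "unknown activity: $1" >&2; return 1 ;;
    esac
}
```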
- (ro) Host-based primary UPS products area
  - local file system
  - /cvmfs
- (rw) “dev” UPS products area
- (rw) WCT source area(s)
- (rw) LS source area(s) (mrb controlled)
- (ro) WCT data and cfg areas
- (ro/rw) “data” areas
- (rw) installation target directory for WCT
  - For wclsdev, a UPS-declared area is needed