
add documentation on build nodes for software layer #71

Merged
merged 1 commit into from Apr 9, 2021
166 changes: 166 additions & 0 deletions docs/software_layer/build_nodes.md
@@ -0,0 +1,166 @@
# Build nodes

Any system can be used as a build node to create additional software installations that should be
added to the EESSI CernVM-FS repository.

## Requirements

OS and software:

* GNU/Linux (any distribution) as operating system;
* a recent version of [Singularity](https://sylabs.io/singularity/) (>= 3.6 is recommended);
* check with ``singularity --version``
* ``screen`` or ``tmux`` is highly recommended;

Admin privileges are ***not*** required, as long as Singularity is installed.

Resources:

* 8 or more cores are recommended (though not strictly required);
* at least 50GB of free space on a local filesystem (like `/tmp`);
* at least 16GB of memory (2GB/core or higher recommended);
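
You can quickly verify these resources with standard Linux tools (a rough check; adjust the filesystem path if you plan to use something other than `/tmp`):

```shell
nproc          # number of available cores
free -g        # total memory in GB
df -h /tmp     # free space on the local filesystem you intend to use
```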

Instructions to install Singularity and screen (click to show commands):

??? note "CentOS 8 (`x86_64` or `aarch64` or `ppc64le`)"
```
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf update -y
sudo dnf install -y screen singularity
```
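
After installation, you can verify that both tools are available (exact version output will vary):

```shell
singularity --version
screen --version
```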

## Setting up the container

!!! warning
**It is highly recommended to start a `screen` or `tmux` session first!**

A container image is provided that includes everything that is required to set up a writable overlay
on top of the EESSI CernVM-FS repository.

First, pick a location on a local filesystem for the temporary directory:

Requirements:

* **Do not use a shared filesystem** like NFS, Lustre or GPFS.
* There should be **at least 50GB of free disk space** in this local filesystem (more is better).
* There should be **no automatic cleanup of old files** via a cron job on this local filesystem.
* Try to make sure the directory is unique (not used by anything else).

We will assume that `/tmp/$USER/EESSI` meets these requirements:

```shell
export EESSI_TMPDIR=/tmp/$USER/EESSI
mkdir -p $EESSI_TMPDIR
```

Create some subdirectories in this temporary directory:

```shell
mkdir -p $EESSI_TMPDIR/{home,overlay-upper,overlay-work}
mkdir -p $EESSI_TMPDIR/{var-lib-cvmfs,var-run-cvmfs}
```

Configure Singularity cache directory, bind mounts, and (fake) home directory:

```shell
export SINGULARITY_CACHEDIR=$EESSI_TMPDIR/singularity_cache
export SINGULARITY_BIND="$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs"
export SINGULARITY_HOME="$EESSI_TMPDIR/home:/home/$USER"
```
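
As an optional sanity check, confirm that the directories referenced in the bind mounts exist:

```shell
ls -d $EESSI_TMPDIR/{home,var-lib-cvmfs,var-run-cvmfs}
```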

Define the values to pass to the ``--fusemount`` option of the ``singularity`` command:

```shell
export EESSI_CONFIG="container:cvmfs2 cvmfs-config.eessi-hpc.org /cvmfs/cvmfs-config.eessi-hpc.org"
export EESSI_PILOT_READONLY="container:cvmfs2 pilot.eessi-hpc.org /cvmfs_ro/pilot.eessi-hpc.org"
export EESSI_PILOT_WRITABLE_OVERLAY="container:fuse-overlayfs -o lowerdir=/cvmfs_ro/pilot.eessi-hpc.org -o upperdir=$EESSI_TMPDIR/overlay-upper -o workdir=$EESSI_TMPDIR/overlay-work /cvmfs/pilot.eessi-hpc.org"
```

Start the container (which includes Debian 10, [CernVM-FS](https://cernvm.cern.ch/fs/) and
[fuse-overlayfs](https://github.com/containers/fuse-overlayfs)):

```shell
singularity shell --fusemount "$EESSI_CONFIG" --fusemount "$EESSI_PILOT_READONLY" --fusemount "$EESSI_PILOT_WRITABLE_OVERLAY" docker://eessi/fuse-overlay:debian10-$(uname -m)
```
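
If you prefer to fetch the container image before starting the session (for example, on a node with better network connectivity), you can pull it into the Singularity cache first (an optional step):

```shell
singularity pull docker://eessi/fuse-overlay:debian10-$(uname -m)
```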

Once the container image has been downloaded and converted to a Singularity image (SIF format),
you should get a prompt like this:

```
...
CernVM-FS: loading Fuse module... done

Singularity>
```

and the EESSI CernVM-FS repository should be mounted:

```
Singularity> ls /cvmfs/pilot.eessi-hpc.org
2020.12 2021.03 latest
```

## Setting up the environment

Set up the environment by starting a Gentoo Prefix session using the ``startprefix`` command.

**Make sure you use the correct version of the EESSI pilot repository!**

```shell
export EESSI_PILOT_VERSION='2021.03'
/cvmfs/pilot.eessi-hpc.org/${EESSI_PILOT_VERSION}/compat/linux/$(uname -m)/startprefix
```
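
If the ``startprefix`` command cannot be found, double-check that the compatibility layer for your architecture is actually available in the repository (a quick sanity check):

```shell
ls /cvmfs/pilot.eessi-hpc.org/${EESSI_PILOT_VERSION}/compat/linux/$(uname -m)/
```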

## Installing software

Clone the [software-layer](https://github.com/EESSI/software-layer) repository:

```shell
git clone https://github.com/EESSI/software-layer.git
```

Run the software installation script in `software-layer`:

```shell
cd software-layer
./EESSI-pilot-install-software.sh
```

This script automatically detects the CPU microarchitecture of the host (for example, `x86_64/intel/haswell`).
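
The detection is based on [archspec](https://github.com/archspec/archspec). As a minimal sketch (assuming `archspec` is installed, e.g. via `pip install archspec`), you can check what it reports for your host:

```shell
# prints the detected microarchitecture name, e.g. "haswell"
python3 -c 'import archspec.cpu; print(archspec.cpu.host())'
```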

To build generic software installations (like `x86_64/generic`), use the ``--generic`` option:

```shell
./EESSI-pilot-install-software.sh --generic
```

Once all missing software has been installed, you should see a message like this:

```
No missing modules!
```

## Creating tarball to ingest

Before tearing down the build node, you should create a tarball to ingest into the EESSI CernVM-FS repository.

To create a tarball of *all* installations, assuming your build host is ``x86_64/intel/haswell``:

```shell
export EESSI_PILOT_VERSION='2021.03'
cd /cvmfs/pilot.eessi-hpc.org/${EESSI_PILOT_VERSION}/software/linux
eessi_tar_gz="$HOME/eessi-${EESSI_PILOT_VERSION}-haswell.tar.gz"
tar cvfz ${eessi_tar_gz} x86_64/intel/haswell
```
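
Before uploading, you may want to inspect the contents of the tarball (an optional check):

```shell
tar tzf ${eessi_tar_gz} | head
```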

To create a tarball for specific installations, make sure you pick up both
the software installation directories and the corresponding module files:

```shell
eessi_tar_gz="$HOME/eessi-${EESSI_PILOT_VERSION}-haswell-OpenFOAM.tar.gz"
tar cvfz ${eessi_tar_gz} x86_64/intel/haswell/software/OpenFOAM modules/all/OpenFOAM
```
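
It can also be useful to record a checksum, so the tarball can be verified after upload (an optional step):

```shell
sha256sum ${eessi_tar_gz}
```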

This tarball should be uploaded to the Stratum 0 server for ingestion.
If needed, you can ask for help in the
[EESSI `#software-layer` Slack channel](https://eessi-hpc.slack.com/archives/CNMM6G2RG).
6 changes: 5 additions & 1 deletion mkdocs.yml
@@ -19,7 +19,9 @@ nav:
- Overview: filesystem_layer.md
- filesystem_layer/stratum1.md
- Compatibility layer: compatibility_layer.md
- Software layer: software_layer.md
- Software layer:
- Overview: software_layer.md
- software_layer/build_nodes.md
- Pilot repository: pilot.md
- Software testing: software_testing.md
- Project partners: partners.md
@@ -36,6 +38,8 @@ markdown_extensions:
- pymdownx.superfences
# tabbed contents
- pymdownx.tabbed
# clickable details
- pymdownx.details
- toc:
permalink: true
extra: