
Conversation

@hvelab (Collaborator) commented Feb 20, 2025

As per this issue: https://gitlab.com/eessi/support/-/issues/119

First batch regarding the cvmfsexec usage. A couple of thoughts:

  • Should we add the cvmfsexec_eessi.sh and the orted wrapper in some repo so people can directly clone and not have to copy-paste it?
  • Worked fine on my local Ubuntu 22.04 and in a remote Rocky Linux test environment. When I asked some colleagues to test on their local machines (Ubuntu 23), this error message occurred:
```
$ ./test.sh ls /cvmfs/software.essi.io
/tmp/rzarco/tmp.sdsrKCrQH7/cvmfsexec/dist should be rpm2cpio of cvmfs rpm
mountrepo software.eessi.io failed
```

so it seems it only works for RHEL :/

@ocaisa (Member) commented Feb 21, 2025

cvmfsexec only works for RHEL-like systems 😢

@hvelab (Collaborator, Author) commented Feb 24, 2025

> cvmfsexec only works for RHEL-like systems 😢

True, it works on my Ubuntu, but that seems to be because I already have CVMFS available. I tested it on other systems without CVMFS and the same error message shows :(

clarification for cvmfsexec

```bash
# Remove the directory containing this orted wrapper from $PATH,
# so that the real orted (from the MPI installation in EESSI) is found instead
orted_wrapper_dir=$(dirname "$0")
export PATH=$(echo "$PATH" | tr ':' '\n' | grep -v "$orted_wrapper_dir" | tr '\n' ':')

~/bin/cvmfsexec_eessi.sh orted "$@"
```
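To see what the `PATH` manipulation in the wrapper does, here is a standalone demonstration (the wrapper directory path is made up for illustration). Note that `grep -v` does a substring match, and the rebuilt value ends with a trailing `:`, which the shell treats as an extra empty (current-directory) entry:

```shell
# Demonstrate stripping one directory out of a colon-separated PATH,
# mirroring the line in the orted wrapper above.
orted_wrapper_dir="/home/user/bin/orted-wrapper"   # hypothetical wrapper location
demo_path="/usr/bin:${orted_wrapper_dir}:/bin"

# Split on ':', drop entries matching the wrapper dir, re-join with ':'
demo_path=$(echo "$demo_path" | tr ':' '\n' | grep -v "$orted_wrapper_dir" | tr '\n' ':')

echo "$demo_path"   # -> /usr/bin:/bin:  (note the trailing colon)
```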
Member
Does this work? I imagine that you need to load the modules you need in your cvmfsexec_eessi.sh script or it will not find orted inside the container

Collaborator

I think the reason this works is that the context calling the orted command already has the module loaded, isn't it? I.e. the typical usage is:

```bash
module load EESSI/2023.06
module load GROMACS/whatever
# This mpirun calls `orted` on the remote node, probably exporting the
# _current_ `$PATH`, on which `orted` is available because the MPI module is
# loaded through GROMACS. It would fail if we _don't_ wrap that `orted`
# command in `~/bin/cvmfsexec_eessi.sh`, because on the remote node the CVMFS
# repo is _not_ yet mounted. However, mounting the CVMFS repo is _sufficient_,
# because the `$PATH` is correct - it just points into the CVMFS repo...
mpirun <something>
```

Collaborator

Note that this comes directly from https://www.eessi.io/docs/blog/2024/06/28/espresso-portable-test-run-eurohpc/, so we know this worked when we did this on Deucalion :)
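The `~/bin/cvmfsexec_eessi.sh` script itself is not shown in this excerpt; the real one is in the linked blog post. As a rough, hypothetical sketch (the cvmfsexec checkout location `~/cvmfsexec` is an assumption), it boils down to mounting the EESSI repository with cvmfsexec and running the given command inside:

```bash
#!/bin/bash
# Hypothetical sketch: mount software.eessi.io via cvmfsexec and run the
# given command (e.g. orted) with the repository available under /cvmfs.
exec "$HOME/cvmfsexec/cvmfsexec" software.eessi.io -- "$@"
```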


## Via `squashfs` + cvmfs's `shrinkwrap` utility

CernVM-FS provides the [Shrinkwrap utility](https://cvmfs.readthedocs.io/en/stable/cpt-shrinkwrap.html), which allows users to create a portable snapshot of a CVMFS repository. This snapshot can be exported and distributed without the need for a CVMFS client or network access.
Member
Should link to the CVMFS docs on this early on.

hvelab and others added 4 commits March 20, 2025 10:01
Co-authored-by: ocaisa <alan.ocais@cecam.org>
@ocaisa (Member) commented May 21, 2025

cvmfs-2.13.0 was released a few minutes ago, so the approach with shrinkwrap should work out of the box if you have that version available. This also means I can start working on a script to manage this once our client containers are updated.

You can see the original blog post on how they used this solution in Deucalion [here](https://www.eessi.io/docs/blog/2024/06/28/espresso-portable-test-run-eurohpc/#running-espresso-on-deucalion-via-eessi-cvmfsexec).

## Via `squashfs` + cvmfs's `shrinkwrap` utility
Collaborator
This section should probably start by saying that it requires a system that does have access to the repository. Note that running this from the EESSI container is probably also an option (in fact, I think I did that once...), but it might be worth explaining that separately not to confuse people (doing it on a node that has CVMFS natively is definitely easier).

```
$ cat gid.map
* 1001
```
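The excerpt only shows the `gid.map`; shrinkwrap presumably needs a matching `uid.map` in the same format, mapping every uid in the repository to a local one (the value `1001` here is just an example id; use your own uid):

```
$ cat uid.map
* 1001
```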
In addition, you need to create a spec file `software.eessi.io.spec` with the files you want to include and/or exclude in the shrinkwrap. Contents are:
Collaborator

Suggested change:

```diff
- In addition, you need to create a spec file `software.eessi.io.spec` with the files you want to include and/or exclude in the shrinkwrap. Contents are:
+ In addition, you need to create a spec file `software.eessi.io.spec` with the files you want to include and/or exclude in the shrinkwrap. For example:
```

```
!/versions/2023.06/compat/linux/x86_64/var/cache
```
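The actual `cvmfs_shrinkwrap` invocation is not shown in this excerpt. Based on the CVMFS shrinkwrap documentation it would look roughly like the following; treat the exact flags as assumptions and check `cvmfs_shrinkwrap --help` for your version:

```bash
cvmfs_shrinkwrap -r software.eessi.io -f software.eessi.io.spec \
    --dest-base /tmp/cvmfs -j 16
```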

Collaborator

I think the spec file requires extra explanation. E.g. we should probably state that the compat layer is always needed, as is the directory that allows initializing the environment. Then we should give some examples: e.g. how to export the full x86_64 tree (with the warning that it requires a massive amount of memory due to a known issue, see https://gitlab.com/eessi/support/-/issues/118), how to export for a single (or a handful of) micro-architectures, and maybe even an approach for exporting a single piece of software (i.e. load the GROMACS module, check all EBROOT variables, and export those - that should be enough...)
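Along the lines of the comment above, a hypothetical spec file for a single micro-architecture might look like this (the paths are assumptions based on the EESSI directory layout, and the include/exclude syntax follows the CVMFS specification-file format): the init scripts and compat layer are included, one software tree is selected, and caches are excluded.

```
/versions/2023.06/init/*
/versions/2023.06/compat/linux/x86_64/*
/versions/2023.06/software/linux/x86_64/amd/zen2/*
!/versions/2023.06/compat/linux/x86_64/var/cache
```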

Once completed, the contents will be available in `/tmp/cvmfs`. You can create a squashfs image from it:

```bash
mksquashfs /tmp/cvmfs software.eessi.io.sqsh
```
Collaborator

Maybe we should also make clear that other compression tools can be used. E.g. zstd is typically much faster (because it is multithreaded) than the standard gzip. It also typically achieves better compression.

```bash
mksquashfs /tmp/cvmfs software.eessi.io.sqsh -comp zstd
```

hvelab and others added 4 commits November 14, 2025 09:34
Co-authored-by: Caspar van Leeuwen <33718780+casparvl@users.noreply.github.com>