Merge branch 'feature/ondemand_kul' into update/ondemand
lexming committed Sep 4, 2023
2 parents 9ee68f8 + db0faf5 commit b47947f
Showing 1 changed file with 39 additions and 12 deletions.
51 changes: 39 additions & 12 deletions source/leuven/services/openondemand.rst
Open OnDemand is available for the Tier-2 Genius and wICE clusters.

You can use this interface by navigating to the `KU Leuven Open OnDemand page`_.
You can log in using your KU Leuven or VSC credentials.
Access is only granted to VSC3 users at the moment.
Once logged in, you'll notice you are connected with your VSC account.

Use
===

The KU Leuven Open OnDemand page provides a range of functions:
- Opening a shell on one of the login nodes
- Using interactive apps

All of these functionalities are accessible through the tabs at the top of the page. The following sections describe each part in some more detail.

Files
=====
To enter the Templates menu, you can click on 'Templates' at the top once you are in the 'Jobs' menu, or go via 'New Job'-'From Template'. Once in this menu,
you should see a table with 3 System Templates. The resources requested in these scripts are the default settings. The templates are:

- CPU job template: a template for jobs on the thin nodes (the default ``batch`` partition). This is also the default template (which you will get when clicking 'From Default Template' under the 'New Job' button in the 'Jobs' menu).
- GPU job template: a template for jobs with GPU resources (``gpu`` partition)
- Big memory CPU jobs: a template for jobs with large memory requirements (``bigmem`` partition)
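The system templates are essentially Slurm batch scripts with default resource requests filled in. As a rough sketch of what a CPU job template may look like (the exact contents of the system templates can differ; the cluster, account name, module and program below are placeholders):

.. code-block:: bash

   #!/bin/bash -l
   #SBATCH --clusters=genius        # or wice
   #SBATCH --partition=batch        # default partition (thin nodes)
   #SBATCH --account=lp_myproject   # placeholder: your credit account
   #SBATCH --nodes=1
   #SBATCH --ntasks=1
   #SBATCH --time=01:00:00

   module load SomeModule/1.0       # placeholder: modules needed by your job
   ./my_program                     # placeholder: your actual workload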

You can create your own templates from scratch or by copying one of the existing templates.
In both cases you will be redirected to a page where you can provide a
Creating jobs works much like creating new templates.
Whatever method you choose, you will create a new folder for each job, this time
located at ``$VSC_DATA/ondemand/data/projects/default/``.
The job folders will be numbered in the order you have created them.
**Do not change this folder name as long as you plan on using it from the job menus, as this will break the linking.**
When removing a job, the folder will be deleted as well.

To create a job, press the 'New Job' button and choose the option that best suits

To launch any of the interactive apps, you need to fill in the resources form. Be aware that you will end up in a regular queue, so requesting a large amount of
resources might result in a long queue time. Most of the options are the same across all apps; options that are specific to an app are explained in the
paragraph about that app. A general overview of the common options can be found below. A more detailed guide on how to choose your resources is available in
the next chapter.

- Account: the credit account from which the credits will be deducted. The accounts associated with your VSC number are displayed in a dropdown.
- Partition: you can choose any of the existing partitions on both clusters. We recommend using the ``interactive`` partition for most interactive work on wICE. Note that this partition is not available on Genius; there it is recommended to request the regular ``batch`` partition instead (see the :ref:`Choosing your resources<choosing_your_resources>` section for more details on how to choose your partition).
- Number of hours: your walltime (minimum 1 hour).
- Number of cores: the number of cores per node. This defaults to 1.
- Required memory per core in megabytes. This defaults to 3400 MB.
- Number of GPUs. If you request a GPU on the ``gpu`` partition, you will get a full A100 GPU. For the ``interactive`` partition on wICE, every GPU is a virtual GPU slice of the available A100 GPUs; one GPU slice corresponds to 1/7th of an A100 GPU. The default is 0. You can specify the type of GPU as well: [Type]:<number> (e.g. A100:2). You can also just request a number of GPUs as <number>, in which case you will be assigned the first available GPU type. In practice, both methods are currently equivalent; this might change if extra GPU types are added. **Note that the interactive partition only allows you to request at most 1 GPU (slice).**
- Reservation: if you are part of a reservation, you can also use these nodes with Open OnDemand by specifying your reservation name here.
- Pre-run scriptlet: this allows you to add bash commands to your job before launching the app. This can be used for example for loading extra modules that you need within the app. **Be aware that this feature is still somewhat experimental, and its functionality also depends on the app you are running (mainly RStudio Server has some issues here). If you would like to use this feature, but you run into problems, please contact our helpdesk.**
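For reference, the fields of this form map roughly onto the options of a regular Slurm submission. A sketch of an equivalent command-line request (the account and job script names are placeholders, and the exact options Open OnDemand generates may differ):

.. code-block:: bash

   # Rough command-line equivalent of the Open OnDemand resources form
   sbatch --account=lp_myproject \
          --partition=interactive \
          --time=02:00:00 \
          --ntasks=2 \
          --mem-per-cpu=3400M \
          --gpus-per-node=1 \
          jobscript.slurm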

Once you've selected all your resources, just press 'Launch' and your job will be queued.

.. _choosing_your_resources:

Choosing your resources
=======================

Choosing the correct resources for your interactive session is largely the same as selecting them for regular batch jobs. For this reason we strongly
recommend having a look at how to specify your resources on both `Genius <https://docs.vscentrum.be/en/latest/leuven/genius_quick_start.html#running-jobs-on-genius>`_ and `wICE <https://docs.vscentrum.be/en/latest/leuven/wice_quick_start.html#running-jobs-on-wice>`_.

As mentioned above, in most cases we recommend using the ``interactive`` partition on wICE for your interactive apps. This partition is meant for lighter work,
such as visualisations, testing, and pre- and postprocessing. Using this partition is also free, mainly to encourage you to request these resources for such
work instead of using any of the other partitions. There are, however, some limits on the amount of resources you can request:

- Max 1 node
- Max 8 cores
- Max 1 virtual GPU slice
- Max 16h of walltime

These limits are in place to ensure that these resources remain available for their intended purpose, namely interactive work.
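As an illustration, the largest request that fits within these limits would look roughly like this in Slurm terms (a sketch, not an official template):

.. code-block:: bash

   #SBATCH --partition=interactive
   #SBATCH --nodes=1
   #SBATCH --ntasks=8
   #SBATCH --gpus-per-node=1        # one virtual GPU slice
   #SBATCH --time=16:00:00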

If some of these limits are too strict for you, or you need resources that are not available on the interactive nodes (e.g. a full GPU or a big memory node),
you can always request nodes from another partition. Remember, however, that these interactive apps are not meant for running full jobs. If you do need
multiple nodes or full GPUs to test your code or program, go ahead and request those resources for your interactive app. Once you have passed the testing
phase and want to start conducting experiments, we recommend switching to batch jobs instead, as they do not require your presence to start your code.
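The switch itself can be as simple as submitting the same script from a shell instead of launching an interactive app (the job script name below is a placeholder):

.. code-block:: bash

   sbatch jobscript.slurm   # submit and return immediately; no interactive session needed
   squeue -u $USER          # check the state of your queued and running jobs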

.. _interactive_shell:

Interactive shell
-----------------

This app will launch a shell on (one of) the requested compute nodes, allowing you to use these compute resources from within a Linux terminal. This is
different from the shell you get in the "Clusters" menu, which directs you to one of the login nodes.

Jupyter Lab
-----------