This 2-day (4-part) intensive workshop provides an introduction to accessing and computing on remote servers, such as UC Davis’ high-performance computing (HPC) cluster, to elevate your research.
The series covers everything you need to know to get started. We’ll provide an overview of HPC terminology, architecture, and general workflows, and guide learners through how to set up and use SSH to log in and transfer files, how to install software with Miniconda, how to reserve computing time and run programs with SLURM, and how to execute shell commands that are especially useful for working with servers. As we learn, we’ll practice accessing and computing on remote servers using the UC Davis HPC Core Facility’s new “Hive” cluster.
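For a flavor of these workflows, here is a minimal sketch; the hostname, username, and file names are hypothetical placeholders, not Hive’s actual addresses:

    # Log in to a remote server over SSH (hostname and username are placeholders)
    ssh jdoe@hive.hpc.example.edu

    # Copy a file from your own machine to the server with scp
    scp results.csv jdoe@hive.hpc.example.edu:~/project/

    # Create an isolated conda environment and install software into it
    # (mamba is a faster drop-in replacement for conda)
    conda create --name analysis python=3.11
    conda activate analysis
    conda install numpy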
The material is cumulative, so learners must attend both days. Sessions run from 10 AM to 4 PM each day, with a break for lunch.
After this workshop series, learners should be able to:
- Use SSH to log in to a server;
- Transfer files to and from a server;
- Set up and use conda/mamba to install software on a server;
- Use SLURM to run interactive and non-interactive software on a server (see the sketch after this list);
- Explain etiquette for using a server cluster such as Hive.
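For example, a non-interactive SLURM job is typically described by a short batch script handed to the scheduler; a minimal sketch, in which the partition name and resource requests are hypothetical and cluster-specific:

    #!/bin/bash
    #SBATCH --job-name=hello        # name shown in the job queue
    #SBATCH --partition=high        # partition name is hypothetical; check Hive's docs
    #SBATCH --time=00:10:00         # wall-clock limit of 10 minutes
    #SBATCH --mem=1G                # memory request
    #SBATCH --output=hello-%j.out   # %j expands to the job ID

    # Everything below runs on the compute node SLURM assigns
    echo "Running on $(hostname)"

Saved as hello.sh, the script is submitted with sbatch hello.sh; for interactive work, srun --pty bash requests a shell on a compute node instead.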
Prerequisites: Participants must have taken DataLab’s “Introduction to the Unix Command Line” workshop, or have equivalent prior experience using the command line. Participants must be comfortable with programming in a language such as R or Python, and with basic Linux shell syntax (i.e., able to navigate directories and to create, modify, and move files via the command line).
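As a rough self-check, participants should already be comfortable with commands at this level (file names are hypothetical):

    pwd                       # print the current working directory
    ls -l                     # list files with details
    mkdir results             # create a new directory
    cp raw.csv results/       # copy a file into it
    mv results/raw.csv results/data.csv   # rename (move) the file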
Learners will register for an HPC account during the session. CAS authentication is required.
The copyright on
this video is owned by the Regents of the University of California and is
licensed for reuse under the Creative Commons Attribution 4.0 International (CC
BY 4.0) License.