1. Get Duo
Duo two-factor authentication is required to access the majority of U-M services and all HPC services. If you need to set up Duo, please visit this page.
2. Get a Lighthouse user login
You must establish a user login on Lighthouse by filling out this form.
3. Get an SSH Client & Connect to Lighthouse
You must be on campus or on the U-M VPN to connect to Lighthouse. If you are trying to log in from off campus, or from an unauthenticated wireless network such as MGuest, connect to the VPN first.
Mac or Linux:
Open Terminal and type:
ssh uniqname@lighthouse.arc-ts.umich.edu
You will be required to enter your Kerberos level-1 password to log in. Note that nothing will appear on the screen as you type your password; this is completely normal. Press the Enter/Return key once you are done typing your password.
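If you connect often, you can optionally add a host alias to your local SSH configuration. This is a minimal sketch; the alias name "lighthouse" is just an example, and uniqname is a placeholder for your own uniqname:

# ~/.ssh/config
Host lighthouse
    HostName lighthouse.arc-ts.umich.edu
    User uniqname

With this in place, typing ssh lighthouse is equivalent to the full command above.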
When you’re connecting for the first time, it’s not uncommon to see a message like this one:
The authenticity of host 'lighthouse.arc-ts.umich.edu (141.211.102.84)' can't be established.
RSA key fingerprint is 6f:8c:67:df:43:4f:e0:fc:80:5b:49:1a:eb:81:cc:54.
Are you sure you want to continue connecting (yes/no)?
This is normal. By saying “yes” you’re accepting the public SSH key for the system. This key will be stored in a local known_hosts file on your system so you won’t be prompted in the future. The keys from Lighthouse will NOT change. So, for example, if you get a new computer and SSH to Lighthouse, you’ll be prompted to add the key again.
We encourage you to compare the fingerprint you are presented with, when connecting for the first time, to one of the fingerprints below. The format of the fingerprint you see may depend on the SSH client on your machine.
RSA     | 6f:8c:67:df:43:4f:e0:fc:80:5b:49:1a:eb:81:cc:54
ECDSA   | Dae1G3gu0mtro2Rm15U6l8aQg4bGFnDQJhmGH3k+fKs
ED25519 | 9ho43xHw/aVo4q5AalH0XsKlWLKFSGuuw9lt3tCIYEs
In the example message given above, we are presented with the RSA key fingerprint in its MD5 format, which is the same value as the RSA entry in the table above.
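If you would like to compute these fingerprints yourself, a minimal sketch using OpenSSH's ssh-keyscan and ssh-keygen tools follows. Note that ssh-keyscan fetches the keys over the network, so this is a convenience check, not a substitute for comparing against the published fingerprints above:

ssh-keyscan lighthouse.arc-ts.umich.edu 2>/dev/null > lighthouse_keys
ssh-keygen -lf lighthouse_keys          # SHA256 fingerprints (the ECDSA/ED25519 rows)
ssh-keygen -E md5 -lf lighthouse_keys   # colon-separated MD5 form (the RSA row)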
If you’re NOT seeing one of these fingerprints, submit a ticket to arcts-support@umich.edu and do NOT connect to the server via SSH until discussing with an ARC staff member to determine if there is a security issue.
To avoid being prompted to accept the key on a new system, you may choose to pre-populate your SSH known_hosts file with the public keys from Lighthouse. The keys can be found in the FAQ.
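As a sketch, assuming you have saved the published keys from the FAQ into a file named lighthouse_keys (one standard known_hosts line per key: hostname, key type, base64 key):

mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat lighthouse_keys >> ~/.ssh/known_hosts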
Windows (using PuTTY):
Download and install PuTTY here.
Launch PuTTY and enter lighthouse.arc-ts.umich.edu as the host name, then click Open.

If you receive a “PuTTY Security Alert” pop-up, this is completely normal; click “Yes” to tell PuTTY to trust the host the next time you connect to it. A terminal window will then open, and you will be required to enter your UMICH uniqname and then your Kerberos level-1 password in order to log in. As before, nothing will appear on the screen as you type your password; this is completely normal. Press the Enter/Return key once you are done typing your password.
All Operating Systems:
At the “Enter a passcode or select one of the following options:” prompt, type the number of your preferred choice for DUO authentication.

4. Get files
You can use SFTP (best for simple transfers of small files) or Globus (best for large files or a commonly used endpoint) to transfer data to your /home directory.
SFTP: Mac or Windows using FileZilla
- Open FileZilla and click the “Site Manager” button
- Create a New Site, which you can name “Lighthouse” or something similar
- Select the “SFTP (SSH File Transfer Protocol)” option
- In the Host field, type lighthouse-xfer.arc-ts.umich.edu
- Select “Interactive” for Logon Type
- In the User field, type your uniqname
- Click “Connect”
- Enter your Kerberos password
- Select your Duo method (1-3) and complete authentication
- Drag and drop files between the two systems
- Click “Disconnect” when finished

On Windows, you can also use WinSCP with similar settings, available alongside PuTTY here.
SFTP: Mac or Linux using Terminal
To copy a single file, type:
scp localfile uniqname@lighthouse-xfer.arc-ts.umich.edu:./remotefile
To copy an entire directory, type:
scp -r localdir uniqname@lighthouse-xfer.arc-ts.umich.edu:./remotedir
These commands can also be reversed in order to copy files from Lighthouse to your machine:
scp -r uniqname@lighthouse-xfer.arc-ts.umich.edu:./remotedir localdir
You will need to authenticate via Duo to complete the file transfer.
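If you prefer an interactive session to one-off scp commands, the standard sftp client connects to the same host; localfile and remotefile below are placeholder names:

sftp uniqname@lighthouse-xfer.arc-ts.umich.edu
# at the sftp> prompt:
#   put localfile     upload a file to Lighthouse
#   get remotefile    download a file to your machine
#   exit              close the session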
Globus: Windows, Mac, or Linux
Globus is a reliable, high-performance parallel file transfer service provided by many HPC sites around the world. It enables easy transfer of files from one system to another, as long as both are Globus endpoints.
- The Globus endpoint for Lighthouse is “umich#lighthouse”.
How to use Globus
Globus Online is a web front end to the Globus transfer service. Globus Online accounts are free and you can create an account with your University identity.
- Set up your Globus account and learn how to transfer files using the Globus documentation. Select “University of Michigan” from the dropdown box to get started.
- Once you are ready to transfer files, enter “umich#lighthouse” as one of your endpoints.
Globus Connect Personal
Globus Online also allows for simple installation of a Globus endpoint for Windows, Mac, and Linux desktops and laptops.
- Follow the Globus instructions to download the Globus Connect Personal installer and set up an endpoint on your desktop or laptop.
Batch File Copies
A non-standard use of Globus Online is copying files from one location to another on the same cluster. To do this, use the same endpoint (umich#lighthouse, for example) for both the sending and receiving machines. Set up the transfer and Globus will make sure the rest happens. The service will email you when the copy is finished.
Command Line Globus
There are command line tools for Globus that are intended for advanced users. If you wish to use these, contact HPC support.
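As a rough sketch of what a command line transfer can look like with the globus-cli package (the endpoint UUIDs and paths below are placeholders; check the Globus documentation for current syntax):

pip install globus-cli
globus login                                 # opens a browser to authenticate
globus endpoint search "umich#lighthouse"    # look up the endpoint's UUID
globus transfer SRC_UUID:/path/src DEST_UUID:/path/dest --label "example copy"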
5. Submit a job
This is a simple guide to get your jobs up and running. For more advanced Slurm features, see the Slurm User Guide for Lighthouse. If you are familiar with using the resource manager Torque, you may find the migrating from Torque to Slurm guide useful.
Resources
On Lighthouse, as in the Flux Operating Environment, you can run on hosts owned by a researcher who has granted you access. Unless the researcher has placed additional limits, the only default limit is a 2-week maximum runtime (wallclock). You will submit your jobs to a partition (queue) named after the PI’s uniqname.
To help with the transition, we have included 2 test nodes in your partition. These 2 test nodes are shared with all FOE/Lighthouse researchers, so availability will depend on current demand, and jobs on them have a short runtime limit, similar to the environment you previously tested on the Beta cluster. You can submit jobs to that partition to validate your workflows. Once the PI has moved their nodes from FOE to Lighthouse, the test nodes will be removed from the PI’s partition and the maximum runtime will be set to 2 weeks.
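You can check which partitions you have access to and their current limits with standard Slurm commands; pi_uniqname below is a placeholder for the actual partition name:

sinfo                                  # list partitions and node states
scontrol show partition pi_uniqname    # partition details, including its time limit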
Batch Jobs
Most work will be queued to run on Lighthouse and is described through a batch script. The sbatch command is used to submit a batch script to Slurm. Submit your script from a shared file system: your home directory, /scratch, or any directory under /nfs that you can normally use in a job on Flux. Output will be sent to this working directory (jobName-jobID.log). Do not submit jobs from /tmp or any of its subdirectories.
$ sbatch myJob.sh
The batch job script is composed of three main components:
- The interpreter used to execute the script
- #SBATCH directives that convey submission options
- The application(s) to execute along with their input arguments and options
Example:
#!/bin/bash
# The interpreter used to execute the script
# "#SBATCH" directives that convey submission options:
#SBATCH --job-name=example_job
#SBATCH --mail-type=BEGIN,END
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=1000m
#SBATCH --time=10:00
#SBATCH --account=test
#SBATCH --partition=standard
# The application(s) to execute along with its input arguments and options:
/bin/hostname
sleep 60
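Once the job is submitted, you can track it with standard Slurm commands; replace uniqname and jobid with your own values:

squeue -u uniqname    # show your pending and running jobs
scancel jobid         # cancel a job
sacct -j jobid        # accounting information for a completed job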
How many nodes and processors you request will depend on what your software is capable of. There are four common scenarios: