Researchers in the College of Literature, Science, and the Arts have four options for using the Great Lakes High-Performance Computing cluster:
|Option||Cost||Wait for jobs to start||Special usage limits||Notes|
|Public LSA accounts||Free (paid for by LSA)||••••||Yes (cores per user, jobs per user, job size, maximum walltime)||Only for researchers who do not have access to another account on Great Lakes. Resources are shared by researchers College-wide, so there may frequently be waits for jobs to start.|
|Department or multiple-group account||$ (paid for by department or cooperating research groups)||••||Optional (up to purchaser)||Resources are shared only between researchers within the department or groups; whether a job waits to start depends on how the account has been sized relative to the needs of the researchers.|
|Individual or research-group account||$ (paid for by researcher)||•||Optional (up to purchaser)||Resources are shared only with whoever is specified as an authorized user by the account owner. This could be a single individual, several collaborators, or a larger research group.|
|Hardware purchase||$ (paid for by researcher)||•||Optional (up to purchaser)||Typically used with external grants that require the purchase of computing hardware rather than services. Researchers purchase specific hardware for their exclusive use for 4 years. Custom hardware configurations (e.g., amount of memory per node) are possible when approved by ARC-TS.|
The College of Literature, Science, and the Arts provides three public Great Lakes accounts to LSA researchers at no cost. A researcher can use one of the public accounts if they do not have access to another account on Great Lakes.
|Account name||Great Lakes partition||Size||Usage limits|
|lsa1||standard||120 cores and 600 GB memory||Size (maximums, per person): up to 24 cores and 120 GB RAM. Runtime: maximum 4 core*months remaining across all running jobs (per person).|
|lsa2||gpu||2 GPUs||Only for jobs that require use of a GPU. Size: 1 GPU, 2 cores, 10 GB RAM per person. Walltime: maximum 1 day per job.|
|lsa3||largemem||36 cores and 180 GB memory||Only for jobs that require more memory or cores per node than possible under lsa1. Size: 36 cores / 180 GB per person. Walltime: maximum 1 week per job.|
The ARC-TS ThunderX cluster, which provides an Apache Spark/Hadoop stack, is also available to everyone in LSA as a no-cost pilot platform. More information is available on the ARC-TS ThunderX page.
Uses of these accounts include, but are not limited to:
- Running jobs for individual researchers who fit within the usage limits for the LSA accounts, particularly for individual researchers or students who do not have access to funding that could be used to purchase their own Great Lakes account (for example, graduate students and undergraduates doing their own research).
- Testing Great Lakes to determine whether to set up a Great Lakes paid account. (Note that PIs can also request a one-time two-week trial account for this purpose by contacting firstname.lastname@example.org; trial accounts are 16 cores but are for the exclusive use of the PI’s research group.)
- Experimentation and exploration on an ad hoc basis of questions not necessarily tied to any particular research project, without needing to obtain funding and provisioning of a Great Lakes paid account first.
The LSA public accounts are not intended to replace or supplement other Great Lakes accounts. Research groups who need more computation than is provided under the public account usage limits, or who need their jobs to start more quickly than under the public accounts, should obtain their own Great Lakes paid accounts. Shared accounts can also be obtained for use by multiple research groups across departments, centers, institutes, or other units. Graduate students in particular may want to use Rackham Graduate Student Research Grants to provision their own, private Great Lakes paid accounts.
The LSA public accounts (lsa1, lsa2, lsa3) are not meant for use by anyone who has their own Great Lakes account, nor by those who have access to another shared account such as a departmental account or an account for their center or institute.
LSA has imposed additional usage limits on its public accounts in order to avoid a single user (or a small group of users) monopolizing the accounts for extended periods of time to the detriment of other researchers who want to use the accounts.
LSA HPC support staff will periodically monitor jobs which are running under the LSA public accounts. Users who have running jobs in contradiction with the usage limits or policies will receive an email asking them to remove any inappropriate jobs. Users who receive four or more such emails within 120 days may be temporarily or permanently removed from the accounts, at the discretion of LSA HPC support staff.
To check the list of accounts you have access to, run my_accounts on a Great Lakes login node. If you are listed on no accounts other than lsa1, lsa2, and lsa3, then you are eligible to use them within the limits specified.
Each user of lsa1 can use up to 24 cores or up to 120 GB of memory across all of their running jobs at any point in time.
Additionally, individual users are restricted to having no more than 4 core*months (2,880 core*hours) worth of jobs running at any one time. This limit is calculated by summing the product of the remaining walltime and the number of cores for all of a given user’s running jobs, as shown by the command “squeue -u $USER -t r”. 4 core*months are sufficient to run a 4-core job for 30 days, an 8-core job for 15 days, a 16-core job for 7 days, and many other combinations.
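The per-user limit can also be estimated by hand: multiply each running job's core count by its remaining walltime, and sum the results. The following shell sketch does this with awk, assuming squeue's "%C" (allocated CPUs) and "%L" (remaining walltime) format fields:

```shell
# Sum remaining core*hours across your running jobs (sketch).
# squeue prints %L as [days-]hh:mm:ss (or mm:ss); convert each to hours.
sum_core_hours='
{
  n = split($2, t, /[-:]/)
  if (n == 4)      hours = t[1]*24 + t[2] + t[3]/60 + t[4]/3600
  else if (n == 3) hours = t[1] + t[2]/60 + t[3]/3600
  else             hours = t[1]/60 + t[2]/3600
  total += $1 * hours
}
END { printf "%.0f core*hours (limit: 2880)\n", total }'

# On a Great Lakes login node:
if command -v squeue >/dev/null; then
  squeue -u "$USER" -t running -h -o "%C %L" | awk "$sum_core_hours"
fi
```

For example, a single 4-core job with 30 days of walltime remaining contributes 4 * 720 = 2,880 core*hours, exactly the limit.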
Each user of lsa2 can run one job at a time, using a single GPU, up to 2 cores, and up to 10 GB RAM, for a maximum of 1 day (24 hours). Use of lsa2 is restricted to jobs that require use of a GPU.
The lsa3 account is for running jobs on the largemem partition; there is a per-user limit of 36 cores and 180 GB of memory.
The requested walltime for each job under lsa3 must be no more than 1 week (168 hours). This permits a single researcher to use the full lsa3 account (all 36 cores / 180 GB RAM) in a single job, but it can also result in very long waits for jobs to start. Researchers who need jobs to start more quickly should either open their own paid account, use lsa1 (if they need 120 GB RAM or less), or use XSEDE.
lsa3 is restricted to jobs that require more memory or more cores per node than is possible under lsa1.
Run the command my_accounts on a Great Lakes login node. If you see the LSA public account names listed, and no other accounts listed, then you are able to run jobs under them.
If you are a member of LSA but the public account names do not show up in the my_accounts output for your Great Lakes user login, please contact email@example.com and ask to be added to the accounts.
To run jobs under lsa1, include the following in your Slurm script:
#SBATCH --account=lsa1
#SBATCH --partition=standard
To run jobs under lsa2, include the following in your Slurm script:
#SBATCH --account=lsa2
#SBATCH --partition=gpu
To run jobs under lsa3, include the following in your Slurm script:
#SBATCH --account=lsa3
#SBATCH --partition=largemem
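Putting these directives together, a complete minimal batch script for lsa1 might look like the following sketch. The job name, resource sizes, and command are illustrative assumptions, chosen to stay within the per-user limits described above:

```shell
# Write a minimal example job script for lsa1 (sketch; the job name,
# resource sizes, and workload are illustrative assumptions).
cat > lsa1-example.sbat <<'EOF'
#!/bin/bash
#SBATCH --job-name=lsa1-example
#SBATCH --account=lsa1
#SBATCH --partition=standard
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --mem=20g
#SBATCH --time=1-00:00:00
srun hostname
EOF

# Sanity-check the shell syntax, then submit from a login node with:
#   sbatch lsa1-example.sbat
bash -n lsa1-example.sbat
```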
For more information about Slurm scripts, see the Slurm User Guide.
Run the following commands on a Great Lakes login node to see what jobs are running or pending under lsa1 (replace “lsa1” with “lsa2” or “lsa3” as desired):
squeue -A lsa1 -t running
squeue -A lsa1 -t pending
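To gauge how busy all three accounts are before submitting, the same squeue queries can be wrapped in a small loop. This is a sketch, not an official tool; queue_summary is a hypothetical helper name:

```shell
# Count running and pending jobs under each LSA public account (sketch).
queue_summary() {
  local acct running pending
  for acct in lsa1 lsa2 lsa3; do
    running=$(squeue -A "$acct" -t running -h | wc -l | tr -d ' ')
    pending=$(squeue -A "$acct" -t pending -h | wc -l | tr -d ' ')
    echo "$acct: $running running, $pending pending"
  done
}

# On a Great Lakes login node:
#   queue_summary
```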
Because the LSA accounts are public resources, they can often be in high demand, resulting in jobs taking hours or even days to start, even if each individual user is under the usage limits. Options for getting jobs to start more quickly include:
- Request a Great Lakes paid account. LSA provides cost-sharing for Great Lakes accounts for qualified LSA faculty, students, and staff. Graduate students are encouraged to apply for Rackham Graduate Student Research Grants which may be (relatively) quick and easy to obtain.
- Ask your department, center, or institute about the possibility of a shared account funded by department discretionary funds, individual researcher contributions, or other sources.
- PIs can apply for a one-time trial account on Great Lakes that lasts two weeks; contact firstname.lastname@example.org for more information.
- Use XSEDE. XSEDE has relatively large (up to 200,000 service unit) startup allocations that are fairly easy to obtain, requiring a CV and 1-page description of the research and how the research will utilize XSEDE. Research allocation requests are reviewed four times a year and are awarded based on the results shown from the startup and previous research allocations; research allocations can be larger than startup allocations. For more information, contact email@example.com.
The usage limits for the LSA public accounts are automatically enforced wherever possible. In rare cases when this does not occur, exceptions which are identified will be handled by LSA HPC staff.
A variety of options are available to manage your usage:
- You can submit a large number of jobs at once, but use Slurm job dependencies to divide the jobs into smaller groups, so that each group is under the usage limit for the account.
- If you are using Slurm job arrays, you can specify a “slot limit” to restrict how many of the individual jobs in the array run simultaneously. Array jobs are particularly useful if you are doing parameter sweeps or otherwise running the same code many times with different input files or parameters. To use Slurm job arrays with slot limits, add a percent sign followed by the slot limit to the end of the job array specification in your Slurm script. For example, “#SBATCH --array=1-100%4” will submit the job 100 times, but will ensure that only four of them are running at any point in time.
- Submit only a few jobs at a time, staying under the usage limit for concurrently running jobs. Wait for the jobs to complete before submitting additional jobs.
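The job-dependency approach above can be sketched as a small shell helper that chains submissions so each job starts only after the previous one ends (Slurm's “afterany” dependency). Here job.sbat is a hypothetical batch script, submit_chain is an illustrative helper name, and --parsable makes sbatch print just the job ID:

```shell
# Chain N submissions of one script with afterany dependencies (sketch).
submit_chain() {
  local script=$1 count=$2 prev="" jid i
  for i in $(seq "$count"); do
    if [ -z "$prev" ]; then
      jid=$(sbatch --parsable "$script")
    else
      jid=$(sbatch --parsable --dependency=afterany:"$prev" "$script")
    fi
    echo "submitted job $jid (starts after: ${prev:-nothing})"
    prev=$jid
  done
}

# On a Great Lakes login node:
#   submit_chain job.sbat 3
```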
Yes. You can send email to firstname.lastname@example.org and ask to be removed from the other accounts in order to become eligible to use one or more of the LSA public accounts.
Please send any questions or requests to email@example.com.