The national HPC resource known as XSEDE has now fully transitioned to Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS).
ACCESS is an open scientific discovery infrastructure combining leadership class resources at partner sites across the United States to create an integrated, persistent computational resource. The National Science Foundation’s ACCESS program builds upon the successes of the 11-year XSEDE project, while also expanding the ecosystem with capabilities for new modes of research and further democratizing participation.
ARC staff is here to answer questions and to help you take advantage of the national resources available through the ACCESS program. Contact arc-support@umich.edu for assistance from ARC staff, Shelly Johnson and Todd Raeker, or visit the ACCESS website for more information.
OVERVIEW This workshop will cover some more advanced topics in computing on the U-M Great Lakes Cluster. Topics to be covered include a review of common parallel programming models and…
Audience: Those who have never used Stata before but wish to learn. By the end of the workshop, participants will be able to: Work with Stata, including using Do-files and…
This workshop is an introduction to GPU programming for scientific and engineering applications. The basics of GPU architecture will be presented. Parallel programming strategies will be discussed, followed by actual…
OVERVIEW This course will familiarize the student with the basics of accessing and interacting with Linux computers using the GNU/Linux operating system’s Bash shell, also generically referred to as “the…
The Great Lakes Slurm cluster is a campus-wide computing cluster that serves the broad needs of researchers across the university. The Great Lakes HPC Cluster has replaced Flux, the shared research computing cluster that served over 300 research projects and 2,500 active users.
The Great Lakes HPC Cluster is available to all researchers on campus for simulation, modeling, machine learning, data science, genomics, and more. The platform provides a balanced combination of computing power, I/O performance, storage capability, and accelerators.
Based on extensive input from faculty and other stakeholders across campus, the Great Lakes HPC Cluster is designed to deliver similar services and capabilities as Flux, including access to GPUs and large-memory nodes and improved support for emerging uses such as machine learning and genomics. The Great Lakes HPC Cluster consists of approximately 13,000 cores.
U-M Research Computing Package
The University of Michigan Research Computing Package (UMRCP) is an investment into the U-M research community via simple, dependable access to several ITS-provided high-performance computing clusters and data storage resources. CPU credits are allocated on the Great Lakes cluster and can be used for standard, larger memory, or GPU resources.
Grant Support
See the Grant Resources page for information regarding grant proposals using the Great Lakes HPC Cluster.
LSA-specific Information
See the LSA funding page for information on funding courses at the College of Literature, Science, and the Arts. LSA researchers who do not have access to any other account may be eligible to use the accounts provided centrally by LSA. The usage policy and restrictions on these accounts are described in detail on LSA’s public Great Lakes accounts page.
LSA increased their cost-sharing for the rest of the 2021-2022 fiscal year. Read about the details here.
Questions about access or use of these accounts should be sent to arc-support@umich.edu.
OVERVIEW This workshop will cover some more advanced topics in computing on the U-M Great Lakes Cluster. Topics to be covered include a review of common parallel programming models and…
OVERVIEW This workshop will introduce you to high performance computing on the Great Lakes cluster. After a brief overview of the components of the cluster and the resources available there, the main…
OVERVIEW This workshop will cover some more advanced topics in computing on the U-M Great Lakes Cluster. Topics to be covered include a review of common parallel programming models and…
If you would like to create a Great Lakes Cluster account or have any questions, contact arc-support@umich.edu with lists of users, admins, and a Shortcode. UMRCP accounts are also available to eligible researchers. For more information, please visit our UMRCP page.
The 2021-22 rates for the Lighthouse Cluster have been approved. These rates represent cost recovery for the Lighthouse Cluster, and do not include any support your unit may choose to provide.
The Lighthouse cluster is meant to support researchers with grants that require the purchase of computing hardware or require hardware configurations not available on Great Lakes. Lighthouse allows researchers to work with ARC to purchase their own hardware and have it hosted within the Lighthouse Slurm environment, with ARC providing the data center, staff, networking, storage, and software. More information is available on the Lighthouse webpage. In addition to the hardware purchase cost, there is a monthly charge to support the hardware in the Lighthouse environment.
Lighthouse Service Cost
$743.07/node/year ($61.92/node/month)
College of Literature, Science, & the Arts Lighthouse Support
To support research using high performance computing, the College of Literature, Science, and the Arts (LSA) will cost-share the monthly operating charge for researchers paying on LSA unit shortcodes. For fiscal years 2022-23 and 2023-24, LSA will provide an 88% subsidy on the monthly service charge. The table below provides an estimate of Lighthouse operating expenses using published ITS rates for FY2021-22. The rates for LSA researchers will change, and the table will be updated, if ITS rates change. LSA does not anticipate continuing the subsidy for Lighthouse nodes beyond FY2023-24 due to the small number of people across U-M and LSA who use the service. Please plan accordingly for the change in cost that would occur at the start of FY2025 if you continue to use Lighthouse.
Due to maintenance, the high-performance computing (HPC) clusters and their storage systems (/home and /scratch) will be unavailable:
Great Lakes: Monday, August 8 to Wednesday, August 10
Turbo: Monday, August 8 to Wednesday, August 10
Armis2 and Lighthouse: Tuesday, August 9 to Wednesday, August 10
Jobs that cannot be completed before the start of maintenance will not run.
Great Lakes maintenance begins on Monday, August 8.
Armis2 and Lighthouse maintenance begins on Tuesday, August 9.
Prior to maintenance, make a copy of any files in /home or /scratch that you might need during the outage, since those file systems will not be available during maintenance. Copy them to your local drive using Globus File Transfer or another file transfer method (see the User Guide for the cluster).
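As a minimal sketch, the command below copies a directory out of /home to your local machine before maintenance. The uniqname and paths are placeholders, and the hostname shown is the usual Great Lakes login alias (confirm it in the cluster User Guide); for large transfers, Globus File Transfer is recommended.
# Run from your local machine (not the cluster). "uniqname" and "projectdata" are placeholders.
scp -r uniqname@greatlakes.arc-ts.umich.edu:/home/uniqname/projectdata ./projectdata-backup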
All running and queued jobs will be deleted at the start of maintenance.
ARC recommends that you test your code and workflows on the preview clusters before maintenance if you can.
Testing is an important component of the maintenance process. Discovering broken code early will ensure that there is enough time to put a fix in place so that your research is not disrupted.
Module names and the available versions of installed software have changed, so we recommend that you log in to check on the availability of software you are accustomed to using. It may be important to determine whether existing code will run with the versions that will be available after maintenance.
SPECIAL NOTICE: If you plan to test your software and/or workflows on the preview clusters prior to maintenance, be aware that directories under /home and /nfs on all clusters are shared between the preview and current production clusters.
Because directories are shared, we highly recommend you test using copies of data and programs.
Software in the following categories is especially susceptible to version changes.
Python users: Python versions will be upgraded. Check that your Python programs run and reinstall any libraries you need against the newer versions (see the example commands below). Note that Python version 2.x is no longer provided. Additional details are on the Python webpage on the ARC website.
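As a rough sketch, the commands below show one way to check your Python environment after the upgrade. The module and package names are illustrative only; run "module avail python" to see what is actually installed on your cluster.
# Illustrative only: confirm which Python modules exist, load one, and check the version.
module avail python
module load python3.9-anaconda        # placeholder module name
python --version                      # should report a Python 3 interpreter
pip install --user --upgrade numpy    # reinstall user-level libraries against the new Python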
No GPU nodes are available on the preview cluster. We recommend testing on Great Lakes or using one of your team’s nodes (see the next bullet).
If you would like to test your codes on one or more of your compute nodes, email arc-support@umich.edu and we can migrate those nodes to the Lighthouse preview cluster for your team’s testing.
Limits
1 running job per user
Standard jobs: 6-hour maximum job time, 18 CPUs, 90 GB memory (see the example job script after this list)
100 hours of compute for the root account (please only use this for testing)
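Below is a minimal sketch of a batch script that stays within the preview-cluster limits listed above (one running job, 6 hours, 18 CPUs, 90 GB memory). The account name, module, and program are placeholders, not part of the published limits.
#!/bin/bash
# Placeholder values: example_account, my_test_program.
#SBATCH --job-name=preview-test
#SBATCH --account=example_account
#SBATCH --partition=standard
#SBATCH --time=06:00:00
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=18
#SBATCH --mem=90g

module load gcc              # load whatever toolchain your code normally uses
srun ./my_test_program       # replace with your own code or workflow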
Attend a virtual, drop-in office hour session (CoderSpaces) to get hands-on help from experts, available 9:30-11 a.m. and 2-3:30 p.m. on Tuesdays and Thursdays.
Changes
This maintenance period will include significant updates including a change to the operating system, OFED drivers, NVIDIA drivers, and software and Slurm versions on all three clusters: Great Lakes, Armis2, and Lighthouse. See details below.
System software changes
New version (old version in parentheses):
Operating system: Red Hat 8.4 (was CentOS 7.9)
Kernel: 4.18.0-305.45.1.el8_4.x86_64 (was 3.10.0-1160.45.1)
glibc: 2.28-151 (was 2.17-325.el7_9)
ucx: 1.12.0-1.55103, OFED provided (was 1.11.1-1.54303)
gcc: 8.4.1-1.el8 (was 4.8.5-44.el7)
Mlnx-ofa_kernel-modules: OFED 5.5.1.0.3.1, kver.4.18.0_305.25.1.el8_4 (was OFED 5.4-3.0.3.0, kver.3.10.0-1160.45.1)
Slurm: 21.08.8-2 (was 21.08.4), compiled with:
PMIx /opt/pmix/2.2.5, /opt/pmix/3.2.3, and /opt/pmix/4.1.2 (was 2.2.5, 3.2.3, and 4.1.0)
hwloc 2.2.0-1, OS provided (was 1.11.8-4)
ucx 1.12.0-1.55103, OFED provided (was 1.11.1)
slurmrestd, slurm-libpmi, slurm-contribs (unchanged)
PMIx LD config: /opt/pmix/2.2.5/lib (unchanged)
PMIx versions available in /opt: 2.2.5, 3.2.3, 4.1.2 (was 2.2.5, 3.2.3, 4.1.0)
Singularity CE (Sylabs.io): 3.9.8 (was Singularity 3.7.3 and 3.8.4)
NVIDIA driver: 510.73.08 (was 495.44)
Open OnDemand: 2.0.23-1 (was 2.0.20)
User software changes
New software versions: Python 3.6.8 is system provided; ARC will provide newer versions via modules.
Deprecated versions: Python 2.x.
FAQ
Q: When is summer maintenance?
A: Summer 2022 maintenance is happening August 8-10. The high-performance computing (HPC) clusters and their storage systems (/home and /scratch) will be unavailable:
Great Lakes: Monday, August 8 to Wednesday, August 10
Armis2 and Lighthouse: Tuesday, August 9 to Wednesday, August 10
Q: How should I prepare for the summer 2022 maintenance?
A: There are a number of actions you can take ahead of maintenance, including:
Use the preview clusters to test the code.
Make a copy of any files in /home or /scratch that you might need during maintenance to your local drive prior to maintenance, using Globus File Transfer or another file transfer method (see the User Guide for the cluster).
Q: Can I run any of my jobs during maintenance?
A: No. You can submit your jobs at any time, but jobs that did not complete prior to maintenance will need to be resubmitted.
Q: Will I have access to the clusters during the maintenance?
A: No. The clusters and their storage systems will be unavailable during maintenance, including files, jobs, and the command line.
Q: Will there be any changes to my jobs after maintenance?
A: Possibly. Module names and software versions will change, so use the preview cluster to recompile and test your code. If you did not get a chance to recompile or test on the preview cluster prior to maintenance, you may need to recompile your code afterward.
Q: Where can I get help?
A: You can send an email to arc-support@umich.edu. Or attend a virtual, drop-in office hour session (CoderSpaces) to get hands-on help from experts, available 9:30-11 a.m. and 2-3:30 p.m. on Tuesdays and Thursdays.
Q: How do I access the preview clusters?
Great Lakes:
Command line: ssh gl8-login.arc-ts.umich.edu
Open OnDemand: https://greatlakes8.arc-ts.umich.edu
Armis2:
Command line: ssh a28-login.arc-ts.umich.edu
Open OnDemand: https://armis28.arc-ts.umich.edu
Lighthouse:
Command line: ssh lh8-login.arc-ts.umich.edu
Open OnDemand: https://lighthouse8.arc-ts.umich.edu
Q: What are the technical changes happening during maintenance?
A: See the chart above.
Q: Will Python version 2.x work after maintenance?
A: No. Python 2.x will no longer be provided. The system-provided Python will be version 3.6.8, and ARC will provide newer versions via modules. Be sure to test and/or update your processes, codes, and/or libraries.
The University of Michigan Research Computing Package (UMRCP), provided by ITS, is an investment into the U-M research community via simple, dependable access to several ITS-provided high-performance computing clusters and data storage resources.
Current UMRCP allocation holders: Check your UMRCP credits in the ARC Resource Management Portal (RMP). Use your remaining credits by June 30 because they will not roll over. Automatic renewals for most begin July 1. Watch your inbox for additional information.
Visit the ARC Resource Management Portal to request UMRCP services (you will need to log in with your UMICH [Level-1] credentials and Duo Two-Factor authentication). To ensure that we can best meet your needs, it is important that researchers sign up directly for the allocations. It is not possible at this time to sign up on behalf of someone else. Please note that it may take up to one week to fulfill your request.
The U-M Research Computing Package was developed by ITS to meet needs across a diversity of disciplines and to provide options for long-term data management, sharing and protecting sensitive data, and more competitive cost structures that give faculty and research teams more flexibility to procure resources on short notice.
ALLOCATIONS PROVIDED BY ITS
The U-M Research Computing Package provides:
80,000 CPU Hours of high-performance computing
CPU credits are allocated on the Great Lakes cluster or the Armis2 protected data cluster; they can be used for standard, larger-memory, or GPU resources and are renewed every July. Please note: if you have an existing high-performance computing account, it cannot be included under the UMRCP; any request for computing hours through the UMRCP requires the creation of a new account.
10 TB of replicated, high-performance storage
Capacity can be split across sensitive and non-sensitive science data. The Turbo Storage can be used on many ARC/ITS, unit, Globus.org, and laboratory systems.
100 TB of archive storage
Replicated, durable storage for massive data volumes. The Data Den system also supports reliable data transfer and international data sharing via Globus.org.
The U-M Research Computing Package provides base allocations that meet the needs of approximately 75% of current high-performance computing users and 90% of current research storage users. These allocations are at no cost to researchers, thanks to an additional investment from ITS.
Any services that are needed beyond the base allocation provided by the UMRCP are available at significantly reduced rates (the average rate reduction is 40%). The rates are applied automatically. View the rates page for more information.
ELIGIBILITY
The U-M Research Computing Package is available to all researchers on all campuses (Ann Arbor, Dearborn, Flint, and Michigan Medicine), including:
Faculty and Staff
All professors and lecturers with an active appointment
Emeritus faculty
Guest faculty who are visiting for one to two years
Principal investigators (e.g., research scientists) on funded research activities
Postdocs who have their own research grant
PhD Students
PhD students may qualify for their own UMRCP resources depending on who is overseeing their research and their advisor relationship. Students should consult with their PhD program administrator to determine their eligibility. ITS will confirm this status when a UMRCP request is submitted.
Undergraduates and Masters Students
Undergraduate and Masters students do not currently qualify for their own UMRCP. They can be added as users or administrators of another person’s UMRCP. Students can also access other ITS programs such as Great Lakes for Course Accounts, and Student Teams.
GETTING STARTED
Sign-up anytime for no-cost allocations.
Be sure to use all of your allocations – they expire on June 30 each year.
Visit the ARC Resource Management Portal to request UMRCP services (you will need to log in with your UMICH [Level-1] credentials and Duo Two-Factor authentication). To ensure that we can best meet your needs, it is important that researchers sign up directly for the allocations. It is not possible at this time to sign up on behalf of someone else. Please note that it may take up to one week to fulfill your request.
Any services that are needed beyond the base allocation provided by the UMRCP are available at significantly reduced rates (the average rate reduction is 40%). The rates are applied automatically. View the rates page for more information.
Automatic renewals begin July 1
Faculty members and PhD students with their own funding who already have the UMRCP will have their no-cost allocations automatically renewed on July 1. No action is needed; however, if you need help or have questions, contact the ARC Help Desk at arc-support@umich.edu.
GET HELP
Meet with ARC: ARC leadership can meet with your unit to discuss unit-specific questions. Send an email to arc-support@umich.edu to get started.
Watch how-to demo videos: These videos are available on the ARC Channel of the MiVideo website. Topics include an introduction to machine learning, data sharing and archiving, and more. UMICH (Level-1) login required.
I already have some of the ARC services listed. Do I still need to apply?
Yes. Existing ARC resources are not migrated to UMRCP support until the leader of the resources applies. Once the UMRCP is created, existing resources can be migrated to it without moving data or changing workflows.
Can I sign someone else up for the UMRCP?
No. To ensure that we can best meet your needs, it is important that researchers sign up directly for the allocations. It is not possible at this time to sign up on behalf of someone else.
I have multiple students/collaborators in my research team. Can they share resources?
Yes. All UMRCP and ARC resources are designed to be collaborative and multidisciplinary. UMRCP recipients can delegate both administrative and user access to anyone with a valid campus uniqname.
I used up / need more resources than the UMRCP provides. Can I get more?
Yes. Any services needed beyond the base allocation provided by the UMRCP are available at significantly reduced rates, which are applied automatically. View the rates page for more information.
I’m a faculty member who already has the UMRCP. How do I renew?
No action is required. Your no-cost allocations automatically renew on July 1.
I’m a PhD student with my own funded research. How do I renew my UMRCP?
No action is required. Your no-cost allocations automatically renew on July 1.
I’m a PhD, Master’s, or undergraduate student, and my advisor has left the university. I am continuing my research to finish my dissertation. What should I do?
If your faculty advisor has left the university (or is about to), but you remain: an alternative administrator (a staff or faculty member who can verify the ongoing need for the allocation while wrapping up activities related to the former research group) must be assigned to the account so that the allocations can continue uninterrupted. See “How do I delegate administrative management of my UMRCP or other ARC resources?” below.
I’m an active faculty member but received a UMRCP off boarding message warning that my resources won’t be renewed. What do I need to do to keep them?
Most likely your MCommunity entry is private or lists another status (e.g., Retired/Emeritus), which makes it difficult to verify your active research status. You can reply to the automated message or contact arc-support@umich.edu to confirm your active status and renew your resources for the upcoming year.
How do I make changes if I have already submitted the UMRCP request form?
If you have already submitted the UMRCP request form and you want to modify it, open a ticket by sending an email to ARC Support, arc-support@umich.edu. Be sure to include the name of your project.
How do I delegate administrative management of my UMRCP or other ARC resources?
Visit the ARC Resource Management Portal. On your project overview page, look for Project Admins. Select the Add button and add a trusted university member by entering their uniqname. This is helpful when the project owner is leaving the university but the resources are still needed for students to finish their dissertation work, as the newly delegated admin can confirm the ongoing need each June for UMRCP renewal. Admins can make any change to your project, so be sure to only add members you trust.
The ARC-TS 2020 winter maintenance will start on March 9, 2020. You can read more about it here. https://arc-ts.umich.edu/event/winter-2020-maintenance/ Great Lakes, Lighthouse, and Armis2…
Billing for Great Lakes will begin on January 6, 2020 at 9am. Rates for using Great Lakes can be found here: https://arc-ts.umich.edu/greatlakes/rates . No…
All HPC accounts, users, and workflows must be migrated to either Great Lakes (Standard, GPU, Large-Memory, and On-demand Flux) or Lighthouse (Flux Operating Environment nodes)…
The ARC-TS annual summer maintenance will begin. The details, services impacted, and length of the maintenance will be determined when we get closer to the date. …
Great Lakes primary installation complete and ARC-TS begins loading and configuration
May 30, 2019
The Great Lakes installation is primarily complete other than waiting for final HDR firmware testing for the new InfiniBand system. The current InfiniBand system is…
The Beta HPC cluster was introduced to enable HPC users to begin migrating their Torque job scripts to Slurm and test their workflows on a Slurm-based…
Dell, Mellanox, and DDN will be delivering and installing the hardware for the new Great Lakes service. These teams will be working alongside the…
Billing for the Great Lakes service began on January 6, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes Cluster. Complete this form to get a new Great Lakes cluster login.
If you would like to create a Great Lakes Cluster account or have any questions, contact arcts-support@umich.edu with lists of users, admins, and a shortcode. Trial accounts are also available for new PIs.
Advanced Research Computing (ARC), a division of ITS, is pleased to offer a pilot called Scientific Computing and Research Consulting Services to help researchers implement data analytics and workflows within their research projects. This includes navigating technical resources like high-performance computing and storage.
The ARC Scientific Computing and Research Consulting Services team will be your guide to navigating the complex technical world: from implementing intensive data projects, to teaching you how the technical systems work, to assisting in identifying the proper tools, to guiding you on how to hire a programmer.
If you have any questions or wish to set up a consultation, please contact us at arc-consulting@umich.edu. Be sure to include as much information as possible from the “Get started” section noted above.
If you have more general questions about ARC services or software, please contact us at arc-support@umich.edu.
We are available to assist researchers along the entire lifecycle of the data workflow, from the conceptual stage to ingest, preprocessing, cleansing, and storage solutions. We can advise in the following areas:
Establishing and troubleshooting dataflows between systems
Selecting the appropriate systems for short-term and long-term storage
Transformation of raw data into structured formats
Data deduplication and cleansing
Conversion of data between different formats to aid in analysis
Automation of dataflow tasks
Analytics
The data science consulting team can assist with data analytics to support research:
Choosing the appropriate tools and techniques for performing analysis
Development of data analytics in a variety of frameworks
Cloud-based (Hadoop) analytic development
Machine Learning
Machine learning is an application of artificial intelligence (AI) that focuses on the development of computer programs to learn information from data.
We are available to consult on the following, including a general overview of concepts, discussion of which tools and architectures best fit your needs, and technical support on implementation.
Languages: Python, C++, Java, Matlab
Tools/Architectures: Python data tools (scikit, numpy, etc.), TensorFlow, Jupyter notebooks
Models: Neural networks, decision trees, support vector machines
Programming
We also provide consulting on programming in a variety of programming languages (including but not limited to: C++, Java, and Python) to support your data science needs. We can assist in algorithm design and implementation, as well as optimizing and parallelizing code to efficiently utilize high performance computing (HPC) resources where possible/necessary. We can help identify available commercial and open-source software packages to simplify your data analysis.
The Great Lakes HPC Cluster is the university-wide, shared computational discovery and high-performance computing (HPC) service. It is designed to support both compute- and data-intensive research.
Great Lakes is available without charge for student teams and organizations that need HPC resources. This program aims to give students access to high-performance computing to enhance their team’s mission; it is not meant to be used for faculty-led research. Jobs submitted from a student team account will have lower priority and will run when sufficient resources are available. Currently, we are limiting the resources available to each team (see below), but in the future we expect to increase the available resources. If your team or organization needs more resources or access to the large-memory nodes, an administrator for your team can also request a paid Slurm account, which will not have any of the restrictions mentioned below and can work in conjunction with the no-cost team account.
Access
Your student team/organization must be registered as a Sponsored Student Organization (SSO). If your team is an SSO and would like an account on Great Lakes, please have a sponsoring administrator email us at arcts-support@umich.edu with your team name, the uniqnames of the users who are allowed to use the account, and a list of the uniqnames of the account administrators who can make decisions about the account.
If you are a member of a team or organization that isn’t registered as an SSO, please email us at arc-support@umich.edu with the details of your organization and what your HPC needs are.
Limits
Once your account is created, your team will be able to use up to 100 CPUs in the standard partition and 1 GPU in the gpu partition. Jobs will be limited to 24 hours; a sketch of a job script that fits within these limits is shown below. If members of your team do not have Linux experience at the command line, they can also use Great Lakes through their browser via Open OnDemand.
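For illustration, here is a minimal sketch of a GPU batch script that fits within the student-team limits above (1 GPU in the gpu partition, 24-hour maximum). The account name, module, and program are placeholders for your team’s actual values.
#!/bin/bash
# Placeholder values: studentteam0, train_model.
#SBATCH --job-name=team-gpu-job
#SBATCH --account=studentteam0
#SBATCH --partition=gpu
#SBATCH --gpus=1
#SBATCH --time=24:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem=16g

module load cuda             # module name may differ; run "module avail cuda" to check
srun ./train_model           # replace with your team's application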
Billing for the Great Lakes service began on January 6, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes Cluster. Complete this form to get a new Great Lakes cluster login.
If you would like to create a Great Lakes Cluster account or have any questions, contact arc-support@umich.edu with lists of users, admins, and a shortcode. Trial accounts are also available for new PIs.
LSA researchers who are leading courses may have course accounts set up for them through ARC. ARC offers a stipend of $60.91 per student, which can be applied to the account as a whole or at the user level. For more information about course accounts, please visit this page.
If you have any questions regarding course accounts please send a note to arc-support@umich.edu.
If you would like to create a Great Lakes Cluster account or have any questions, contact arc-support@umich.edu with lists of users, admins, and a Shortcode. You may also be eligible for the University of Michigan Research Computing Package (UMRCP). For more information about the UMRCP and instructions on signing up please visit this page.