Great Lakes

Part of UMRCP provided by ITS

The Great Lakes Slurm cluster is a campus-wide computing cluster that serves the broad needs of researchers across the university. The Great Lakes HPC Cluster replaced Flux, the shared research computing cluster that served over 300 research projects and 2,500 active users.

The Great Lakes HPC Cluster is available to all researchers on campus for simulation, modeling, machine learning, data science, genomics, and more. The platform provides a balanced combination of computing power, I/O performance, storage capability, and accelerators.

Based on extensive input from faculty and other stakeholders across campus, the Great Lakes HPC Cluster is designed to deliver similar services and capabilities as Flux, including access to GPUs and large-memory nodes and improved support for emerging uses such as machine learning and genomics. The Great Lakes HPC Cluster consists of approximately 13,000 cores.

U-M Research Computing Package

The University of Michigan Research Computing Package (UMRCP) is an investment in the U-M research community via simple, dependable access to several ITS-provided high-performance computing clusters and data storage resources. CPU credits are allocated on the Great Lakes cluster and can be used for standard, larger-memory, or GPU resources.

Grant Support

See the Grant Resources page for information regarding grant proposals using the Great Lakes HPC Cluster.

LSA-specific Information

See the LSA funding page for information on funding courses at the College of Literature, Science, and the Arts. LSA researchers who do not have access to any other account may be eligible to use the accounts provided centrally by LSA. The usage policy and restrictions on these accounts are described in detail on LSA’s public Great Lakes accounts page.

LSA increased its cost-sharing for the rest of the 2021-2022 fiscal year. Read about the details here.

Questions about access or use of these accounts should be sent to arc-support@umich.edu.

Student Teams

See the Great Lakes Student Teams and Organizations page if your team requires HPC resources.

Use Within Courses

Learn how you can use Great Lakes in your course.

To establish a Slurm account for a class, please contact us at arc-support@umich.edu with the following information:

  • Students to be put on the account
  • List of individuals to administer the account
  • Any limits to be placed on either the users or the account as a whole
  • The unit abbreviation and course and section numbers for the course you are leading (e.g., eecs498 section 400)

Please note: all students will need to have a user login to use the account and can request one online.

For technical support, email arc-support@umich.edu.

Upcoming Events

October 5 @ 1:00 pm - 4:00 pm

Advanced Research Computing on the Great Lakes Cluster

OVERVIEW This workshop will cover some more advanced topics in computing on the U-M Great Lakes Cluster. Topics to be covered include a review of common parallel programming models and…

November 2 @ 1:00 pm - 4:00 pm

Advanced Research Computing on the Great Lakes Cluster

OVERVIEW This workshop will cover some more advanced topics in computing on the U-M Great Lakes Cluster. Topics to be covered include a review of common parallel programming models and…

Order Service

Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arc-support@umich.edu with lists of users, admins, and a Shortcode. UMRCP accounts are also available to eligible researchers. For more information, please visit our UMRCP page.

Lighthouse Cluster Rates

The 2021-22 rates for the Lighthouse Cluster have been approved. These rates represent cost recovery for the Lighthouse Cluster, and do not include any support your unit may choose to provide.

The Lighthouse cluster is meant to support researchers with grants that require the purchase of computing hardware or require hardware configurations not available on Great Lakes. Lighthouse allows researchers to work with ARC to purchase their own hardware and have it placed within the Lighthouse Slurm environment, with ARC providing the data center, staff, networking, storage, and software. More information is available on the Lighthouse webpage. In addition to the hardware purchase cost, there is a monthly charge to support the hardware in the Lighthouse environment.

Lighthouse Service Cost $743.07/node/year ($61.92/node/month)

College of Literature, Science, & the Arts Lighthouse Support

To support research using high-performance computing, the College of Literature, Science, and the Arts (LSA) will cost-share the monthly operating charge for researchers paying on LSA unit shortcodes. For fiscal years 2022-23 and 2023-24, LSA will provide an 88% subsidy on the monthly service charge. The table below provides an estimate of Lighthouse operating expenses using published ITS rates for FY2021-22. The rates for LSA researchers will change, and the table will be updated, if ITS rates change. LSA does not anticipate continuing the LSA subsidy for Lighthouse nodes after FY2023 and FY2024 due to the small number of people who use the service across U-M and LSA. Please plan accordingly for the change in cost that would occur at the start of FY2025 if you continue to use Lighthouse.

             ITS Rate / Node     LSA Researcher Rate   LSA / College Subsidy
Annual       $743.07/node/yr     $89.16 (12%)          $653.90 (88%)
Monthly      $61.92/node/month   $6.81 (12%)           $55.11 (88%)

If you have questions, please send email to arc-support@umich.edu.

Order Service

If you have any questions about Lighthouse, contact arc-support@umich.edu.

Summer 2022 Maintenance

Due to maintenance, the high-performance computing (HPC) clusters and their storage systems (/home and /scratch) will be unavailable:

Great Lakes: Monday, August 8 to Wednesday, August 10
Turbo: Monday, August 8 to Wednesday, August 10
Armis2 and Lighthouse: Tuesday, August 9 to Wednesday, August 10 

More detail is available on the ITS website.

Attention

  • Jobs that cannot complete before the start of maintenance will not run. 
    • Great Lakes maintenance begins on Monday, August 8. 
    • Armis2 and Lighthouse maintenance begins on Tuesday, August 9.
  • Prior to maintenance, copy any files in /home or /scratch that you might need during the outage to your local drive using Globus File Transfer or another file-transfer tool (see the User Guide for your cluster); these file systems will not be available during maintenance. A minimal command-line sketch follows this list.
  • All running and queued jobs will be deleted at the start of maintenance.
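
A minimal sketch of one way to make such a copy from the command line (the login host, uniqname, and paths below are placeholders; substitute your own cluster login host and directories):

# Run from your local machine: pull a directory from /home on the cluster into a local backup folder
rsync -avz uniqname@greatlakes.arc-ts.umich.edu:/home/uniqname/myproject/ ~/maintenance-backup/myproject/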

ARC recommends that you test if you can

Testing is an important component of the maintenance process. Discovering broken code early will ensure that there is enough time to put a fix in place so that your research is not disrupted.

Module names and the available versions of installed software have changed, so we recommend that you log in to check on the availability of software you are accustomed to using. It may be important to determine whether existing code will run with the versions that will be available after maintenance.
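
For example, from a login session you can survey the new software stack with the clusters' module commands (a minimal sketch; "python" is just an illustrative package name):

module avail                # list the modules available in the new software stack
module spider python        # list every available version of a specific package
module load python          # load the default version and test your code against it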

SPECIAL NOTICE: If you plan to test your software and/or workflows on the preview clusters prior to maintenance, note that directories under /home and /nfs on all clusters are shared between the preview clusters and the current production clusters.

Because directories are shared, we highly recommend you test using copies of data and programs.
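
A minimal sketch, with placeholder paths:

cp -r /home/uniqname/myproject /home/uniqname/myproject-preview-test    # run preview-cluster tests against the copy, not the original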

Software in the following categories is especially susceptible to version changes.

Accessing the preview clusters

Great Lakes:

Hardware  

  • 10 standard compute nodes (36 cores, 192G memory)
  • 2 standard on-campus nodes
  • 1 GPU node with 3 V100 GPUs
  • The large memory partition is available for testing scripts, but it uses a standard compute node (not an actual large-memory node)
Limits  

  • 1 running job per user
  • Standard jobs: 6 hour max job time, 18 CPUs, 90G memory (a sketch of a conforming job script follows this list)
  • GPU jobs: 2 hour max job time and 1 GPU per root account
  • 100 hours of compute for the root account (please only use this for testing)
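
As a sketch of what a test submission within these preview limits might look like (the account name, module, and program are placeholders, not ARC-provided names):

#!/bin/bash
#SBATCH --job-name=preview-test
#SBATCH --account=test1234          # placeholder; use your own Slurm account
#SBATCH --partition=standard
#SBATCH --time=06:00:00             # preview limit: 6 hours
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=18          # preview limit: 18 CPUs
#SBATCH --mem=90g                   # preview limit: 90G memory

module load gcc                     # placeholder; load whatever your code needs
srun ./my_test_program              # placeholder executable

Submit it from a preview login node with sbatch, e.g. sbatch preview-test.sh.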

Armis2:

Hardware  

  • 10 standard compute nodes (24 cores, 120G memory)
  • 1 GPU node with 4 Titan V GPUs
  • The large memory partition is available for testing scripts, but it uses a standard compute node (not an actual large-memory node)
Limits  

  • 1 running job per user
  • Standard jobs: 6 hour max job time, 18 CPUs, 90G memory
  • GPU jobs: 2 hour max job time and 1 GPU per root account
  • 100 hours of compute for the root account (please only use this for testing)

Lighthouse:

Hardware  

  • 2 standard compute nodes (20 cores, 62G memory)
  • No GPU nodes available. We recommend testing on Great Lakes or using one of your team’s nodes (see next bullet)
  • If you would like to test your codes on one or more of your compute nodes, email arc-support@umich.edu and we can migrate those nodes to the Lighthouse preview cluster for your team’s testing.
Limits  

  • 1 running job per user
  • Standard jobs: 6 hour max job time, 18 CPUs, 90G memory
  • 100 hours of compute for the root account (please only use this for testing)

How to get help 

  • Send an email to arc-support@umich.edu.
  • Attend a virtual, drop-in office hour session (CoderSpaces) to get hands-on help from experts, available 9:30-11 a.m. and 2-3:30 p.m. on Tuesdays and Thursdays.  

Changes

This maintenance period will include significant updates including a change to the operating system, OFED drivers, NVIDIA drivers, and software and Slurm versions on all three clusters: Great Lakes, Armis2, and Lighthouse. See details below. 

System software changes

Each item lists the new version first, then the old version it replaces:

  • Operating system
    • New: Red Hat 8.4 (kernel 4.18.0-305.45.1.el8_4.x86_64, glibc 2.28-151, ucx 1.12.0-1.55103 (OFED provided), gcc-8.4.1-1.el8)
    • Old: CentOS 7.9 (kernel 3.10.0-1160.45.1, glibc 2.17-325.el7_9, ucx 1.11.1-1.54303 (OFED provided), gcc-4.8.5-44.el7)
  • Mlnx-ofa_kernel-modules
    • New: OFED 5.5.1.0.3.1, kver.4.18.0_305.25.1.el8_4
    • Old: OFED 5.4-3.0.3.0, kver.3.10.0-1160.45.1
  • Slurm
    • New: Slurm 21.08.8-2, compiled with PMIx (/opt/pmix/2.2.5, /opt/pmix/3.2.3, /opt/pmix/4.1.2), hwloc 2.2.0-1 (OS provided), ucx 1.12.0-1.55103 (OFED provided), slurmrestd, slurm-libpmi, and slurm-contribs
    • Old: Slurm 21.08.4, compiled with PMIx (/opt/pmix/2.2.5, /opt/pmix/3.2.3, /opt/pmix/4.1.0), hwloc 1.11.8-4 (OS provided), ucx 1.11.1 (OFED provided), slurmrestd, slurm-libpmi, and slurm-contribs
  • PMIx LD config
    • New: /opt/pmix/2.2.5/lib
    • Old: /opt/pmix/2.2.5/lib (unchanged)
  • PMIx versions available in /opt
    • New: 2.2.5, 3.2.3, 4.1.2
    • Old: 2.2.5, 3.2.3, 4.1.0
  • Singularity (Sylabs.io)
    • New: Singularity CE 3.9.8
    • Old: Singularity 3.7.3, 3.8.4
  • NVIDIA driver
    • New: 510.73.08
    • Old: 495.44
  • Open OnDemand
    • New: 2.0.23-1
    • Old: 2.0.20

 

User software changes

  • Python
    • New: version 3.6.8 is system provided; ARC will provide newer versions via modules
    • Deprecated (RIP): Python 2.x

 

FAQ

Q: When is summer maintenance? 

A: Summer 2022 maintenance is happening August 8-10. The high-performance computing (HPC) clusters and their storage systems (/home and /scratch) will be unavailable:

  • Great Lakes: Monday, August 8 to Wednesday, August 10 
  • Armis2 and Lighthouse: Tuesday, August 9 to Wednesday, August 10 

 

Q: How should I prepare for the summer 2022 maintenance? 

A: There are a number of actions you can take ahead of maintenance, including: 

  • Use the preview clusters to test your code. 
  • Prior to maintenance, copy any files in /home or /scratch that you might need during the outage to your local drive using Globus File Transfer or another file-transfer tool (see the User Guide for your cluster).

 

Q: Can I run any of my jobs during maintenance? 

A: No. You can submit jobs at any time, but running and queued jobs that have not completed before maintenance begins will be deleted and will need to be resubmitted.

 

Q: Will I have access to the clusters during the maintenance?

A: No. The clusters and their storage systems will be unavailable during maintenance, including files, jobs, and the command line. 

 

Q: Will there be any changes to my jobs after maintenance?

A: Possibly. Use the preview cluster to recompile and test your code before maintenance; if you were not able to do so, you may need to recompile your code after maintenance.

 

Q: Where can I get help?

A: You can send an email to arc-support@umich.edu. Or attend a virtual, drop-in office hour session (CoderSpaces) to get hands-on help from experts, available 9:30-11 a.m. and 2-3:30 p.m. on Tuesdays and Thursdays. 

 

Q: How do I access the preview clusters? 

Great Lakes:

  • Command line: ssh gl8-login.arc-ts.umich.edu
  • Open OnDemand: https://greatlakes8.arc-ts.umich.edu

Armis2:

  • Command line: ssh a28-login.arc-ts.umich.edu
  • Open OnDemand: https://armis28.arc-ts.umich.edu

Lighthouse:

  • Command line: ssh lh8-login.arc-ts.umich.edu
  • Open OnDemand: https://lighthouse8.arc-ts.umich.edu

 

Q: What are the technical changes happening during maintenance?

A: See the chart above. 

 

Q: Will Python version 2.x work after maintenance?

A: No. ARC is updating Python to version 3.6.8. Be sure to test and/or update your processes, codes, and/or libraries.
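
A minimal sketch for confirming your Python environment after maintenance (the script name is a placeholder; module names for newer versions will vary):

python3 --version           # system-provided interpreter (3.6.8)
module spider python        # list the newer Python versions ARC provides via modules
python3 my_script.py        # re-run your own code under Python 3 to confirm compatibility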

U-M Research Computing Package

The University of Michigan Research Computing Package (UMRCP), provided by ITS, is an investment in the U-M research community via simple, dependable access to several ITS-provided high-performance computing clusters and data storage resources.

Sign-up for the U-M Research Computing Package (UMRCP)

Current UMRCP allocation holders: Check your UMRCP credits in the ARC Resource Management Portal (RMP). Use your remaining credits by June 30 because they will not roll over. Automatic renewals for most begin July 1. Watch your inbox for additional information.

Visit the ARC Resource Management Portal to request UMRCP services (you will need to log in with your UMICH [Level-1] credentials and Duo Two-Factor authentication). To ensure that we can best meet your needs, it is important that researchers sign up directly for the allocations. It is not possible at this time to sign up on behalf of someone else. Please note that it may take up to one week to fulfill your request. 

The U-M Research Computing Package was developed by ITS to meet needs across a diversity of disciplines and to provide options for long-term data management, sharing and protecting sensitive data, and more competitive cost structures that give faculty and research teams more flexibility to procure resources on short notice. 

ALLOCATIONS PROVIDED BY ITS

The U-M Research Computing Package provides:

80,000 CPU Hours of high-performance computing

  • CPU credits are allocated on the Great Lakes cluster or the Armis2 protected data cluster; they can be used for standard, larger-memory, or GPU resources and are renewed every July. Please note: if you have an existing high-performance computing account, it cannot be included under the UMRCP. Any request for computing hours through the UMRCP requires the creation of a new account. 

10 TB of replicated, high-performance storage

  • Capacity can be split across sensitive and non-sensitive science data. Turbo storage can be used with many ARC/ITS, unit, Globus.org, and laboratory systems.

100 TB of archive storage

  • Replicated, durable storage for massive data volumes. The Data Den system also supports reliable data transfer and international data sharing via Globus.org.

Allocations for coursework

REDUCED RATES

The U-M Research Computing Package provides base allocations that meet the needs of approximately 75% of current high-performance computing users and 90% of current research storage users. These allocations are at no cost to researchers, thanks to an additional investment from ITS.

Any services that are needed beyond the base allocation provided by the UMRCP are available at significantly reduced rates (the average rate reduction is 40%). The rates are applied automatically. View the rates page for more information.

ELIGIBILITY

The U-M Research Computing Package is available to all researchers on all campuses (Ann Arbor, Dearborn, Flint, and Michigan Medicine), including:

Faculty and Staff 

  • All professors and lecturers with an active appointment 
  • Emeritus faculty
  • Guest faculty who are visiting for one to two years
  • Principal investigators (e.g., research scientists) on funded research activities
  • PostDocs who have their own research grant 

PhD Students

PhD students may qualify for their own UMRCP resources depending on who is overseeing their research and their advisor relationship. Students should consult with their PhD program administrator to determine their eligibility. ITS will confirm this status when a UMRCP request is submitted.

Undergraduates and Masters Students

Undergraduate and Master’s students do not currently qualify for their own UMRCP. They can be added as users or administrators of another person’s UMRCP. Students can also access other ITS programs such as Great Lakes for Course Accounts and Student Teams.

GETTING STARTED

Sign-up anytime for no-cost allocations.

Be sure to use all of your allocations – they expire on June 30 each year.

Automatic renewals begin July 1

Faculty members and PhD students with their own funding who already have the UMRCP will have their no-cost allocations automatically renewed on July 1. No action is needed; however, if you need help or have questions, contact the ARC Help Desk at arc-support@umich.edu.

GET HELP

Meet with ARC: ARC leadership can meet with your unit to discuss unit-specific questions. Send an email to arc-support@umich.edu to get started.

Take a course about ARC services: ARC offers a number of training sessions geared for researchers on how to use ARC services. View the Teaching and Technology Collaborative website to see which courses are available.

Watch a how-to demo video: These videos are available on the ARC channel of the MiVideo website. Topics include an introduction to machine learning, data sharing and archiving, and more. UMICH (Level-1) login required.

 

FREQUENTLY ASKED QUESTIONS

I already have some of the ARC services listed. Do I still need to apply?

Yes. Existing ARC resources are not migrated to UMRCP support until the owner of the resources applies. Once the UMRCP is created, existing resources can be migrated to it without moving data or changing workflows.

Can I sign someone else up for the UMRCP?

No. To ensure that we can best meet your needs, it is important that researchers sign up directly for the allocations. It is not possible at this time to sign up on behalf of someone else.

I have multiple students/collaborators in my research team. Can they share resources?

Yes. All UMRCP and ARC resources are designed to support collaborative, multidisciplinary work. UMRCP recipients can delegate both administrative and user access to anyone with a valid campus uniqname.

I used up / need more resources than UMRCP provides, can I get more?

Yes. All services have options to purchase additional capacity. Ways to get help are shown on the ARC Help Page; be sure to include the service name and your needs when requesting additional resources. In addition, consumable resources like High Performance Computing hours are replenished each fiscal year.

I’m a faculty member who already has the UMRCP. How do I renew?

No action is required. Your no-cost allocations automatically renew on July 1.

I’m a PhD student with my own funded research. How do I renew my UMRCP?

No action is required. Your no-cost allocations automatically renew on July 1.

I’m a PhD, Master’s, or undergraduate student, and my advisor has left the university. I am continuing my research to finish my dissertation. What should I do?

If your faculty advisor has left the university (or is about to) but you remain, an alternative administrator (staff or faculty who can verify the ongoing need of the allocation for wrapping up activities related to the former research group) must be assigned to the account so that the allocations can continue uninterrupted. See How do I delegate administrative management of my UMRCP or other ARC resources?

I’m an active faculty member but received a UMRCP off boarding message warning that my resources won’t be renewed. What do I need to do to keep them?

Most likely your MCommunity entry is private or has another status (e.g., Retired/Emeritus) that makes your active research status ambiguous, so it needs to be confirmed. You can reply to the automated message or contact arc-support@umich.edu to confirm your active status and renew your resources for the upcoming year.

How do I make changes if I have already submitted the UMRCP request form?

If you have already submitted the UMRCP request form and you want to modify it, open a ticket by sending an email to ARC Support, arc-support@umich.edu. Be sure to include the name of your project.

How do I delegate administrative management of my UMRCP or other ARC resources?

Visit the ARC Resource Management Portal. On your project overview page, look for Project Admins. Select the Add button and add a trusted university member by entering their uniqname. This is helpful when the project owner is leaving the university but the resources must remain available for students to finish their dissertation work; the new delegated admin can confirm the ongoing need each June for UMRCP renewal. Admins can make any change to your project, so be sure to only add members you trust.

Great Lakes Past Events Timeline

Great Lakes /scratch Maintenance

March 14, 2022

Great Lakes Outage: /scratch Maintenance

Winter 2022 Maintenance

January 10, 2022

ARC Winter 2022 Maintenance

Summer 2021 Maintenance

August 9, 2021

See Summer 2021 maintenance information here: https://arc.umich.edu/summer-2021-maintenance/

Winter 2021 Maintenance

January 11, 2021

ARC-TS Winter 2021 Maintenance

Great Lakes Summer 2020 Maintenance

August 3, 2020

Great Lakes Summer 2020 Maintenance

Open OnDemand on Great Lakes being upgraded

May 18, 2020

1:00 PM – 5:00 PM: We are migrating Open OnDemand on Great Lakes from version 1.4 to 1.6 to fix a security issue. Users will…

Winter 2020 Maintenance

March 9, 2020

The ARC-TS 2020 winter maintenance will start on March 9, 2020. You can read more about it here: https://arc-ts.umich.edu/event/winter-2020-maintenance/. Great Lakes, Lighthouse, and Armis2…

Great Lakes Billing Begins

January 6, 2020

Billing for Great Lakes will begin on January 6, 2020 at 9am. Rates for using Great Lakes can be found here: https://arc-ts.umich.edu/greatlakes/rates. No…

Beta Retires

November 29, 2019

Beta will be retired after both Flux and Armis have been retired, as the purpose of Beta is to assist users to transition to the…

Great Lakes Migration Complete

November 25, 2019

All HPC accounts, users, and workflows must be migrated to either Great Lakes (Standard, GPU, Large-Memory, and On-demand Flux) or Lighthouse (Flux Operating Environment nodes)…

Great Lakes Open for General Availability

August 19, 2019

Assuming all initial testing is successful, we expect that Great Lakes will become available for University users after the ARC-TS summer maintenance.

Great Lakes Early User Testing Ends

August 14, 2019

The Great Lakes early user period for testing will end. Great Lakes will transition into production and users will be able to submit work as…

2019 Summer Maintenance

August 12, 2019

The ARC-TS annual summer maintenance will begin.   The details, services impacted,  and length of the maintenance will be determined when we get closer to the date. …

Great Lakes Open for Early User Testing

July 15, 2019

We will be looking for sets of friendly users who will be able to test different aspects of the system to submit their workloads to…

Great Lakes firmware updates complete

July 8, 2019

This date is a rough estimate.  Great Lakes, as of June, is running a GA-candidate HDR-100 InfiniBand firmware.   We will schedule an update to the…

Great Lakes primary installation complete and ARC-TS begins loading and configuration

May 30, 2019

The Great Lakes installation is primarily complete other than waiting for final HDR firmware testing for the new InfiniBand system.  The current InfiniBand system is…

HPC OnDemand Available for Beta

March 8, 2019

The replacement for ARC Connect, called HPC OnDemand, will be available for users.  This will allow users to submit jobs via the web rather than…

Beta Cluster testing continues

January 8, 2019

If you are a current HPC user on Flux or Armis and have not used Slurm before, we highly recommend you login and test your…

Great Lakes Beta (GLB) created

November 1, 2018

Great Lakes Beta is installed for HPC support staff to build and test software packages on the same hardware in Great Lakes.

Beta HPC cluster available

October 2, 2018

The Beta HPC cluster was introduced to enable HPC users to begin migrating their Torque job scripts to Slurm and test their workflows on a Slurm-based…

Great Lakes Installation Begins

October 1, 2018

Dell, Mellanox, and DDN will be delivering and installing the hardware for the new Great Lakes service. These teams will be working alongside the…

If you have questions, please send email to arcts-support@umich.edu.

Order Service

Billing for the Great Lakes service began on January 6, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes Cluster. Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arcts-support@umich.edu with lists of users, admins, and a shortcode. Trial accounts are also available for new PIs.

Consulting Services

Advanced Research Computing (ARC), a division of ITS, is pleased to offer a pilot called Scientific Computing and Research Consulting Services to help researchers implement data analytics and workflows within their research projects. This includes navigating technical resources like high-performance computing and storage.

The ARC Scientific Computing and Research Consulting Services team will be your guide to navigating the complex technical world: from implementing data-intensive projects, to teaching you how the technical systems work, to assisting in identifying the proper tools, to guiding you on how to hire a programmer.

Areas of Expertise

  • Data Science
    • Data Workflows
    • Data Analytics
    • Machine Learning
    • Programming
  • Grant Proposals
    • Compute Technologies
    • Data Storage and Management
    • Budgeting costs for computing and storage
  • Scientific Computing/Programming
    • Getting started with advanced computing
    • Code optimization
    • Parallel computing
    • GPU/Accelerator Programming
  • Additional Resources
    • Facilitating Collaborations/User Communities
    • Workshops and Training

Who can use this service?

  • All researchers and their collaborators from any of the university’s three campuses, including faculty, staff, and students
  • Units that want help including technical information when preparing grants
  • Anyone who has a need for HPC services and needs help navigating resources

How much does it cost?

  • Initial consultation, grant pre-work, and short-term general guidance/feedback on methods and code are available at no cost.
  • For longer engagements, research teams will be asked to contribute to the cost of providing the service.

Partnership

The ARC Scientific Computing and Research Consulting Services team works in partnership with the Consulting for Statistics, Computing, and Analytics Research (CSCAR) team, Biomedical Research Core Facilities, and others. ARC may refer or engage complementary groups as required by the project.

Get Started

Send an email to arc-consulting@umich.edu with the following information:

  • Research topic and goal
  • What you would like ARC to help you with
  • Any current or future data types and sources
  • Current technical resources
  • Current tools (programs, software)
  • Timeline – when do you need the help or information?

Check out current or past projects

Get Help

If you have any questions or wish to set up a consultation, please contact us at arc-consulting@umich.edu. Be sure to include as much information as possible from the “Get Started” section noted above.

If you have more general questions about ARC services or software, please contact us at arc-support@umich.edu.

Virtual office hours are also available on Tuesdays, Wednesdays, and Thursdays. Get help with machine learning, algorithms, modeling, coding, computing on a cluster, survey sampling, using records across multiple surveys, and more. Anyone doing any type of research at any skill level is welcome!

Data Science Consulting

Data Workflows

We are available to assist researchers along the entire lifecycle of the data workflow, from the conceptual stage to ingest, preprocessing, cleansing, and storage solutions. We can advise in the following areas:

  • Establishing and troubleshooting dataflows between systems
  • Selecting the appropriate systems for short-term and long-term storage
  • Transformation of raw data into structured formats
  • Data deduplication and cleansing
  • Conversion of data between different formats to aid in analysis
  • Automation of dataflow tasks

Analytics

The data science consulting team can assist with data analytics to support research:

  • Choosing the appropriate tools and techniques for performing analysis
  • Development of data analytics in a variety of frameworks
  • Cloud-based (Hadoop) analytic development

Machine Learning

Machine learning is an application of artificial intelligence (AI) that focuses on the development of computer programs that learn from data.

We are available to consult on the following. This includes a general overview of concepts, discussion of which tools and architectures best fit your needs, or technical support on implementation.

Languages: Python, C++, Java, Matlab
Tools/Architectures: Python data tools (scikit, numpy, etc.), TensorFlow, Jupyter notebooks
Models: Neural networks, decision trees, support vector machines

Programming

We also provide consulting on programming in a variety of programming languages (including but not limited to: C++, Java, and Python) to support your data science needs. We can assist in algorithm design and implementation, as well as optimizing and parallelizing code to efficiently utilize high performance computing (HPC) resources where possible/necessary. We can help identify available commercial and open-source software packages to simplify your data analysis.

 

If you have any questions or wish to set up a consultation, please contact us at arcts-consulting@umich.edu.

Great Lakes For Student Teams and Organizations

The Great Lakes HPC Cluster is the university-wide, shared computational discovery and high-performance computing (HPC) service. It is designed to support both compute- and data-intensive research. 

Great Lakes is available without charge for student teams and organizations who need HPC resources. This program aims to give students access to high-performance computing to enhance their team’s mission; it is not meant to be used for faculty-led research. Jobs submitted from a student team account have lower priority and will run when sufficient resources are available. Currently, we are limiting the resources available to each team (see below), but in the future we expect to increase the available resources. If your team or organization needs more resources or access to the large-memory nodes, an administrator for your team can also request a paid Slurm account, which will not have any of the restrictions mentioned below and can work in conjunction with the no-cost team account. 

Access

Your student team/organization must be registered as a Sponsored Student Organization (SSO). If your team is an SSO and would like an account on Great Lakes, please have a sponsoring administrator email us at arcts-support@umich.edu with your team name, the uniqnames of the users who are allowed to use the account, and a list of the uniqnames of the account administrators who can make decisions about the account.

If you are a member of a team or organization that isn’t registered as an SSO, please email us at arc-support@umich.edu with the details of your organization and what your HPC needs are.

Limits

Once your account is created, your team will be able to use up to 100 CPUs in the standard partition and 1 GPU in the gpu partition. Jobs are limited to 24 hours. If members of your team do not have Linux command-line experience, they can also use Great Lakes through their browser via Open OnDemand.
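
As a sketch of what a team job within these limits might look like (the account name, module, and script are placeholders, not ARC-provided names):

#!/bin/bash
#SBATCH --job-name=team-gpu-test
#SBATCH --account=studentteam0      # placeholder; use your team's Slurm account
#SBATCH --partition=gpu
#SBATCH --gpus=1                    # team accounts are limited to 1 GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=16g
#SBATCH --time=24:00:00             # team jobs are limited to 24 hours

module load python                  # placeholder; load whatever your workload needs
srun python train.py                # placeholder script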

If you have any questions, please send an email to arc-support@umich.edu.

Order Service

Billing for the Great Lakes service began on January 6, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes Cluster. Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arc-support@umich.edu with lists of users, admins, and a shortcode. Trial accounts are also available for new PIs.

LSA Funding for Instructional Use of the Great Lakes Cluster

LSA researchers who are leading courses may have course accounts set up for them through ARC. ARC offers a stipend of $60.91 per student, which can be applied to the account as a whole or at the user level. For more information about course accounts, please visit this page.

 

If you have any questions regarding course accounts please send a note to arc-support@umich.edu.

Order Service

Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arc-support@umich.edu with lists of users, admins, and a Shortcode. You may also be eligible for the University of Michigan Research Computing Package (UMRCP). For more information about the UMRCP and instructions on signing up please visit this page.

Great Lakes Cluster Rates

Part of UMRCP provided by ITS

The 2021-22 rates for the Great Lakes Cluster have been approved. These rates represent cost recovery for the Great Lakes Cluster, and do not include any support your unit may choose to provide.

Partition             Rate Per Minute   Rate Per Month   CPU Cores   Memory Unit       GPU Unit
standard/debug/viz    $0.000250463      $10.82           1           7 gigabytes       N/A
largemem              $0.000770370      $33.28           1           41.75 gigabytes   N/A
gpu                   $0.002739120      $118.33          20          90 gigabytes      1
spgpu                 $0.001807870      $78.10           4           48 gigabytes      1

The monthly rate is based on a 30-day month; per-minute rates are included for reference. Charges are based on the percentage of the machine your job requests (in terms of cores, memory, and GPUs) and its actual runtime.

Estimating Maximum Job Costs

The my_job_estimate command (shown below) can estimate the maximum cost of a job based on CPU, memory, GPU, and partition selection. The amount returned is the most a job could cost and does not account for any unit or other subsidies you may have access to.

my_job_estimate --help 
my_job_estimate --cores 20 --memory 10gb --partition standard

College of Engineering Funding

The College of Engineering (CoE) cost-shares a portion of the use of the Great Lakes cluster for externally sponsored research conducted by CoE researchers who are paying to use the Great Lakes cluster on CoE shortcodes. For more information, see:

Medical School Funding

The Medical School will cost-share 80% of the Great Lakes Cluster rates for sponsored research accounts for researchers in these units who are paying to use the Great Lakes Cluster on unit Shortcodes.

College of Literature, Science, & the Arts Funding

The College of Literature, Science, and the Arts provides a cost-share of 80% for researchers in College units paying for their use of the Great Lakes cluster on unit shortcodes. LSA researchers who do not have access to any other account may be eligible to use the accounts provided centrally by LSA. The usage policy and restrictions on these accounts is described in detail on the LSA’s public Great Lakes accounts page. Questions about access or use of these accounts should be sent to arc-support@umich.edu.

See the Course Accounts page for information on funding courses at the College of Literature, Science, and the Arts.

School of Public Health Funding

The School of Public Health will cost-share 80% of the Great Lakes Cluster rates for sponsored research accounts for researchers in these units who are paying for their use of Great Lakes Cluster on unit Shortcodes. NOTE: The SPH cost-sharing is limited to a finite budget. Therefore, SPH faculty and staff must check with the Assistant Dean for Finance to verify if/how much cost-sharing is available.

Getting a Slurm Account

To establish a Slurm account for a class, please contact us at arcts-support@umich.edu with the following information:

  • Students to be put on the account
  • Shortcode to be used for billing
  • List of individuals to administer the account
  • Any limits to be placed on either the users or the account as a whole

Please note: courses are currently free of charge through the end of winter term 2020. All students will need to have a user login to use the account. Use the Login Request form to get an account.

Example jobs and their charges¹

To help illustrate how the job charges work, here are some examples of differently-sized jobs.

Partition   Total CPU Cores Used   Total Memory Used   Total GPUs Used   Cost Per Minute
standard    1                      1 GB                N/A               $0.000250
standard    1                      10 GB               N/A               $0.000500
standard    36                     5 GB                N/A               $0.009000
standard    1                      50 GB               N/A               $0.002000
largemem    1                      180 GB              N/A               $0.003852
gpu         1                      20 GB               1                 $0.002739

¹ The charges above have been rounded for readability.
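
As a worked reading of these examples (an interpretation consistent with the published rates and figures above, not an official billing formula): the 36-core, 5 GB standard job is charged for its cores, 36 × $0.000250463 ≈ $0.0090 per minute, while the 1-core, 50 GB standard job is charged for the memory it occupies, 50 GB ÷ 7 GB per unit ≈ 8 units, 8 × $0.000250463 ≈ $0.0020 per minute. In other words, whichever of cores, memory, or GPUs claims the largest fraction of a node determines the charge.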

If you have questions, please send email to arcts-support@umich.edu.

Order Service

Billing for the Great Lakes service began on January 6, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes Cluster. Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arcts-support@umich.edu with lists of users, admins, and a shortcode. UMRCP accounts are also available to eligible researchers. For more information, please visit our UMRCP page.