UMRCP Requesting Resources Guide

The University of Michigan Research Computing Package (UMRCP) provides faculty and research teams with:

  • 80,000 CPU hours of high-performance computing
  • 10TB of replicated, high-performance storage
  • 100TB of archive storage

These resources are provided at no cost, but must be requested through the Advanced Research Computing (ARC) Resource Management Portal (RMP). Faculty, staff, researchers, and qualifying PhD students who have existing ARC resources must submit a request before those resources can be migrated to the UMRCP. Please note: an existing high-performance computing account cannot be included under the UMRCP; any request for computing hours through the UMRCP requires the creation of a new account.

Individuals new to ARC resources may want to review the HPC concepts & terminology section of this document before initiating a request in order to better understand the questions on the request form.

Submitting a Request – New Projects

All UMRCP requests are submitted by completing an intake form on the Resource Management Portal (RMP). Click + Enroll in UMRCP on the RMP page to begin, then complete the form as explained below. A walkthrough of requesting a new project is also available. 

Project Name

The Project Name defaults to your uniqname. This is also the name given to the MCommunity group used to administer the project as described below.

Project Admins

Enter the names of the individuals who will be administrators on the project. Administrators can add/remove users, adjust permissions on files, etc. After the project is created, administrators can be added/removed using the RMP or MCommunity.

Enter an individual’s name or uniqname in the Enter name or uniqname field and click the name in the search results to add the person to the project admins list. Repeat until all administrators are added. 

Note: Adding an administrator to the project does not automatically grant them access to the requested clusters/storage volumes. They must each be added individually to those resources to be granted access. 

Project Resources

The UMRCP provides 80,000 hours of high-performance computing (HPC), 10TB of storage, and 100TB of archive storage. You need to define how these allocations are configured in the Project Resources section.

The primary decision point is whether you will use the UMRCP to work with non-sensitive data, sensitive data, or a combination of both. Once you know which type of resource(s) you need, follow the instructions in the appropriate section below.

For more information on sensitive data, see the following documents: 

If working with Non-Sensitive Data only

Click the NonSensitive Data Only tile. The following tiles will become available: Great Lakes, Turbo, and Data Den.

  1. Click the Great Lakes tile to request HPC resources. All 80,000 CPU hours of compute time will be allocated on the Great Lakes cluster. 

Any individuals, including administrators, who should have access to the compute cluster (e.g., research assistants, students) can be added using the Enter name or uniqname field in the Add Users to Great Lakes HPC section. Users can be added/removed by administrators at any time after the project is created.

  2. Click the Turbo tile. All 10TB of storage will be allocated on the Turbo volume. The Turbo volume name will be your uniqname.

Enter a name for the Turbo volume. This can be the same as the project name or you can enter a different name, if desired. 

The Turbo storage volume can be made accessible to various systems. If you selected the Great Lakes cluster in Step 1, the volume is marked as accessible to your Great Lakes cluster by default. If you want the volume accessible to Lighthouse, Globus, or Non-ARC Systems, check the appropriate boxes. 

Any individuals who should have access to the Turbo volume, including administrators, should be added using the Enter name or uniqname field in the Add Users to Turbo Nonsensitive Storage section. Users can be added/removed by administrators at any time after the project is created.

  3. Click the Data Den tile. All 100TB of archive storage will be allocated on the Data Den volume. The Data Den volume name will be your uniqname.

Any individuals who should have access to the Data Den volume, including administrators, should be added using the Enter name or uniqname field in the Add Users to Data Den Archive section. Users can be added/removed by administrators at any time after the project is created.

If working with Sensitive Data only

Click the Sensitive Data Only tile. The following tiles become available: Armis2 and Turbo.

Note: Data Den cannot house sensitive data. Therefore, if requesting only Sensitive Data resources, you will not receive the 100TB archive storage. 

  1. Click the Armis2 tile to request HPC resources. All 80,000 CPU hours of compute time will be allocated on the Armis2 cluster. 

Any individuals, including administrators, who should have access to the compute cluster (e.g., research assistants, students) can be added using the Enter name or uniqname field in the Add Users to Armis2 HPC section. Users can be added/removed by administrators at any time after the project is created.

  2. Click the Turbo tile. All 10TB of storage will be allocated on the Turbo volume. The Turbo volume name will be your uniqname.

The Turbo storage volume can be made accessible to various systems. If you selected the Armis2 cluster in Step 1, the volume is marked as accessible to your Armis2 cluster by default. If you want the volume accessible to Globus or Non-ARC Systems, check the appropriate boxes.

Any individuals who should have access to the Turbo volume, including administrators, should be added using the Enter name or uniqname field in the Add Users to Turbo Sensitive Storage section. Users can be added/removed by administrators at any time after the project is created.

If working with BOTH Sensitive and Non-Sensitive Data

Click the Mixed Nonsensitive and Sensitive Data tile. The following tiles become available: Great Lakes, Armis2, Turbo, and Data Den.

When requesting mixed resources, you must define how you want the 80,000 HPC hours and 10TB of storage split. Evaluate your research project and other anticipated use of the UMRCP resources in order to determine the following:

  • HPC hours – You are given 80,000 HPC hours in total. You need to define how many of those hours you want to use on the Great Lakes (non-sensitive data) cluster and how many on the Armis2 (sensitive data) cluster. There are no restrictions on how you split the hours, as long as they total 80,000. 

For example: If your project heavily, but not exclusively, uses sensitive data, you could allocate 10,000 hours to Great Lakes and 70,000 hours to Armis2. If your project works evenly with both, you could allocate 40,000 hours to each. If the compute portion of your project does not use any sensitive data (i.e., you only need to store, not process, sensitive data), you can allocate all 80,000 hours to the Great Lakes cluster.

  • Storage space – Turbo storage can support both sensitive and non-sensitive data; however, a separate volume is created for each, so you need to define the size of each volume. You are given 10TB of storage in total. There is no restriction on how large or small each volume can be, as long as their combined size is no more than 10TB.

Once you have determined your desired HPC and storage split, complete the following steps:

  1. Click the Great Lakes and Armis2 tiles. By default, the HPC hours are split evenly. If you need a different split, enter the appropriate hours in the Enter allocation for Great Lakes HPC and Enter Allocation for Armis2 HPC fields.

Note: If your HPC hours should be fully allocated for non-sensitive data or fully allocated for sensitive data, click either the Great Lakes (non-sensitive) OR the Armis2 (sensitive) tile only.

Any individuals, including administrators, who should have access to one or both of the compute clusters (e.g., research assistants, students) can be added using the Enter name or uniqname fields in the Add Users to Great Lakes HPC/Add Users to Armis2 HPC sections. Users can be added/removed by administrators at any time after the project is created. Note that these are separate user lists. Adding a user to the Great Lakes cluster does not give that user rights to the Armis2 cluster, and vice versa.

  2. Click the Turbo tile. 
    1. If you need a non-sensitive data AND a sensitive data storage volume, click both the Turbo Nonsensitive Volume and Turbo Sensitive Volume tiles. By default, each volume is 5TB. If you need different volume sizes, enter the desired size of the NON-SENSITIVE volume in the Enter Amount for Nonsensitive Turbo Storage field. The size of the sensitive volume automatically adjusts so the combined size is 10TB. 
    2. If you need only non-sensitive data storage OR only sensitive data storage, click the applicable tile. 
  3. Complete the settings for each Turbo volume:
    1. The Turbo storage volumes can be made accessible to various systems. If you selected the Great Lakes cluster in Step 1, the non-sensitive data volume is marked as accessible to your Great Lakes cluster by default. If you want the non-sensitive volume accessible to Lighthouse, Globus, or Non-ARC Systems, check the appropriate boxes.

If you selected the Armis2 cluster in Step 1, the sensitive data volume is marked as accessible to your Armis2 cluster by default. If you want the sensitive data volume accessible to Globus or Non-ARC Systems, check the appropriate boxes.

  4. Any individuals who should have access to a Turbo volume, including administrators, should be added using the Enter name or uniqname field in the Add Users to Turbo Nonsensitive Storage/Add Users to Turbo Sensitive Storage sections. Users can be added/removed by administrators at any time after the project is created.
  5. Click the Data Den tile. All 100TB of archive storage will be allocated on the Data Den volume for NON-SENSITIVE DATA ONLY. The Data Den volume name will be your uniqname.

Any individuals who should have access to the Data Den volume, including administrators, should be added using the Enter name or uniqname field in the Add Users to Data Den Archive section. Users can be added/removed by administrators at any time after the project is created.

Notes

Use this optional text field to provide any additional information about your needs that you feel is unique or that the ARC team needs to know to properly configure your request. If the ARC team has any questions, they will follow up with you.

Description of Research/Course Development

Enter a brief description of the research and/or course development that will be supported by the UMRCP resources. 

Terms of Use Agreements

Read the general terms of usage for the UMRCP and the HIPAA terms of usage. Once you have reviewed them, check I Agree for each to confirm you understand and will abide by these terms.

Submit Request

When the form is complete, click the Submit UMRCP Request button. A TeamDynamix ticket is created and assigned to the ARC team. When the request is complete, you will be notified via the ticket and provided with instructions to begin using the UMRCP resources.

Submitting a Request – Existing Projects

If you have existing ARC resources, you need to submit a request to have them converted to UMRCP resources. A walkthrough of requesting UMRCP for an existing project is also available. Please note: an existing high-performance computing account cannot be included under the UMRCP; any request for computing hours through the UMRCP requires the creation of a new account.

The UMRCP provides 80,000 HPC hours, 10TB of storage, and 100TB of archive storage:

  • If your current allocations are less than these values (e.g., 20,000 HPC hours or a 2TB storage volume), they will be increased to the UMRCP allocations at no cost. 
  • If your current allocations exceed these values (e.g., 100,000 HPC hours or a 15TB storage volume), they will not change. Instead, the first 80,000 HPC hours and 10TB of storage will be free; amounts above those thresholds will be charged at standard ARC rates.

The request for UMRCP resources is submitted by completing an intake form on the Resource Management Portal (RMP). Locate the project that needs to be converted and click its tile. A message about requesting UMRCP resources displays at the top of the page. Click the Request UMRCP link in the message to access the request form. 

Information about your existing project should pre-populate the form, including the project name and its current resources. Review all information to verify it is correct. 

If your project does not currently use all of the UMRCP resources (HPC, storage, archive), you can request that the missing resources be added to your project by clicking the appropriate tiles on the page and completing the related information. If needed, see the Submitting a Request – New Projects section for more information on these tiles.

Once you have completed the form and agreed to the terms of use, click the Submit UMRCP Request button. A TeamDynamix ticket is created and assigned to the ARC team. When the request is complete, you will be notified via that ticket.

HPC/Storage Concepts & Terminology

High Performance Computing (HPC)

Using a cluster of Linux servers to perform complex and time-consuming computing tasks that require more computational power than a single computer can provide. A cluster can consist of hundreds to thousands of networked computers, referred to as nodes. Individuals access the cluster over the internet from their personal computers, interacting with the cluster as if it were a single, regular computer.

HPC may also be referred to as Advanced Research Computing, high-throughput computing, distributed computing, or (in some cases) cloud computing.

Cluster

The name given to a group of networked computers used in HPC.

Nodes

The individual computers within a cluster. A Login Node is the computer an individual uses to connect to a cluster in order to manage files, schedule computational jobs on the cluster, edit scripts, run tests, download results, etc. Login nodes have limited computational resources; no large-scale processing is done on a login node. Compute Nodes are the individual computers within a cluster responsible for performing the actual computational work. 

Central Processing Unit (CPU)

The primary hardware component responsible for executing the instructions of a computer program. The more powerful the CPU, the faster the computer can typically process data and run programs. 

CPU Cores

Individual processing units that allow the CPU to efficiently perform multiple tasks simultaneously. In general, the more cores a CPU contains, the more powerful it is. Personal computers generally have only two to eight cores, while each node (or computer) within a typical HPC cluster contains many more. For example, the CPU in each node of U-M’s Great Lakes cluster has 36 cores. 
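
As an illustration, you can check from within a program how many cores it can actually use. The short Python sketch below is illustrative only and is not part of the request process; the counts it reports depend on the machine and, on a cluster, on how many cores your job requested.

    import os

    # Total cores physically present on this machine (or node).
    print("Cores on this machine:", os.cpu_count())

    # On Linux clusters, a batch job is usually confined to the cores it
    # requested; sched_getaffinity reports only those (Linux-only call).
    if hasattr(os, "sched_getaffinity"):
        print("Cores available to this process:", len(os.sched_getaffinity(0)))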

Serial and Parallel Computing

How tasks are executed on a computer. Executing tasks one after another on a single CPU core is called serial computing. CPUs with multiple cores can execute multiple tasks simultaneously, which is much faster than executing them one at a time; this is called parallel computing.
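
The following minimal Python sketch contrasts the two approaches using only the standard library; the timings are illustrative and will vary with the number of cores available.

    import math
    import time
    from multiprocessing import Pool

    def slow_task(n):
        # Stand-in for a CPU-bound computation.
        return sum(math.sqrt(i) for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8

        start = time.perf_counter()
        serial_results = [slow_task(n) for n in jobs]   # serial: one task at a time
        print(f"serial:   {time.perf_counter() - start:.2f}s")

        start = time.perf_counter()
        with Pool(processes=4) as pool:                 # parallel: four tasks at once
            parallel_results = pool.map(slow_task, jobs)
        print(f"parallel: {time.perf_counter() - start:.2f}s")

On a cluster, the same idea scales further: many such workers can run at once across the cores of one or more compute nodes.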

Graphical Processing Units (GPUs)

Specialized hardware within a computer originally designed to efficiently process image data using parallel computing. GPUs contain multiple processing cores, which are similar to CPU cores. However, GPUs can contain thousands of cores, compared to the 2-20 cores contained in CPUs. GPU cores are generally slower than CPU cores, but are able to complete complex calculations much faster due to their large number. 
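
As a rough illustration, the same array computation can be written to run on either a GPU or a CPU. The sketch below assumes the third-party CuPy package and a GPU are available, and falls back to NumPy (CPU-only) otherwise.

    import numpy as np

    try:
        import cupy as cp   # GPU array library; requires a GPU and the cupy package
        xp = cp
    except ImportError:
        xp = np             # fall back to the CPU if no GPU stack is available

    # A large matrix multiplication: on a GPU the work is spread across
    # thousands of cores; on a CPU it runs on a handful of cores.
    a = xp.random.random((4000, 4000))
    b = xp.random.random((4000, 4000))
    c = a @ b
    print("Computed with:", xp.__name__)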

Random Access Memory (RAM)

A type of computer memory used to temporarily store currently running programs as well as the data those programs are manipulating. HPC provides access to enormous amounts of RAM, enabling complex computational work that would otherwise not be possible on a personal computer. For example, while a personal computer may have 4 to 16 gigabytes (GB) of RAM, U-M's Great Lakes cluster can provide up to 180GB of RAM on standard nodes and 1,500GB on large-memory (largemem) nodes.

Storage

The semi-permanent memory where data is stored on a computer, such as a hard disk drive or solid state drive. Clusters are configured to provide multiple storage options for different data needs. All nodes of a cluster are connected to a separate storage system via the network, allowing individuals to store much larger amounts of data than the storage capacity provided by the cluster nodes. Individuals can interact with this data from the cluster nodes even though the data is physically located on a separate storage system. 

Home Directory

A directory on a cluster’s local storage used by an individual to store personal files for research, such as programs and scripts, input files to be processed or stored, output files generated from compute jobs, etc. All files in the home directory are only visible to the individual owner. A user’s Home is always located at /home/<username>.

Scratch Space

Scratch is a very large (many terabytes), fast, temporary storage space used to hold data for current work, typically for periods measured in days. Unlike Turbo storage or the home directory, scratch on all ARC systems is automatically purged of data not accessed in 60 days. Scratch is located at /scratch/<account_root>/<account>. For example, the scratch directory for the account msmith would be /scratch/msmith_root/msmith.
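
The following minimal Python sketch shows the typical scratch workflow: large intermediate files go to scratch while a job runs, and only the small final result is kept in durable storage. The SCRATCH environment variable and the paths used here are illustrative assumptions, not fixed ARC conventions; substitute your own scratch path (e.g., /scratch/msmith_root/msmith).

    import os
    import tempfile
    from pathlib import Path

    # Illustrative: use a SCRATCH variable if one is set, otherwise fall back
    # to the system temp directory so the sketch can run anywhere.
    scratch = Path(os.environ.get("SCRATCH", tempfile.gettempdir())) / "my_job"
    durable = Path.home() / "results"   # e.g., home directory: smaller, but not purged

    scratch.mkdir(parents=True, exist_ok=True)
    durable.mkdir(parents=True, exist_ok=True)

    # Large intermediate output belongs in scratch (fast, but purged after
    # 60 days of inactivity)...
    (scratch / "intermediate.dat").write_bytes(b"\x00" * 1024)

    # ...and only the small final result is written to durable storage
    # before the job ends.
    (durable / "summary.txt").write_text("final result of the job\n")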

Additional Resources

User guides and other resources for working with ARC systems are available online.

Still not sure what you need?