Spack


The Advanced Research Computing (ARC) high-performance computing (HPC) cluster has integrated the scientific package manager Spack (https://spack.io) to streamline software development for users. Spack allows users to build software from source or install pre-built binaries directly into their environments. The ARC-provided Spack is configured to use the existing software installed on our HPC system as a foundation for handling build dependencies, simplifying the management of the various software components required for HPC tasks.

Moreover, users have the flexibility to customize the existing configuration to their needs, for example by using custom compilers or by managing software for a lab or a group of individuals. This level of customization lets users optimize their computing environment for their specific requirements, improving efficiency and performance for their HPC workflows.

To get started, first review the official Spack documentation (https://spack.readthedocs.io/en/latest/).

Once familiar with the official Spack documentation, log in to our HPC cluster. ARC Spack is provided as cluster-wide software, available through our module system.

$ module load spack

Once loaded, that’s it! Spack’s setup script is sourced into your Linux environment, allowing you to use Spack from the terminal. You can test that Spack is available by entering $ spack or $ echo $SPACK_ROOT
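
For example, a quick sanity check might look like the following (standard Spack commands; output omitted here):

$ module load spack
$ spack --version        # prints the Spack version provided by ARC
$ echo $SPACK_ROOT       # prints the root directory of the ARC Spack installation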

ARC Spack is configured to save all custom configurations, environments, software installations (binaries, scripts, source code), and generated module files into the ~/.spack folder in your home directory. If this folder does not exist, it is created automatically. Deleting this folder permanently deletes all custom configurations, environments, and installed software.

The spack command has many subcommands. You’ll only need a small subset of them for typical usage. To view a list of out-of-the-box software available for installation, run the command

$ spack list
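
Because the full list contains thousands of packages, spack list also accepts an optional query, which is a convenient way to narrow the output (standard Spack behavior):

$ spack list hdf5        # show packages whose names contain "hdf5"
$ spack list 'py-*'      # shell-style patterns also work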

To get more information on a particular package from the Spack list, use spack info and spack spec to show the dependencies and the variants enabled by default. The example below illustrates this with gmake.

$ spack info gmake


Package: gmake

Description:
    GNU Make is a tool which controls the generation of executables and
    other non-source files of a program from the program's source files.

Homepage: https://www.gnu.org/software/make/

Preferred version:
    4.4.1    https://ftpmirror.gnu.org/make/make-4.4.1.tar.gz

Safe versions:
    4.4.1    https://ftpmirror.gnu.org/make/make-4.4.1.tar.gz
    4.4      https://ftpmirror.gnu.org/make/make-4.4.tar.gz
    4.3      https://ftpmirror.gnu.org/make/make-4.3.tar.gz
    4.2.1    https://ftpmirror.gnu.org/make/make-4.2.1.tar.gz
    4.0      https://ftpmirror.gnu.org/make/make-4.0.tar.gz

Deprecated versions:
    None

Variants:
    build_system [generic]    generic
        Build systems supported by the package
    guile [false]             false, true
        Support GNU Guile for embedded scripting

Build Dependencies:
    guile  pkgconfig

Link Dependencies:
    guile

Run Dependencies:
    None

Licenses:
    GPL-3.0-only

$ spack spec gmake


Input spec
--------------------------------
- gmake

Concretized
--------------------------------
[+] gmake@4.4.1%gcc@13.2.0~guile build_system=generic arch=linux-rhel8-skylake_avx512
[+]     ^gcc-runtime@13.2.0%gcc@13.2.0 build_system=generic arch=linux-rhel8-skylake_avx512

The spec command shows that gmake has only one dependency: the gcc-runtime libraries.

To install, run $ spack install gmake. To uninstall, run $ spack uninstall gmake
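
The same spec syntax shown by spack spec can be used at install time. For example, using only the versions and the guile variant listed by spack info above, you could run:

$ spack install gmake            # default version and variants
$ spack install gmake@4.3        # request a specific version
$ spack install gmake+guile      # enable the guile variant
$ spack uninstall gmake@4.3      # uninstall a specific installed version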

All Spack-installed software automatically integrates with the system’s Lmod module system, which means you can load Spack software in one of two ways.

$ spack load gmake or $ module load gmake. Using $ module load is strongly preferred.

To verify that the software is loaded correctly and that the Spack-installed version is being used, you can run the following command:

$ which gmake
~/.spack/opt/spack/gmake-4.4.1-qpr6-gcc-13.2.0/bin/gmake
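
You can also ask Spack itself where a package was installed, which should point at the same prefix:

$ spack find --paths gmake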

Note: Every time you intend to load Spack-installed software, the Spack module must be loaded as well. Spack-installed software is added to your Linux environment only if Spack is available, so it is important that Spack be loaded as a module dependency.

That’s it! gmake is now available for use.

Spack simplifies the process of finding available software packages through two primary methods:

  1. Command Line: Users can utilize the command $ spack list within the Spack environment. This command generates a comprehensive list of all available software packages accessible through Spack. It provides a convenient way for users to view the catalog of software available for installation or customization within their computing environment.
  2. Online Repository: Alternatively, users can access the Spack package repository through the website https://packages.spack.io/. This web-based repository serves as an online catalog of software packages compatible with Spack. Users can browse through the repository to explore various available packages, their versions, dependencies, and related information. This web interface provides an intuitive way to search and discover software packages that can be easily integrated into their computational environment using Spack’s package management capabilities.

Once the package(s) to be installed are identified, activate the desired Spack environment with $ spack env activate ENV-NAME. Verify that you have enough space in your home directory for the installation; $ home-quota will tell you how much space you have remaining. You should typically reserve at least 10 GB, depending on the size of the installation.

To create a Spack environment (https://spack.readthedocs.io/en/latest/environments.html), run $ spack env create ENV-NAME. A comprehensive discussion is provided in the official Spack documentation.

Spack commands are environment sensitive. For example, the find command shows only the specs in the active environment if an environment has been activated. Similarly, the install and uninstall commands act on the active environment.

$ spack find
==> In environment myenv
==> No root specs
==> 0 installed packages

Adding Abstract Specs

An abstract spec is the user-specified spec before Spack has applied any defaults or dependency information. Users can add abstract specs to an environment using the spack add command.

$ spack env activate myenv
$ spack add gmake

The spack add command is environment aware: it adds to the currently active environment. All environment-aware commands can also be called with the spack -e flag to specify the environment.
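
For example, the following should be equivalent to activating myenv first and then adding the spec:

$ spack -e myenv add gmake
$ spack -e myenv concretize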

Concretizing

A complete, detailed discussion of Spack’s concretization is available here (https://spack.readthedocs.io/en/latest/environments.html#environments-concretization-config). Once specs have been added, they can be concretized.

$ spack concretize

You can also check the programs, and packages to be installed using $ spack spec -lI. The first letter after the hyphen is a lowercase “L”, and the second letter is an uppercase “i”.
This command identifies the dependencies Spack needs in order to install your desired packages, along with how it will obtain them. Assuming there are no problems, it prints a list of the packages and their dependencies: entries that begin with a green [+] have already been installed somewhere in your local Spack installation, those that begin with a green [^] reference the system installation, and those beginning with a gray - will need to be downloaded and installed.
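
If you later change the spec list and want Spack to recompute the whole environment from scratch, you can force re-concretization (a standard Spack option):

$ spack concretize --force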

Installing an Environment

In addition to installing individual specs into an environment, the specs can be installed collectively as an entire environment, either to your local installation directory (~/.spack) or to an alternative directory if you’ve specified one in your custom configuration.

$ spack install

Once installed, that’s it! The software packages are available via $ spack load or $ module load.

Generating Lmod Modulefiles for an Environment

After the software has installed successfully, generate the Lmod modulefiles (if they were not generated automatically) by running $ spack module lmod refresh
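
If the generated modulefiles ever get out of sync with what is installed, a more thorough (and destructive) variant rebuilds the entire module tree; note that it first deletes the previously generated modulefiles:

$ spack module lmod refresh --delete-tree -y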

ARC Spack is a cluster-wide software module, preconfigured for general use. It comes with preconfigured compilers, flags, software libraries, and Spack-specific settings to suit typical scientific research computing scenarios across the cluster. However, we understand that every research project is unique and can sometimes require customization beyond the defaults. Custom Spack configurations are supported and are discussed in section 1.7.

The following commands can be useful in viewing the details of ARC Spack’s configurations. A full discussion can be found here (https://spack.readthedocs.io/en/latest/configuration.html)

 

  • Spack configuration: $ spack config get config displays general Spack configuration, such as installation directories (reference)
  • Spack packages: $ spack config get packages displays package configurations (reference)
  • Spack compilers: $ spack config get compilers displays compiler configurations and availability (reference)
  • Spack repos: $ spack config get repos displays the available repositories (reference)
  • Spack upstreams: $ spack config get upstreams displays chained Spack installations and their package and module branches (reference)
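
To see which configuration file each setting comes from, which is helpful once your own configuration starts overriding the ARC defaults, Spack also provides a blame view, for example:

$ spack config blame config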

Multiple compilers and compiler versions are available for building software with Spack on Great Lakes, Armis2, and Lighthouse. Compilers can be made available to Spack by specifying them manually in compilers.yaml (or spack.yaml for custom environments), or automatically by running $ spack compiler find. To view the list of available compilers, run $ spack compilers

$ spack compilers or $ spack compiler list

==> Available compilers
-- gcc rhel8-x86_64 ---------------------------------------------
gcc@8.4.1 gcc@13.2.0 gcc@11.2.0 gcc@10.3.0
-- intel rhel8-x86_64 -------------------------------------------
intel@2022.0.2

Compiler Info

To view information about a specific compiler, run $ spack compiler info. For example, $ spack compiler info gcc@13.2.0

gcc@13.2.0:
    paths:
        cc  = /sw/pkgs/arc/gcc/13.2.0/bin/gcc
        cxx = /sw/pkgs/arc/gcc/13.2.0/bin/g++
        f77 = /sw/pkgs/arc/gcc/13.2.0/bin/gfortran
        fc  = /sw/pkgs/arc/gcc/13.2.0/bin/gfortran
    modules = []
    operating system = rhel8

Selecting a Compiler when Installing Software

A compiler and its version can be specified when using the spack install <pkg-name> command: a compiler is specified with % and its version with @. For example, to install gmake using GCC 10.3.0, run: $ spack install gmake%gcc@10.3.0
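
Before committing to a build, it can be worth previewing how the spec concretizes with the chosen compiler:

$ spack spec gmake%gcc@10.3.0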

Default Spack Installations

If you specified a view, you can point your $PATH, $LD_LIBRARY_PATH, etc. to the view. If you did not, you can use $ module load to load individual packages into your environment, ready to use. Note that you have to load the Spack module first as a dependency.

$ module load spack boost

To learn more about Spack views, see the Spack views documentation (https://spack.readthedocs.io/en/latest/packaging_guide.html#views). Spack modules are made available through Lmod (https://lmod.readthedocs.io/en/latest), an environment management system. By default, Spack software is installed to ~/.spack/opt and the module files are saved in ~/.spack/share/lmod. To configure Lmod so that it can see module files at this location, you have to prepend the exact path where your module files are saved to your $MODULEPATH.
This prepending is done automatically when you load the Spack module. Below is an explanation of how this is done. If you’re installing Spack software at its default location, then you DO NOT have to perform these steps. However, if you’re installing software in a custom location, such as a Turbo volume, then the following guide could be useful.

Configuring Lmod to View Spack Modules Installed in Non-Default Locations

For example, consider software installed at /turbo/myproject1/software/my_project, with a directory tree structure as follows:

$ tree /turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64
/turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64
├── gcc
│   ├── 10.3.0
│   │   ├── gcc-runtime
│   │   │   └── 10.3.0-pglmvn4.lua
│   │   └── gmake
│   │       └── 4.4.1-tdqh6jn.lua
│   └── 13.2.0
│       ├── boost
│       │   └── 1.83.0-uejfb3g.lua
│       └── gcc-runtime
│           └── 13.2.0-msqwczb.lua

To make packages built by gcc 10.3.0 available through the module system, prepend /turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/10.3.0 to $MODULEPATH.

Likewise, to make packages built by gcc 13.2.0 available through the module system, prepend /turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/13.2.0 to $MODULEPATH.

$ export MODULEPATH=/turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/10.3.0:$MODULEPATH
$ export MODULEPATH=/turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/13.2.0:$MODULEPATH
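
An equivalent, and often more convenient, approach is Lmod’s module use command, which prepends a directory to $MODULEPATH for the current session:

$ module use /turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/10.3.0
$ module use /turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/13.2.0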

Verify Result:
$ module avail

-------------------- /turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/13.2.0 --------------------
boost/1.83.0-uejfb3g gcc-runtime/13.2.0-msqwczb (D)

-------------------- /turbo/myproject1/software/my_project/share/spack/lmod/linux-rhel8-x86_64/gcc/10.3.0 --------------------
gcc-runtime/10.3.0-pglmvn4 gmake/4.4.1-tdqh6jn

Customized Spack Configuration & Environment

The ARC Spack configuration aims to make installing software simple, so spack install gmake should, in many cases, just work and install software under your $HOME/.spack/ directory. However, in many cases it is better to create and use a Spack environment fine-tuned to your preferences, for example when:

  • You want to install different software to different locations (for example, you build software for two different projects or research groups, and want to install some software to /turbo/project1 and some to /turbo/project2).
  • You want to install a collection of software into a single tree, where one tree has all the binaries, another tree has all the libraries, and so on.
  • You have software built outside of Spack, and need Spack to build software against your external build rather than building each dependency package itself.
  • You want to use different conventions for directory structure or module file names than the defaults set by ARC.

A Spack environment (https://spack.readthedocs.io/en/latest/environments.html) is a directory with a spack.yaml file that outlines the software to build and the configuration modifications to use while building it. To create and activate an environment:

$ module load spack
$ spack env create my-env
$ spack env activate my-env
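
If you would rather keep the environment (and its spack.yaml) inside a project directory of your choosing, for example on a Turbo volume, Spack also supports directory-based environments; the path below is only illustrative:

$ spack env create -d /turbo/myproject1/software/my_project/my-env
$ spack env activate -d /turbo/myproject1/software/my_project/my-env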

By default, this environment will be identical to the ARC Spack configuration, i.e., it will install software to your default location, but you can change this in the spack.yaml file. For example:


spack:
  config:
    install_tree:
      # install things into the `install` subdirectory of this environment:
      root: /path/to/install
      # alternatively: install things into my project's /turbo/myproject1/software:
      # root: /turbo/myproject1/software/my_project
    # put modulefiles generated by Spack in this environment directory:
    module_roots:
      tcl: /path/to/install/modules
      lmod: /path/to/install/lmod
  view: /path/to/my_tree
  specs: []
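
With the environment active, you should be able to open this spack.yaml for editing from anywhere using Spack itself:

$ spack config edit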

Additionally, if your software stack needs something that was installed external to Spack, you can add a packages section to account for it:

spack:
  # [snip]
  packages:
    # use the ncl from "module load ncl"
    ncl:
      externals:
      - spec: ncl@6.5.0
        modules: [ncl/6.5.0]
    # use my local install of fftw 3.3.8, as the preferred fftw version:
    fftw:
      version: [3.3.8]
      externals:
      - spec: fftw@3.3.8
        prefix: /path/to/my/fftw/install
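
Rather than writing externals entries by hand, Spack can often detect system-installed tools and add the corresponding entries for you, although detection support varies by package:

$ spack external find cmake
$ spack external find --not-buildable openssl   # also mark the detected package as not buildable from source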

For complete details about Spack environments, see https://spack.readthedocs.io/en/latest/environments.html

Architecture

Package

“Source code” describing a piece of software and how to build it (actually a Python class, defined in package.py), in a directory with any patches that might need to be applied first. The Spack Packaging Guide has details about creating and maintaining packages (https://spack.readthedocs.io/en/latest/packaging_guide.html).

Repo

A collection (“repository”) of packages. Almost everything is in the “builtin” repo, but Spack has a “repos” config section where you can specify the locations and order of repos to search. See the Spack package repository documentation (https://spack.readthedocs.io/en/latest/repositories.html).

Spec

Spack defines a domain-specific language for describing the build parameters and dependencies with which a package was, or should be, built. The Spack specs and dependencies documentation describes this in detail (https://spack.readthedocs.io/en/latest/basic_usage.html#specs-dependencies).

Variant

A variant is a selectable build option for a package. Available variants for a package are defined in the Spack package. In a spec, you can enable (+), disable (- or ~), or set (name=value) a variant. Variants usually correspond to a ./configure or cmake setting.
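
For example, a spec that pins the version, the compiler, and a variant of the gmake package shown earlier might look like:

$ spack install gmake@4.4.1%gcc@10.3.0+guile   # gmake 4.4.1, built with GCC 10.3.0, with Guile support enabled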

Installation tree

A directory for Spack-installed software. Spack allows you to specify a directory structure and naming convention for installed software. At ARC, this defaults to the .spack folder in your $HOME directory (~/.spack). This can be modified through Spack environments.

Upstream

Spack can satisfy dependencies from other Spack installations, called “upstream” locations. At ARC there are two upstream locations. To view them, run $ spack config get upstreams

Buildcache

As a way to avoid repeatedly building the same software, Spack supports packaging up built software for reuse. These pre-built packages are held in a “buildcache”, and a subsequent spack install can simply unpack the cached package instead of rebuilding it.
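
If a build cache has been configured for your installation (whether ARC provides one is not covered here), the typical commands look like:

$ spack buildcache list              # list the pre-built packages available in configured build caches
$ spack install --cache-only gmake   # only install from a build cache, never build from source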

U-M Research Computing Package – Eligibility


Sign up for the U-M Research Computing Package

The U-M Research Computing Package is available to all researchers on all campuses (Ann Arbor, Dearborn, Flint, and Michigan Medicine), including:

Faculty and Staff 

  • All professors and lecturers with an active appointment 
  • Emeritus faculty
  • Guest faculty who are visiting for one to two years
  • Principal investigators (e.g., research scientists) on self-funded research activities
  • PostDocs who have their own research grant 

PhD Students

PhD students may qualify for their own UMRCP resources depending on who is overseeing their research and their advisor relationship. Students should consult with their PhD program administrator to determine their eligibility. ITS will confirm this status when a UMRCP request is submitted.

Undergraduate and Masters Students

Undergraduate and Masters students do not currently qualify for their own UMRCP. They can be added as users or administrators of another person’s UMRCP. Students can also access other ITS programs such as Great Lakes for Course Accounts, and Student Teams.

U-M Research Computing Package – FAQ


Sign up for the U-M Research Computing Package

I already have some of the ARC services listed. Do I still need to apply?

Yes. Existing ARC resources are not migrated to UMRCP support until the leader of the resources applies. Existing resources can be migrated to the UMRCP once created and do not require movement of data or changes in workflow.

Can I sign someone else up for the UMRCP?

No. To ensure that we can best meet your needs, it is important that researchers sign up directly for the allocations. It is not possible at this time to sign up on behalf of someone else.

I have multiple students/collaborators in my research team. Can they share resources?

Yes. All UMRCP and ARC resources are designed to be collaborative and multidisciplinary capable. UMRCP recipients can delegate both administrative and user access to anyone with a valid campus uniquename.

I used up / need more resources than UMRCP provides, can I get more?

Yes. All services have options to purchase additional capacity. Ways to get help are shown on the ARC Help Page; be sure to include the service name and your needs when requesting additional resources. In addition, consumable resources like High Performance Computing hours are replenished each fiscal year.

I’m a faculty member who already has the UMRCP. How do I renew?

No action is required. Your no-cost allocations automatically renew on July 1.

I’m a PhD student with my own funded research. How do I renew my UMRCP?

No action is required. Your no-cost allocations automatically renew on July 1.

I’m a PhD, Master’s, or undergraduate student, and my advisor has left the university. My research is continuing to finish my dissertation. What should I do?

If your faculty advisor has left the university (or is about to), but you remain, an alternative administrator (staff or faculty who can verify the ongoing need of the allocation for wrapping up activities related to the former research group) must be assigned to the account so that the allocations can continue uninterrupted. See How do I delegate administrative management of my UMRCP or other ARC resources?

I’m an active faculty member but received a UMRCP offboarding message warning that my resources won’t be renewed. What do I need to do to keep them?

Most likely your MCommunity entry is private or shows another status (e.g., Retired/Emeritus), which makes your active research status ambiguous and means it needs to be confirmed. You can reply to the automated message or contact arc-support@umich.edu to confirm your active status and renew your resources for the upcoming year.

How do I make changes if I have already submitted the UMRCP request form?

If you have already submitted the UMRCP request form and you want to modify it, open a ticket by sending an email to ARC Support, arc-support@umich.edu. Be sure to include the name of your project.

How do I delegate administrative management of my UMRCP or other ARC resources?

Visit the ARC Resource Management Portal. On your project overview page look for Project Admins. Select the Add button and add a trusted university member by inputting their uniqname. This is helpful when the project owner is leaving the university but requires the resources to allow students to finish their dissertation work, as the new delegated admin can confirm the ongoing need each June for UMRCP renewal. Admins can make any change to your project so be sure to only add members you trust.


ACCESS


XSEDE is now ACCESS. 

The national HPC resource known as XSEDE has now fully transitioned to Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS). 

ACCESS is an open scientific discovery infrastructure combining leadership class resources at partner sites across the United States to create an integrated, persistent computational resource. The National Science Foundation’s ACCESS program builds upon the successes of the 11-year XSEDE project, while also expanding the ecosystem with capabilities for new modes of research and further democratizing participation.

ARC staff is here to answer questions and to help you take advantage of the national resources available through the ACCESS program. Contact arc-support@umich.edu for assistance from ARC staff, Shelly Johnson and Todd Raeker, or visit the ACCESS website for more information.

Allocations

Visit ACCESS Allocations for information on getting an allocation for ACCESS resources.


Great Lakes


Part of UMRCP provided by ITS

The Great Lakes Slurm cluster is a campus-wide computing cluster that serves the broad needs of researchers across the university. The Great Lakes HPC Cluster has replaced Flux, the shared research computing cluster that previously served over 300 research projects and 2,500 active users.

The Great Lakes HPC Cluster is available to all researchers on campus for simulation, modeling, machine learning, data science, genomics, and more. The platform provides a balanced combination of computing power, I/O performance, storage capability, and accelerators.

Based on extensive input from faculty and other stakeholders across campus, the Great Lakes HPC Cluster is designed to deliver similar services and capabilities as Flux, including access to GPUs and large-memory nodes and improved support for emerging uses such as machine learning and genomics. The Great Lakes HPC Cluster consists of approximately 13,000 cores.

U-M Research Computing Package

The University of Michigan Research Computing Package (UMRCP) is an investment into the U-M research community via simple, dependable access to several ITS-provided high-performance computing clusters and data storage resources. CPU credits are allocated on the Great Lakes cluster and can be used for standard, larger memory, or GPU resources.

Grant Support

See the Grant Resources page for information regarding grant proposals using the Great Lakes HPC Cluster.

LSA-specific Information

See the LSA funding page for information on funding courses at the College of Literature, Science, and the Arts. LSA researchers who do not have access to any other account may be eligible to use the accounts provided centrally by LSA. The usage policy and restrictions on these accounts are described in detail on LSA’s public Great Lakes accounts page.

LSA increased its cost-sharing for the rest of the 2021-2022 fiscal year. Read about the details here.

Questions about access or use of these accounts should be sent to arc-support@umich.edu.

Student Teams

See the Great Lakes Student Teams and Organizations page if your team requires HPC resources.

Use Within Courses

Learn how you can use Great Lakes in your course.

To establish a Slurm account for a class please contact us at
arc-support@umich.edu with the following information:

  • Students to be put on the account
  • List of individuals to administer the account
  • Any limits to be placed on either the users or the account as a whole
  • The unit abbreviation and course and section numbers for the course you are leading (e.g., eecs498 section 400)

Please note: all students will need to have a user login to use the account and can request one online.

For technical support, email arc-support@umich.edu.


Order Service

Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arc-support@umich.edu with lists of users, admins, and a Shortcode. UMRCP accounts are also available to eligible researchers. For more information, please visit our UMRCP page.

Lighthouse Cluster Rates


The 2021-22 rates for the Lighthouse Cluster have been approved. These rates represent cost recovery for the Lighthouse Cluster, and do not include any support your unit may choose to provide.

The Lighthouse cluster is meant to support researchers with grants that require the purchase of computing hardware or that require hardware configurations not available on Great Lakes. Lighthouse allows researchers to work with ARC to purchase their own hardware and have it hosted within the Lighthouse Slurm environment, with ARC providing the data center, staff, networking, storage, and software. More information is available on the Lighthouse webpage. In addition to the hardware purchase cost, there is a monthly charge to support the hardware in the Lighthouse environment.

Lighthouse Service Cost $743.07/node/year ($61.92/node/month)

COLLEGE OF LITERATURE, SCIENCE, & THE ARTS LIGHTHOUSE SUPPORT

To support research using high-performance computing, the College of Literature, Science, and the Arts (LSA) will cost-share the monthly operating charge for researchers paying on LSA unit shortcodes. For fiscal years 2022-23, 2023-24, and 2024-25, LSA will provide an 88% subsidy on the monthly service charge of existing LSA Lighthouse nodes. Replacement nodes will also be cost-shared for this period, provided the replacement is a 1:1 replacement with similar or lower physical space, power, and cooling requirements. Any additional Lighthouse nodes will not be cost-shared with the College. The table below provides an estimate of Lighthouse operating expenses using published ITS rates for FY2021-22. The rates for LSA researchers will change, and the table will be updated, if ITS rates change. LSA does not anticipate continuing the LSA subsidy for Lighthouse nodes after FY2025 due to the small number of people who use the service across U-M and LSA. Please plan accordingly for the change in cost that would occur at the start of FY2026 if you continue to use Lighthouse.

          ITS Rate / Node      LSA Researcher Rate   LSA / College Subsidy
Annual    $743.07/node/yr      $89.16 (12%)          $653.90 (88%)
Monthly   $61.92/node/month    $6.81 (12%)           $55.11 (88%)

 

If you have questions, please send email to arc-support@umich.edu.

Order Service

If you have any questions about Lighthouse, contact arc-support@umich.edu .

U-M Research Computing Package


Sign up for the U-M Research Computing Package

The University of Michigan Research Computing Package (UMRCP), provided by ITS, is an investment into the U-M research community via simple, dependable access to several ITS-provided high-performance computing clusters and data storage resources.


What is the UMRCP

As a research faculty member with the University of Michigan, it’s very likely you are eligible for Advanced Research Computing (ARC) resources through the University of Michigan Research Computing Package program.

This program makes research computing resources widely available for eligible faculty and researchers. Every UMRCP is customized:

  • The specific, and total, UMRCP allocations that would be available to you depend on the selections you make in the UMRCP request form.
  • After making selections in the request form, and once the requests have been approved and the resources are created, each research project enrolled in the UMRCP program has one or more cost-covered annual allocations of research computing or storage resources from ARC.

Some research scientists and PhD students are also eligible for U-M Research Computing Package allocations with ARC. The UMRCP program is designed so that most U-M researchers will have access to at least one set of UMRCP resources.

What is included with the UMRCP

The U-M Research Computing Package provides:

80,000 CPU HOURS OF HIGH-PERFORMANCE COMPUTING

  • CPU credits are allocated on the Great Lakes cluster or the Armis2 protected data cluster; they can be used for standard, larger memory, or GPU resources, and are renewed every July 1. Please note: if you have an existing high-performance computing account, it cannot be included under the UMRCP. Any request for computing hours through the UMRCP requires the creation of a new account.

10 TB OF REPLICATED, HIGH-PERFORMANCE STORAGE

  • The Turbo Research Storage service can be used on many ARC/ITS, unit, Globus.org, and laboratory systems. Turbo is best for short-term, active storage that researchers access regularly. Capacity can be split across sensitive and non-sensitive science data.

100 TB OF ARCHIVE STORAGE

  • Replicated, durable storage for massive data volumes. The Data Den Research Archive storage service also supports reliable data transfer and international data sharing via Globus.org. Capacity can be split across sensitive and non-sensitive science data. Data Den is best for storing data that researchers do not need regularly, but need to retain because of NIH and NSF grant requirements.

16 GB of RAM for Secure Enclave Services

  • The Secure Enclave Services provide U-M researchers with high-performance, secure, and flexible computing environments enabling the analysis of sensitive data sets restricted by federal privacy laws, proprietary access agreements, or confidentiality requirements. The Secure Enclave Services are a virtual server environment designed for the secure hosting and analysis of restricted research data. The UMRCP provides researchers up to 16 GB of RAM (memory) per machine, per month at no cost. Beginning July 1, 2023, each active virtual machine will be charged a provisional rate of $7.00 per GB of RAM (memory) per month for any usage beyond 16 GB of RAM.

ADDITIONAL HPC ALLOCATIONS FOR COURSEWORK

Additional allocations are available for instructors and student teams. This is not part of the UMRCP, but these allocations are covered by ITS.

How to get ARC resources through the UMRCP

The UMRCP request form allows you to choose the ARC resource options that will best meet your research needs.

Each completed set of UMRCP resources will be a customized set of research computing allocations. If you decide to submit a request for UMRCP resources, the first step will be determining which resources to select, based on your research needs. The UMRCP request form includes multiple options to choose from for each type of resource.

Please note that of the ARC resource allocations available through the UMRCP, only those that are specifically requested through selection in the form will be created and allocated for your project.

  • The request form allows you to create a customized UMRCP by opting-in to the ARC services that will best meet your research needs.
  • Only the resources selected in the form will be included in your request.

Determine research needs: HPC, HPC storage, longer-term data retention requirements, data sensitivity

To make sure all ARC resource options that would best meet your research needs are selected in the UMRCP request form, please review the information on this site:

In addition to a form option to select an annual high-performance computing allocation — with the option of selecting either (or both) the Great Lakes cluster or Armis2 (HIPAA-aligned) cluster for this annual allocation — there are also high-performance computing storage and archival storage options available.

The UMRCP Requesting Resources Guide includes ARC resource overviews, as well as additional information through links, that describe the differences between the options that are available.

Depending on your research needs, you can plan on at least 30 minutes to review the information in the Requesting Resources Guide and complete the request form.

Complete the UMRCP request form

Only the eligible PI can submit the UMRCP request form:

  • The form validation process creates groups, and if the request is approved these groups and all allocated ARC resources will be linked to the person logged in — who can only be the eligible primary faculty member or researcher.

After submitting the UMRCP form an approval process is initiated. You will be contacted if any additional information is required from you for determining your eligibility.

Once approved, the creation and allocation process for each ARC resource involves multiple steps for each resource, and coordination between multiple teams.

  • As each ARC resource allocation that was requested is completed, a notification is sent to let you know that the resource is ready to be accessed.
  • The notification you receive will include information about your ARC resource.

If after reviewing the information provided in one of these notifications you have any questions, or there are any issues, please feel free to contact ARC directly.

Manage access to ARC resources provided through the UMRCP

Once UMRCP resource allocations have been created, then project owners (PIs), or any additional project administrators who have been designated, can grant any person who has a fully active U-M account access to those ARC resources.

Access to ARC resources is restricted:

  • Only project owners, and any designated ARC project administrators, are permitted to manage user access.

Most ARC projects can be managed by project owners, and any designated ARC project administrators, in the Resource Management Portal application.

Great Lakes Past Events Timeline


ARC 2024 Winter Maintenance

December 6, 2023

2024 ARC Winter Maintenance


Great Lakes /scratch Maintenance

March 14, 2022

Great Lakes Outage: /scratch Maintenance


Winter 2022 Maintenance

January 10, 2022

ARC Winter 2022 Maintenance


Summer 2021 Maintenance

August 9, 2021

See Summer 2021 maintenance information here: https://arc.umich.edu/summer-2021-maintenance/


Winter 2021 Maintenance

January 11, 2021

ARC-TS Winter 2021 Maintenance


Great Lakes Summer 2020 Maintenance

August 3, 2020

Great Lakes Summer 2020 Maintenance


Open OnDemand on Great Lakes being upgraded

May 18, 2020

1:00 PM – 5:00 PM: We are migrating Open OnDemand on Great Lakes from version 1.4 to 1.6 to fix a security issue. Users will…


Winter 2020 Maintenance

March 9, 2020

The ARC-TS 2020 winter maintenance will start on March 9, 2020.  You can read more about it here.  https://arc-ts.umich.edu/event/winter-2020-maintenance/   Great Lakes, Lighthouse, and Armis2…


Great Lakes Billing Begins

January 6, 2020

Billing for Great Lakes will begin on January 6, 2020 at 9am. Rates for using Great Lakes can be found here: https://arc-ts.umich.edu/greatlakes/rates   . No…


Beta Retires

November 29, 2019

Beta will be retired after both Flux and Armis have been retired, as the purpose of Beta is to assist users to transition to the…


Great Lakes Migration Complete

November 25, 2019

All HPC accounts, users, and workflows must be migrated to either Great Lakes (Standard, GPU, Large-Memory, and On-demand Flux) or Lighthouse (Flux Operating Environment nodes)…


Great Lakes Open for General Availability

August 19, 2019

Assuming all initial testing is successful, we expect that Great Lakes will become available for University users after the ARC-TS summer maintenance.


Great Lakes Early User Testing Ends

August 14, 2019

The Great Lakes early user period for testing will end. Great Lakes will transition into production and users will be able to submit work as…


2019 Summer Maintenance

August 12, 2019

The ARC-TS annual summer maintenance will begin.   The details, services impacted,  and length of the maintenance will be determined when we get closer to the date. …


Great Lakes Open for Early User Testing

July 15, 2019

We will be looking for sets of friendly users who will be able to test different aspects of the system to submit their workloads to…


Great lakes firmware updates complete

July 8, 2019

This date is a rough estimate.  Great Lakes, as of June, is running a GA-candidate HDR-100 InfiniBand firmware.   We will schedule an update to the…


Great Lakes primary installation complete and ARC-TS begins loading and configuration

May 30, 2019

The Great Lakes installation is primarily complete other than waiting for final HDR firmware testing for the new InfiniBand system.  The current InfiniBand system is…


HPC OnDemand Available for Beta

March 8, 2019

The replacement for ARC Connect, called HPC OnDemand, will be available for users.  This will allow users to submit jobs via the web rather than…


Beta Cluster testing continues

January 8, 2019

If you are a current HPC user on Flux or Armis and have not used Slurm before, we highly recommend you login and test your…


Great Lakes Beta (GLB) created

November 1, 2018

Great Lakes Beta is installed for HPC support staff to build and test software packages on the same hardware in Great Lakes.


Beta HPC cluster available

October 2, 2018

The Beta HPC cluster was introduced to enable HPC users to begin migrating their Torque job scripts to Slurm and test their workflows on a Slurm-based…


Great Lakes Installation Begins

October 1, 2018

Dell, Mellanox, and DDN will be delivering and installing the hardware to deliver the new Great Lakes service.  These teams will be working along side the…


If you have questions, please send email to arcts-support@umich.edu.

Order Service

Billing for the Great Lakes service began on January 6, 2020. Existing, active Flux accounts and logins have been added to the Great Lakes Cluster. Complete this form to get a new Great Lakes cluster login.

If you would like to create a Great Lakes Cluster account or have any questions, contact arcts-support@umich.edu with lists of users, admins, and a shortcode. Trial accounts are also available for new PIs.

Consulting Services


Advanced Research Computing (ARC), a division of ITS, is pleased to offer a pilot called Scientific Computing and Research Consulting Services to help researchers implement data analytics and workflows within their research projects. This includes navigating technical resources like high-performance computing and storage.

The ARC Scientific Computing and Research Consulting Services team will be your guide to navigating the complex technical world: from implementing intense data projects, to teaching you how the technical systems work, to assisting in identifying the proper tools, to guiding you on how to hire a programmer.

Areas of Expertise

  • Data Science
    • Data Workflows
    • Data Analytics
    • Machine Learning
    • Programming
  • Grant Proposals
    • Compute Technologies
    • Data Storage and Management
    • Budgeting cost for computing and storage
  • Scientific Computing/Programming
    • Getting started with advanced Computing
    • Code optimization
    • Parallel computing
    • GPU/Accelerator Programming
  • Additional Resources
    • Facilitating Collaborations/User Communities
    • Workshops and Training

Who can use this service?

  • All researchers and their collaborators from any of the university’s three campuses, including faculty, staff, and students
  • Units that want help including technical information when preparing grants
  • Anyone who has a need for HPC services and needs help navigating resources

How much does it cost?

  • Initial consultation, grant pre-work, and short term general guidance/feedback on methods and code are available at no cost.
  • For longer engagements, research teams will be asked to contribute to the cost of providing the service.

Partnership

The ARC Scientific Computing and Research Consulting Services team works in partnership with the Consulting for Statistics, Computing, and Analytics Research team (CSCAR), Biomedical Research Core Facilities, and others. ARC may refer or engage complementary groups as required by the project.

Get Started

Send an email to arc-consulting@umich.edu with the following information:

  • Research topic and goal
  • What you would like ARC to help you with
  • Any current or future data types and sources
  • Current technical resources
  • Current tools (programs, software)
  • Timeline – when do you need the help or information?

Check out current or past projects

Get Help

If you have any questions or wish to set up a consult, please contact us at arc-consulting@umich.edu. Be sure to include as much information as possible from the “Get started” section noted above.

If you have more general questions about ARC services or software please contact us at arc-support@umich.edu

Virtual office hours are also available on Tuesdays, Wednesdays, and Thursdays. Get help with machine learning, algorithms, modeling, coding, computing on a cluster, survey sampling, using records across multiple surveys, and more. Anyone doing any type of research at any skill level is welcome!

Data Science Consulting


Data Workflows

We are available to assist researchers along the entire lifecycle of the data workflow, from the conceptual stage to ingest, preprocessing, cleansing, and storage solutions. We can advise in the following areas:

  • Establishing and troubleshooting dataflows between systems
  • Selecting the appropriate systems for short-term and long-term storage
  • Transformation of raw data into structured formats
  • Data deduplication and cleansing
  • Conversion of data between different formats to aid in analysis
  • Automation of dataflow tasks

Analytics

The data science consulting team can assist with data analytics to support research:

  • Choosing the appropriate tools and techniques for performing analysis
  • Development of data analytics in a variety of frameworks
  • Cloud-based (Hadoop) analytic development

Machine Learning

Machine learning is an application of artificial intelligence (AI) that focuses on the development of computer programs to learn information from data.

We are available to consult on the following. This includes a general overview of concepts, discussion of what tools and architectures best fit your needs, or technical support on implementation.

Language    Tools/Architectures                        Models
Python      Python data tools (scikit, numpy, etc.)    Neural networks
C++         TensorFlow                                 Decision trees
Java        Jupyter notebooks                          Support vector machines
Matlab

Programming

We also provide consulting on programming in a variety of programming languages (including but not limited to: C++, Java, and Python) to support your data science needs. We can assist in algorithm design and implementation, as well as optimizing and parallelizing code to efficiently utilize high performance computing (HPC) resources where possible/necessary. We can help identify available commercial and open-source software packages to simplify your data analysis.

 

If you have any questions or wish to set up a consult, please contact us at arcts-consulting@umich.edu.