Category

Systems and Services

2023 Winter Maintenance & Globus File Transfer upgrade

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

These services will be unavailable: 

  • Great Lakes – We will be updating Great Lakes on a rolling basis throughout December and early January. If successful, there should be no downtime or impact, with the following exceptions: 
    • Single-precision GPU (SPGPU) nodes will be down Jan. 4-5 for networking maintenance. Those nodes will return to production once maintenance has been completed and the nodes have been reloaded.
    • Customers will be notified via email of any changes to Great Lakes maintenance that will require downtime.
    • If the rolling update is unsuccessful, Great Lakes maintenance will begin Jan. 4-5, starting at 8 a.m. In either case, we will email everyone with the updated maintenance status.
  • Globus on the storage transfer nodes: Jan. 17-18.

Maintenance notes:

  • There will be no downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Open OnDemand (OOD) users will need to log in again. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated; the maintenance should not affect most users. Those who are affected will be contacted directly by ARC. 
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 
  • Slurm email notifications will be improved, providing more detailed information about completed jobs.

Countdown to maintenance 

For Great Lakes HPC jobs, use the command “maxwalltime” to discover the amount of time remaining until maintenance begins. 

Jobs that request more walltime than remains until maintenance will automatically be queued and start once maintenance is complete. If the plan for Great Lakes maintenance is successful, any queued jobs will be able to run as usual (except for the SPGPU nodes as discussed above). Customers will be notified via email if downtime is required for Great Lakes.
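
To see how this plays out in practice, the sketch below compares a job’s requested walltime against an assumed maintenance start time. It is a minimal illustration only; the maintenance date is a placeholder, and on Great Lakes the “maxwalltime” command is the authoritative source for the time remaining.

```python
# Minimal sketch: will a job with a given walltime request start before
# maintenance? The maintenance start below is an assumed placeholder; on
# Great Lakes, run "maxwalltime" for the authoritative remaining time.
from datetime import datetime, timedelta

MAINTENANCE_START = datetime(2024, 1, 4, 8, 0)  # placeholder example date

def fits_before_maintenance(requested_walltime: timedelta) -> bool:
    """Return True if the requested walltime fits before maintenance begins."""
    remaining = MAINTENANCE_START - datetime.now()
    return requested_walltime <= remaining

if __name__ == "__main__":
    walltime = timedelta(hours=36)  # e.g., an "#SBATCH --time=36:00:00" request
    if fits_before_maintenance(walltime):
        print("Job can start before maintenance.")
    else:
        print("Job will wait in the queue until maintenance is complete.")
```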

Status updates and additional information

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

New Resource Management Portal feature for Armis2 HPC Clusters

By | Armis2, HPC, News

Advanced Research Computing (ARC), a division of Information and Technology Services (ITS), has been developing a self-service tool called the Resource Management Portal (RMP) to give researchers and their delegates the ability to directly manage the IT research services they consume from ARC. 

Customers who use the Armis2 High-Performance Computing Cluster can now view their account information via the RMP, including the account name, resource limits (CPUs and GPUs), and the user access list.

“We are proud to be able to offer this tool for customers who use the HIPAA-certified Armis2 cluster,” said Brock Palen, ARC director. 

The RMP is a self-service-only user portal with tools and APIs for research managers, unit support staff, and delegates to manage their ARC IT resources. The RMP team is adding capabilities over time. 

To get started or find help, contact arc-support@umich.edu.

No-cost research computing allocations now available

By | HPC, News, Research, Systems and Services, Uncategorized

Researchers on all university campuses can now sign up for the U-M Research Computing Package, a new package of no-cost supercomputing resources provided by Information and Technology Services.

As of Sept. 1, university researchers have access to a base allocation of 80,000 CPU hours of high-performance computing plus research storage services at no cost. This includes 10 terabytes of high-speed storage and 100 terabytes of archival storage.

These base allocations will meet the needs of approximately 75 percent of current high-performance-computing users and 90 percent of current research storage users. Researchers must sign up on ITS’s Advanced Research Computing website to receive the allocation.

“With support from President (Mark) Schlissel and executive leadership, this initiative provides a unified set of resources, both on campus and in the cloud, that meet the needs of the rich diversity of disciplines. Our goal is to encourage the use, support and availability of high-performance computing resources for the entire research community,” said Ravi Pendse, vice president for information technology and chief information officer.

The computing package was developed to meet needs across a diversity of disciplines and to provide options for long-term data management, sharing and protecting sensitive data, and more competitive cost structures that give faculty and research teams more flexibility to procure resources on short notice.

“It is incredibly important that we provide our research community with the tools necessary so they can use their experience and expertise to solve problems and drive innovation,” said Rebecca Cunningham, vice president for research and the William G. Barsan Collegiate Professor of Emergency Medicine. “The no-cost supercomputing resources provided by ITS and Vice President Pendse will greatly benefit our university community and the countless individuals who are positively impacted by their research.”

Ph.D. students may qualify for their own UMRCP resources depending on who is overseeing their research and their adviser relationship. Students should consult with their Ph.D. program administrator to determine their eligibility. ITS will confirm this status when a UMRCP request is submitted.

Undergraduate and master’s students do not currently qualify for their own UMRCP, but they can be added as users or administrators of another person’s UMRCP. Students can also access other ITS programs such as Great Lakes for Course Accounts and Student Teams.

“If you’re a researcher at Michigan, these resources are available to you without financial impact. We’re going to make sure you have what you need to do your research. We’re investing in you as a researcher because you are what makes Michigan Research successful,” said Brock Palen, Advanced Research Computing director.

Services needed beyond the base allocation provided by the UMRCP are offered at reduced rates and are automatically available to all researchers on the Ann Arbor, Dearborn, Flint, and Michigan Medicine campuses.

More Information

Access the sensitive data HPC cluster via web browser

By | Armis2, HPC, News

Researchers, data scientists, and students can now more easily analyze sensitive data on the Armis2 High-Performance Computing (HPC) Cluster. No Linux knowledge is required: just a web browser, an account, and a login. 

This is made possible by a web interface called Open OnDemand, provided by Advanced Research Computing (ARC). 

“It is now much easier to analyze sensitive data, without investing hours in training. This makes the Open OnDemand tool more accessible and user-friendly. I’m excited to see the research breakthroughs that happen now that a significant barrier has been removed,” said Matt Britt, ARC HPC manager. 

Open OnDemand offers easy file management, command-line access to the Armis2 HPC cluster, job management and monitoring, and graphical desktop environments and desktop interactive applications such as RStudio, MATLAB, and Jupyter Notebook.

Resource: Getting started (Web-based Open OnDemand) – section 1.2. For assistance or questions, please contact ARC at arc-support@umich.edu.

ARC is a division of Information and Technology Services (ITS).

HPC, storage now more accessible for researchers

By | HPC, News, Systems and Services


Information and Technology Services (ITS) has launched a new package of supercomputing resources for researchers and Ph.D. students on all U-M campuses: the U-M Research Computing Package.

The U-M Research Computing Package will reduce the current rates for high performance computing and research storage services provided by ITS by an estimated 35-40 percent, effective July 1. 

In addition, beginning Sept. 1, university researchers will have access to a base allocation for high-performance computing and research storage services (including high-speed and archival storage) at no cost, thanks to an additional investment from ITS. These base allocations will meet the needs of approximately 75 percent of current high-performance computing users and 90 percent of current research storage users.

Learn more about the U-M Research Computing Package


RMP new feature alert: View Great Lakes HPC account information

By | General Interest, HPC, News, Systems and Services

Advanced Research Computing (ARC), a division of ITS, has been developing a self-service tool called the Resource Management Portal (RMP) to give researchers and their delegates the ability to directly manage the IT research services they consume from ARC. 

Customers who use the Great Lakes High-Performance Computing Cluster can now view their account information via the RMP, including the account name, resource limits (CPUs and GPUs), scratch space usage, and the user access list.

“We are excited to be able to offer this tool for customers. It should make their busy lives easier,” said Todd Raeker, ARC research experience manager. 

The RMP is a self-service-only user portal with tools and APIs for research managers, unit support staff, and delegates to manage their ARC IT resources. The RMP team is adding capabilities over time. 

To get started or find help, contact arc-support@umich.edu.

Using tweets to understand climate change sentiment

By | HPC, News, Research, Systems and Services

A team from the Urban Sustainability Research Group of the School for Environment and Sustainability (UM-SEAS) has been studying public tweets to understand attitudes toward climate change and global warming in the U.S. 

Dimitris Gounaridis is a fellow with the study. The team is mentored by Joshua Newell and combines work on perceptions of climate change by Jianxun Yang with property-level vulnerability assessment by Wanja Waweru.

“This research is timely and urgent. It helps us identify hazards, and elevated risks of flooding and heat, for socially vulnerable communities across the U.S. This risk is exacerbated especially for populations that do not believe climate change is happening,” Dimitris stated. 

The research team used a deep learning algorithm that recognizes text and predicts whether the person tweeting believes in climate change or not. The algorithm analyzed a total of 7 million public tweets drawn from two sources: the U-M Twitter Decahose and the George Washington University Libraries Dataverse. Together, these consist of a historical archive of Decahose tweets and an ongoing collection from the Decahose. The current deep learning model has an 85% accuracy rate and has been validated at multiple levels.
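
The article does not describe the team’s model in detail, so the following is only a generic illustration of the idea (classifying tweet text as expressing belief or skepticism), using a simple TF-IDF and logistic-regression baseline rather than the team’s deep learning approach; the example tweets and labels are invented.

```python
# Illustrative baseline only, not the team's deep learning model: classify
# tweet text as "believer" or "skeptic" using TF-IDF features and logistic
# regression. The training examples below are made up for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Climate change is real and we need to act now",
    "Another record heat wave, the science could not be clearer",
    "Global warming is a hoax invented to scare people",
    "It snowed today, so much for global warming",
]
labels = ["believer", "believer", "skeptic", "skeptic"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["The climate crisis is already affecting my city"]))
```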

The map below shows predictions of which users believe in or are skeptical of climate change and global warming. Dimitris used geospatial modeling techniques to identify clusters of American skepticism and belief and to create the map.

A map of the United States with blue and red dots indicating climate change acceptance.

(Image courtesy Dimitris Gounaridis.)

The tweet stream is sampled in real time. Armand Burks, a research data scientist with ARC, wrote the Python code that continuously collects the data and stores it in Turbo Research Storage. He says that many researchers across the university are using this data for various research projects as well as classes. 
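
ARC has not published that collector, but the general pattern is straightforward: read records from a stream and append them to date-stamped files on a mounted storage volume. The sketch below illustrates that pattern with newline-delimited JSON read from standard input; the Turbo mount path is a placeholder.

```python
# Generic collector sketch (not ARC's actual code): read newline-delimited
# JSON records from stdin and append them to date-stamped files on a mounted
# storage path. The Turbo path below is a placeholder.
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

OUTPUT_DIR = Path("/nfs/turbo/example-decahose")  # placeholder mount point

def collect(stream) -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for line in stream:
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)  # validate before writing
        except json.JSONDecodeError:
            continue                   # skip malformed records
        day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
        with open(OUTPUT_DIR / f"tweets-{day}.jsonl", "a") as out:
            out.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    collect(sys.stdin)
```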

“We are seeing an increased demand for shared community data sets like the Decahose. ARC’s platforms, like Turbo, ThunderX, and Great Lakes, hold and process that data, and our data scientists are available, in partnership with CSCAR, to assist in deriving meaning from such large data. 

“This is proving to be an effective way to combine compute services, methodology, and campus research mission leaders to make an impact quickly,” said Brock Palen, director of ARC.

In the future, Dimitris plans to refine the model to increase its accuracy, and then combine that with climate change vulnerability for flooding and heat stress.

“MIDAS is pleased that so many U-M faculty members are interested in using the Twitter Decahose. We currently have over 40 projects with faculty in the Schools of Information, Kinesiology, Social Work, and Public Health, as well as at Michigan Ross, the Ford School, LSA and more,” said H.V. Jagadish, MIDAS director and professor of Electrical Engineering and Computer Science.

The Twitter Decahose is co-managed and supported by MIDAS, CSCAR, and ARC, and is available to all researchers without any additional charge. For questions about the Decahose, email Kristin Burgard, MIDAS outreach and partnership manager.

RMP new feature alert: Manage Turbo users on-demand

By | General Interest, News, Systems and Services

Advanced Research Computing (ARC), a division of ITS, has been developing a self-service tool called the Resource Management Portal (RMP) to give researchers and their delegates the ability to directly manage the IT research services they consume from ARC. 

Turbo Research Storage customers now have the option to directly manage their own approved users. 

“Previously, customers would have to submit a ticket and then wait for several days to get a resolution to their request. Now it takes just seconds,” said Todd Raeker, ARC research experience manager. 

The RMP is a self-service-only user portal with tools and APIs for research managers, unit support staff, and delegates to manage their ARC IT resources. The RMP team is adding capabilities over time. 

To get started or find help, contact arc-support@umich.edu.

DNA sequencing productivity increases with ARC-TS services

By | HPC, News, Research, Systems and Services

The Advanced Genomics Core’s Illumina NovaSeq 6000 sequencing platform. It’s about the size of a large laser printer.

On the cutting edge of research at U-M is the Advanced Genomics Core’s Illumina NovaSeq 6000 sequencing platform. The AGC is one of the first academic core facilities to optimize this exciting and powerful instrument, which is about the size of a large laser printer. 

The Advanced Genomics Core (AGC), part of the Biomedical Research Core Facilities within the Medical School Office of Research, provides high-quality, low-cost next generation sequencing analysis for research clients on a recharge basis. 

One NovaSeq run can generate as much as 4 TB of raw data. So how is the AGC able to generate, process, analyze, and transfer so much data for researchers? They have partnered with Advanced Research Computing – Technology Services (ARC-TS) to leverage the speed and power of the Great Lakes High-Performance Computing Cluster.

With Great Lakes, AGC can process the data, then store the output on other ARC-TS services (Turbo Research Storage and Data Den Research Archive) and share it with clients using Globus File Transfer. All three services work together: Turbo offers the capacity and speed to match the computational performance of Great Lakes, Data Den provides an archive of raw data in case of catastrophic failure, and Globus has the performance needed to transfer big data. 
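
For readers who want to script that last step, the sketch below shows one way a delivery transfer might be submitted with the Globus Python SDK (globus-sdk). The endpoint IDs, paths, and access token are placeholders, and obtaining a token normally goes through Globus’s OAuth2 flow; this is an illustration, not AGC’s production workflow.

```python
# Minimal sketch of scripting a Globus transfer with the Globus Python SDK
# (pip install globus-sdk). Endpoint UUIDs, paths, and the access token are
# placeholders for illustration only.
import globus_sdk

TRANSFER_TOKEN = "REPLACE_WITH_TRANSFER_ACCESS_TOKEN"     # placeholder
SOURCE_ENDPOINT = "00000000-0000-0000-0000-000000000000"  # e.g., a Turbo collection
DEST_ENDPOINT = "11111111-1111-1111-1111-111111111111"    # e.g., a client's endpoint

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

tdata = globus_sdk.TransferData(
    tc, SOURCE_ENDPOINT, DEST_ENDPOINT,
    label="Sequencing run delivery", sync_level="checksum"
)
tdata.add_item("/runs/novaseq_example/", "/incoming/novaseq_example/",
               recursive=True)  # directory-to-directory copy

task = tc.submit_transfer(tdata)
print("Submitted Globus task:", task["task_id"])
```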

“Thanks to Great Lakes, we were able to process dozens of large projects simultaneously, instead of being limited to just a couple at a time with our in-house system,” said Olivia Koues, Ph.D., AGC managing director. 

“In calendar year 2020, the AGC delivered nearly a half petabyte of data to our research community. We rely on the speed of Turbo for storage, the robustness of Data Den for archiving, and the ease of Globus for big data file transfers. Working with ARC-TS has enabled incredible research such as making patients resilient to COVID-19. We are proudly working together to help patients.”

“Our services process more than 180,000 GB of raw data per year for the AGC. That’s the same as streaming the three original Star Wars movies and the three prequels more than 6,000 times,” said Brock Palen, ARC-TS director. “We enjoy working with AGC to assist them into the next step of their big data journey.”

ARC-TS is a division of Information and Technology Services (ITS). The Advanced Genomics Core (AGC) is part of the Biomedical Research Core Facilities (BRCF) within the Medical School Office of Research.

Armis2 Update: May 2020 (Increased Compute/GPU Capacity and Limits)

By | Armis2, News

ARC-TS is pleased to announce the addition of compute resources in the standard, large memory, and GPU partitions, new V100 GPUs (graphics processing units), and increased Slurm root account limits for Armis2 effective May 20, 2020. 


Additional compute capability added

ARC-TS will be adding 93 standard compute nodes, 4 large memory nodes, and 3 new GPU nodes (each with 4 NVIDIA K40x GPUs). These nodes are the same hardware type as the existing Armis2 nodes. We plan to migrate the new hardware on May 20, 2020.


New GPUs added

ARC-TS has added five nodes, each with three V100 GPUs, to the gpu partition for faster service. These are the same type of GPU nodes as those in the Great Lakes HPC cluster. Learn more about the V100 GPU.


What do I need to do? 

You can access the new GPUs by submitting your jobs to the Armis2 gpu partition. Refer to the Armis2 user guide, section 1.2 “Getting started,” Part 5, “Submit a job,” or contact arcts-support@umich.edu if you have questions or need help. 
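
As an illustration, the sketch below builds a small batch script that requests one GPU in the gpu partition and pipes it to sbatch from Python. The account name, resource amounts, and workload are placeholders; adjust them to your allocation and run this from an Armis2 login node.

```python
# Minimal sketch of submitting a GPU job to the Armis2 "gpu" partition from
# Python. The account name, walltime, memory, and application command are
# placeholders; adjust them for your allocation.
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=gpu-example
#SBATCH --account=example_project     # placeholder Slurm account
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1                  # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --mem=16g
#SBATCH --time=02:00:00

nvidia-smi                            # placeholder workload
"""

# Pipe the script to sbatch; equivalent to saving it and running "sbatch job.sh".
result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())          # e.g., "Submitted batch job 1234567"
```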


Resources


How do I get help? 

Contact arcts-support@umich.edu if you have questions or need help. 


Slurm default resource limits increased

ARC-TS will be raising the default Slurm resource limits (set at the per-PI/project root account level) to give each researcher up to 33% of the resources in the standard partition, and 25% of the resources in the largemem and gpu partitions, to better serve your research needs. This will happen on May 20, 2020.


What do I need to do? 

Review, enable, or modify limits on your Armis2 Slurm accounts. Because of the higher CPU limit, your researchers will be able to run more jobs, which could generate a larger bill. Contact arcts-support@umich.edu if you would like to modify or add any limits. 


What is a Slurm root account?

A per-principal investigator (PI) or per-project root account contains one or more Slurm sub-accounts, each with its own users, limits, and shortcode(s). The entire root account has limits for overall cluster and /scratch usage, in addition to any limits put on the sub-accounts.


What is the new Slurm root account limit? 

Each PI’s or project’s collection of Slurm accounts will be increased to 1,032 cores, 5,160 GB of memory, and 10 GPUs, effective May 20, 2020. The Slurm root account limit is currently set to 90 cores. We will document all of the updated limits, including the large memory and GPU limits, on the Armis2 website when they go into effect.
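
To check the limits applied to a root account and its sub-accounts, the standard Slurm accounting tool is sacctmgr. The sketch below shows one way to query it from Python; the account name is a placeholder, and the exact fields and values ARC configures may differ.

```python
# Sketch: list the Slurm associations (sub-accounts and users) under a root
# account, including their GrpTRES limits. Run on a cluster login node; the
# account name is a placeholder and actual limits may differ.
import subprocess

ROOT_ACCOUNT = "example_root"  # placeholder per-PI/per-project root account

cmd = [
    "sacctmgr", "--noheader", "--parsable2",
    "show", "association", f"account={ROOT_ACCOUNT}",
    "format=Account,User,GrpTRES",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# A GrpTRES value such as "cpu=1032,mem=5160G,gres/gpu=10" reflects the
# core, memory, and GPU limits described above.
for line in out.strip().splitlines():
    print(line)
```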


Resources


How do I get help?