All Posts By sdascola

Globus maintenance happening at 9 a.m. on March 11

By | Armis2, Data, General Interest, Great Lakes, HPC, News, Research, Uncategorized

Due to planned maintenance by the vendor, Globus services will be unavailable for up to two hours beginning at 9 a.m. U.S. Eastern Time (8 a.m. Central Time) on Saturday, March 11, 2023.

Customers will not be able to authenticate or initiate any transfers during that time. Transfers already in progress when the outage begins will be stalled and will resume once maintenance is complete.

More details are available on the Globus blog.

For assistance or questions, please contact ARC at arc-support@umich.edu.

Protein structure prediction team achieved top rankings

By | Great Lakes, News, Uncategorized

CASP15 is a biennial, community-wide assessment of protein structure modeling methods. Independent assessors compared the submitted models with experimental structures, and the results and their implications were discussed at the CASP15 conference, held in December 2022 in Turkey.

A joint team with members from the labs of Dr. Peter Freddolino and Dr. Yang Zhang took first place in the Multimer and Interdomain Prediction categories, and was again the top-ranked server in the Regular (domains) category according to the CASP assessor’s criteria.

These wins are well-earned. Freddolino noted, “This is a highly competitive event, against some of the very best minds and powerful companies in the world.”

The Zhang/Freddolino team competed against nearly 100 other groups, including teams from other academic institutions as well as major cloud and commercial companies. Groups from around the world submitted more than 53,000 models on 127 modeling targets in five prediction categories.

“Wei’s predictions did amazingly well in CASP15!” said Freddolino. Wei Zheng, Ph.D., is a lab member and a research fellow with the Department of Computational Medicine and Bioinformatics (DCMB).

Zheng said that the team participated in the regular protein structure prediction and protein complex structure prediction categories. “The results are assessed as regular protein domain modeling, regular protein inter-domain modeling, and protein complex modeling. In all categories, our models performed very well!”

The technology that supported this impressive work 

The resources to achieve these results were grant-funded, which allowed the team to leverage a number of university resources, including:  

  • The Lighthouse High-Performance Computing (HPC) Cluster service. Lighthouse is managed by the Advanced Research Computing (ARC) team, and ARC is a division of Information and Technology Services (ITS).
  • The Great Lakes HPC Cluster, which provided additional capacity for the team’s GPU-intensive algorithms. Graphics processing units (GPUs) are specialized processors originally designed to accelerate graphics rendering and now widely used to accelerate scientific computation. Kenneth Weiss, IT project manager senior with DCMB and HITS, said that many of the algorithms used by Zheng benefited from the increased performance of computing the data on a GPU. (A sketch of a GPU job script appears after this list.)
  • Multiple storage systems, including Turbo Research Storage. High-speed storage was crucial for storing the AI-trained models and sequence libraries used by D-I-TASSER/DMFold-Multimer, the methods developed by Zhang, Freddolino, and Zheng.
  • Given the scale of the CASP targets, the grant-funded compute was augmented with capacity on the Great Lakes cluster. Freddolino and his team took advantage of the allocations provided by the ITS U-M Research Computing Package (UMRCP) and the HITS Michigan Medicine Research Computing Investment (MMRCI) programs, which substantially defrayed the cost of computing.
  • The collaboration tool Slack was used to keep Freddolino and Zheng in close contact with ARC and the DCMB teams. This provided the ability to deal with issues promptly, avoiding delays that would have had a detrimental impact on meeting CASP targets.
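
For readers unfamiliar with how GPU work is submitted on Great Lakes, the sketch below shows what a minimal Slurm batch script for a single-GPU job might look like. It is an illustration only: the account name, partition, resource amounts, module, and Python script are placeholders, not the team’s actual configuration.

  #!/bin/bash
  # Hypothetical Slurm batch script for a single-GPU job on Great Lakes.
  # The account, partition, resources, module, and program are placeholders.
  #SBATCH --job-name=structure-prediction
  #SBATCH --account=example_account
  #SBATCH --partition=gpu
  #SBATCH --gres=gpu:1
  #SBATCH --cpus-per-task=4
  #SBATCH --mem=32g
  #SBATCH --time=24:00:00

  module load cuda
  python predict_structure.py

Submitting a script like this with sbatch queues the work on the cluster, and Slurm allocates the requested GPU node when one becomes available.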

Technology staff from ARC, DCMB, and Health Information and Technology Services (HITS) provided assistance to the research team. All of the teams helped mitigate bottlenecks affecting the speed and throughput that Zheng needed for results. Staff also located and helped leverage resources, including those on Great Lakes, by utilizing available partitions and queues on the clusters.

“Having the flexibility and capacity provided by Great Lakes was instrumental in meeting competition deadlines,” said Weiss.

DCMB staff and the HITS HPC team took the lead on triaging software problems, giving Freddolino’s group high priority.

ARC Director Brock Palen provided monitoring and guidance on real-time impact and utilization of resources. “It was an honor to support this effort. It has always been ARC’s goal to take care of the technology so researchers can do what they do best. In this case, Freddolino and Zheng knocked it out of the park.”

Jonathan Poisson, technical support manager with DCMB, was instrumental in helping to select and configure the equipment purchased by the grant. “This assistance was crucial in meeting the tight CASP15 targets, as each target is accompanied by a deadline for results.” 

Read more on the Computational Medicine and Bioinformatics website and the Department of Biological Chemistry website.

Related presentation: D-I-TASSER: Integrating Deep Learning with Multi-MSAs and Threading Alignments for Protein Structure Prediction

The resources to achieve these results were provided by an NIH-funded grant (“High-Performance Computing Cluster for Biomedical Research,” SIG: S10OD026825). 

Flux transfer servers will be decommissioned on Jan. 9; use new endpoints

By | Feature, General Interest, News

To provide faster transfers of data from ARC services, ARC will be decommissioning the Flux-Xfer servers on January 9, 2023. You will need to update how you migrate your data. 

For everyone who uses ARC storage services, especially Data Den users: This message is VERY important! This change includes the use of scp from Flux-Xfer, as well as the Globus endpoint umich#flux. Any shared endpoints that you have created on umich#flux will be automatically migrated to a new Globus collection on January 9. Those who use Data Den should take special interest in Item 1 listed below. 

Action item – Use the new endpoints

  1. If you currently use Globus and umich#flux to access your Locker or Data Den volume, you should use the Globus Collection ‘UMich ARC Locker Non-Sensitive Volume Collection’ for Locker, and the Globus Collection ‘UMich ARC Data Den Non-Sensitive Volume Collection’ for Data Den.
  2. If you currently use Globus and umich#flux to access your Turbo volumes, you should use the Globus Collection ‘UMich ARC Turbo Non-Sensitive Volume Collection’.
  3. If you currently use Globus and umich#flux to access other storage volumes, you should use the Globus Collection ‘umich#greatlakes’.
  4. If you currently use scp on flux-xfer to copy data to/from Turbo, you should use ‘globus-xfer1.arc-ts.umich.edu’ (see the example below).
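
As a hedged illustration of item 4 above, a transfer that previously used scp against Flux-Xfer might now target the new host as follows; the uniqname and the Turbo volume path are placeholders.

  # Copy a local file to a Turbo volume via the new transfer host.
  # Replace 'uniqname' and the Turbo volume path with your own values.
  scp results.tar.gz uniqname@globus-xfer1.arc-ts.umich.edu:/nfs/turbo/your-volume/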

User guide 

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

2023 Winter Maintenance & Globus File Transfer upgrade 

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

These services will be unavailable: 

  • Great Lakes – We will be updating Great Lakes on a rolling basis throughout December and the beginning of January. If the rolling updates are successful, there should be no downtime or impact, with the following exceptions: 
    • Single-precision GPUs (SPGPU) will be down Jan. 4-5 for networking maintenance. Those nodes will return to production when maintenance has been completed and the nodes have been reloaded.
    • Customers will be notified via email of any changes to Great Lakes maintenance that will require downtime.
    • If the rolling updates are unsuccessful, Great Lakes maintenance will begin Jan. 4-5, starting at 8 a.m. In either case, we will email everyone with the updated maintenance status.
  • Globus on the storage transfer nodes: Jan. 17-18.

Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 
  • Slurm email will be improved, providing more detailed information about completed jobs.

Countdown to maintenance 

For Great Lakes HPC jobs, use the command “maxwalltime” to discover the amount of time remaining until maintenance begins. 

Jobs that request more walltime than remains until maintenance will automatically be queued and start once maintenance is complete. If the plan for Great Lakes maintenance is successful, any queued jobs will be able to run as usual (except for the SPGPU nodes as discussed above). Customers will be notified via email if downtime is required for Great Lakes.
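
As an illustrative sketch (the job script name and the requested time below are placeholders), you can check the remaining window and then request a walltime that fits inside it:

  # Show how much walltime remains before maintenance begins.
  maxwalltime

  # Request a walltime shorter than the remaining window so the job can
  # start before maintenance; longer requests will queue until it ends.
  sbatch --time=08:00:00 my_job.sh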

Status updates and additional information

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

ARC Director Brock Palen spoke about Globus at SC22

By | General Interest, News

At the recent International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) in Dallas, Texas, ARC Director Brock Palen spoke about how the University of Michigan is using Globus File Transfer. Globus allows the transfer of data between different storage systems, lab computers, and personal desktops/laptops.

“U-M is the largest public research university in North America by research spend. There are over a hundred top 10 graduate programs, and this drives a lot of interesting collaborations,” said Palen.  

Globus is a robust, cloud-based, file transfer service designed to move many large files, ranging from 10s of GBs to 10s of TBs. ARC is a Globus Subscription Provider for the U-M community, which allows U-M resources to serve as endpoints or collections for file transfers.

“Globus is the tool we use for data transfer, and it’s pretty much ubiquitous across Internet2, the national sites, other academic sites, and it interoperates everywhere,” Palen noted.  
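
In addition to the web interface, Globus also provides a command-line interface. A minimal sketch of a recursive transfer between two collections is shown below; the collection UUIDs and paths are placeholders.

  # Authenticate with Globus (opens a browser-based login).
  globus login

  # Transfer a directory between two collections.
  # The UUIDs and paths are placeholders.
  globus transfer --recursive \
      SRC_COLLECTION_UUID:/path/to/data \
      DEST_COLLECTION_UUID:/destination/path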

Watch the YouTube video featuring Brock Palen

Check out the Globus user guide on the ARC website.

XSEDE is now ACCESS 

By | Feature, HPC, News

The national HPC resource known as XSEDE has now fully transitioned to ACCESS, Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support.

ARC staff members Todd Raeker and Shelly Johnson are here to help you understand what is happening and how you can take advantage of the national resources and allocations.

ACCESS has streamlined the allocation application process. Many projects that formerly had to submit an annual allocation proposal under XSEDE may now fit within one of the new ACCESS opportunities for small or medium-sized projects. You can now make one request with a much shorter allocations proposal. You can also make these requests at any time, rather than wait for the quarterly deadlines. Allocations are awarded for the duration of a supporting grant or for 12 months, with possible extensions up to five years.

There are new tiers as well. The new Discover tier offers 1.5 million available credits and is the closest in scale to an annual allocation under XSEDE. Researchers can submit the one-page application anytime. Check the overview of the ACCESS opportunities to see where your project fits. If your research activities are considered a large-scale project, you will need to submit a proposal for a Maximize ACCESS award. Requests for Maximize ACCESS are accepted every six months. More details are available on the Maximize ACCESS web page.

Are you a grad student researcher? You now qualify to apply as the principal investigator (PI) for an ACCESS allocation in the Explore ACCESS tier to obtain resources to help complete your dissertation. Graduate students will need to include a letter of support from their advisor, and the advisor must be added as a co-PI.

If you are interested in assisting the national research and educational community by reviewing ACCESS allocation requests, you can sign up to join the ACCESS allocations review committee.

Need help? Contact arc-support@umich.edu, or visit the ACCESS web page on the ARC website. You can also keep up with ongoing updates on the ACCESS webpage, access-ci.org.

Attend a listening session with ARC Director Brock Palen

By | Feature, HPC, News

You’re invited! ARC Director Brock Palen would like to hear from researchers. 

Are ARC services meeting their needs? What is not working well for them? Is something technical impeding their ability to do their research? Do they like Turbo Research Storage, the HPC web interface Open OnDemand, or the no-cost allocations offered by the U-M Research Computing Package?

This is an open, virtual, drop-in office hour. All are welcome. 

Three sessions are available: 

Researchers can also email Brock Palen at brockp@umich.edu or reach out to ARC at arc-support@umich.edu.

Precision Health and ARC team up on a self-service tool for genetic research

By | Great Lakes, HPC, News

Encore is a self-serve genetic analysis tool that researchers can now run using a point-and-click interface, without the need to directly manipulate the genetic data. Only a phenotype file is needed to build a genome-wide association study (GWAS) model with SAIGE (genetics analysis software), launch and monitor job progress, and interactively explore results.

It is geared for a range of disciplines and specialties including biostatistics, epidemiology, neuroscience, gastroenterology, anesthesiology, clinical pharmacy, and bioinformatics.

The tool was developed at the U-M School of Public Health Center for Statistical Genetics and is managed by Precision Health and supported by ITS’s Advanced Research Computing (ARC).  

Brock Palen, ARC director, said, “When someone uses Encore they are actually running on Great Lakes, and we are happy to provide the computational performance behind Encore.”

Using Encore is easy. No coding or command-line/Linux knowledge is required to run a GWAS in Encore. Researchers also do not need knowledge of batch job submission or scheduling, or direct access to a high-performance computing cluster. Encore automatically prepares job submission scripts and submits the analysis to the Great Lakes High-Performance Computing Cluster.

Great Lakes is the university’s flagship open-science high-performance computing cluster. It is much faster and more powerful than a laptop, and provides quicker answers and optimized support for simulation, genomics, machine learning, life science, and more. The platform provides a balanced combination of computing power, I/O performance, storage capability, and accelerators.

Visit the Encore wiki page to learn more

To get started, send an email to PHDataHelp@umich.edu

For questions about Great Lakes, contact arc-support@umich.edu

Dailey receives U-M Robotics’ first-ever alumni award 

By | General Interest, Happenings, News, Research, Uncategorized

Meghan Dailey will present “The Future of Machine Learning in Robotics” on September 23 at 2 p.m. in the Ford Robotics Building (FMCRB) or on Zoom.

Meghan Dailey is the U-M Robotics department’s first Alumni Merit Award recipient!

Dailey is a member of the first-ever class in U-M Robotics. She earned a Master of Science degree in 2015 with a focus in artificial intelligence. She is currently a machine learning specialist with Advanced Research Computing (ARC), a division of Information and Technology Services (ITS).

You’re invited 

In honor of the award, Dailey will present “The Future of Machine Learning in Robotics” on Friday, September 23, at 2 p.m. in the Ford Robotics Building (FMCRB) or on Zoom (meeting ID: 961 1618 4387, passcode: 643563). Machine learning is becoming widely prevalent in many different fields, including robotics. In a future where robots and humans assist each other in completing tasks, what is the role of machine learning, and how should it evolve to effectively serve both humans and robots? Dailey will discuss her past experiences in robotics and machine learning, and how she envisions machine learning contributing to the growth of the robotics field.

About Dailey

A member of the ARC Scientific Computing and Research Consulting Services team, Dailey helps researchers with machine learning and artificial intelligence programming. She has consulted with student and faculty teams to build neural networks for image analysis and classification. She also has extensive experience in natural language processing and has worked on many projects analyzing text sentiment and intent.

Image courtesy U-M Robotics

Maintenance on May 3: Great Lakes, Armis2, and Lighthouse 

By | Uncategorized

On Tuesday, May 3, from 7-7:30 a.m., ARC will perform a vendor upgrade to Slurm on Great Lakes, Armis2, and Lighthouse HPC Clusters and their storage systems. ARC apologizes for the inconvenience.

Impact

  • This maintenance will cause a Slurm outage from 7-7:30 a.m.

  • Access to the cluster file systems will be available during the update.

  • Jobs that cannot complete prior to 7 a.m. on May 3 will be held and will become eligible to start once maintenance is complete.

Status updates

Check the ITS Service Status page and follow ARC on Twitter for progress updates.

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu, or visit Virtual Drop-in Office Hours (CoderSpaces) for hands-on help, available 9:30-11 a.m. and 2-3:30 p.m. on Tuesdays; 1:30-3 p.m. on Wednesdays; and 2-3:30 p.m. on Thursdays.

For other topics, contact the ITS Service Center: