3-2-1…blast off! COE students use ARC-TS HPC clusters for rocket design

The MASA team has been working with ARC-TS and the Great Lakes High-Performance Computing Cluster to rapidly iterate simulations. What previously took six hours on another cluster takes 15 minutes on Great Lakes. (Image courtesy of MASA)

This article was written by Taylor Gribble, the ARC-TS summer 2020 intern. 

The Michigan Aeronautical Science Association (MASA) is a student-run engineering team at U-M that has been designing, building, and launching rockets since its inception in 2003. Since late 2017, MASA has focused on developing liquid-bipropellant rockets, which produce thrust by reacting a liquid fuel with a liquid oxidizer, in an effort to remain at the forefront of collegiate rocketry. The team is made up of roughly 70 active members, including both undergraduate and graduate students, who participate year-round.

Since 2018, MASA has been working on the Tangerine Space Machine (TSM), which it aims to make the first student-built liquid-bipropellant rocket ever launched to space. When completed, the rocket’s all-metal airframe will stand over 25 feet tall. The TSM is designed to reach an altitude of 400,000 feet, flying to space at more than five times the speed of sound.

MASA is building this rocket as part of the Base 11 Space Challenge which was organized by the Base 11 Organization to encourage high school and college students to get involved in STEM fields. The competition has a prize of $1 million, to be awarded to the first team to successfully reach space. MASA is currently leading the competition, having won Phase 1 of the challenge in 2019 with the most promising preliminary rocket design.

Since the start of the TSM project, MASA has made great strides toward its goals. The team has built and tested many parts of the complete system, including custom tanks, electronics, and ground support equipment. In 2020, the experimental rocket engine MASA designed for the rocket broke the student thrust record when it was tested, validating the team’s work.

The team’s rapid progress was made possible in part by extensive, fast-turnaround simulations run on the ARC-TS Great Lakes High-Performance Computing Cluster.

Student engineers Edward Tang, Tommy Woodbury, and Theo Rulko have each been part of MASA for over two years.

Tang is MASA’s aerodynamics and recovery lead and a junior studying aerospace engineering with a minor in computer science. His team is working to develop advanced in-house flight simulation software to predict how the rocket will behave during its trip to space.

“Working on the Great Lakes HPC Cluster allows us to do simulations that we can’t do anywhere else. The simulations are complicated and can be difficult to run. We have to check it, and do it again; over and over and over,” said Tang. “The previous computer we used would take as long as six hours to render simulations. It took 15 minutes on Great Lakes.”

This image shows a Liquid Oxygen Dome Coupled Thermal-Structural simulation that was created on the ARC-TS Great Lakes HPC Cluster. (Image courtesy of MASA)

Rulko, the team’s president, is a junior studying aerospace engineering with a minor in materials science and engineering.

Just like Tang, Rulko has experience using the Great Lakes cluster. “Almost every MASA subteam has benefited from access to Great Lakes. For example, the Structures team has used it for Finite Element Analysis simulations of complicated assemblies to make them as lightweight and strong as possible, and the Propulsion team has used it for Computational Fluid Dynamics simulations to optimize the flow of propellants through the engine injector. These are both key parts of what it takes to design a rocket to go to space which we just wouldn’t be able to realistically do without access to the tools provided by ARC-TS.”

Rulko’s goals for the team include focusing on developing as much hardware/software as possible in-house so that members can control and understand the entire process. He believes MASA is about more than just building rockets; his goal for the team is to teach members about custom design and fabrication and to make sure that they learn the problem-solving skills they need to tackle real-world engineering challenges. “We want to achieve what no other student team has.”

MASA has recently faced unforeseen challenges due to the COVID-19 pandemic that threaten not only the team’s timeline but also its cohesiveness. “Because of the pandemic, the team is dispersed literally all over the world. Working with ARC-TS has benefitted the entire team. The system has helped us streamline and optimize our workflow, and has made it easy to connect to Great Lakes, which allows us to rapidly develop and iterate our simulations while working remotely from anywhere,” said Tang. “The platform has been key to allowing us to continue to make progress during these difficult times.”

Tommy Woodbury is a senior studying aerospace engineering. Throughout his time on MASA, he has been able to develop many skills. “MASA is what has made my time here at Michigan a really positive experience. Having a group of highly motivated and supportive individuals has undoubtedly been one of the biggest factors in my success transferring to Michigan.”

This image depicts the Liquid Rocket Engine Injector simulation. (Image courtesy of MASA)

ARC-TS is a division of Information and Technology Services. Great Lakes is available without charge to student teams and organizations that need HPC resources. The program aims to give students access to high-performance computing in support of their teams’ missions.

Great Lakes Update: March 2019

ARC-TS previously shared much of this information in the December 2018 ARC newsletter and on the ARC-TS website. This update adds further detail on the Great Lakes timeline and on Early User testing for users who would like to participate.

What is Great Lakes?

The Great Lakes service is a next-generation HPC platform for University of Michigan researchers that will provide several performance advantages compared to Flux. Great Lakes is built around Intel’s latest CPU architecture, Skylake, and will have standard, large-memory, visualization, and GPU-accelerated nodes. For more information on the technical aspects of Great Lakes, please see the Great Lakes configuration page.

Key Features:

  • Approximately 13,000 Intel Skylake Gold processor cores providing AVX-512 capability and over 1.5 TFLOPS of performance per node
  • 2 PB scratch storage system providing approximately 80 GB/s performance (compared to 8 GB/s on Flux)
  • New InfiniBand network with improved architecture and 100 Gb/s to each node
  • Each compute node will have significantly faster I/O via SSD-accelerated storage
  • Large Memory Nodes with 1.5 TB memory per node
  • GPU Nodes with NVIDIA Volta V100 GPUs (2 GPUs per node)
  • Visualization Nodes with Tesla P40 GPUs

Great Lakes will be using Slurm as the resource manager and scheduler, which will replace Torque and Moab on Flux. This will be the most immediate difference between the two clusters and will require some work on your part to transition from Flux to Great Lakes.
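
As a rough sketch of what that transition looks like, here is how a simple Torque/PBS submission script from Flux might map onto Slurm directives for Great Lakes. The account and partition names below are placeholders for illustration, not confirmed Great Lakes values:

    #!/bin/bash
    # Former Torque/PBS directives on Flux (shown for comparison):
    #   #PBS -N my_job
    #   #PBS -A example_flux
    #   #PBS -q flux
    #   #PBS -l nodes=1:ppn=4,mem=8gb,walltime=02:00:00
    #
    # Equivalent Slurm directives on Great Lakes:
    #SBATCH --job-name=my_job
    #SBATCH --account=example          # placeholder account name
    #SBATCH --partition=standard       # placeholder partition name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=4
    #SBATCH --mem=8g
    #SBATCH --time=02:00:00

    ./my_program                       # my_program stands in for your own code

Jobs would then be submitted with sbatch instead of qsub and monitored with squeue instead of qstat.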

Another significant change is that we are making Great Lakes easier to use through a simplified accounting structure.  Unlike Flux where you need an account for each resource, on Great Lakes you can use the same account and simply request the resources you need, from GPUs to large memory.
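
For illustration, under that model the same account could request very different resources just by changing a job’s Slurm directives; a minimal sketch, assuming hypothetical partition names such as “gpu” and “largemem”:

    # Fragment 1: request a GPU node with two GPUs under the same account
    #SBATCH --account=example          # one account covers all resource types
    #SBATCH --partition=gpu            # hypothetical GPU partition name
    #SBATCH --gres=gpu:2

    # Fragment 2: request a large-memory node instead, still under the same account
    #SBATCH --account=example
    #SBATCH --partition=largemem       # hypothetical large-memory partition name
    #SBATCH --mem=1000g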

There will be two primary ways to get access to compute time: 1) the on-demand model, which adds up the account’s job charges (reserved resources multiplied by the time used) and is billed monthly, similar to Flux On-Demand, and 2) node purchases. In the node purchase model, you will own the hardware, which will reside in Great Lakes through the life of the cluster, and you will receive an equivalent credit which you can use anywhere on the cluster, including on GPU and large-memory nodes. We believe this will be preferable to buying actual hardware in the FOE (Flux Operating Environment) model, as your daily computational usage can increase and decrease as your research requires. Send us an email at arcts-support@umich.edu if you have any questions or are interested in purchasing hardware on Great Lakes.
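
As a purely hypothetical illustration of how on-demand charges add up (Great Lakes rates had not been finalized when this update was written, so the rate below is a made-up placeholder):

    # A job that reserves 36 cores for 2 hours accrues 36 * 2 = 72 core-hours.
    # The monthly bill is the sum of such charges across all of the account's jobs.
    cores=36                      # cores reserved by the job
    hours=2                       # wall-clock hours the reservation was held
    rate_cents=1                  # placeholder rate in cents per core-hour (not a real rate)
    core_hours=$(( cores * hours ))
    echo "core-hours for this job:     ${core_hours}"
    echo "hypothetical charge (cents): $(( core_hours * rate_cents ))"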

When will Great Lakes be available?

The ARC-TS team will prepare the cluster in April 2019 for an Early User period beginning in May, which will continue for approximately four weeks to ensure sufficient time to address any issues. General availability of Great Lakes should occur in June 2019. We have published a more detailed timeline for the Great Lakes project.

How does this impact me? Why Great Lakes?

After eight years as the University’s primary HPC cluster, Flux will be retired in September 2019. Once Great Lakes becomes available to the University community, we will provide a few months to transition from Flux to Great Lakes. Flux will be retired after that period due to aging hardware as well as expiring service contracts and licenses. We highly recommend preparing to migrate as early as possible so your research will not be interrupted. Later in this update, we have suggestions for making the migration process as easy as possible.

When Great Lakes becomes generally available to the University community, we will no longer be accepting new Flux accounts or allocations.  All new work should be focused on Great Lakes.

The HPC timeline, including Great Lakes, Beta, and Flux, is available on the ARC-TS website.

What is the current status of Great Lakes?

Today, the Great Lakes HPC compute hardware and high-performance storage system have been fully installed and configured. In parallel with this work, ARC-TS and Unit Support team members have been readying the new service with new software and modules, as well as developing training to support the transition onto Great Lakes. A key feature of the new Great Lakes service is the just-released HDR InfiniBand from Mellanox. The hardware is installed, but the firmware is still in its final stages of testing with the supplier, with a target delivery of mid-April 2019. Given the delays, ARC-TS and the suppliers have discussed an adjusted plan that allows quicker access to the cluster while supporting the future update once the firmware becomes available.

We are working with ITS Finance to define rates for Great Lakes.  We will update the Great Lakes documentation when we have final rates and let everyone know in subsequent communications.

What should I do to transition to Great Lakes?

We hope the transition from Flux to Great Lakes will be relatively straightforward, but to minimize disruptions to your research, we recommend you do your testing early. In October 2018, we announced availability of the HPC cluster Beta to help users with this migration. Primarily, it allows users to migrate their PBS/Torque job submission scripts to Slurm. You can and should also review the new Modules environment, as it has changed from the current configuration on Flux. Beta uses the same generation of hardware as Flux, so your performance will be similar to that on Flux. You should continue to use Flux for your production work; Beta is only for testing your Slurm job scripts, not for any production work.

Every user on Flux has an account on Beta. You can log in to Beta at beta.arc-ts.umich.edu. You will have a new home directory on Beta, so you will need to migrate any scripts and data files you need to test your workloads into this new directory. Beta should not be used for any PHI, HIPAA, Export Controlled, or other sensitive data! We highly recommend that you use this time to convert your Torque scripts to Slurm and test that everything works as you would expect.
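
A minimal sketch of that workflow, using standard ssh, scp, and Slurm commands (the uniqname, directory, module, and script names are hypothetical):

    # Log in to Beta (replace "uniqname" with your own):
    ssh uniqname@beta.arc-ts.umich.edu

    # From Flux or your workstation, copy the scripts and small test data you need
    # into your new Beta home directory:
    scp -r my_workflow/ uniqname@beta.arc-ts.umich.edu:~/

    # On Beta: inspect the new Modules environment, then test a converted script:
    module avail                       # list the modules available on Beta
    module load gcc                    # hypothetical module name
    sbatch my_workflow/test_job.sh     # a job script already converted to Slurm
    squeue -u uniqname                 # check the status of the test job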

To learn how to use Slurm, we have provided documentation on our Beta website. Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. We will post a schedule on the ARC-TS website and announce new sessions through Twitter and email.

If you have compiled software for use on Flux, we highly recommend that you recompile it on Great Lakes once the cluster becomes available. Great Lakes uses the latest CPUs from Intel, and recompiling may yield performance gains by taking advantage of the new CPUs’ capabilities, such as AVX-512.
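
For example, a code built on Flux might be rebuilt so the compiler can target the Skylake CPUs and their AVX-512 units; a hedged sketch using GCC (the module and file names are illustrative, not an ARC-TS-prescribed recipe):

    # Load a compiler through the Modules environment (module name is illustrative):
    module load gcc

    # Rebuild with optimizations that target Skylake and AVX-512:
    gcc -O3 -march=skylake-avx512 -o my_solver my_solver.c

    # Alternatively, optimize for whatever CPU the build node provides:
    gcc -O3 -march=native -o my_solver my_solver.c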

Questions? Need Assistance?

Contact arcts-support@umich.edu.

Great Lakes Update: December 2018

What is Great Lakes?

The Great Lakes service is a next-generation HPC platform for University of Michigan researchers. Great Lakes will provide several performance advantages compared to Flux, primarily in the areas of storage and networking. Great Lakes is built around Intel’s latest CPU architecture, Skylake, and will have standard, large-memory, visualization, and GPU-accelerated nodes. For more information on the technical aspects of Great Lakes, please see the Great Lakes configuration page.

Key Features:

  • Approximately 13,000 Intel Skylake Gold processor cores providing AVX-512 capability and over 1.5 TFLOPS of performance per node
  • 2 PB scratch storage system providing approximately 80 GB/s performance (compared to 8 GB/s on Flux)
  • New InfiniBand network with improved architecture and 100 Gb/s to each node
  • Each compute node will have significantly faster I/O via SSD-accelerated storage
  • Large Memory Nodes with 1.5 TB memory per node
  • GPU Nodes with NVIDIA Volta V100 GPUs (2 GPUs per node)
  • Visualization Nodes with Tesla P40 GPUs

Great Lakes will be using Slurm as the resource manager and scheduler, which will replace Torque and Moab on Flux. This will be the most immediate difference between the two clusters and will require some work on your part to transition from Flux to Great Lakes.

Another significant change is that we are making Great Lakes easier to use through a simplified accounting structure. Unlike Flux where you need an account for each resource, on Great Lakes you can use the same account and simply request the resources you need, from GPUs to large memory.

There will be two primary ways to get access to compute time: 1) the pay-as-you-go model, similar to Flux On-Demand, and 2) node purchases. Node purchases will give you computational time commensurate with four years multiplied by the number of nodes you buy. We believe this will be preferable to buying actual hardware in the FOE (Flux Operating Environment) model, as your daily computational usage can increase and decrease as your research requires. Additionally, you will not be limited by hardware failures on your specific nodes, as your jobs can run anywhere on Great Lakes. Send us an email at arcts-support@umich.edu if you have any questions or are interested in purchasing hardware on Great Lakes.
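
As a back-of-the-envelope sketch of what “computational time commensurate with four years multiplied by the number of nodes you buy” works out to (pure arithmetic, not an official rate calculation):

    # Buying 2 nodes would translate into roughly 2 * 4 node-years of compute time,
    # usable anywhere on Great Lakes rather than tied to specific hardware.
    nodes=2
    years=4
    hours_per_year=8760
    echo "approximate node-hours of credit: $(( nodes * years * hours_per_year ))"   # 70080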

When will Great Lakes be available?

The ARC-TS team will prepare the cluster in February/March 2019 for an Early User period which will continue for several weeks to ensure sufficient time to address any issues. General availability of Great Lakes should occur in April.

How does this impact me? Why Great Lakes?

After eight years as the University’s primary HPC cluster, Flux will be retired in September 2019. Once Great Lakes becomes available to the University community, we will provide a few months to transition from Flux to Great Lakes. Flux will be retired after that period due to aging hardware as well as expiring service contracts and licenses. We highly recommend preparing to migrate as early as possible so your research will not be interrupted. Later in this update, we have suggestions for making the migration process as easy as possible.

When Great Lakes becomes generally available to the University community, we will no longer be accepting new Flux accounts or allocations.  All new work should be focused on Great Lakes.

What is the current status of Great Lakes?

Today, the Great Lakes HPC compute hardware has been fully installed, and configuration of the high-performance storage system is in progress. In parallel with this work, ARC-TS and Unit Support team members have been readying the new service with new software and modules, as well as developing training to support the transition onto Great Lakes. A key feature of the new Great Lakes service is the just-released HDR InfiniBand from Mellanox. The hardware is available, but the firmware is still in its final stages of testing with the supplier, with a target delivery date of March 2019. Given the delays, ARC-TS and the suppliers have discussed an adjusted plan that allows quicker access to the cluster while supporting the future update once the firmware becomes available.

What should I do to transition to Great Lakes?

We hope the transition from Flux to Great Lakes will be relatively straightforward, but to minimize disruptions to your research, we recommend you do your testing early. In October, we announced availability of the HPC cluster Beta to help users with this migration. Primarily, it allows users to migrate their PBS/Torque job submission scripts to Slurm. You can also review the new Modules environment, as it has changed from the current configuration on Flux. Beta uses the same generation of hardware as Flux, so your performance will be similar to that on Flux. You should continue to use Flux for your production work; Beta is only for testing your Slurm job scripts, not for any production work.

Every user on Flux has an account on Beta. You can log in to Beta at beta.arc-ts.umich.edu. You will have a new home directory on Beta, so you will need to migrate any scripts and data files you need to test your workloads into this new directory. Beta should not be used for any PHI, HIPAA, Export Controlled, or other sensitive data! We highly recommend that you use this time to convert your Torque scripts to Slurm and test that everything works as you would expect.

To learn how to use Slurm, we have provided documentation on our Beta website. Additionally, ARC-TS and academic unit support teams will be offering training sessions around campus. We’ll post a schedule on the ARC-TS website and announce new sessions through Twitter and email.

If you have compiled software for use on Flux, we highly recommend that you recompile it on Great Lakes once the cluster becomes available. Great Lakes uses the latest CPUs from Intel, and recompiling may yield performance gains by taking advantage of the new CPUs’ capabilities, such as AVX-512.

Questions? Need Assistance?

Contact arcts-support@umich.edu.

U-M participates in SC18 conference in Dallas

University of Michigan researchers and IT staff wrapped up a successful Supercomputing ‘18 (SC18) in Dallas from Nov. 11-16, 2018, taking part in a number of different aspects of the conference.

SC “Perennial” Quentin Stout, U-M professor of Electrical Engineering and Computer Science and one of only 19 people who have been to every Supercomputing conference, co-presented a tutorial titled Parallel Computing 101.

And with the recent announcement of a new HPC cluster on campus called Great Lakes, IT staff from Advanced Research Computing – Technology Services (ARC-TS) made presentations around the conference on the details of the new supercomputer.

U-M once again shared a booth with Michigan State University, highlighting our computational and data-intensive research as well as the comprehensive set of tools and services we provide to our researchers. Representatives from all ARC units were at the booth: ARC-TS, the Michigan Institute for Data Science (MIDAS), the Michigan Institute for Computational Discovery and Engineering (MICDE), and Consulting for Statistics, Computing and Analytics Research (CSCAR).

The booth also featured two demonstrations: one on the Open Storage Research Infrastructure (OSiRIS), a multi-institutional software-defined data storage system, and the Services Layer At The Edge (SLATE) project, both of which are supported by the NSF; the other tested conference-goers’ ability to detect “fake news” stories compared to an artificial intelligence system created by researchers supported by MIDAS.

U-M Activities

  • Tutorial: Parallel Computing 101: Prof. Stout and Associate Professor Christiane Jablonowski of the U-M Department of Climate and Space Sciences and Engineering provided a comprehensive overview of parallel computing.
  • Introduction to Kubernetes. Presented by Bob Killen, Research Cloud Administrator, and Scott Paschke, Research Cloud Solutions Designer, both from ARC-TS. Containers have shifted the way applications are packaged and delivered. Their use in data science and machine learning is skyrocketing, with the beneficial side effect of enabling reproducible research. This rise in use has driven the need to explore and adopt better container-centric orchestration tools. Of these tools, Kubernetes, an open-source container platform born within Google, has become the de facto standard. This half-day tutorial introduced researchers and sysadmins who may already be familiar with container concepts to the architecture and fundamental concepts of Kubernetes. Attendees explored these concepts through a series of hands-on exercises and left with a leg up in continuing their container education and a better understanding of how Kubernetes may be used for research applications.
  • Brock Palen, Director of ARC-TS, spoke about the new Great Lakes HPC cluster:
    • DDN booth (3123)
    • Mellanox booth (3207)
    • Dell booth (3218)
    • SLURM booth (1242)
  • Todd Raeker, Research Technology Consultant for ARC-TS, went to the Globus booth (4201) to talk about U-M researchers’ use of the service.
  • Birds of a Feather: Meeting HPC Container Challenges as a Community. Bob Killen, Research Cloud Administrator at ARC-TS, gave a lightning talk as part of this session that presented, prioritized, and gathered input on top issues and budding solutions around containerization of HPC applications.
  • Sharon Broude Geva, Director of ARC, was live on the SC18 News Desk discussing ARC HPC services, Women in HPC, and the Coalition for Scientific Academic Computation (CASC). The stream was available from the Supercomputing Twitter account: https://twitter.com/Supercomputing
  • Birds of a Feather: Ceph Applications in HPC Environments: Ben Meekhof, HPC Storage Administrator at ARC-TS, gave a lightning talk on Ceph and OSiRIS as part of this session. More details at https://www.msi.umn.edu/ceph-hpc-environments-sc18
  • ARC sponsored the Women in HPC Reception, where Sharon Broude Geva, Director of ARC, gave a presentation.
  • Birds of a Feather: Cloud Infrastructure Solutions to Run HPC Workloads: Bob Killen, Research Cloud Administrator at ARC-TS, presented at this session aimed at architects, administrators, software engineers, and scientists interested in designing and deploying cloud infrastructure solutions such as OpenStack, Docker, Charliecloud, Singularity, Kubernetes, and Mesos.
  • Jing Liu of the Michigan Institute for Data Science participated in a panel discussion at the Purdue University booth.

Follow ARC on Twitter at https://twitter.com/ARC_UM for updates.