Great Lakes cluster is available for general access
What is the current status of the Great Lakes cluster?
Now that we have completed Early User testing, the Great Lakes cluster is available for general access to the University community. Until the migration from Flux is complete on November 25, 2019, there will be no charge for using the Great Lakes cluster.
- The Great Lakes cluster compute nodes use the new Intel Skylake processor. In particular, the Skylake CPUs on the standard and large-memory compute nodes will provide researchers with more consistent performance, regardless of how many other jobs are on the machine.
- The Great Lakes cluster has 20 GPU nodes, each of which contains two NVIDIA V100 GPUs, which are significantly faster than the K20 and K40 GPUs on Flux.
- The HDR100 InfiniBand network will provide consistent 100 Gb/s performance across all nodes. On Flux, this ranged from 40 to 100 Gb/s, depending on the node your job used.
- The high performance GPFS /scratch system, with a capacity of approximately two petabytes, is significantly faster than /scratch on Flux.
- The Torque-based batch job submission environment has been replaced with the Slurm resource manager. We expect this system to be significantly more responsive and quicker at starting jobs than was the case on Flux.
- For web-based job submission, the Open OnDemand system will replace the ARC Connect environment for providing web-based file access, job submission, remote desktop, graphical Matlab, Jupyter Notebooks, and more. For more information, see the web-based access section in our user guide.
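For users translating Torque/PBS scripts to Slurm, a minimal batch script might look like the following sketch. The account and partition names here are placeholders, not actual Great Lakes values; use the account names reported by `my_accounts` and the partitions listed in the user guide:

```bash
#!/bin/bash
# Minimal example Slurm batch script; account/partition names are placeholders.
#SBATCH --job-name=example          # job name        (Torque: #PBS -N)
#SBATCH --account=example_account   # Slurm account   (Torque: #PBS -A)
#SBATCH --partition=standard        # partition       (Torque queue: #PBS -q)
#SBATCH --nodes=1                   # number of nodes
#SBATCH --ntasks-per-node=4         # tasks (cores) per node
#SBATCH --mem=8g                    # memory per node (Torque: #PBS -l mem)
#SBATCH --time=01:00:00             # wall-clock limit (Torque: #PBS -l walltime)

# Your commands go below; submit the script with: sbatch myjob.sh
echo "Running on $SLURM_JOB_NODELIST"
```

Submission is `sbatch myjob.sh` rather than `qsub`, and job status is checked with `squeue` rather than `qstat`.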
How do I get access?
Every Flux user has a login on the Great Lakes cluster; you should be able to log in via ssh to greatlakes.arc-ts.umich.edu. We have created Slurm accounts for each PI or project based on the current Flux accounts. You can see which Slurm accounts you have access to by running the command `my_accounts`.
Additionally, you can access the Great Lakes cluster via the web through our Open OnDemand portal. There you can submit jobs, monitor submitted jobs, create Jupyter Notebooks, and more. Please see the Great Lakes Cluster User Guide for more information.
Where do I read more about the Great Lakes cluster and how to use it?
The current documentation for the Great Lakes cluster, including configuration, user guides, and known issues can be found at https://arc-ts.umich.edu/greatlakes.
There is a schedule for upcoming training sessions on the CSCAR website, and we will communicate new sessions through Twitter and email.
Almost all of the software packages available on Flux have been recompiled on the Great Lakes cluster to take advantage of the performance improvements anticipated from the Intel Skylake architecture. In most cases, the latest available software version is being provided. If you need older versions or additional packages, let us know via email at email@example.com.
We have also reorganized the software module structure to make it easier to find the packages you want to load and to load prerequisites automatically. To search for packages, use the “module spider” command with the name of the package or a keyword. In many cases we have combined similar packages into “Collections,” such as Chemistry and BioInformatics. For example, the command “module load Chemistry” makes the packages in the Chemistry collection discoverable via the “module available” command. After loading a collection, you must then load the individual packages within it that you would like to use.
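A typical session to find and load a package under this structure might look like the following sketch; the package names shown are purely illustrative, not a guarantee of what is installed:

```
$ module spider lammps        # search for a package by name or keyword
$ module load Chemistry       # load a collection; its packages become discoverable
$ module available            # list the packages you can now load
$ module load lammps          # load an individual package from the collection
```

Note that loading the collection alone does not load any package; the final “module load” of the specific package is still required.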
What are the rates?
We are working with ITS and UM Finance to obtain approved service rates. Current plans are to have proposed¹ rates identified by the end of August. As soon as this information is more concrete, we will provide an update on the Great Lakes cluster website and in our email communications. We understand that this information is necessary for planning purposes, and we apologize for any impact this has had on your budget planning.
What can be shared at this time is the new approach to billing that will be used for the Great Lakes cluster. Unlike Flux, there are no monthly allocations with fixed fees charged whether or not they are used. On the Great Lakes cluster, the monthly charge for an account will be calculated from the resources used by its jobs each month. The cost of each job will be based on the amount and type of resources the job reserves and how long the job runs. This should be a significantly more flexible system and won’t require updating allocations as your computing needs change over time.
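As an illustration, this usage-based model amounts to summing, over all of an account’s jobs in a month, the reserved resources multiplied by the runtime and a per-resource rate. The rates and resource categories in this sketch are purely hypothetical placeholders, since actual Great Lakes rates have not yet been approved:

```python
# Hypothetical sketch of usage-based billing. All rates below are
# placeholders, NOT actual Great Lakes rates (rates are not yet approved).
HYPOTHETICAL_RATES = {
    "core_hour": 0.01,  # placeholder dollars per CPU-core-hour
    "gpu_hour": 0.50,   # placeholder dollars per GPU-hour
}

def job_cost(cores: int, gpus: int, hours: float) -> float:
    """Cost of one job: reserved resources x runtime x rate."""
    return (cores * HYPOTHETICAL_RATES["core_hour"]
            + gpus * HYPOTHETICAL_RATES["gpu_hour"]) * hours

def monthly_charge(jobs):
    """An account's monthly charge is the sum of its jobs' costs."""
    return sum(job_cost(*job) for job in jobs)

# Example month with two jobs, each given as (cores, gpus, hours):
jobs = [(4, 0, 10.0), (1, 2, 2.0)]
print(round(monthly_charge(jobs), 2))  # prints 2.42 under these placeholder rates
```

The key contrast with Flux is that an idle account incurs no charge: with no jobs, the sum is simply zero.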
¹ Rates are not considered final until they have been formally approved by OFA.
Flux to the Great Lakes cluster transition efforts
If you have not already, you should be developing a plan to migrate your work from Flux to the Great Lakes cluster. If you need help in developing a plan, please contact us and we can provide assistance during this migration period.
- ARC-TS and academic unit support teams will be offering training sessions around campus. A schedule of training sessions will be posted on the ARC-TS website, and we will also announce new sessions through Twitter and email.
- To assist your transition, if you have any Turbo or MiStorage NFS mounts on Flux, those mounts will also be available on the Great Lakes cluster. If you would prefer not to have those volumes mounted on the Great Lakes cluster, email us at firstname.lastname@example.org.
Ensure that your migration from Flux to the Great Lakes cluster is completed by November 25, 2019. No jobs on Flux will run after November 25, 2019.
We will be adding new capabilities in the coming weeks and months and will continue to communicate these capabilities by email as they become available. If you have any questions, email us at email@example.com.