Using natural language processing to improve everyday life

By | Data, Great Lakes, HPC, News, Research, Uncategorized

Joyce Y. Chai, professor of electrical engineering and computer science in the College of Engineering, and her colleagues are using natural language processing and machine learning to answer complex questions in ways that may improve everyday life.

Some of the algorithms the team develops are meant for tasks that machines may have little or no prior knowledge of, for example, guiding a human user in learning a particular skill (building a specialized apparatus, or even responding to “Tell me how to bake a cake”). Generative AI, or GenAI, generates a set of instructions based on observation of what the user is doing, for example, correcting a mistake or providing the next step. The better the data and engineering behind the AI, the more useful the instructions will be.

“To enable machines to quickly learn and adapt to a new task, developers may give a few examples of recipe steps with both language instructions and video demonstrations. Machines can then (hopefully) guide users through the task by recognizing the right steps and generating relevant instructions using GenAI,” said Chai.

What are AI, machine learning, deep learning, and natural language processing?

It might help to take a step back to understand AI, machine learning (ML), and deep learning at a high level. Both ML and deep learning are subsets of AI, and some natural language processing (NLP) tasks fall within the realm of deep learning. These fields work together and build on each other.

Artificial Intelligence, or AI, is a branch of computer science that attempts to simulate human intelligence with computers. It involves creating systems to perform tasks that usually need human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

“NLP is a sub-area in AI, where DL/ML approaches are predominantly applied,” stated Chai.

Christopher Brown, a research data scientist with ITS Advanced Research Computing (ARC) and a member of the ARC consulting team, explains that ML is a subfield of AI. Within ML, algorithms are used to generalize to situations beyond those seen in the training data and then complete tasks without further guidance from people. A good example is U-M GPT. The large language models (LLMs) accessible via U-M GPT are trained with millions of diverse examples. “The goal is to train the models to reliably predict, translate, or generate something.”

“Data is any information that can be formatted and fed into an algorithm that can be used for some task, including journal articles, chats, numbers, videos, audio, and texts,” said Brown. Algorithms can be trained to perform tasks using these real-world data.

Natural Language Processing is a branch of artificial intelligence that helps computers understand and generate human language in a way that is both meaningful and useful to humans. NLP teaches computers to understand language and then respond in ways humans can understand, even when rich, context-dependent language is used.

“NLP is highly interdisciplinary, and involves multiple fields, such as computer science, linguistics, philosophy, cognitive science, statistics, mathematics, etc.,” said Chai.

Examples of NLP are everywhere: when you ask Siri for directions, or when Google efficiently completes your half-typed query, or even when you get suggested replies in your email. 

Ultimately NLP, along with AI, can be used to make interactions between humans and machines as natural and as easy as possible. 

A lot of data is needed to train the models

Dr. Chai and her team use large language models, a lot of data, and significant computing resources. These large models take longer to train and are harder to interpret than smaller ones. Brown says, “The state of the art, groundbreaking work tends to be in this area.”

Dr. Chai uses deep learning algorithms that make predictions about what the next part of the task or conversation is. “For example, they use deep learning and the transformer architecture to enable embodied agents to learn how new words are connected to the physical environment, to follow human language instructions, and to collaborate with humans to come up with a shared plan,” Brown explains.

The technology that supports this work

To accomplish her work, Dr. Chai uses the Great Lakes High-Performance Computing Cluster and Turbo Research Storage, both of which are managed by U-M’s Advanced Research Computing Group (ARC) in Information and Technology Services. She has 16 GPUs on Great Lakes at the ready, with the option to use more at any given time. 

A GPU, or Graphics Processing Unit, is a piece of computer equipment that is good at displaying pictures, animations, and videos on your screen. The GPU is especially adept at quickly creating and manipulating images. Traditionally, GPUs were used for video games and professional design software where detailed graphics were necessary. More recently, researchers, including Dr. Chai, have found that GPUs are also good at handling many simple tasks at the same time. This includes tasks like scientific simulations and AI training, where a lot of calculations need to be done in parallel (which is perfect for training large language models).

“GPUs are popular for deep learning, and we will continue to get more and better GPUs in the future. There is a demand, and we will continue supporting this technology so that deep learning can continue to grow,” said Brock Palen, ITS Advanced Research Computing director. 

Chai and her team also leveraged 29 terabytes of the Turbo Research Storage service at ARC. NLP benefits from the high-capacity, reliable, secure, and fast storage solution. Turbo enables investigators across the university to store and access data needed for their research via Great Lakes. 

Great Lakes HPC in the classroom 

ARC offers classroom use of resources on the Great Lakes High-Performance Computing Cluster.

Dr. Chai regularly leverages this resource. “Over 300 students have benefited from this experience. We have homework that requires the use of the Great Lakes, e.g., having students learn how to conduct experiments in a managed job-scheduling system like SLURM. This will benefit them in the future if they engage in any compute-intensive R&D (research and development).

“For my NLP class, I request Great Lakes access for my students so they have the ability to develop some meaningful final projects. We also use the Great Lakes HPC resources to study the reproducibility for NLP beginners,” said Chai. A gallery is available for many of the student projects.
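For context, here is a minimal sketch of the kind of Slurm batch script students might submit for such an assignment on Great Lakes. The account, partition, file names, and resource values are placeholders, not details from the course.

    #!/bin/bash
    #SBATCH --job-name=nlp-hw        # name shown in the queue
    #SBATCH --account=example_class  # placeholder course account
    #SBATCH --partition=standard     # placeholder partition
    #SBATCH --time=00:30:00          # 30-minute walltime limit
    #SBATCH --mem=4g                 # memory request

    # Run the experiment; train.py stands in for the assignment code.
    python train.py

The script would be submitted with “sbatch” and monitored with “squeue -u $USER,” which is the managed-scheduler workflow the homework is meant to teach.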

The UMRCP defrays costs

The U-M Research Computing Package (UMRCP) is a set of cost-sharing allocations offered by ITS ARC and available upon request. Other units offer additional cost-sharing to researchers. Chai said, “We typically use the nodes owned by my group for research projects that require intensive, large-scale GPU model training. We use the UMRCP for less intensive tasks, thereby extending the budgetary impact of the allocations.”

New Hardware-Accelerated Visualization Partition

By | General Interest, Great Lakes, HPC, News, Research

We are excited to introduce a brand-new feature of Great Lakes to researchers who need hardware acceleration for their visualization requirements. ARC has created a specialized “viz” partition, consisting of four nodes equipped with NVIDIA P40 GPUs. These nodes are accessible through Open OnDemand’s Remote Desktop functionality.

Key details about this new feature:

  • Jobs utilizing the viz partition have a maximum walltime of 2 hours.
  • The charge rate for the viz partition is currently aligned with our standard partition. 

To make use of the viz partition, follow these steps:

  • Create a Remote Desktop job on Great Lakes via Open OnDemand.
  • Request the “viz” partition, specifying 1 node and 1 GPU (Please note that we have only one GPU available per node).
  • Prefix any application you intend to run with accelerated graphics with the command “vglrun” (for example, “vglrun glxgears”); see the example below.
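For example, once the Remote Desktop session is running, a terminal inside that session might look like the following. The glxgears test comes from the example above; the second command is a placeholder for your own OpenGL application.

    # Quick check that hardware-accelerated rendering works:
    vglrun glxgears

    # Launch your own OpenGL application the same way (placeholder name):
    vglrun my_visualization_app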

For questions or support requests, please contact our team at arc-support@umich.edu.

2024 ARC Winter Maintenance

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

HPC

Like last year, we will have a rolling update which, outside of a few brief interruptions, should keep the clusters in production. Here is the schedule: 

December 6, 11 p.m.:

  • Update Slurm Controllers: Expect a brief 1-minute interruption when querying Slurm. All jobs will continue to run.
  • Update Open OnDemand Servers: Expect a few seconds of interruption if you are using Open OnDemand.
  • Login Servers Update: We will begin updating our login servers. This update is not expected to impact any users.

December 7:

  • Compute Rolling Updates: We will start rolling updates across all clusters a few nodes at a time, so there should be minimal impact on access to resources.

December 19, 10:30 a.m.:

  • Update Globus transfer (xfer) nodes: These nodes are deployed in pairs for each cluster, so we will take down one node of each pair at a time and all Globus services will remain available. If you are using scp/sftp, your transfers may be interrupted, so please schedule these transfers accordingly or use Globus. Total maintenance time should be approximately one hour.

January 3, 8 a.m.:

  • Reboot Slurm Controller Nodes: This will cause an approximately 10-minute Slurm outage. All running jobs will continue to run.
  • Armis2 Open OnDemand Node: We will reload and reboot the Armis2 Open OnDemand node. This will take approximately 1 hour.
  • Great Lakes and Lighthouse Open OnDemand Nodes: These nodes will be down approximately 10 to 15 minutes.
  • Globus Transfer (xfer) Nodes: These nodes will be rebooted. This will take approximately 15 minutes.
  • Sigbio Login Reload/Reboot: This will take approximately 1 hour.
  • Some Armis2 Faculty Owned Equipment (FOE) nodes will require physical and configuration updates. Expected downtime is 4 hours.

HPC Maintenance Notes:

  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • New viz partition: there will be a new partition called viz with 16 new GPUs; each job can use exactly one GPU.
  • The --cpus-per-gpu Slurm bug has been fixed (see the sketch below).
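As a sketch of what the fixed option looks like in practice (the partition name and values below are illustrative, not recommendations), a GPU batch job could tie CPU cores to its GPU like this:

    #SBATCH --partition=gpu      # illustrative GPU partition
    #SBATCH --gpus=1             # request one GPU
    #SBATCH --cpus-per-gpu=4     # request four CPU cores per GPU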

HPC Maintenance Details:

Versions are listed as old → new; items marked “unchanged” are the same before and after maintenance.

  • Red Hat 8.6 EUS
    • Kernel: 4.18.0-372.51.1.el8_6.x86_64 → 4.18.0-372.75.1.el8_6.x86_64
    • glibc-2.28-189.6 (unchanged)
    • ucx-1.15.0-1.59056 (OFED provided, unchanged)
    • gcc-8.5.0-10.1.el8 (unchanged)
  • Mlnx-ofa_kernel-modules (unchanged)
    • OFED 5.9.0.5.5.1
      • kver.4.18.0_372.51.1.el8_6
  • Slurm: 23.02.5 → 23.02.6, compiled with:
    • PMIx: /opt/pmix/3.2.5 and /opt/pmix/4.2.6
    • hwloc 2.2.0-3 (OS provided)
    • ucx-1.15.0-1.59056 (OFED provided)
    • slurm-libpmi
    • slurm-contribs
  • PMIx LD config: /opt/pmix/3.2.5/lib (unchanged)
  • PMIx versions available in /opt: 3.2.5 and 4.2.6 (unchanged)
  • Singularity CE (Sylabs.io): 3.10.4 and 3.11.1 (unchanged)
  • NVIDIA driver: 530.30.02 → 545.23.06
  • Open OnDemand: 3.0.3 (unchanged)

 

Storage

There is no scheduled downtime for Turbo, Locker, or Data Den.

 


Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 

Status updates and additional information

  • Status updates will be available on the ARC Twitter feed and the ITS service status page throughout the maintenance.
  • ARC will send an email to all HPC users when the maintenance has been completed. 

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

Gordon Bell Prize-winning team also leverages ITS services

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

A U-M College of Engineering team led by Vikram Gavini was recently awarded the prestigious ACM Gordon Bell Prize. The honor was presented in recognition of their outstanding achievement for developing and demonstrating an approach that brought near-quantum mechanical accuracy for large systems consisting of tens of thousands of atoms into the range of today’s supercomputers. 

The ACM Gordon Bell Prize is awarded each year to recognize outstanding achievement in high-performance computing.

Dr. Gavini and team carried out their largest calculation on the fastest known computer in the world, the U.S. Department of Energy’s Frontier, and they sustained 660 petaflops. 

“The ACM Gordon Bell Prize is the most prestigious prize available for high-performance computing,” said Brock Palen, Advanced Research Computing (ARC) director, a division of Information and Technology Services (ITS). 

“This is so exciting because it is more than 660 times faster than our entire Great Lakes cluster at perfect efficiency. Their calculation was 10 times the improvement of any prior density-functional theory (DFT) calculation.” 

DFT is a method used in physics and chemistry to investigate the electronic structure of many-body systems. These systems can include atoms, molecules, or solids.  

“Dr. Gavini is a major user of ARC’s Great Lakes HPC Cluster, and we are so proud of this astonishing achievement.” 

Gavini said that any development needs to be carefully tested. “We use ARC’s Great Lakes to test smaller scale systems before moving to larger-scale, production-ready calculations. This testing is critical, and we need the accessibility and speed of Great Lakes to run calibrations and debug our implementations.”

“We use Great Lakes to actively test our implementations.”

Gavini and his team use Turbo Research Storage for quick-access storage and Data Den Research Archive for longer-term storage. “We generate a lot of data, and storage is important to our work.”

The U-M Research Computing Package was also helpful in defraying some of their HPC and storage costs. 

“Thank you to ARC for providing their continual assistance for group members who have been in trenches. This has been a decade-long effort, and ITS/ARC was crucial along the journey,” said Gavini. 

Dr. Gavini is a professor of Mechanical Engineering and professor of Materials Science and Engineering, College of Engineering.

Globus maintenance happening at 9 a.m. on March 11

By | Armis2, Data, General Interest, Great Lakes, HPC, News, Research, Uncategorized

Due to planned maintenance by the vendor, Globus services will be unavailable for up to two hours beginning at 9 a.m. U.S. Eastern Time (8 a.m. Central Time) on Saturday, March 11, 2023.

Customers will not be able to authenticate or initiate any transfers during that time. Any transfers that have started before the outage will be stalled until the outage is over. Transfers will resume once maintenance is complete.

More details are available on the Globus blog.

For assistance or questions, please contact ARC at arc-support@umich.edu.

Protein structure prediction team achieved top rankings

By | Great Lakes, News, Uncategorized

CASP15 is a biennial competition that assesses methods of protein structure modeling. Independent assessors then compared the models with experiments, and the results and their implications were discussed at the CASP15 Conference, held in December 2022 in Turkey.

A joint team with members from the labs of Dr. Peter Freddolino and Dr. Yang Zhang took first place in the Multimer and Interdomain Prediction categories, and was again the top-ranked server in the Regular (domains) category according to the CASP assessors’ criteria.

These wins are well-earned. Freddolino noted, “This is a highly competitive event, against some of the very best minds and powerful companies in the world.”

The Zhang/Freddolino team competed against nearly 100 other groups, including other academic institutions as well as major cloud and commercial companies. Groups from around the world submitted more than 53,000 models on 127 modeling targets in 5 prediction categories. 

“Wei’s predictions did amazingly well in CASP15!” said Freddolino. Wei Zheng, Ph.D., is a lab member and a research fellow with the Department of Computational Medicine and Bioinformatics (DCMB). 

Zheng said that the team participates in the regular protein structure prediction and protein complex structure prediction categories. “The results are assessed as regular protein domain modeling, regular protein inter-domain modeling, and protein complex modeling. In all categories, our models performed very well!” 

The technology that supported this impressive work 

The resources to achieve these results were grant-funded, which allowed the team to leverage a number of university resources, including:  

  • The Lighthouse High-Performance Computing Cluster (HPC) service. Lighthouse is managed by the Advanced Research Computing (ARC) team, and ARC is a division of Information and Technology Services (ITS). 
  • The algorithms were GPU-intensive and were run on the Great Lakes HPC Cluster. Graphics processing units (GPUs) are specialized processors designed to accelerate graphics rendering. The Great Lakes cluster provided additional capacity for running compute cycles. Kenneth Weiss, IT project manager senior with DCMB and HITS, said that many of the algorithms used by Zheng benefited from the increased performance of computing the data on a GPU.
  • Multiple storage systems, including Turbo Research Storage. High-speed storage was crucial for storing AI-trained models and sequence libraries used by the methods developed by Zhang, Freddolino, and Zheng called D-I-TASSER/DMFold-Multimer. 
  • Given the scale of the CASP targets, the grant-funded compute was augmented with capacity on the Great Lakes cluster. Freddolino and his team took advantage of the allocations provided by the ITS U-M Research Computing Package (UMRCP) and the HITS Michigan Medicine Research Computing Investment (MMRCI) programs, which substantially defrayed the cost of computing.
  • The collaboration tool Slack was used to keep Freddolino and Zheng in close contact with ARC and the DCMB teams. This provided the ability to deal with issues promptly, avoiding delays that would have had a detrimental impact on meeting CASP targets.

Technology staff from ARC, DCMB, and Health Information and Technology Services (HITS) provided assistance to the research team. All of the teams helped mitigate bottlenecks that affected the speed and throughput Zheng needed for results. Staff also located and helped leverage resources, including those on Great Lakes, utilizing available partitions and queues on the clusters.

“Having the flexibility and capacity provided by Great Lakes was instrumental in meeting competition deadlines,” said Weiss.

DCMB staff and the HITS HPC team took the lead on triaging software problems, giving Freddolino’s group high priority.

ARC Director Brock Palen provided monitoring and guidance on real-time impact and utilization of resources. “It was an honor to support this effort. It has always been ARC’s goal to take care of the technology so researchers can do what they do best. In this case, Freddolino and Zheng knocked it out of the park.” 

Jonathan Poisson, technical support manager with DCMB, was instrumental in helping to select and configure the equipment purchased by the grant. “This assistance was crucial in meeting the tight CASP15 targets, as each target is accompanied by a deadline for results.” 

Read more on the Computational Medicine and Bioinformatics website and the Department of Biological Chemistry website.

Related presentation: D-I-TASSER: Integrating Deep Learning with Multi-MSAs and Threading Alignments for Protein Structure Prediction

The resources to achieve these results were provided by an NIH-funded grant (“High-Performance Computing Cluster for Biomedical Research,” SIG: S10OD026825). 

2023 Winter Maintenance & Globus File Transfer upgrade

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

These services will be unavailable: 

  • Great Lakes – We will be updating Great Lakes on a rolling basis throughout December and the beginning of January. If successful, there should be no downtime or impact, with the following exceptions: 
    • Single precision GPUs (SPGPU) will be down Jan. 4-5 for networking maintenance. Those nodes will return back to production when maintenance has been completed and the nodes have been reloaded.
    • Customers will be notified via email of any changes to Great Lakes maintenance that will require downtime.
    • If unsuccessful, the Great Lakes maintenance will begin on Jan. 4-5, starting at 8 a.m. In either case, we will email everyone with the updated maintenance status.
  • Globus on the storage transfer nodes: Jan. 17-18.

Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 
  • Slurm email will be improved, providing more detailed information about completed jobs.

Countdown to maintenance 

For Great Lakes HPC jobs, use the command “maxwalltime” to discover the amount of time remaining until maintenance begins. 

Jobs that request more walltime than remains until maintenance will automatically be queued and start once maintenance is complete. If the plan for Great Lakes maintenance is successful, any queued jobs will be able to run as usual (except for the SPGPU nodes as discussed above). Customers will be notified via email if downtime is required for Great Lakes.
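For example, a submission that respects the countdown might look like the following sketch; the walltime and script name are placeholders.

    # See how much walltime remains before maintenance begins:
    maxwalltime

    # Request a walltime that fits within that window:
    sbatch --time=12:00:00 my_job.sbat

If the requested time does not fit, the job simply waits in the queue and starts after maintenance, as described above.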

Status updates and additional information

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

Precision Health and ARC team up on a self-service tool for genetic research

By | Great Lakes, HPC, News

Encore is a self-serve genetic analysis tool that researchers can now run using a point-and-click interface, without the need to directly manipulate the genetic data. Only a phenotype file is needed to build a genome-wide association study (GWAS) model with SAIGE (genetics analysis software), launch and monitor job progress, and interactively explore results.

It is geared for a range of disciplines and specialties including biostatistics, epidemiology, neuroscience, gastroenterology, anesthesiology, clinical pharmacy, and bioinformatics.

The tool was developed at the U-M School of Public Health Center for Statistical Genetics and is managed by Precision Health and supported by ITS’s Advanced Research Computing (ARC).  

Brock Palen, ARC director, said, “When someone uses Encore they are actually running on Great Lakes, and we are happy to provide the computational performance behind Encore.”

Using Encore is easy. No coding or command-line/Linux knowledge is required to run GWAS in Encore. Researchers also do not need knowledge of batch job submission or scheduling, or direct access to a high-performance computing cluster. Encore automatically prepares job submission scripts and submits the analysis to the Great Lakes High-Performance Computing Cluster. 

Great Lakes is the university’s flagship open-science high-performance computing cluster. It is much faster and more powerful than a laptop, and provides quicker answers and optimized support for simulation, genomics, machine learning, life science, and more. The platform provides a balanced combination of computing power, I/O performance, storage capability, and accelerators.

Visit the Encore wiki page to learn more

To get started, send an email to PHDataHelp@umich.edu

For questions about Great Lakes, contact arc-support@umich.edu

Understanding the strongest electromagnetic fields in the universe

By | Data, Great Lakes, HPC, Research, Uncategorized

Alec Thomas is part of the team from the U-M College of Engineering Gérard Mourou Center for Ultrafast Optical Science that is building the most powerful laser in the U.S.

Dubbed “ZEUS,” the laser will deliver 3 petawatts of power. That’s a 3 followed by 15 zeros. All the power generated in the entire world is 10 terawatts, roughly 300 times less than the ZEUS laser. 

The team’s goal is to use the laser to explore how matter behaves in the most extreme electric and magnetic fields in the universe, and also to generate new sources of radiation beams, which may lead to developments in medicine, materials science, and national security. 


This simulation shows a plasma wake behind a laser pulse. The plasma behaves like water waves generated behind a boat. In this image, the “waves” are extremely hot plasma matter, and the “boat” is a short burst of powerful laser light. (Image courtesy of Daniel Seipt.)

“In the strong electric fields of a petawatt laser, matter becomes ripped apart into a `plasma,’ which is what the sun is made of. This work involves very complex and nonlinear physical interactions between matter particles and light. We create six-dimensional models of particles to simulate how they might behave in a plasma in the presence of these laser fields to learn how to harness it for new technologies. This requires a lot of compute power,” Thomas said. 

That compute power comes from the Great Lakes HPC cluster, the university’s fastest high-performance computing cluster. The team wrote equations of motion for each six-dimensional particle (three position and three momentum coordinates). The equations run on Great Lakes and help Thomas and his team learn how particles might behave within each simulation cell. Once the motion is understood, solutions can be developed. 
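The article does not give the equations themselves; as a rough illustration, simulations of this kind typically advance each particle’s six phase-space coordinates (position x and momentum p) under the relativistic Lorentz force:

    dx/dt = p / (γm)
    dp/dt = q (E + (p / (γm)) × B)
    γ = sqrt(1 + |p|² / (m²c²))

where q and m are the particle’s charge and mass, and E and B are the electric and magnetic fields produced by the laser and the plasma.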

“On the computing side, this is a very complex physical interaction. Great Lakes is designed to handle this type of work,” said Brock Palen, director of Advanced Research Computing, a division of Information and Technology Services. 

Thomas has signed up for allocations on the Great Lakes HPC cluster and Data Den storage. “I just signed up for the no-cost allocations offered by the U-M Research Computing Package. I am planning to use those allocations to explore ideas and concepts in preparation for submitting grant proposals.”

Learn more and sign up for the no-cost U-M Research Computing Package (UMRCP).

Prof. Thomas’ work is funded by a grant from the National Science Foundation.

Global research uses computing services to advance parenting and child development

By | General Interest, Great Lakes, HPC, News, Research, Uncategorized

Andrew Grogan-Kaylor, professor of Social Work, has spent the past 15 years studying the impact of physical discipline on children within the United States. 

Working with a team of other researchers at the School of Social Work, co-led by professors Shawna Lee and Julie Ma, he recently expanded his research to include children from all over the world, rather than exclusively the U.S. Current data for 62 low- and middle-income countries has been provided by UNICEF, a United Nations agency responsible for providing humanitarian and developmental aid to children worldwide. This data provides a unique opportunity to study the positive things that parents do around the world.


(Image by Eduardo Davad from Pixabay)

“We want to push research on parenting and child development in new directions. We want to do globally-based, diversity-based work, and we can’t do that without ARC services,” said Grogan-Kaylor. “I needed a bigger ‘hammer’ than my laptop provided.” 

The “hammer” he’s referring to is the Great Lakes HPC cluster. It can handle processing the large data set easily. When Grogan-Kaylor first heard about ARC, he thought it sounded like an interesting way to grow his science, and that included the ability to run more complicated statistical models that were overwhelming his laptop and department desktop computers. 

He took a workshop led by Bennet Fauber, ARC senior applications programmer/analyst, and found Bennet to be sensible and friendly. Bennet made HPC resources feel within reach to a newcomer. Typically, Grogan-Kaylor says, this type of resource is akin to learning a new language, and he’s found that being determined and persistent and finding the right people are key to maximizing ARC services. Bennet has explained error messages, how to upload data, and how to schedule jobs on Great Lakes. He also found a friendly and important resource at the ARC Help Desk, which is staffed by James Cannon. Lastly, departmental IT director Ryan Bankston has been of enormous help in learning about the cluster.

“We’re here to help researchers do what they do best. We can handle the technology, so they can solve the world’s problems,” said Brock Palen, ARC director. 

“Working with ARC has been a positive, growthful experience, and has helped me contribute significantly to the discussion around child development and physical punishment,” said Grogan-Kaylor. “I have a vision of where I’d like our research to go, and I’m pleased to have found friendly, dedicated people to help me with the pragmatic details.” 

More information