Technology supports researchers’ quest to understand parental discipline behaviors

By | Feature, HPC, News, Research, Systems and Services, Uncategorized

Image by Rajesh Balouria from Pixabay

How do different types of parental discipline behaviors affect children’s development in low- and middle-income countries (LMICs)? A group of researchers set out to answer that question using a UNICEF data set covering several hundred thousand families, drawn from the fourth (2009–2013) and fifth (2012–2017) rounds of the UNICEF Multiple Indicator Cluster Surveys.

“The majority of parenting research is conducted in higher income and Westernized settings. We need more research that shows what types of parenting behaviors are most effective at promoting children’s development in lower resourced settings outside of the United States. I wanted to conduct an analysis that provided helpful direction for families and policymakers in LMICs regarding what parents can do to raise healthy, happy children,” said Kaitlin Paxton Ward, People Analytics Researcher at Google and Research Affiliate at the University of Michigan.

Dr. Paxton Ward is the lead author of the recently released paper, “Associations between 11 parental discipline behaviors and child outcomes across 60 countries.” Her co-authors are Andrew Grogan-Kaylor, Julie Ma, Garrett T. Pace, and Shawna Lee.

Together, they tested associations between 11 parental discipline behaviors and outcomes (aggression, distraction, and prosocial peer relations) for children under five years old in 60 LMICs:

  • Verbal reasoning (i.e., explaining why the misbehavior was wrong)
  • Shouting
  • Name calling
  • Shaking
  • Spanking
  • Hitting/slapping the body
  • Hitting with an object 
  • Beating as hard as one could
  • Removing privileges 
  • Explaining
  • Giving the child something else to do

Results

Verbal reasoning and shouting were the most common parental discipline behaviors towards young children. Psychological and physical aggression were associated with higher child aggression and distraction. Verbal reasoning was associated with lower odds of aggression, and higher odds of prosocial peer relations. Taking away privileges was associated with higher odds of distraction, and lower odds of prosocial peer relations. Giving the child something else to do was associated with higher odds of distraction. The results indicated that there was some country-level variation in the associations between parenting behaviors and child socioemotional outcomes, but also that no form of psychological or physical aggression benefitted children in any country.

Conclusion 

Parental use of psychological and physical aggression was disadvantageous for children’s socioemotional development across countries. Only verbal reasoning was associated with positive child socioemotional development. The authors suggest placing greater emphasis on reducing parental use of psychological and physical aggression across cultural contexts, and on increasing parental use of verbal reasoning.

The technology used to analyze the data

The researchers relied on a complex Bayesian multilevel model. This type of analysis incorporates knowledge from previous studies to inform the current analysis, and also lets researchers examine variation across countries in more detail. To accomplish this task, the team turned to ITS Advanced Research Computing (ARC) and the Great Lakes High-Performance Computing Cluster. Great Lakes is the largest and fastest HPC service on U-M’s campus.
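To give a flavor of what such a model involves, the sketch below evaluates the log-posterior of a random-intercept logistic model with children nested in countries. This is an illustration only, not the authors’ actual model: the data, variable names, and priors are all invented for demonstration. An MCMC sampler would explore this surface to estimate the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: children nested within countries (all values invented).
n_countries = 5
n_per_country = 40
country = np.repeat(np.arange(n_countries), n_per_country)
x = rng.integers(0, 2, size=country.size)  # e.g., a discipline behavior used (0/1)
y = rng.integers(0, 2, size=country.size)  # e.g., child aggression observed (0/1)

def log_posterior(beta, u, sigma):
    """Log joint density: logistic likelihood plus priors on random intercepts."""
    if sigma <= 0:
        return -np.inf
    eta = beta * x + u[country]                       # linear predictor per child
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))  # Bernoulli-logit likelihood
    log_prior_u = np.sum(-0.5 * (u / sigma) ** 2) - u.size * np.log(sigma)
    log_prior_beta = -0.5 * beta ** 2                 # weak Normal(0, 1) prior
    return loglik + log_prior_u + log_prior_beta

value = log_posterior(beta=0.3, u=np.zeros(n_countries), sigma=1.0)
print(np.isfinite(value))  # prints: True
```

In practice, models like this are fit with dedicated samplers, and at the scale of several hundred thousand families they benefit greatly from HPC resources such as Great Lakes.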

“I know for me as a parent of young children, you want the best outcome. I have known people to grow up with different forms of discipline and what the negative or positive influence of those are,” said Brock Palen, ARC director. 

The researchers also created a visual interpretation of their paper for public outreach using a web app called ArcGIS StoryMaps. This software helps researchers tell the story of their work. With no coding required, StoryMaps combine images, text, audio, video, and interactive maps in a captivating web experience. StoryMaps can be shared with groups of users, with an organization, or with the world. 

All students, faculty, and staff have access to ArcGIS StoryMaps. Since 2014, U-M folks have authored over 7,500 StoryMaps, and the number produced annually continues to increase year-over-year. Explore examples of how people around the world are using this technology in the StoryMaps Gallery.

“This intuitive software empowers the U-M community to author engaging, multimedia, place-based narratives, without involving IT staff,” said Peter Knoop, research consultant with LSA Technology Services. 

Correspondence to Dr. Kaitlin Paxton Ward, kpward@umich.edu.

Related article

2024 ARC Winter Maintenance

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

HPC

Like last year, we will perform a rolling update which, outside of a few brief interruptions, should keep the clusters in production. Here is the schedule: 

December 6, 11 p.m.

  • Update Slurm Controllers: Expect a brief 1-minute interruption when querying Slurm. All jobs will continue to run.
  • Update Open OnDemand Servers: Expect a few seconds of interruption if you are using Open OnDemand.
  • Login Servers Update: We will begin updating our login servers. This update is not expected to impact any users.

 December 7:

  • Compute Rolling Updates: We will start rolling updates across all clusters a few nodes at a time, so there should be minimal impact on access to resources.

December 19, 10:30 a.m.:

  • Update Globus transfer (xfer) nodes: These nodes are deployed in pairs for each cluster, so we will take down one node of each pair at a time and all Globus services will remain available. If you are using scp/sftp, your transfers may be interrupted, so please schedule them accordingly or use Globus. Total maintenance time should be approximately one hour.

January 3, 8 a.m.:

  • Reboot Slurm Controller Nodes: This will cause an approximately 10-minute Slurm outage. All running jobs will continue to run.
  • Armis2 Open OnDemand Node: We will reload and reboot the Armis2 Open OnDemand node. This will take approximately 1 hour.
  • Great Lakes and Lighthouse Open OnDemand Nodes: These nodes will be down approximately 10 to 15 minutes.
  • Globus Transfer (xfer) Nodes: These nodes will be rebooted. This will take approximately 15 minutes.
  • Sigbio Login Reload/Reboot: This will take approximately 1 hour.
  • Some Armis2 Faculty Owned Equipment (FOE) nodes will require physical and configuration updates. Expected downtime is 4 hours.

HPC Maintenance Notes:

  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • New viz partition: There will be a new partition called viz with 16 new GPUs, which supports exactly one GPU per job.
  • The --cpus-per-gpu Slurm bug has been fixed.
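For reference, a job targeting the new viz partition might look like the following sketch. The job name, time limit, and account are placeholders; only the partition name (viz) and the one-GPU-per-job limit come from the notes above.

```shell
#!/bin/bash
# Hypothetical Slurm job script for the new viz partition.
#SBATCH --job-name=viz-example
#SBATCH --partition=viz
#SBATCH --gpus=1              # the viz partition supports exactly one GPU per job
#SBATCH --time=01:00:00
#SBATCH --account=example0    # placeholder; use your own Slurm account

nvidia-smi  # confirm the single GPU allocated to the job
```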

HPC Maintenance Details:

New versions (previous versions in parentheses; unchanged items noted):

  • Red Hat 8.6 EUS
    • Kernel 4.18.0-372.75.1.el8_6.x86_64 (was 4.18.0-372.51.1.el8_6.x86_64)
    • glibc-2.28-189.6 (unchanged)
    • ucx-1.15.0-1.59056, OFED provided (unchanged)
    • gcc-8.5.0-10.1.el8 (unchanged)
  • Mlnx-ofa_kernel-modules (unchanged)
    • OFED 5.9.0.5.5.1
      • kver.4.18.0_372.51.1.el8_6
  • Slurm 23.02.6 (was 23.02.5), compiled with:
    • PMIx
      • /opt/pmix/3.2.5
      • /opt/pmix/4.2.6
    • hwloc 2.2.0-3 (OS provided)
    • ucx-1.15.0-1.59056 (OFED provided)
    • slurm-libpmi
    • slurm-contribs
  • PMIx LD config /opt/pmix/3.2.5/lib (unchanged)
  • PMIx versions available in /opt (unchanged):
    • 3.2.5
    • 4.2.6
  • Singularity CE (Sylabs.io), unchanged:
    • 3.10.4
    • 3.11.1
  • NVIDIA driver 545.23.06 (was 530.30.02)
  • Open OnDemand 3.0.3 (unchanged)

 

Storage

There is no scheduled downtime for Turbo, Locker, or Data Den.

 


Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 

Status updates and additional information

  • Status updates will be available on the ARC Twitter feed and the ITS service status page throughout the course of the maintenance.
  • ARC will send an email to all HPC users when the maintenance has been completed. 

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

Gordon Bell Prize winning team also leverages ITS services

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

A U-M College of Engineering team led by Vikram Gavini was recently awarded the prestigious ACM Gordon Bell Prize. The honor recognizes their outstanding achievement in developing and demonstrating an approach that brings near-quantum-mechanical accuracy for large systems consisting of tens of thousands of atoms within reach of today’s supercomputers. 

The ACM Gordon Bell Prize is awarded each year to recognize outstanding achievement in high-performance computing.

Dr. Gavini and team carried out their largest calculation on the fastest known computer in the world, the U.S. Department of Energy’s Frontier, and they sustained 660 petaflops. 

“The ACM Gordon Bell Prize is the most prestigious prize available for high-performance computing,” said Brock Palen, Advanced Research Computing (ARC) director, a division of Information and Technology Services (ITS). 

“This is so exciting because it is more than 660 times faster than our entire Great Lakes cluster at perfect efficiency. Their calculation was 10 times the improvement of any prior density-functional theory (DFT) calculation.” 

DFT is a method used in physics and chemistry to investigate the electronic structure of many-body systems. These systems can include atoms, molecules, or solids.  

“Dr. Gavini is a major user of ARC’s Great Lakes HPC Cluster, and we are so proud of this astonishing achievement.” 

Gavini said that any development needs to be carefully tested. “We use ARC’s Great Lakes to test smaller scale systems before moving to larger-scale, production-ready calculations. This testing is critical, and we need the accessibility and speed of Great Lakes to run calibrations and debug our implementations.”


Gavini and his team use Turbo Research Storage for quick-access storage and Data Den Research Archive for longer-term storage. “We generate a lot of data, and storage is important to our work.”

The U-M Research Computing Package was also helpful in defraying some of their HPC and storage costs. 

“Thank you to ARC for providing their continual assistance for group members who have been in the trenches. This has been a decade-long effort, and ITS/ARC was crucial along the journey,” said Gavini. 

Dr. Gavini is a professor of Mechanical Engineering and professor of Materials Science and Engineering, College of Engineering.

You’re invited: Parallel programming with MATLAB webinar on Dec. 4 

By | Events, HPC, News

We invite you to join us for an engaging virtual session on Parallel Computing with MATLAB, scheduled for December 4 from 1-4 p.m. EST. This session promises to equip you with valuable insights and knowledge. Here’s a glimpse of what you can expect to learn during the session.

Parallel Computing Hands-On Workshop:

Join us for an immersive hands-on workshop where we will introduce you to the world of parallel computing using MATLAB®. This workshop aims to equip you with the skills to tackle computationally and data-intensive problems by harnessing the power of multicore processors, GPUs, and computer clusters. Through practical exercises and real-world examples, you will gain a comprehensive understanding of parallel computing and learn best practices for its implementation.

Highlights:

  • Explore a range of exercises and examples, varying in difficulty from fundamental parallel usage concepts to more advanced techniques.
  • Learn how to optimize MATLAB applications by leveraging parallel computing capabilities.
  • Discover the benefits of running multiple Simulink simulations in parallel and enhance your simulation efficiency.
  • Dive into the world of GPU computing and unlock the potential for accelerated computations.
  • Explore the concept of offloading computations and delve into the realm of cluster computing.
  • Master the art of working with large data sets and efficiently process them using parallel computing techniques.

Don’t miss out on this opportunity to enhance your parallel computing skills with MATLAB. Join us for this exciting workshop and unlock the potential of parallel computing for your computational challenges.

Register soon to guarantee your spot and receive the Webex link before the workshop.

HPC Emergency 2023 Maintenance: September 15

By | Data, HPC, News, Research, Systems and Services

Due to a critical issue that requires an immediate update, we will be updating Slurm and the underlying libraries that allow parallel jobs to communicate. We will be updating the login nodes and the rest of the cluster on the fly, and you should experience only minimal impact when interacting with the clusters. 

  • Jobs that are currently running will be allowed to finish. 
  • All new jobs will only be allowed to run on nodes which have been updated. 
  • The login and Open OnDemand nodes will also be updated, which will require a brief interruption in service.

Queued jobs and maintenance reminders

Jobs will remain queued, and will automatically begin after the maintenance is completed. Any parallel jobs using MPI will fail; those jobs may need to be recompiled, as described below. Jobs not using MPI will not be affected by this update.

Jobs will be initially slow to start, as compute nodes are drained of running jobs so they can be updated. We apologize for this inconvenience, and want to assure you that we would not be performing this maintenance during a semester unless it was absolutely necessary.

Software updates

Only one version of OpenMPI (version 4.1.6) will be available; all other versions will be removed. Modules for the removed versions of OpenMPI will warn you that the version is no longer available and prompt you to load openmpi/4.1.6. 

When you use the following command, it will default to openmpi/4.1.6:
module load openmpi 

Any software packages you use (provided by ARC/LSA/COE/UMMS or yourself) will need to be updated to use openmpi/4.1.6. ARC will complete the updates for the software packages it provides; you will need to update any code you compile yourself.

Note that openmpi/3.1.6 will be discontinued; its module will warn you to update your workflows to openmpi/4.1.6.


System software changes

Great Lakes, Armis2 and Lighthouse

New versions (previous versions in parentheses):

  • Slurm 23.02.5 (was 23.02.3), compiled with:
    • PMIx
      • /opt/pmix/3.2.5
      • /opt/pmix/4.2.6
      • (previously /opt/pmix/2.2.5, /opt/pmix/3.2.3, and /opt/pmix/4.2.3)
    • hwloc 2.2.0-3 (OS provided)
    • ucx-1.15.0-1.59056 (OFED provided)
    • slurm-libpmi
    • slurm-contribs
  • PMIx LD config /opt/pmix/3.2.5/lib (was /opt/pmix/2.2.5/lib)
  • PMIx versions available in /opt: 3.2.5 and 4.2.6 (was 2.2.5, 3.2.3, and 4.1.2)
  • OpenMPI 4.1.6 (was 3.1.6 and others)

 

How can we help you?

For assistance or questions, contact ARC at arc-support@umich.edu.

Summer 2023 Network Maintenance: HPC and storage unavailable August 21-22 

By | Data, HPC, News, Research, Systems and Services

During the 2023 summer maintenance, a significant networking software bug was discovered and ARC was unable to complete the ARC HPC and Storage network updates at the MACC Data Center.

ITS has been working with the vendor on a remediation, and it will be implemented on August 21-22.  This will require scheduled maintenance for the HPC clusters Great Lakes, Armis2, and Lighthouse, as well as the ARC storage systems Turbo, Locker, and Data Den. The date was selected to minimize any impact during the fall semester. 

Maintenance dates:

HPC clusters and their storage systems (/home and /scratch), as well as ARC storage systems (Turbo, Locker, and Data Den), will be unavailable starting at 7 a.m. on August 21. The expected completion date is August 22.

Queued jobs and maintenance reminders

Jobs will remain queued, and will automatically begin after the maintenance is completed. The command “maxwalltime” will show the amount of time remaining until maintenance begins for each cluster, so you can size your jobs appropriately. The countdown to maintenance will also appear on the ARC homepage.
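The arithmetic behind sizing jobs is straightforward. The sketch below is an illustration (not the actual maxwalltime implementation): it shows how the window remaining before a maintenance start translates into the longest walltime a job should request.

```python
from datetime import datetime, timedelta

def max_walltime(now: datetime, maintenance_start: datetime) -> timedelta:
    """Longest walltime a job can request and still finish before maintenance."""
    return max(maintenance_start - now, timedelta(0))

# Example: checking at 7 p.m. on Aug. 20 against the 7 a.m. Aug. 21 start.
remaining = max_walltime(datetime(2023, 8, 20, 19, 0),
                         datetime(2023, 8, 21, 7, 0))
print(remaining)  # prints: 12:00:00 -- so request a walltime of at most 12 hours
```

Jobs requesting more walltime than remains are simply held in the queue until maintenance completes.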

How can we help you?

For assistance or questions, contact ARC at arc-support@umich.edu.

Globus can now be used with Armis2 

By | Armis2, HPC, News, Uncategorized

Researchers who have an Armis2 High-Performance Computing account can now move data to and from other Protected Health Information (PHI)-approved systems using Globus File Transfer. (The endpoint is umich#armis2.) 

To learn more about your responsibility and approved services, visit the Sensitive Data Guide and the Protected Health Information (PHI) webpage on the Safe Computing Website. Send an email to ARC at arc-support@umich.edu to get started using Globus with PHI on your own system (this is not needed for researchers using ARC services including Armis2, and Data Den and Turbo with Sensitive Data).

“With the addition of Globus on Armis2, researchers using ITS ARC services can use the same Globus tools and processes to securely and reliably move their data on all ARC systems and across the university and beyond,” said Matt Britt, ARC HPC systems manager.

Globus allows the transfer and collaborative access of data between different storage systems, lab computers, and personal desktops and laptops. Globus enables researchers to use a web browser to submit transfer and data synchronization requests between destinations. 

As a robust, cloud-based, file transfer service, Globus is designed to securely move your data, ranging from megabytes to petabytes. ARC is a Globus Subscription Provider for the U-M community, which allows U-M resources to serve as endpoints or collections for file transfers.

“There are many interesting research collaborations happening at U-M, as well as nationally and internationally. Globus can facilitate all of those interactions securely,” said Brock Palen, ARC director. “Globus is the go-to tool we recommend for data transfer.”

Learn more 

How can we help you?

For assistance or questions, contact ARC at arc-support@umich.edu.

ARC Summer 2023 Maintenance happening in June

By | HPC, News, Systems and Services

Summer maintenance will be happening earlier this year (June instead of August). Updates will be made to software, hardware, and operating systems to improve the performance and stability of services. ARC works to complete these tasks quickly to minimize the impact of the maintenance on research.

The dates listed below are the weeks during which the work will occur; the actual dates will be refined as planning continues.

HPC clusters and storage systems (/scratch) will be unavailable:

  • June 5-9: Great Lakes, Armis2, and Lighthouse

Storage systems will be unavailable:

  • June 6-7: Turbo, Locker, and Data Den

Queued jobs and maintenance reminders

Jobs will remain queued, and will automatically begin after the maintenance is completed. The command “maxwalltime” will show the amount of time remaining until maintenance begins for each cluster, so you can size your jobs appropriately. The countdown to maintenance will also appear on the ARC homepage.

How can we help you?

For assistance or questions, contact ARC at arc-support@umich.edu.

Globus maintenance happening at 9 a.m. on March 11

By | Armis2, Data, General Interest, Great Lakes, HPC, News, Research, Uncategorized

Due to planned maintenance by the vendor, Globus services will be unavailable for up to two hours beginning at 9 a.m. U.S. Eastern Time (8 a.m. Central Time) on Saturday, March 11, 2023.

Customers will not be able to authenticate or initiate any transfers during that time. Any transfers that have started before the outage will be stalled until the outage is over. Transfers will resume once maintenance is complete.

More details are available on the Globus blog.

For assistance or questions, please contact ARC at arc-support@umich.edu.

2023 Winter Maintenance & Globus File Transfer upgrade 

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

These services will be unavailable: 

  • Great Lakes – We will be updating Great Lakes on a rolling basis throughout December and the beginning of January; if successful, there should be no downtime or impact, with the following exceptions: 
    • Single precision GPUs (SPGPU) will be down Jan. 4-5 for networking maintenance. Those nodes will return back to production when maintenance has been completed and the nodes have been reloaded.
    • Customers will be notified via email of any changes to Great Lakes maintenance that will require downtime.
    • If unsuccessful, the Great Lakes maintenance will begin on Jan. 4-5, starting at 8 a.m. In either case, we will email everyone with the updated maintenance status.
  • Globus on the storage transfer nodes: Jan. 17-18.

Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 
  • Slurm email notifications will be improved, providing more detailed information about completed jobs.

Countdown to maintenance 

For Great Lakes HPC jobs, use the command “maxwalltime” to discover the amount of time remaining until maintenance begins. 

Jobs that request more walltime than remains until maintenance will automatically be queued and start once maintenance is complete. If the plan for Great Lakes maintenance is successful, any queued jobs will be able to run as usual (except for the SPGPU nodes as discussed above). Customers will be notified via email if downtime is required for Great Lakes.

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.