All Posts By Dan Meisler

Most CSCAR workshops will be free for the U-M community starting in January 2019

By | Educational, General Interest, Happenings, News

Beginning in January 2019, most of CSCAR’s workshops will be offered free of charge to U-M students, faculty, and staff.

CSCAR is able to do this thanks to funding from U-M’s Data Science Initiative. Registration for CSCAR workshops is still required, and seats are limited.

CSCAR asks that participants cancel their registration if they decide not to attend a workshop for which they have registered.

Note that a small number of workshops hosted by CSCAR but taught by non-CSCAR personnel will continue to have a fee, and fees will continue to apply for people who are not U-M students, faculty, or staff.

Eric Michielssen completes term as Associate Vice President for Research – Advanced Research Computing

By | General Interest, Happenings, News

Eric Michielssen will step down from his position as Associate Vice President for Research – Advanced Research Computing on December 31, 2018, after serving in that leadership role for almost six years. Dr. Michielssen will return to his faculty role in the Department of Electrical Engineering and Computer Science in the College of Engineering.

Under his leadership, Advanced Research Computing has helped empower computational discovery through the Michigan Institute for Computational Discovery and Engineering (MICDE), the Michigan Institute for Data Science (MIDAS), Advanced Research Computing-Technology Services (ARC-TS) and Consulting for Statistics, Computing and Analytics Research (CSCAR).

In 2015, Eric helped launch the university’s $100 million Data Science Initiative, which enhances opportunities for researchers across campus to tap into the enormous potential of big data. He also serves as co-director of the university’s Precision Health initiative, launched last year to harness campus-wide research aimed at finding personalized solutions to improve the health and wellness of individuals and communities.

The Office of Research will convene a group to assess the university’s current and emerging needs in research computing and how best to address them.

U-M approves new graduate certificate in computational neuroscience

By | Educational, General Interest, Happenings, News

The new Graduate Certificate in Computational Neuroscience will help bridge the gap between experimentally focused studies and quantitative modeling and analysis, giving graduate students a chance to broaden their skill sets in the diversifying field of brain science.

“The broad, practical training provided in this certificate program will help prepare both quantitatively focused and lab-based students for the increasingly cross-disciplinary job market in neuroscience,” said Victoria Booth, Professor of Mathematics and Associate Professor of Anesthesiology, who will oversee the program.

To earn the certificate, students will be required to take core computational neuroscience courses and cross-disciplinary courses outside of their home departments; participate in a specialized interdisciplinary journal club; and complete a practicum.

Cross-disciplinary courses will depend on a student’s focus: students in experimental neuroscience programs will take quantitative coursework, and students in quantitative science programs such as physics, biophysics, mathematics and engineering will take neuroscience coursework.

The certificate was approved this fall, and will be jointly administered by the Neuroscience Graduate Program (NGP) and the Michigan Institute for Computational Discovery and Engineering (MICDE).

For more information, visit micde.umich.edu/comput-neuro-certificate. Enrollment is not yet open, but information sessions will be scheduled early next year. Please register for the program’s mailing list if you’re interested.

Along with the Graduate Certificate in Computational Neuroscience, U-M offers several other graduate programs aimed at training students in computational and data-intensive science, including:

  • The Graduate Certificate in Computational Discovery and Engineering, which is focused on quantitative and computing techniques that can be applied broadly to all sciences.
  • The Graduate Certificate in Data Science, which specializes in statistical and computational methods required to analyze large data sets.
  • The Ph.D. in Scientific Computing, intended for students who will make extensive use of large-scale computation, computational methods, or algorithms for advanced computer architectures in their doctoral studies. This degree is awarded jointly with an existing program, so that a student receives, for example, a Ph.D. in Aerospace Engineering and Scientific Computing.

 

U-M awarded a Clare Boothe Luce grant for fellowships to support women in STEM

By | Educational, General Interest, Happenings, News

The Clare Boothe Luce Program of the Henry Luce Foundation has awarded a $270,000 grant to the University of Michigan to support women PhD students through the Michigan Institute for Computational Discovery and Engineering (MICDE). The program aims to encourage women “to enter, study, graduate and teach” in science, and the U-M funding will support women PhD students who make use of computational science in their research.

“We’re very excited to be able to promote women in scientific computing,” said Mariana Carrasco-Teja, manager of the grant and Associate Director of MICDE. “These resources generously provided by the Clare Boothe Luce program will make a huge difference in the careers of women pursuing computational science at U-M.”

For details on applying, and fellowship requirements, see the fellowship page at micde.umich.edu/academic-programs/cbl/.

The fellowships carry a $35,000 annual stipend plus tuition, among other benefits. They will be awarded to students applying for fall 2019 admission to PhD programs in the College of Engineering or in several programs in the College of Literature, Science, and the Arts (Applied and Interdisciplinary Mathematics, Applied Physics, Astronomy, Chemistry, Earth & Environmental Sciences, Mathematics, Physics, and Statistics).

The CBL program at U-M is funded by the Clare Boothe Luce Program of the Henry Luce Foundation, with additional support from the Rackham School of Graduate Studies, the College of Engineering, the College of Literature, Science, and the Arts, and MICDE.

Winter HPC maintenance scheduled for Jan. 6-9

By | Beta, Flux, General Interest, Happenings, HPC, News

To accommodate updates to software, hardware, and operating systems, the Flux, Beta, Armis, Cavium, and ConFlux clusters, along with their storage systems (/home and /scratch), will be unavailable starting at 6 a.m. Sunday, January 6, and returning to service on Wednesday, January 9. These updates will improve the performance and stability of ARC-TS services. We try to encapsulate the required changes into two maintenance periods per year and work to complete these tasks quickly, as we understand the impact of the maintenance on your research.

During this time, the following maintenance tasks are planned:

  • Preventative maintenance at the Modular Data Center (MDC), which requires a full power outage
  • InfiniBand networking updates (firmware and software)
  • Ethernet networking updates (datacenter distribution layer switches)
  • Operating system and software updates
  • Potential updates to job scheduling software
  • Migration of Turbo networking to new switches (affects /home and /sw)
  • Consistency checks on the Lustre file systems that provide /scratch
  • Firmware and software updates for the GPFS file systems (ConFlux, starting 9 a.m., Monday, Jan. 7)
  • Consistency checks on the GPFS file systems that provide /gpfs (ConFlux, starting 9 a.m., Monday, Jan. 7)

You can use the command “maxwalltime” to see how much time remains before the maintenance begins. Jobs requesting more walltime than remains before the maintenance will be queued and started after the maintenance is completed.
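
For example, a user on the current Torque-based clusters could check the remaining time and size a job request to fit before the outage. This is a minimal sketch; the script name and walltime value are illustrative:

    # Show how much walltime remains before the maintenance begins
    maxwalltime

    # Request less walltime than remains so the job can start before the outage;
    # a longer request would simply wait in the queue until maintenance ends.
    qsub -l walltime=24:00:00 myjob.pbs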

All filesystems will be unavailable during the maintenance. We encourage you to copy any data that might be needed during that time from Flux prior to the start of the maintenance.

We will post status updates on our Twitter feed (https://twitter.com/arcts_um) throughout the course of the maintenance and send an email to all users when the maintenance has been completed. Please contact hpc-support@umich.edu if you have any questions.

ARC Director Sharon Broude Geva re-elected Chair of the Coalition for Academic Scientific Computation

By | HPC, News

Dr. Sharon Broude Geva, Director of Advanced Research Computing at the University of Michigan, has been re-elected as Chair of the Coalition for Academic Scientific Computation (CASC) for 2020.

Founded in 1989, CASC advocates for the use of advanced computing technology to accelerate scientific discovery for national competitiveness, global security, and economic success. The organization’s members represent 87 institutions of higher education and national labs.

The chair is one of four elected CASC executive officer positions. The officers work closely as a team with the director of CASC. The chair is responsible for arranging and presiding over general CASC meetings and acts as an official representative of the organization.

Geva served as CASC secretary in 2015 and 2016, and vice-chair in 2017 and 2018.

 

U-M participates in SC18 conference in Dallas

By | General Interest, Happenings, News

University of Michigan researchers and IT staff wrapped up a successful Supercomputing ’18 (SC18), held in Dallas Nov. 11-16, 2018, taking part in a number of different aspects of the conference.

SC “Perennial” Quentin Stout, U-M professor of Electrical Engineering and Computer Science and one of only 19 people who have been to every Supercomputing conference, co-presented a tutorial titled Parallel Computing 101.

And with the recent announcement of a new HPC cluster on campus called Great Lakes, IT staff from Advanced Research Computing – Technology Services (ARC-TS) made presentations around the conference on the details of the new supercomputer.

U-M once again shared a booth with Michigan State University, highlighting our computational and data-intensive research as well as the comprehensive set of tools and services we provide to our researchers. Representatives from all ARC units were at the booth: ARC-TS, the Michigan Institute for Data Science (MIDAS), the Michigan Institute for Computational Discovery and Engineering (MICDE), and Consulting for Statistics, Computing and Analytics Research (CSCAR).

The booth also featured two demonstrations: one highlighted the Open Storage Research Infrastructure (OSiRIS), a multi-institutional software-defined data storage system, and the Services Layer At The Edge (SLATE) project, both of which are supported by the NSF; the other tested conference-goers’ ability to detect “fake news” stories compared to an artificial intelligence system created by researchers supported by MIDAS.

U-M Activities

  • Tutorial: Parallel Computing 101: Prof. Stout and Associate Professor Christiane Jablonowski of the U-M Department of Climate and Space Sciences and Engineering provided a comprehensive overview of parallel computing.
  • Introduction to Kubernetes. Presented by Bob Killen, Research Cloud Administrator, and Scott Paschke, Research Cloud Solutions Designer, both from ARC-TS. Containers have shifted the way applications are packaged and delivered, and their use in data science and machine learning is skyrocketing, with the beneficial side effect of enabling reproducible research. This rise in use has driven the need to explore and adopt better container-centric orchestration tools. Of these tools, Kubernetes, an open-source container platform born within Google, has become the de facto standard. This half-day tutorial introduced researchers and sysadmins who may already be familiar with container concepts to the architecture and fundamental concepts of Kubernetes. Attendees explored these concepts through a series of hands-on exercises, leaving with a leg up in continuing their container education and a better understanding of how Kubernetes may be used for research applications.
  • Brock Palen, Director of ARC-TS, spoke about the new Great Lakes HPC cluster:
    • DDN booth (3123)
    • Mellanox booth (3207)
    • Dell booth (3218)
    • SLURM booth (1242)
  • Todd Raeker, Research Technology Consultant for ARC-TS, went to the Globus booth (4201) to talk about U-M researchers’ use of the service.
  • Birds of a Feather: Meeting HPC Container Challenges as a Community. Bob Killen, Research Cloud Administrator at ARC-TS, gave a lightning talk as part of this session that presented, prioritized, and gathered input on top issues and budding solutions around containerization of HPC applications.
  • Sharon Broude Geva, Director of ARC, was live on the SC18 News Desk discussing ARC HPC services, Women in HPC, and the Coalition for Academic Scientific Computation (CASC). The stream was available from the Supercomputing Twitter account: https://twitter.com/Supercomputing
  • Birds of a Feather: Ceph Applications in HPC Environments: Ben Meekhof, HPC Storage Administrator at ARC-TS, gave a lightning talk on Ceph and OSiRIS as part of this session. More details at https://www.msi.umn.edu/ceph-hpc-environments-sc18
  • ARC was a sponsor of the Women in HPC Reception, where Sharon Broude Geva, Director of ARC, gave a presentation.
  • Birds of a Feather: Cloud Infrastructure Solutions to Run HPC Workloads: Bob Killen, Research Cloud Administrator at ARC-TS, presented at this session aimed at architects, administrators, software engineers, and scientists interested in designing and deploying cloud infrastructure solutions such as OpenStack, Docker, Charliecloud, Singularity, Kubernetes, and Mesos.
  • Jing Liu of the Michigan Institute for Data Science participated in a panel discussion at the Purdue University booth.

Follow ARC on Twitter at https://twitter.com/ARC_UM for updates.

Beta cluster available for learning Slurm; new scheduler to be part of upcoming cluster updates

By | Flux, General Interest, Happenings, HPC, News

New HPC resources to replace Flux, along with updates to Armis, are coming. They will run a new scheduling system, Slurm; you will need to learn its commands and update your batch files to run jobs successfully. Read on to learn the details, and how to get training and adapt your files.

In anticipation of these changes, ARC-TS has created the test cluster “Beta,” which will provide a testing environment for the transition to Slurm. Slurm will be used on Great Lakes; the Armis HIPAA-aligned cluster; and a new cluster called “Lighthouse,” which will succeed the Flux Operating Environment in early 2019.

Currently, Flux and Armis use the Torque (PBS) resource manager and the Moab scheduling system; when completed, Great Lakes and Lighthouse will use the Slurm scheduler and resource manager, which will enhance the performance and reliability of the new resources. Armis will transition from Torque to Slurm in early 2019.

The Beta test cluster is available to all Flux users, who can log in via ssh at ‘beta.arc-ts.umich.edu’. Beta has its own /home directory, so users will need to create or transfer any files they need, via scp/sftp or Globus.
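
For example, a user could log in and copy an existing batch script over to Beta as follows (a minimal sketch; “uniqname” and the file name are placeholders):

    # Log in to the Beta test cluster
    ssh uniqname@beta.arc-ts.umich.edu

    # Copy a batch script from your local machine to your Beta home directory
    scp myjob.sh uniqname@beta.arc-ts.umich.edu:~/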

Slurm commands will be needed to submit jobs. For a comparison of Slurm and Torque commands, see our Torque to Slurm migration page. For more information, see the Beta home page.
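
As an illustration, here is a minimal Slurm batch script with rough Torque equivalents noted in the comments, followed by the basic commands for submitting and monitoring a job. The account and partition names are placeholders, not actual Beta settings:

    #!/bin/bash
    #SBATCH --job-name=example_job       # Torque: #PBS -N example_job
    #SBATCH --account=example_account    # placeholder account; Torque: #PBS -A
    #SBATCH --partition=standard         # placeholder partition; Torque: #PBS -q
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --time=01:00:00              # Torque: #PBS -l walltime=01:00:00
    #SBATCH --mem=4g                     # Torque: #PBS -l mem=4gb

    echo "Running on $(hostname)"

    # Submit and monitor:
    #   sbatch myjob.sh     (Torque: qsub myjob.pbs)
    #   squeue -u $USER     (Torque: qstat -u $USER)
    #   scancel JOBID       (Torque: qdel JOBID)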

Support staff from ARC-TS and individual academic units will conduct several in-person and online training sessions to help users become familiar with Slurm. We have been testing Slurm for several months, and believe the performance gains, user communications, and increased reliability will significantly improve the efficiency and effectiveness of the HPC environment at U-M.

The tentative time frame for replacing or transitioning current ARC-TS resources is:

  • Flux to Great Lakes, first half of 2019
  • Armis from Torque to Slurm, January 2019
  • Flux Operating Environment to Lighthouse, first half of 2019
  • Open OnDemand on Beta, which replaces ARC Connect for web-based job submissions, Jupyter Notebooks, Matlab, and additional software packages, fall 2018

MIDAS announces winners of 2018 poster competition

By | Educational, General Interest, Happenings, Research

The Michigan Institute for Data Science (MIDAS) is pleased to announce the winners of its 2018 poster competition, which is held in conjunction with the MIDAS annual symposium.

The symposium was held on Oct. 9-10, 2018, and the student poster competition had more than 60 entries. The winners, judged by a panel of faculty members, received cash prizes.

Best Overall

Arthur Endsley, “Comparing and timing business cycles and land development trends in U.S. metropolitan housing markets”

Most likely health impact

  • Yehu Chen, Yingsi Jian, Qiucheng Wu, Yichen Yang, “Compressive Big Data Analytics – CBDA: Applications to Biomedical and Health Studies”
  • Jinghui Liu, “An Information Retrieval System with an Iterative Pattern for TREC Precision Medicine”

Most likely transformative science impact

  • Prashant Rajaram, “Bingeability and Ad Tolerance: New Metrics for the Streaming Media Age”
  • Mike Ion, “Learning About the Norms of Teaching Practice: How Can Machine Learning Help Analyze Teachers’ Reactions to Scenarios?”

Most interesting methodological advancement

  • Nina Zhou and Qiucheng Wu, “DataSifter: Statistical Obfuscation of Electronic Health Records and Other Sensitive Datasets”
  • Aniket Deshmukh, “Simple Regret Minimization for Contextual Bandits”

Most likely societal impact

  • Ece Sanci, “Optimization of Food Pantry Locations to Address Food Scarcity in Toledo, OH”
  • Rohail Syed, “Human Perception of Surprise: A User Study”

Most innovative use of data

  • Lan Luo, “Renewable Estimation and Incremental Inference in Generalized Linear Models with Streaming Datasets”
  • Danaja Maldeniya, “Psychological Response of Communities affected by Natural Disasters in Social Media”

U-M selects Dell EMC, Mellanox and DDN to supply new “Great Lakes” computing cluster

By | Flux, General Interest, Happenings, HPC, News

The University of Michigan has selected Dell EMC as lead vendor to supply its new $4.8 million Great Lakes computing cluster, which will serve researchers across campus. Mellanox Technologies will provide networking solutions, and DDN will supply storage hardware.

Great Lakes will be available to the campus community in the first half of 2019, and over time will replace the Flux supercomputer, which serves more than 2,500 active users at U-M for research ranging from aerospace engineering simulations and molecular dynamics modeling to genomics and cell biology to machine learning and artificial intelligence.

Great Lakes will be the first cluster in the world to use the Mellanox HDR 200 gigabit per second InfiniBand networking solution, enabling faster data transfer speeds and increased application performance.

“High-performance research computing is a critical component of the rich computing ecosystem that supports the university’s core mission,” said Ravi Pendse, U-M’s vice president for information technology and chief information officer. “With Great Lakes, researchers in emerging fields like machine learning and precision health will have access to a higher level of computational power. We’re thrilled to be working with Dell EMC, Mellanox, and DDN; the end result will be improved performance, flexibility, and reliability for U-M researchers.”

“Dell EMC is thrilled to collaborate with the University of Michigan and our technology partners to bring this innovative and powerful system to such a strong community of researchers,” said Thierry Pellegrino, vice president, Dell EMC High Performance Computing. “This Great Lakes cluster will offer an exceptional boost in performance, throughput and response to reduce the time needed for U-M researchers to make the next big discovery in a range of disciplines from artificial intelligence to genomics and bioscience.”

The main components of the new cluster are:

  • Dell EMC PowerEdge C6420 compute nodes, PowerEdge R640 high memory nodes, and PowerEdge R740 GPU nodes
  • Mellanox HDR 200Gb/s InfiniBand ConnectX-6 adapters, Quantum switches and LinkX cables, and InfiniBand gateway platforms
  • DDN GRIDScaler® 14KX® and 100 TB of usable IME® (Infinite Memory Engine) memory

“HDR 200G InfiniBand provides the highest data speed and smart In-Network Computing acceleration engines, delivering HPC and AI applications with the best performance, scalability and efficiency,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “We are excited to collaborate with the University of Michigan, Dell EMC and DataDirect Networks, in building a leading HDR 200G InfiniBand-based supercomputer, serving the growing demands of U-M researchers.”

“DDN has a long history of working with Dell EMC and Mellanox to deliver optimized solutions for our customers. We are happy to be a part of the new Great Lakes cluster, supporting its mission of advanced research and computing. Partnering with forward-looking thought leaders such as these is always enlightening and enriching,” said Dr. James Coomer, SVP Product Marketing and Benchmarks at DDN.

Great Lakes will provide a significant improvement in computing performance over Flux. For example, each compute node will have more cores, higher maximum speeds, and increased memory. The cluster will also have improved internet connectivity and file system performance, as well as NVIDIA GPUs with Tensor Cores, which are far more powerful for machine learning than prior generations of GPUs.

“Users of Great Lakes will have access to more cores, faster cores, faster memory, faster storage, and a more balanced network,” said Brock Palen, Director of Advanced Research Computing – Technology Services (ARC-TS).

The Flux cluster was created approximately eight years ago, although many of its individual nodes have been added since then. Great Lakes represents an architectural overhaul that will result in better performance and efficiency. Based on extensive input from faculty and other stakeholders across campus, the new Great Lakes cluster is designed to deliver services and capabilities similar to Flux’s, including the ability to accommodate faculty purchases of hardware, access to GPUs and large-memory nodes, and improved support for emerging uses such as machine learning and genomics.

ARC-TS will operate and maintain the cluster once it is built. Allocations of computing resources through ARC-TS include access to hundreds of software titles, as well as support and consulting from professional staff with decades of combined experience in research computing.

Updates on the progress of Great Lakes will be available at https://arc-ts.umich.edu/greatlakes/.