
Using GenAI to design floor plans and buildings


There is a lot to consider when designing places where humans live and work. How will the space be used? Who is using it? What are the budget considerations? Developing all of those details into something usable is painstaking and time-consuming.

What if Generative AI (GenAI) could help? We already know that it can be used to create text, music, and images. Did you know that it can also create building designs and floor plans? 

Dr. Matias del Campo, associate professor of architecture in the Taubman College of Architecture and Urban Planning, has been working to make architectural generative models more robust. He aims to expand on the patterns, structures, and features in the available input data to create architectural works. Himself a registered architect, designer, and educator, del Campo conducts research on advanced design methods in architecture using artificial intelligence techniques.

He leverages something called neural networks for two projects: 

  • Common House: A project that focuses on floor plan analysis and generation.
  • Model Mine: A large-scale, 3D model housing database for architecture design using Graph Convolutional Neural Networks and 3D Generative Adversarial Networks.

Image: an example of the annotated data created in the Common House research project.

The main obstacle to creating more lifelike plans is the lack of databases tailored to these architectural applications. The Common House project aims to create a large-scale dataset of plans with semantic information. Specifically, the data creation pipeline annotates the different components of a floor plan, e.g., dining room, kitchen, and bedroom.


Image: four quadrants showing nine models each of chairs, laptops, benches, and airplanes, from Model Mine, a large-scale 3D model housing database for architecture design using Graph Convolutional Neural Networks and 3D Generative Adversarial Networks.

What exactly are neural networks? The name takes inspiration from the human brain and the way that biological neurons signal to one another. In the GenAI world, neural networks are a subset of machine learning and sit at the heart of deep learning algorithms.

Dr. del Campo’s research uses GenAI at every step of the design process, including 2D models for elements like floor plans and exteriors, and 3D models for the shapes and volumes of rooms and buildings. The analysis informs design decisions.

DEI considerations

del Campo notes that there are some DEI implications for the tools he’s developing. “One of the observations that brought us to develop the ‘Common House’ (Plangenerator) project is that the existing apartment and house plan datasets are heavily biased towards European and U.S. housing. They do not contain plans from other regions of the world; thus, most cultures are underrepresented.” 

To counterbalance that, del Campo and his team mounted a global data-collection effort, gathering plans and having them labeled by local architects and architecture students. “This not only ensured a more diverse dataset but also increased the quality of the semantic information in the dataset.”

How technology supports del Campo’s work

A number of services from Information and Technology Services (ITS) are used in these projects, including Google at U-M collaboration tools, GenAI, Amazon Web Services at U-M (AWS), and GitHub at U-M.

Also from ITS, the Advanced Research Computing (ARC) team provides support to del Campo’s work. 

“We requested allocations from the U-M Research Computing Package (UMRCP) for high-performance computing (HPC) services in order to train two models. One focuses on the ‘Common House’ plan generator, and the other focuses on the ‘Model Mine’ dataset to create 3D models,” said del Campo.

Additionally, they used HPC allocations from the UMRCP to create a large-scale artwork called MOSAIK, which consists of over 20,000 AI-generated images organized in a color gradient.


“We used HPC to run the algorithm that organized the images. Due to the necessary high resolution of the image, this was only possible using HPC.”

“Dr. del Campo’s work is really novel, and it is different from the type of research that is usually processed on Great Lakes. I am impressed by the creative ways Dr. del Campo is applying ITS resources in ways we did not think possible,” said Brock Palen, director of ITS Advanced Research Computing.

Related: Learn about The Architecture + Artificial Intelligence Laboratory (AR2IL)

New Hardware-Accelerated Visualization Partition 


We are excited to introduce a brand-new feature of Great Lakes to researchers who need hardware acceleration for their visualization requirements. ARC has created a specialized “viz” partition, consisting of four nodes equipped with NVIDIA P40 GPUs. These nodes are accessible through Open OnDemand’s Remote Desktop functionality.

Key details about this new feature:

  • Jobs utilizing the viz partition have a maximum walltime of 2 hours.
  • The charge rate for the viz partition is currently aligned with our standard partition. 

To make use of the viz partition, follow these steps:

  • Create a Remote Desktop job on Great Lakes via Open OnDemand.
  • Request the “viz” partition, specifying 1 node and 1 GPU (Please note that we have only one GPU available per node).
  • Prefix any application you intend to run with accelerated graphics with the command “vglrun” (example: “vglrun glxgears”). A short terminal sketch follows this list.
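For instance, once the Remote Desktop session is running on a viz node, a terminal inside it might be used as in this minimal sketch (glxgears is just a demo application; the ParaView line is a hypothetical stand-in for your own tool):

    # Confirm the node's NVIDIA P40 GPU is visible inside the session
    nvidia-smi

    # Run an OpenGL demo with hardware acceleration via VirtualGL
    vglrun glxgears

    # The same prefix applies to real visualization applications (hypothetical):
    # vglrun paraview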

For questions or support requests, please contact our team at arc-support@umich.edu.

What DNA can tell us about dog evolution


An excerpt from the Michigan Medicine Health Lab Podcast:

An international consortium of scientists, led by Jeff Kidd, Ph.D., of the University of Michigan, Jennifer R. S. Meadows of Uppsala University in Sweden, and Elaine A. Ostrander, Ph.D. of the NIH National Human Genome Research Institute, is using an unprecedentedly large database of canine DNA to take an unbiased look at how our furry friends evolved into the various breeds we know and love.

A paper published in the journal Genome Biology outlines what the Dog10K project discovered after sequencing the genomes of close to 2,000 samples from 321 different breeds, as well as wild dogs, coyotes, and wolves, and comparing them to a single reference sample: that of a German Shepherd named Mischka.

Analyzing more than 48 million pieces of genetic information, they discovered that each breed dog had around 3 million single nucleotide polymorphism differences.

These SNPs or “snips” are what account for most of the genetic variation among people and dogs alike.

They also found 26,000 deleted sequences that were present in the German Shepherd but not in the comparison breed, and 14,000 that were present in the comparison breed but missing from Mischka’s DNA.

“We did an analysis to see how similar the dogs were to each other, and it ended up that we could divide them into around 25 major groups that pretty much match up with what people would have expected based on breed origin, the dogs’ type, size and coloration,” said Kidd.

Most of the varying genes, he added, had to do with morphology, confirming that the breed differences were driven by how the dogs look.

Relative to dogs, wolves had around 14% more variation. And wild village dogs—dogs that live amongst people in villages or cities but aren’t kept as pets—exhibited more genetic variation than breed dogs.

The data set, which was processed using the Great Lakes high-performance computing cluster at U of M, also revealed an unusual number of retrogenes: new genes that form when RNA gets turned back into DNA and reinserted into the genome in a different spot.

Listen to the podcast or read the transcript on the Health Lab webpage.

2024 ARC Winter Maintenance


Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

HPC

Like last year, we will have a rolling update that, outside of a few brief interruptions, should keep the clusters in production. Here is the schedule:

December 6, 11 p.m.:

  • Update Slurm Controllers: Expect a brief 1-minute interruption when querying Slurm. All jobs will continue to run.
  • Update Open OnDemand Servers: Expect a few seconds of interruption if you are using Open OnDemand.
  • Login Servers Update: We will begin updating our login servers. This update is not expected to impact any users.

December 7:

  • Compute Rolling Updates: We will start rolling updates across all clusters a few nodes at a time, so there should be minimal impact on access to resources.

December 19, 10:30 a.m.:

  • Update Globus transfer (xfer) nodes: These nodes come in pairs for each cluster, and we will take down one node of each pair at a time, so all Globus services will remain available. If you are using scp/sftp, your transfers may be interrupted, so please schedule them accordingly or use Globus. Total maintenance time should be approximately one hour.

January 3, 8 a.m.:

  • Reboot Slurm Controller Nodes: This will cause an approximately 10-minute Slurm outage. All running jobs will continue to run.
  • Armis2 Open OnDemand Node: We will reload and reboot the Armis2 Open OnDemand node. This will take approximately 1 hour.
  • Great Lakes and Lighthouse Open OnDemand Nodes: These nodes will be down approximately 10 to 15 minutes.
  • Globus Transfer (xfer) Nodes: These nodes will be rebooted. This will take approximately 15 minutes.
  • Sigbio Login Reload/Reboot: This will take approximately 1 hour.
  • Some Armis2 Faculty Owned Equipment (FOE) nodes will require physical and configuration updates. Expected downtime is 4 hours.

HPC Maintenance Notes:

  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • New viz partition: there will be a new partition called viz with 16 new GPUs, which supports exactly one GPU per job.
  • The --cpus-per-gpu Slurm bug has been fixed; a sample submission using this option follows this list.
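For illustration, a GPU job using the fixed option might be submitted as in the sketch below (the partition, account, and job script names are hypothetical placeholders, not a prescribed recipe):

    # Request one GPU with four CPUs bound to it; names are placeholders
    sbatch --partition=gpu --gpus=1 --cpus-per-gpu=4 \
           --account=example_project myjob.sh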

HPC Maintenance Details:

New versions are listed first, with the previous version in parentheses; items marked unchanged are identical before and after the maintenance:

  • Red Hat 8.6 EUS: kernel 4.18.0-372.75.1.el8_6.x86_64 (previously 4.18.0-372.51.1.el8_6.x86_64); glibc-2.28-189.6, ucx-1.15.0-1.59056 (OFED provided), and gcc-8.5.0-10.1.el8 are unchanged.

  • Mlnx-ofa_kernel-modules: OFED 5.9.0.5.5.1 with kver.4.18.0_372.51.1.el8_6 (unchanged).

  • Slurm 23.02.6 (previously 23.02.5), compiled with:
    • PMIx: /opt/pmix/3.2.5 and /opt/pmix/4.2.6
    • hwloc 2.2.0-3 (OS provided)
    • ucx-1.15.0-1.59056 (OFED provided)
    • slurm-libpmi
    • slurm-contribs

  • PMIx LD config: /opt/pmix/3.2.5/lib (unchanged).

  • PMIx versions available in /opt: 3.2.5 and 4.2.6 (unchanged).

  • Singularity CE (Sylabs.io): 3.10.4 and 3.11.1 (unchanged).

  • NVIDIA driver: 545.23.06 (previously 530.30.02).

  • Open OnDemand: 3.0.3 (unchanged).


Storage

There is no scheduled downtime for Turbo, Locker, or Data Den.



Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 

Status updates and additional information

  • Status updates will be available on the ARC Twitter feed and the ITS service status page throughout the maintenance.
  • ARC will send an email to all HPC users when the maintenance has been completed. 

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

Gordon Bell Prize winning team also leverages ITS services


A U-M College of Engineering team led by Vikram Gavini was recently awarded the prestigious ACM Gordon Bell Prize. The honor recognizes their outstanding achievement in developing and demonstrating an approach that brings near-quantum-mechanical accuracy for large systems consisting of tens of thousands of atoms within reach of today’s supercomputers.

The ACM Gordon Bell Prize is awarded each year to recognize outstanding achievement in high-performance computing.

Dr. Gavini and team carried out their largest calculation on the fastest known computer in the world, the U.S. Department of Energy’s Frontier, and they sustained 660 petaflops. 

“The ACM Gordon Bell Prize is the most prestigious prize available for high-performance computing,” said Brock Palen, director of Advanced Research Computing (ARC), a division of Information and Technology Services (ITS).

“This is so exciting because it is more than 660 times faster than our entire Great Lakes cluster at perfect efficiency. Their calculation was a tenfold improvement over any prior density-functional theory (DFT) calculation.”

DFT is a method used in physics and chemistry to investigate the electronic structure of many-body systems. These systems can include atoms, molecules, or solids.  

“Dr. Gavini is a major user of ARC’s Great Lakes HPC Cluster, and we are so proud of this astonishing achievement.” 

Gavini said that any development needs to be carefully tested. “We use ARC’s Great Lakes to test smaller scale systems before moving to larger-scale, production-ready calculations. This testing is critical, and we need the accessibility and speed of Great Lakes to run calibrations and debug our implementations.”


Gavini and his team use Turbo Research Storage for quick-access storage and Data Den Research Archive for longer-term storage. “We generate a lot of data, and storage is important to our work.”

The U-M Research Computing Package was also helpful in defraying some of their HPC and storage costs. 

“Thank you to ARC for providing continual assistance to group members who have been in the trenches. This has been a decade-long effort, and ITS/ARC was crucial along the journey,” said Gavini.

Dr. Gavini is a professor of Mechanical Engineering and professor of Materials Science and Engineering, College of Engineering.

Globus maintenance happening at 9 a.m. on March 11


Due to planned maintenance by the vendor, Globus services will be unavailable for up to two hours beginning at 9 a.m. U.S. Eastern Time (8 a.m. Central Time) on Saturday, March 11, 2023.

Customers will not be able to authenticate or initiate any transfers during that time. Any transfers started before the outage will stall and resume automatically once maintenance is complete.

More details are available on the Globus blog.

For assistance or questions, please contact ARC at arc-support@umich.edu.

Flux transfer servers will be decommissioned on Jan. 9; use new endpoints


To provide faster transfers of data from ARC services, ARC will be decommissioning the Flux-Xfer servers on January 9, 2023. You will need to update how you migrate your data. 

For everyone who uses ARC storage services, especially Data Den users: This message is VERY important! This change includes the use of scp from Flux-Xfer, as well as the Globus endpoint umich#flux. Any shared endpoints that you have created on umich#flux will be automatically migrated to a new Globus collection on January 9. Those who use Data Den should take special interest in Item 1 listed below. 

Action item – Use the new endpoints

  1. If you currently use Globus and umich#flux to access your Locker or Data Den volume, you should use the Globus Collection ‘UMich ARC Locker Non-Sensitive Volume Collection’ for Locker, and the Globus Collection ‘UMich ARC Data Den Non-Sensitive Volume Collection’ for Data Den.
  2. If you currently use Globus and umich#flux to access your Turbo volumes, you should use the Globus Collection ‘UMich ARC Turbo Non-Sensitive Volume Collection’.
  3. If you currently use Globus and umich#flux to access other storage volumes, you should use the Globus Collection ‘umich#greatlakes’.
  4. If you currently use scp on flux-xfer to copy data to/from Turbo, you should use ‘globus-xfer1.arc-ts.umich.edu’ (see the sketch below).
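For example, an scp copy that previously went through Flux-Xfer would now target the new host. A minimal sketch, where the uniqname and Turbo path are hypothetical placeholders:

    # Copy results to Turbo via the new transfer host, which replaces the
    # decommissioned Flux-Xfer servers (uniqname and path are placeholders)
    scp -r ./results uniqname@globus-xfer1.arc-ts.umich.edu:/nfs/turbo/example-lab/results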

User guide 

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

2023 Winter Maintenance & Globus File Transfer upgrade 


Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

These services will be unavailable: 

  • Great Lakes – We will be updating Great Lakes on a rolling basis throughout December and the beginning of January; if successful, there should be no downtime or impact, with the following exceptions: 
    • Single precision GPUs (SPGPU) will be down Jan. 4-5 for networking maintenance. Those nodes will return to production when maintenance has been completed and the nodes have been reloaded.
    • Customers will be notified via email of any changes to Great Lakes maintenance that will require downtime.
    • If unsuccessful, Great Lakes maintenance will begin Jan. 4-5, starting at 8 a.m. In either case, we will email everyone with the updated maintenance status.
  • Globus on the storage transfer nodes: Jan. 17-18.

Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 
  • Slurm email notifications will be improved, providing more detailed information about completed jobs.

Countdown to maintenance 

For Great Lakes HPC jobs, use the command “maxwalltime” to discover the amount of time remaining until maintenance begins. 

Jobs that request more walltime than remains until maintenance will automatically be queued and start once maintenance is complete. If the plan for Great Lakes maintenance is successful, any queued jobs will be able to run as usual (except for the SPGPU nodes as discussed above). Customers will be notified via email if downtime is required for Great Lakes.
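As a rough sketch of how that fits together (the walltime value and job script name are hypothetical; check the actual output of maxwalltime on the cluster):

    # Show how much walltime remains before maintenance begins
    maxwalltime

    # Request less walltime than remains so the job can start now;
    # anything longer will queue until maintenance completes
    sbatch --time=12:00:00 myjob.sh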

Status updates and additional information

  • Status updates will be available on the ARC Twitter feed and the ITS service status page throughout the maintenance.
  • ARC will send an email to all HPC users when the maintenance has been completed.

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

ARC Director Brock Palen spoke about Globus at SC22


At the recent International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) in Dallas, Texas, ARC Director Brock Palen spoke about how the University of Michigan is using Globus File Transfer. Globus allows the transfer of data between different storage systems, lab computers, and personal desktops and laptops.

“U-M is the largest public research university in North America by research spending. There are over a hundred top-10 graduate programs, and this drives a lot of interesting collaborations,” said Palen.

Globus is a robust, cloud-based file transfer service designed to move many large files, ranging from tens of gigabytes to tens of terabytes. ARC is a Globus Subscription Provider for the U-M community, which allows U-M resources to serve as endpoints or collections for file transfers.
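As a hedged illustration, a command-line transfer with the Globus CLI might look like the following sketch; the collection UUIDs and paths are placeholders, not real U-M endpoints:

    # Authenticate with Globus (opens a browser window)
    globus login

    # Submit an asynchronous, recursive transfer between two collections
    # (UUIDs and paths are hypothetical placeholders)
    globus transfer 'SRC_COLLECTION_UUID:/data/results/' \
        'DST_COLLECTION_UUID:/backup/results/' \
        --recursive --label "example transfer"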

“Globus is the tool we use for data transfer, and it’s pretty much ubiquitous across Internet2, the national sites, other academic sites, and it interoperates everywhere,” Palen noted.  

Watch the YouTube video featuring Brock Palen

Check out the Globus user guide on the ARC website.

Dailey receives U-M Robotics’ first-ever alumni award 


Meghan Dailey will present “The Future of Machine Learning in Robotics” on September 23 at 2 p.m. in the Ford Motor Company Robotics Building (FMCRB) or via Zoom.

Meghan Dailey is the U-M Robotics department’s first Alumni Merit Award recipient!

Dailey is a member of the first-ever class in U-M Robotics. She earned a Master of Science degree in 2015 with a focus on artificial intelligence. She is currently a machine learning specialist with Advanced Research Computing (ARC), a division of Information and Technology Services (ITS).

You’re invited 

In honor of the award, Dailey will be presenting “The Future of Machine Learning in Robotics” on Friday, September 23, 2 p.m., Ford Robotics Building (FMCRB) or on Zoom (meeting ID: 961 1618 4387, passcode: 643563). Machine learning is becoming widely prevalent in many different fields, including robotics. In a future where robots and humans assist each other in completing tasks, what is the role of machine learning, and how should it evolve to effectively serve both humans and robots? Dailey will discuss her past experiences in robotics and machine learning, and how she envisions machine learning contributing to the growth of the robotics field.

About Dailey

A member of the ARC Scientific Computing and Research Consulting Services team, Dailey helps researchers with machine learning and artificial intelligence programming. She has consulted with student and faculty teams to build neural networks for image analysis and classification. She also has extensive experience in natural language processing and has worked on many projects analyzing text sentiment and intent.
