Tag: HPC

Using GenAI to design floor plans and buildings


There is a lot to consider when designing places where humans live and work. How will the space be used? Who will use it? What are the budget constraints? Developing all of those details into something usable is painstaking and time-consuming.

What if Generative AI (GenAI) could help? We already know that it can be used to create text, music, and images. Did you know that it can also create building designs and floor plans? 

Dr. Matias del Campo, associate professor of architecture in the Taubman College of Architecture and Urban Planning, has been working to make architectural generative models more robust. He aims to expand on the patterns, structures, and features in the available input data to create architectural works. Himself a registered architect, designer, and educator, del Campo conducts research on advanced design methods in architecture using artificial intelligence techniques.

He leverages something called neural networks for two projects: 

  • Common House: A project that focuses on floor plan analysis and generation.
  • Model Mine: A large-scale 3D model database for housing and architecture design, built with Graph Convolutional Neural Networks and 3D Generative Adversarial Networks (see the sketch below).
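
Neither project's code appears in this article, but to give a flavor of what a Graph Convolutional Neural Network does with model data, here is a minimal, hypothetical sketch of a single graph-convolution layer in PyTorch. The tensor shapes, normalization, and toy adjacency matrix are illustrative assumptions, not the Model Mine implementation.

# Minimal sketch of one graph-convolution layer; illustrative only,
# not the Model Mine code.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x, adj):
        # x:   (num_nodes, in_features)  per-node features, e.g. mesh vertices
        # adj: (num_nodes, num_nodes)    adjacency matrix including self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees
        h = (adj @ x) / deg                              # average neighbor features
        return torch.relu(self.linear(h))                # transform + nonlinearity

# Toy usage: 5 nodes with 3-dimensional features and self-loops only
x = torch.randn(5, 3)
adj = torch.eye(5)
layer = GraphConv(3, 8)
print(layer(x, adj).shape)  # torch.Size([5, 8])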

The main obstacle to generating more lifelike plans is the lack of databases tailored to these architectural applications. The Common House project aims to create a large-scale dataset of floor plans with semantic information: its data creation pipeline annotates the different components of a floor plan, e.g., dining room, kitchen, bedroom.
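
The article does not show the annotation format itself; as a rough illustration only, a semantically labeled floor plan could be represented along these lines in Python. The field names and room categories here are hypothetical, not the Common House schema.

# Hypothetical representation of a semantically annotated floor plan;
# field names and categories are assumptions, not the Common House schema.
from dataclasses import dataclass, field

@dataclass
class RoomAnnotation:
    room_type: str                       # e.g. "kitchen", "bedroom", "dining_room"
    polygon: list[tuple[float, float]]   # room outline in plan coordinates

@dataclass
class FloorPlan:
    plan_id: str
    region: str                          # where the plan was collected
    rooms: list[RoomAnnotation] = field(default_factory=list)

plan = FloorPlan(
    plan_id="example-001",
    region="unspecified",
    rooms=[RoomAnnotation("kitchen", [(0, 0), (3, 0), (3, 4), (0, 4)])],
)
print(len(plan.rooms), plan.rooms[0].room_type)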

 

Image: four quadrants showing nine 3D models each of chairs, laptops, benches, and airplanes, from the large-scale 3D model database for architecture design built with Graph Convolutional Neural Networks and 3D Generative Adversarial Networks.

What exactly are neural networks? The name takes inspiration from the human brain and the way biological neurons signal to one another. In the GenAI world, neural networks are a subset of machine learning and sit at the heart of deep learning algorithms: artificial intelligence encompasses machine learning, which in turn encompasses the neural networks that power deep learning.
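
As a toy-scale illustration of the idea, and not anything from del Campo's projects, here is a minimal two-layer network forward pass in Python with NumPy; the layer sizes and random weights are arbitrary.

# Toy two-layer neural network forward pass; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))        # one input sample with 4 features
W1 = rng.normal(size=(4, 8))       # weights of a hidden layer with 8 "neurons"
W2 = rng.normal(size=(8, 2))       # weights of a 2-unit output layer

hidden = np.maximum(0, x @ W1)     # each hidden neuron "fires" through a ReLU
output = hidden @ W2               # the output layer combines the hidden signals
print(output)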

Dr. del Campo’s research uses GenAI at every step of the design process, including 2D models for elements like floor plans and exteriors, and 3D models for the shapes and volumes of rooms and buildings. The analysis informs design decisions.

DEI considerations

del Campo notes that there are some DEI implications for the tools he’s developing. “One of the observations that brought us to develop the ‘Common House’ (Plangenerator) project is that the existing apartment and house plan datasets are heavily biased towards European and U.S. housing. They do not contain plans from other regions of the world; thus, most cultures are underrepresented.” 

To counterbalance that, del Campo and his team made a global data collection effort, collecting plans and having them labeled by local architects and architecture students. “This not only ensured a more diverse dataset but also increased the quality of the semantic information in the dataset.”

How technology supports del Campo’s work

A number of services from Information and Technology Services (ITS) are used in these projects, including Google at U-M collaboration tools, GenAI, Amazon Web Services at U-M (AWS), and GitHub at U-M.

Also from ITS, the Advanced Research Computing (ARC) team provides support to del Campo’s work. 

“We requested allocations from the U-M Research Computing Package (UMRCP) for high-performance computing (HPC) services in order to train two models. One focuses on the ‘Common House’ plan generator, and the other on the ‘Model Mine’ dataset used to create 3D models,” said del Campo.

Additionally, they used HPC allocations from the UMRCP to create a large-scale artwork called MOSAIK, which consists of over 20,000 AI-generated images organized in a color gradient.


“We used HPC to run the algorithm that organized the images. Due to the necessary high resolution of the image, this was only possible using HPC.”
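
The article does not describe the MOSAIK algorithm itself; one common way to arrange images into a color gradient is to sort them by a summary color statistic such as average hue. The sketch below, using Pillow and NumPy with a placeholder directory, illustrates that general idea only and is not the actual MOSAIK code.

# Sketch: order images into a rough color gradient by average hue.
# Illustrative only; 'images' is a placeholder path, not the MOSAIK data.
import colorsys
from pathlib import Path

import numpy as np
from PIL import Image

def average_hue(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    r, g, b = rgb.reshape(-1, 3).mean(axis=0)   # mean red, green, blue
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return hue

image_paths = sorted(Path("images").glob("*.png"))
ordered = sorted(image_paths, key=average_hue)   # gradient-like ordering
print([p.name for p in ordered[:10]])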

“Dr. del Campo’s work is really novel, and it is different from the type of research that is usually processed on Great Lakes. I am impressed by the creative ways Dr. del Campo is applying ITS resources in a way that we did not think was possible,” said Brock Palen, director of ITS Advanced Research Computing.

Related: Learn about The Architecture + Artificial Intelligence Laboratory (AR2IL)

Advanced Research Computing on the Great Lakes Cluster


OVERVIEW

This workshop will cover more advanced topics in computing on the U-M Great Lakes Cluster. Topics include a brief review of common parallel programming models and basic use of Great Lakes; dependent and array scheduling; workflow scripting; high-throughput computing using launcher; parallel processing in one or more of Python, R, and MATLAB; and profiling of parallel code.
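
As a taste of the “parallel processing in Python” topic, the standard-library multiprocessing pool distributes independent tasks across worker processes; the sketch below is a generic example, not the workshop’s own course material.

# Minimal single-node parallel processing in Python; generic example,
# not the workshop's course material.
from multiprocessing import Pool

def simulate(seed):
    # Placeholder for per-task work (one simulation, one file, etc.).
    return seed * seed

if __name__ == "__main__":
    with Pool(processes=4) as pool:               # four worker processes
        results = pool.map(simulate, range(16))   # farm the tasks out
    print(results)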

To register and view more details, please refer to the linked TTC page.

Introduction to Research Computing on the Great Lakes Cluster


OVERVIEW

This workshop will introduce you to high-performance computing on the Great Lakes cluster. After a brief overview of the components of the cluster and the resources available there, the main body of the workshop will cover creating batch scripts, the options available for running jobs, and hands-on experience in submitting, tracking, and interpreting the results of submitted jobs. By the end of the workshop, every participant should have created a submission script, submitted a job, tracked its progress, and collected its output. Additional tools, including high-performance data transfer services and interactive use of the cluster, will also be covered.
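
Great Lakes uses the Slurm scheduler, and batch scripts are typically written in shell; Slurm will also accept a script with another interpreter, which allows the minimal Python-flavored sketch below. The account and partition values are placeholders you would replace with your own allocation, and this is not the workshop’s example script.

#!/usr/bin/env python3
# Minimal sketch of a Slurm batch script written in Python; submit with:
#   sbatch hello_job.py
# The account and partition below are placeholders, not real allocations.
#SBATCH --job-name=hello
#SBATCH --account=example_account
#SBATCH --partition=standard
#SBATCH --time=00:05:00
#SBATCH --mem=1g

import platform

# Output lands in the job's default slurm-<jobid>.out file.
print("Hello from", platform.node())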

To register and view more details, please refer to the linked TTC page.

Intro to GPU & CUDA Programming


This workshop is an introduction to GPU programming for scientific and engineering applications. The basics of GPU architecture will be presented. Parallel programming strategies will be discussed, followed by actual programming examples.

Participants should be familiar with programming and with how to use the Great Lakes computing cluster. C/C++ examples will be provided to try on the Great Lakes GPU nodes.
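
The workshop’s examples are in C/C++ CUDA, but the underlying idea of offloading work to a GPU can be sketched in Python with PyTorch; this language swap is for illustration only and is not part of the workshop material.

# Offloading an element-wise computation to a GPU with PyTorch;
# illustrative only, since the workshop itself uses C/C++ CUDA examples.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"   # fall back to CPU
a = torch.randn(1_000_000, device=device)
b = torch.randn(1_000_000, device=device)
c = a + b                                                  # element-wise add on the device
print(device, float(c.sum()))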

To register and view more details, please refer to the linked TTC page.

Introduction to the Linux Command Line


OVERVIEW

This course will familiarize the student with the basics of accessing and interacting with Linux computers using the GNU/Linux operating system’s Bash shell, also generically referred to as “the command line.” Topics include: a brief overview of Linux, the Bash shell, navigating the file system, basic commands, shell redirection, permissions, processes, and the command environment. The workshop will also provide a quick introduction to nano, a simple text editor that will be used in subsequent workshops to edit files.

 

To register and view more details, please refer to the linked TTC page.


You’re invited: Parallel programming with MATLAB webinar on Dec. 4 


We invite you to join us for an engaging virtual session on Parallel Computing with MATLAB, scheduled for December 4 from 1-4 p.m. EST. This session promises to equip you with valuable insights and knowledge. Here’s a glimpse of what you can expect to learn during the session.

Parallel Computing Hands-On Workshop:

Join us for an immersive hands-on workshop where we will introduce you to the world of parallel computing using MATLAB®. This workshop aims to equip you with the skills to tackle computationally and data-intensive problems by harnessing the power of multicore processors, GPUs, and computer clusters. Through practical exercises and real-world examples, you will gain a comprehensive understanding of parallel computing and learn best practices for its implementation.

Highlights:

  • Explore a range of exercises and examples, varying in difficulty from fundamental parallel usage concepts to more advanced techniques.
  • Learn how to optimize MATLAB applications by leveraging parallel computing capabilities.
  • Discover the benefits of running multiple Simulink simulations in parallel and enhance your simulation efficiency.
  • Dive into the world of GPU computing and unlock the potential for accelerated computations.
  • Explore the concept of offloading computations and delve into the realm of cluster computing.
  • Master the art of working with large data sets and efficiently process them using parallel computing techniques.

Don’t miss out on this opportunity to enhance your parallel computing skills with MATLAB. Join us for this exciting workshop and unlock the potential of parallel computing for your computational challenges.

Register soon to guarantee your spot and receive the Webex link before the workshop.
