Using GenAI to design floor plans and buildings

By | Data, Data sets, Educational, Feature, General Interest, HPC, News, Research, Systems and Services

There is a lot to consider when designing places where humans live and work. How will the space be used? Who’s using the space? What are the budget considerations? It is painstaking and time-consuming to develop all of those details into something usable. 

What if Generative AI (GenAI) could help? We already know that it can be used to create text, music, and images. Did you know that it can also create building designs and floor plans? 

Dr. Matias del Campo, associate professor of architecture in the Taubman College of Architecture and Urban Planning, has been working to make architectural generative models more robust. He aims to expand on the patterns, structures, and features in the available input data to create architectural works. Himself a registered architect, designer, and educator, del Campo conducts research on advanced design methods in architecture using artificial intelligence techniques.

He leverages something called neural networks for two projects: 

  • Common House: A project that focuses on floor plan analysis and generation.
  • Model Mine: A large-scale, 3D model housing database for architecture design using Graph Convolutional Neural Networks and 3D Generative Adversarial Networks.

The main obstacle to creating more true-to-life plans is the lack of databases tailored for these architecture applications. The Common House project aims to create a large-scale dataset of plans with semantic information. Specifically, the data creation pipeline consists of annotating the different components of a floor plan, e.g., Dining Room, Kitchen, Bedroom.
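To make the idea concrete, here is a minimal sketch of what one semantically annotated floor-plan record could look like. The schema, labels, and coordinates are hypothetical, for illustration only; they are not the Common House project's actual format.

```python
# A hypothetical, simplified record format for a semantically annotated floor
# plan; field names and labels are illustrative, not the project's real schema.
from dataclasses import dataclass

@dataclass
class RoomAnnotation:
    label: str        # semantic class, e.g., "Kitchen"
    polygon: list     # room outline as (x, y) points in plan coordinates

plan = [
    RoomAnnotation("Dining Room", [(0.0, 0.0), (4.5, 0.0), (4.5, 3.0), (0.0, 3.0)]),
    RoomAnnotation("Kitchen", [(4.5, 0.0), (7.0, 0.0), (7.0, 3.0), (4.5, 3.0)]),
    RoomAnnotation("Bedroom", [(0.0, 3.0), (4.0, 3.0), (4.0, 6.5), (0.0, 6.5)]),
]

# A generative model can then learn the joint layout of (geometry, label)
# pairs across many such plans.
for room in plan:
    print(room.label, room.polygon)
```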

 

Image: four quadrants showing nine generated 3D models each of chairs, laptops, benches, and airplanes, from Model Mine, the large-scale 3D model housing database for architecture design using Graph Convolutional Neural Networks and 3D Generative Adversarial Networks.
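In broad strokes, the graph convolutional networks named above work by passing information between connected nodes of a graph, such as the vertices of a 3D architectural model. Below is a minimal NumPy sketch of a single graph convolution layer; it illustrates the general technique, not the project's actual code.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: aggregate neighbor features, then transform.

    A: (n, n) adjacency matrix of the graph (e.g., mesh connectivity)
    X: (n, d_in) per-node features (e.g., vertex coordinates)
    W: (d_in, d_out) learnable weights
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # normalize by node degree
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)  # propagate + ReLU

# Toy graph: 4 nodes in a chain, with 3-D coordinates as input features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.random.rand(4, 3)
W = np.random.rand(3, 8)
print(gcn_layer(A, X, W).shape)  # (4, 8): 8 learned features per node
```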

What exactly are neural networks? The name takes inspiration from the human brain and the way biological neurons signal to one another. In the GenAI world, neural networks are a subset of machine learning and are at the heart of deep learning algorithms.
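For the curious, here is a toy neural network in NumPy that learns the XOR function. Production architectural models are vastly larger, but the principle, layers of trainable "neurons" adjusted by backpropagation, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# One hidden layer of 8 "neurons"; weights and biases are the learnable parts
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network prediction
    # Backpropagation: push the error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```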

Dr. del Campo’s research uses GenAI at every step of the design process, including 2D models for elements like floor plans and exteriors, and 3D models for the shapes and volumes of rooms and buildings. The analysis informs design decisions. 

DEI considerations

del Campo notes that there are some DEI implications for the tools he’s developing. “One of the observations that brought us to develop the ‘Common House’ (Plangenerator) project is that the existing apartment and house plan datasets are heavily biased towards European and U.S. housing. They do not contain plans from other regions of the world; thus, most cultures are underrepresented.” 

To counterbalance that, del Campo and his team made a global data collection effort, collecting plans and having them labeled by local architects and architecture students. “This not only ensured a more diverse dataset but also increased the quality of the semantic information in the dataset.”

How technology supports del Campo’s work

A number of services from Information and Technology Services (ITS) are used in these projects, including: Google at U-M collaboration tools, GenAI, Amazon Web Services at U-M (AWS), and GitHub at U-M.

Also from ITS, the Advanced Research Computing (ARC) team provides support to del Campo’s work. 

“We requested allocations from the U-M Research Computing Package for high-performance computing (HPC) services in order to train two models. One focuses on the ‘Common House’ plan generator, and the other focuses on the ‘Model Mine’ dataset to create 3D models,” said del Campo. 

Additionally, they used HPC allocations from the UMRCP in the creation of a large-scale artwork called MOSAIK, which consists of over 20,000 AI-generated images organized in a color gradient. 


“We used HPC to run the algorithm that organized the images. Due to the necessary high resolution of the image, this was only possible using HPC.”

“Dr. del Campo’s work is really novel, and it is different from the type of research that is usually processed on Great Lakes. I am impressed by the creative ways Dr. del Campo is applying ITS resources in a way that we did not think was possible,” said Brock Palen, director of ITS Advanced Research Computing. 

Related: Learn about The Architecture + Artificial Intelligence Laboratory (AR2IL)

Technology supports researchers’ quest to understand parental discipline behaviors

By | Feature, HPC, News, Research, Systems and Services, Uncategorized

Image by Rajesh Balouria from Pixabay

How do different types of parental discipline behaviors affect children’s development in low- and middle-income countries (LMICs)? A group of researchers set out to understand that question. They used a large data set from UNICEF of several hundred thousand families. The data came from the fourth (2009–2013) and fifth (2012–2017) rounds of the UNICEF Multiple Indicator Cluster Surveys. 

“The majority of parenting research is conducted in higher income and Westernized settings. We need more research that shows what types of parenting behaviors are most effective at promoting children’s development in lower resourced settings outside of the United States. I wanted to conduct an analysis that provided helpful direction for families and policymakers in LMICs regarding what parents can do to raise healthy, happy children,” said Kaitlin Paxton Ward, People Analytics Researcher at Google and Research Affiliate at the University of Michigan.

Dr. Paxton Ward is the lead author of the recently released paper, “Associations between 11 parental discipline behaviors and child outcomes across 60 countries.” Her co-authors are Andrew Grogan-Kaylor, Julie Ma, Garrett T. Pace, and Shawna Lee.

Together, they tested associations between 11 parental discipline behaviors and outcomes (aggression, distraction, and prosocial peer relations) of children under five years in 60 LMICs:

  • Verbal reasoning (i.e., explaining why the misbehavior was wrong)
  • Shouting
  • Name calling
  • Shaking
  • Spanking
  • Hitting/slapping the body
  • Hitting with an object 
  • Beating as hard as one could
  • Removing privileges 
  • Explaining
  • Giving the child something else to do

Results

Verbal reasoning and shouting were the most common parental discipline behaviors towards young children. Psychological and physical aggression were associated with higher child aggression and distraction. Verbal reasoning was associated with lower odds of aggression, and higher odds of prosocial peer relations. Taking away privileges was associated with higher odds of distraction, and lower odds of prosocial peer relations. Giving the child something else to do was associated with higher odds of distraction. The results indicated that there was some country-level variation in the associations between parenting behaviors and child socioemotional outcomes, but also that no form of psychological or physical aggression benefitted children in any country.

Conclusion 

Parental use of psychological and physical aggression was disadvantageous for children’s socioemotional development across countries. Only verbal reasoning was associated with positive child socioemotional development. The authors suggest that greater emphasis should be placed on reducing parental use of psychological and physical aggression across cultural contexts, and on increasing parental use of verbal reasoning.

The technology used to analyze the data

The researchers relied on a complex Bayesian multilevel model. This type of analysis incorporates knowledge from previous studies into the current analysis, and also gives researchers a way to examine variation across countries in more detail. To accomplish this task, the team turned to ITS Advanced Research Computing (ARC) and the Great Lakes High-Performance Computing Cluster. Great Lakes is the largest and fastest HPC service on U-M’s campus. 
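The paper's full model specification is not reproduced here, but for a flavor of the approach, here is a minimal Bayesian multilevel logistic regression in Python with PyMC, with a country-varying effect. The variable names and toy data are illustrative assumptions, not the authors' actual code or data.

```python
import numpy as np
import pymc as pm

# Toy stand-in data (NOT the UNICEF MICS data): a binary child outcome,
# one discipline behavior, and a country index for 60 countries.
rng = np.random.default_rng(1)
n, n_countries = 1000, 60
country = rng.integers(0, n_countries, size=n)
behavior = rng.integers(0, 2, size=n)   # e.g., spanking yes/no
outcome = rng.integers(0, 2, size=n)    # e.g., child aggression yes/no

with pm.Model() as model:
    # Population-level intercept and slope, with informative-style priors
    alpha = pm.Normal("alpha", mu=0, sigma=1)
    beta = pm.Normal("beta", mu=0, sigma=1)
    # Country-level deviations in the slope (the "multilevel" part)
    sigma_c = pm.HalfNormal("sigma_c", sigma=1)
    beta_c = pm.Normal("beta_c", mu=0, sigma=sigma_c, shape=n_countries)
    # Log-odds of the outcome vary by country
    logit_p = alpha + (beta + beta_c[country]) * behavior
    pm.Bernoulli("y", logit_p=logit_p, observed=outcome)
    idata = pm.sample(1000, tune=1000)  # posterior draws via NUTS
```

Fitting such a model to several hundred thousand families, with many behaviors and outcomes, is the kind of job that benefits from an HPC cluster.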

“I know for me as a parent of young children, you want the best outcome. I have known people to grow up with different forms of discipline and what the negative or positive influence of those are,” said Brock Palen, ARC director. 

The researchers also created a visual interpretation of their paper for public outreach using a web app called ArcGIS StoryMaps. This software helps researchers tell the story of their work. With no coding required, StoryMaps combine images, text, audio, video, and interactive maps in a captivating web experience. StoryMaps can be shared with groups of users, with an organization, or with the world. 

All students, faculty, and staff have access to ArcGIS StoryMaps. Since 2014, U-M folks have authored over 7,500 StoryMaps, and the number produced annually continues to increase year-over-year. Explore examples of how people around the world are using this technology in the StoryMaps Gallery.

“This intuitive software empowers the U-M community to author engaging, multimedia, place-based narratives, without involving IT staff,” said Peter Knoop, research consultant with LSA Technology Services. 

Correspondence to Dr. Kaitlin Paxton Ward, kpward@umich.edu.

Related article

2024 ARC Winter Maintenance

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

HPC

Like last year, we will have a rolling update which, outside of a few brief interruptions, should keep the clusters in production. Here is the schedule: 

December 6, 11 p.m.:

  • Update Slurm Controllers: Expect a brief 1-minute interruption when querying Slurm. All jobs will continue to run.
  • Update Open OnDemand Servers: Expect a few seconds of interruption if you are using Open OnDemand.
  • Login Servers Update: We will begin updating our login servers. This update is not expected to impact any users.

December 7:

  • Compute Rolling Updates: We will start rolling updates across all clusters a few nodes at a time, so there should be minimal impact on access to resources.

December 19, 10:30 a.m.:

  • Update Globus transfer (xfer) nodes: These nodes are deployed in pairs for each cluster. We will take down one node of each pair at a time, so all Globus services will remain available. If you are using scp/sftp, your transfers may be interrupted, so please schedule these transfers accordingly or use Globus. Total maintenance time should be approximately one hour.

January 3, 8 a.m.:

  • Reboot Slurm Controller Nodes: This will cause an approximately 10-minute Slurm outage. All running jobs will continue to run.
  • Armis2 Open OnDemand Node: We will reload and reboot the Armis2 Open OnDemand node. This will take approximately 1 hour.
  • Great Lakes and Lighthouse Open OnDemand Nodes: These nodes will be down approximately 10 to 15 minutes.
  • Globus Transfer (xfer) Nodes: These nodes will be rebooted. This will take approximately 15 minutes.
  • Sigbio Login Reload/Reboot: This will take approximately 1 hour.
  • Some Armis2 Faculty Owned Equipment (FOE) nodes will require physical and configuration updates. Expected downtime is 4 hours.

HPC Maintenance Notes:

  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • New viz partition: there will be a new partition called viz with 16 new GPUs, which supports exactly one GPU per job.
  • The --cpus-per-gpu Slurm bug has been fixed.

HPC Maintenance Details:

| Component | New version | Old version |
| --- | --- | --- |
| OS | Red Hat 8.6 EUS | Red Hat 8.6 EUS |
| Kernel | **4.18.0-372.75.1.el8_6.x86_64** | 4.18.0-372.51.1.el8_6.x86_64 |
| glibc | 2.28-189.6 | 2.28-189.6 |
| ucx (OFED provided) | 1.15.0-1.59056 | 1.15.0-1.59056 |
| gcc | 8.5.0-10.1.el8 | 8.5.0-10.1.el8 |
| Mlnx-ofa_kernel-modules | OFED 5.9.0.5.5.1 (kver.4.18.0_372.51.1.el8_6) | OFED 5.9.0.5.5.1 (kver.4.18.0_372.51.1.el8_6) |
| Slurm | **23.02.6**, compiled with PMIx (/opt/pmix/3.2.5 and /opt/pmix/4.2.6), hwloc 2.2.0-3 (OS provided), ucx-1.15.0-1.59056 (OFED provided), slurm-libpmi, and slurm-contribs | 23.02.5, compiled with the same components |
| PMIx LD config | /opt/pmix/3.2.5/lib | /opt/pmix/3.2.5/lib |
| PMIx versions available in /opt | 3.2.5, 4.2.6 | 3.2.5, 4.2.6 |
| Singularity CE (Sylabs.io) | 3.10.4, 3.11.1 | 3.10.4, 3.11.1 |
| NVIDIA driver | **545.23.06** | 530.30.02 |
| Open OnDemand | 3.0.3 | 3.0.3 |

New versions are shown in bold.

 

Storage

There is no scheduled downtime for Turbo, Locker, or Data Den.

 


Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 

Status updates and additional information

  • Status updates will be available on the ARC Twitter feed and the ITS service status page throughout the course of the maintenance.
  • ARC will send an email to all HPC users when the maintenance has been completed. 

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

Gordon Bell Prize winning team also leverages ITS services

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

A U-M College of Engineering team led by Vikram Gavini was recently awarded the prestigious ACM Gordon Bell Prize. The honor was presented in recognition of their outstanding achievement for developing and demonstrating an approach that brought near-quantum mechanical accuracy for large systems consisting of tens of thousands of atoms into the range of today’s supercomputers. 

The ACM Gordon Bell Prize is awarded each year to recognize outstanding achievement in high-performance computing.

Dr. Gavini and team carried out their largest calculation on the fastest known computer in the world, the U.S. Department of Energy’s Frontier, and they sustained 660 petaflops. 

“The ACM Gordon Bell Prize is the most prestigious prize available for high-performance computing,” said Brock Palen, Advanced Research Computing (ARC) director, a division of Information and Technology Services (ITS). 

“This is so exciting because it is more than 660 times faster than our entire Great Lakes cluster at perfect efficiency. Their calculation was 10 times the improvement of any prior density-functional theory (DFT) calculation.” 

DFT is a method used in physics and chemistry to investigate the electronic structure of many-body systems. These systems can include atoms, molecules, or solids.  

“Dr. Gavini is a major user of ARC’s Great Lakes HPC Cluster, and we are so proud of this astonishing achievement.” 

Gavini said that any development needs to be carefully tested. “We use ARC’s Great Lakes to test smaller scale systems before moving to larger-scale, production-ready calculations. This testing is critical, and we need the accessibility and speed of Great Lakes to run calibrations and debug our implementations.”

“We use Great Lakes to actively test our implementations.”

Gavini and his team use Turbo Research Storage for quick-access storage and Data Den Research Archive for longer-term storage. “We generate a lot of data, and storage is important to our work.”

The U-M Research Computing Package was also helpful in defraying some of their HPC and storage costs. 

“Thank you to ARC for providing their continual assistance for group members who have been in the trenches. This has been a decade-long effort, and ITS/ARC was crucial along the journey,” said Gavini. 

Dr. Gavini is a professor of Mechanical Engineering and professor of Materials Science and Engineering, College of Engineering.

Flux transfer servers will be decommissioned on Jan. 9; use new endpoints

By | Feature, General Interest, News

To provide faster transfers of data from ARC services, ARC will be decommissioning the Flux-Xfer servers on January 9, 2023. You will need to update how you migrate your data. 

For everyone who uses ARC storage services, especially Data Den users: This message is VERY important! This change includes the use of scp from Flux-Xfer, as well as the Globus endpoint umich#flux. Any shared endpoints that you have created on umich#flux will be automatically migrated to a new Globus collection on January 9. Those who use Data Den should take special interest in Item 1 listed below. 

Action item – Use the new endpoints

  1. If you currently use Globus and umich#flux to access your Locker or Data Den volume, you should use the Globus Collection ‘UMich ARC Locker Non-Sensitive Volume Collection’ for Locker, and the Globus Collection ‘UMich ARC Data Den Non-Sensitive Volume Collection’ for Data Den. 
  2. If you currently use Globus and umich#flux to access your Turbo volumes, you should use the Globus Collection ‘UMich ARC Turbo Non-Sensitive Volume Collection’.
  3. If you currently use Globus and umich#flux to access other storage volumes, you should use the Globus Collection ‘umich#greatlakes’.
  4. If you currently use scp on flux-xfer to copy data to/from Turbo, you should use ‘globus-xfer1.arc-ts.umich.edu’. (For scripted transfers, see the sketch after this list.)
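For those who script their transfers, here is a sketch using the Globus Python SDK. The client ID, collection UUIDs, and paths are placeholders; substitute your own values from the Globus web app after registering an app at developers.globus.org.

```python
import globus_sdk
from globus_sdk.scopes import TransferScopes

# Placeholders: register your own app and look up the collection UUIDs
# (e.g., for 'UMich ARC Data Den Non-Sensitive Volume Collection').
CLIENT_ID = "YOUR_NATIVE_APP_CLIENT_ID"
SRC = "source-collection-uuid"
DST = "destination-collection-uuid"

# Interactive login to obtain a transfer token
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow(requested_scopes=TransferScopes.all)
print("Please log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Enter the code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Submit a recursive directory transfer between the two collections
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)
tdata = globus_sdk.TransferData(tc, SRC, DST, label="ARC endpoint migration")
tdata.add_item("/path/on/source/", "/path/on/destination/", recursive=True)
print("Submitted task:", tc.submit_transfer(tdata)["task_id"])
```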

User guide 

How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu

2023 Winter Maintenance & Globus File Transfer upgrade 

By | Feature, General Interest, Great Lakes, HPC, News, Systems and Services

Winter maintenance is coming up! See the details below. Reach out to arc-support@umich.edu with questions or if you need help. 

These services will be unavailable: 

  • Great Lakes – We will be updating Great Lakes on a rolling basis throughout December and the beginning of January. If the rolling update is successful, there should be no downtime or impact, with the following exceptions: 
    • Single-precision GPUs (SPGPU) will be down Jan. 4-5 for networking maintenance. Those nodes will return to production when maintenance has been completed and the nodes have been reloaded.
    • Customers will be notified via email of any changes to Great Lakes maintenance that will require downtime.
    • If the rolling update is unsuccessful, Great Lakes maintenance will begin Jan. 4-5, starting at 8 a.m. In either case, we will email everyone with the updated maintenance status.
  • Globus on the storage transfer nodes: Jan. 17-18.

Maintenance notes:

  • No downtime for ARC storage systems maintenance (Turbo, Locker, and Data Den).
  • Open OnDemand (OOD) users will need to re-login. Any existing jobs will continue to run and can be reconnected in the OOD portal.
  • Login servers will be updated, and the maintenance should not have any effect on most users. Those who are affected will be contacted directly by ARC. 
  • Copy any data and files that may be needed during maintenance to your local drive using Globus File Transfer before maintenance begins. 
  • Slurm email will be improved, providing more detailed information about completed jobs.

Countdown to maintenance 

For Great Lakes HPC jobs, use the command “maxwalltime” to discover the amount of time remaining until maintenance begins. 

Jobs that request more walltime than remains until maintenance will automatically be queued and start once maintenance is complete. If the plan for Great Lakes maintenance is successful, any queued jobs will be able to run as usual (except for the SPGPU nodes as discussed above). Customers will be notified via email if downtime is required for Great Lakes.
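For a rough sense of what such a countdown involves, the following Python sketch computes the wall time remaining before a maintenance window. The hardcoded start time is an assumption for illustration; the maxwalltime command on Great Lakes remains the authoritative source.

```python
from datetime import datetime

# Assumed maintenance start, for illustration only (see the schedule above)
MAINTENANCE_START = datetime(2023, 1, 4, 8, 0)

def walltime_remaining(now=None):
    """Return the time remaining before maintenance as H:MM:SS (0 if started)."""
    now = now or datetime.now()
    seconds = max(int((MAINTENANCE_START - now).total_seconds()), 0)
    hours, rest = divmod(seconds, 3600)
    minutes, secs = divmod(rest, 60)
    return f"{hours}:{minutes:02d}:{secs:02d}"

# A job submitted with a longer --time request than this value will wait
# in the queue until maintenance is complete.
print(walltime_remaining())
```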


How can we help you?

For assistance or questions, please contact ARC at arc-support@umich.edu.

XSEDE is now ACCESS 

By | Feature, HPC, News

The national HPC resource known as XSEDE has now fully transitioned to ACCESS, the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support. 

ARC staff members Todd Raeker and Shelly Johnson are here to help you understand what is happening and how you can take advantage of the national resources and allocations.  

ACCESS has streamlined the allocation application process. Many projects that formerly had to submit an annual allocation proposal under XSEDE may now fit within one of the new ACCESS opportunities for small or medium-sized projects. You can now make one request with a much shorter allocation proposal. You can also make these requests at any time, rather than waiting for quarterly deadlines. Allocations are awarded for the duration of a supporting grant or for 12 months, with possible extensions up to five years.

There are new tiers as well. The new Discover tier offers 1.5 million available credits and is the closest match to an annual allocation under XSEDE. Researchers can submit the one-page application anytime. Check the overview of the ACCESS opportunities to see where your project fits. If your research activities are considered a large-scale project, you will need to submit a proposal for a Maximize ACCESS award. Requests for Maximize ACCESS are accepted every six months. More details are available on the Maximize ACCESS web page.

Are you a grad student researcher? You now qualify to apply as the principal investigator (PI) for an ACCESS allocation in the Explore ACCESS tier to obtain resources to help complete your dissertation. Graduate students will need to include a letter from their advisor supporting the request, and advisors must be added as co-PIs.

If you are interested in assisting the national research and educational community by reviewing ACCESS allocation requests, you can sign up to join the ACCESS allocations review committee.

Need help? Contact arc-support@umich.edu, or visit the ACCESS web page on the ARC website. You can also keep up with ongoing updates on the ACCESS webpage, access-ci.org.

Attend a listening session with ARC Director Brock Palen

By | Feature, HPC, News

You’re invited! ARC Director Brock Palen would like to hear from researchers. 

Are ARC services meeting their needs? What is not working well for them? Is something technical impeding their ability to do their research? Do they like Turbo Research Storage, the HPC web interface Open OnDemand, or the no-cost allocations offered by the U-M Research Computing Package?

This is an open, virtual, drop-in office hour. All are welcome. 

Three sessions are available. 

Researchers can also email Brock Palen at brockp@umich.edu or reach out to ARC at arc-support@umich.edu.

Preserving Michigan’s musical history and culture

By | Feature, News, Research

From Kentucky bluegrass to Louisiana zydeco to German hurdy-gurdy to East European klezmer to Indian Manipuri dancing to Native American pow wows, and much more, musical traditions from around the country and around the world have found their way to Michigan. Since 2014, the Michigan Musical Heritage Project has been documenting Michigan’s folk music history.

Lester Monts specializes in ethnomusicology and has been documenting Michigan’s folk cultural heritage since 2014. (Image courtesy Lester Monts)

The project is led by ethnomusicologist Dr. Lester P. Monts, Arthur F. Thurnau Professor Emeritus of Music, who began his musical journey as an orchestral trumpet player. He earned bachelor’s and master’s degrees in trumpet performance and taught trumpet at the college level before completing a doctorate in ethnomusicology and embarking on a research career. In the mid-1970s, Monts began to focus his research on music and culture in Liberia and Sierra Leone in West Africa. The fourteen-year Liberian civil war thwarted his fieldwork in that region.

Noting that there had been no systematic effort to collect and archive Michigan’s rich folk music heritage, Monts launched the Michigan Musical Heritage Project. He has embraced the study of music through the cultural and social contexts of the people who make it. He notes that “music brings people together; it has the power to create community, and we witnessed this occurring throughout our many journeys around the state.”

Using his charm, passion, likeability, and keen musical knowledge to cultivate trust with his interviewees, Monts captured more than 400 hours of audio and video data over the years, amassing a total of 80 terabytes of data. He believes this to be the most extensive collection of Michigan folk music in the state and that U-M is the right place to house this collection.

The Michigan Musical Heritage Project crew wraps up at the end of a recording session. (Image courtesy Lester Monts)

With a videography crew consisting primarily of former U-M students, Monts traveled all around the state to record performances at folk music festivals and cultural gatherings, such as the Celtic Festival (Saline), Irish Folk Music Festival (Muskegon), Hispanic Heritage Festival (Hart), Hiawatha Traditional Music Festival (Marquette), Port Sanilac Blues Festival (Port Sanilac), Africa World Festival (Detroit), Aura Jamboree (Aura), and the Oldtime Fiddlers Convention and Traditional Music Festival (Hillsdale).

He says, “The creative talents of the state’s outstanding musicians must be preserved, not only for my research but for that of others as well. If properly preserved, I’m confident that in the future, the ethnomusicology program and the American Cultures department will find these data provide important insights into Michigan’s diverse musical heritage.”

How technology supports this project 

Monts’ crew includes a strong partnership with Tom Bray, converging technologies consultant and adjunct assistant professor of Art and Design, Penny W. Stamps School of Art and Design. Bray has been instrumental in pairing the right technology for the long-term preservation of this collection, which includes converting older footage to digital media. 

Tom Bray (image courtesy LSA)

Bray has collaborated with Monts to convert older formats, such as VHS, 8mm, and Hi8 video, to digital files. The collection includes both compressed and uncompressed files, which are very large and high-resolution.

All of this wonderful and important audio and video footage needs to be preserved somewhere. But where do you turn when you have 80 terabytes of data? Monts said, “I’ve been desperately searching for a way to archive the video data collected under the auspices of the Michigan Musical Heritage Project.” 

Enter the U-M Research Computing Package (UMRCP) and the team from Advanced Research Computing (ARC), a division of Information and Technology Services. The UMRCP offers researchers across all campuses several resources at no additional cost, including 100 terabytes of long-term storage.

Bray said, “I had to read the UMRCP email announcement twice because I couldn’t believe my eyes. I was so excited that ITS and the university are supporting researchers in this way. We jumped on this opportunity right away.” 

ARC Director Brock Palen is excited about this work, too. “This is super interesting, and not like the usual types of research ARC normally sees, like climate and genomics. We’re happy to help Dr. Monts and Mr. Bray, and anyone who needs it, anytime. The archive is intentionally built for holding large-volume, raw data such as 4k video, and we are proud to be their go-to for this important cultural preservation project.” 

Old media in Dr. Monts’ office: hours and hours of media are being converted to a digital format. (Photo by Stephanie Dascola)

ARC replicates and encrypts the data in two secure locations that are miles apart, so those who use ARC services do not have to worry about the crashes they might experience when using their own equipment. The UMRCP also includes technical expertise from talented ARC staff to further remove barriers so researchers can do what they do best.

Monts and Bray also leverage the university’s network and WiFi services to transfer the files from their studio in the Duderstadt Center to storage. The network is designed to minimize bottlenecks so that data transfers quickly and efficiently. 

Dr. Monts said, “Although the pandemic temporarily disrupted my plans to complete the video documentary, I take solace in knowing that the many hours of data we collected is in a much safer environment than we had. The UMRCP storage resource is truly a boon!”


An old reel-to-reel tape player. (Photo by Stephanie Dascola)

Dr. Monts shows footage from a special women-only dance in Liberia. He earned permission to record this rarely documented group of women. (Photo by Stephanie Dascola)

U-M partners with Cavium on Big Data computing platform

By | Feature, General Interest, Happenings, HPC, News

A new partnership between the University of Michigan and Cavium Inc., a San Jose-based provider of semiconductor products, will create a powerful new Big Data computing cluster available to all U-M researchers.

The $3.5 million ThunderX computing cluster will enable U-M researchers to, for example, process massive amounts of data generated by remote sensors in distributed manufacturing environments, or by test fleets of automated and connected vehicles.

The cluster will run the Hortonworks Data Platform, providing Spark, Hadoop MapReduce, and other tools for large-scale data processing.
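For a flavor of the kind of workload such a platform supports, here is a minimal PySpark job that aggregates sensor readings in parallel across a cluster. The file path and column names are illustrative assumptions, not a real U-M dataset.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-aggregation").getOrCreate()

# Hypothetical CSVs of remote-sensor readings: sensor_id, timestamp, reading
df = spark.read.csv("hdfs:///data/sensors/*.csv", header=True, inferSchema=True)

# Per-sensor summary statistics, computed in parallel across the cluster
summary = df.groupBy("sensor_id").agg(
    F.count("*").alias("n_readings"),
    F.avg("reading").alias("mean_reading"),
    F.max("reading").alias("max_reading"),
)
summary.show()
spark.stop()
```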

“U-M scientists are conducting groundbreaking research in Big Data already, in areas like connected and automated transportation, learning analytics, precision medicine and social science. This partnership with Cavium will accelerate the pace of data-driven research and open up new avenues of inquiry,” said Eric Michielssen, U-M associate vice president for advanced research computing and the Louise Ganiard Johnson Professor of Engineering in the Department of Electrical Engineering and Computer Science.

“I know from experience that U-M researchers are capable of amazing discoveries. Cavium is honored to help break new ground in Big Data research at one of the top universities in the world,” said Cavium founder and CEO Syed Ali, who received a master of science in electrical engineering from U-M in 1981.

Cavium Inc. is a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, wired and wireless networking. The new U-M system will use dual socket servers powered by Cavium’s ThunderX ARMv8-A workload optimized processors.

The ThunderX product family is Cavium’s 64-bit ARMv8-A server processor for next generation Data Center and Cloud applications, and features high performance custom cores, single and dual socket configurations, high memory bandwidth and large memory capacity.

Alec Gallimore, the Robert J. Vlasic Dean of Engineering at U-M, said the Cavium partnership represents a milestone in the development of the College of Engineering and the university.

“It is clear that the ability to rapidly gain insights into vast amounts of data is key to the next wave of engineering and science breakthroughs. Without a doubt, the Cavium platform will allow our faculty and researchers to harness the power of Big Data, both in the classroom and in their research,” said Gallimore, who is also the Richard F. and Eleanor A. Towner Professor, an Arthur F. Thurnau Professor, and a professor both of aerospace engineering and of applied physics.

Along with applications in fields like manufacturing and transportation, the platform will enable researchers in the social, health and information sciences to more easily mine large, structured and unstructured datasets. This will eventually allow, for example, researchers to discover correlations between health outcomes and disease outbreaks with information derived from socioeconomic, geospatial and environmental data streams.

U-M and Cavium chose to run the cluster on Hortonworks Data Platform, which is based on open-source Apache Hadoop. The ThunderX cluster will deliver high-performance computing services for Hadoop analytics and, ultimately, a total of three petabytes of storage space.

“Hortonworks is excited to be a part of forward-leading research at the University of Michigan exploring low-powered, high-performance computing,” said Nadeem Asghar, vice president and global head of technical alliances at Hortonworks. “We see this as a great opportunity to further expand the platform and segment enablement for Hortonworks and the ARM community.”