Applications accepted for Department of Energy Computational Science Graduate Fellowship — Jan. 19 deadline

Applications are now being accepted for the Department of Energy Computational Science Graduate Fellowship (DOE CSGF). This program provides up to four years of financial support for students pursuing doctoral degrees in fields of study that utilize high-performance computing to solve complex problems in science and engineering.

  • How to Apply: application insight and access, frequently asked questions, downloadable recruitment poster
  • DEIXIS Magazine: a look at the people and research impacted by the program (also available in print)

Applications are due by January 19, 2016. For more information and to access the online application, please visit .

U-M to host XSEDE workshop on OpenACC — Dec. 3

Advanced Research Computing – Technology Services (ARC-TS) will host a simulcast of the XSEDE monthly workshop on Dec. 3, on OpenACC.

OpenACC is an accepted standard that uses compiler directives to allow quick development of GPU-capable codes in standard languages with standard compilers. It has been used with great success to accelerate real applications within very short development periods. This workshop assumes knowledge of either C or Fortran programming.
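
As a rough illustration of that directive-based style (not taken from the workshop materials), the following C sketch offloads a simple vector update to a GPU with a single OpenACC directive; the function name, array sizes, and compiler invocation are our assumptions.

    #include <stdlib.h>

    /* Minimal OpenACC sketch: a SAXPY-style loop offloaded to an accelerator.
       Build with an OpenACC-capable compiler, e.g. "pgcc -acc saxpy.c"
       (exact flags vary by compiler). */
    void saxpy(int n, float a, const float *restrict x, float *restrict y)
    {
        /* Ask the compiler to parallelize the loop on the device and to
           copy x onto the device and y to and from it. */
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(n, 3.0f, x, y);   /* every y[i] should now be 5.0f */
        free(x);
        free(y);
        return 0;
    }

One appeal of the directive approach is that a compiler without accelerator support simply ignores the pragma and runs the loop on the CPU, so the same source serves both cases.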

The workshop will be shown in 1180 Duderstadt, from 11 a.m. to 5 p.m., Thursday, Dec. 3. The workshop WILL NOT be broadcast on the internet, so attendance in person is the only way to participate.

For details on the workshop, see the course description. U-M affiliates can register at the XSEDE website. Space is limited to 40 people.

2015 call for proposals for allocations on Blue Waters HPC system — Nov. 16 deadline

The Great Lakes Consortium for Petascale Computation (GLCPC) has been allocated 3.5 million node hours (equivalent to approximately 50 million core hours) annually as part of the Blue Waters project. This allocation provides the GLCPC member institutions with an unprecedented opportunity to advance their programs in computation, data, and visualization intensive research and education. This Call For Proposals (CFP) describes the process for submitting a proposal to the GLCPC Allocations Committee for allocations on the Blue Waters system. Details on the Blue Waters system can be found at

Given the unprecedented scale and innovative architecture of the Blue Waters system, we are seeking proposals that focus on its scale and unique capabilities. Thus, projects that could be completed on one of the other NSF sponsored systems as part of the current XSEDE program are not encouraged for GLCPC Blue Waters allocations.

The GLCPC is seeking innovative proposals that fall into four categories:

  1. Scaling studies: Scaling codes to operate efficiently on large numbers of parallel processors presents a number of challenges. Projects of particular interest therefore include those that optimize and/or scale community codes to very large scales. Examples include scaling of multilevel parallel applications (MPI+OpenMP; see the sketch after this list), accelerators (CUDA, OpenACC or OpenCL), I/O- and data-intensive applications, or novel communication topologies.
  2. Multi-GLCPC-institutional projects addressing focused scientific projects. An example might be a Great Lakes Ecosystems Modeling initiative (Digital Great Lakes).
  3. Proposals for applications well-suited for the Blue Waters system architecture.
  4. Proposals from non-traditional and underserved communities.
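
For readers unfamiliar with the first category, the C sketch below shows the basic shape of a multilevel (MPI+OpenMP) application: each MPI rank runs an OpenMP thread team for on-node parallelism, and the ranks combine their results across the machine. It is a generic illustration under our own assumptions, not code from any Blue Waters project.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Minimal hybrid MPI+OpenMP sketch: threads handle on-node work,
       MPI handles the cross-node reduction. */
    int main(int argc, char **argv)
    {
        int provided, rank, size;
        /* FUNNELED: only the main thread of each rank makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 100000000L;   /* work items per rank (arbitrary) */
        double local = 0.0, total = 0.0;

        /* On-node parallelism: OpenMP threads share this rank's loop. */
        #pragma omp parallel for reduction(+:local)
        for (long i = 0; i < n; i++)
            local += 1.0 / (double)(rank * n + i + 1);

        /* Cross-node parallelism: combine the per-rank partial sums. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks x %d threads per rank, sum = %f\n",
                   size, omp_get_max_threads(), total);

        MPI_Finalize();
        return 0;
    }

Scaling studies of the kind solicited here would examine how codes structured along these lines behave as the rank and thread counts grow toward the full machine.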

The GLCPC Allocations Committee anticipates 3-8 projects/allocations annually; consequently, the smallest project is expected to be approximately 320,000 node hours (~5 million core hours), which is roughly the same as dedicated use of a 4-core, 1,280-node cluster. Applications at this scale will require development effort and will typically proceed through several phases (tuning and development, smaller test runs, large “production runs,” and post-processing), but all will be at scales beyond other available large resources. GLCPC allocation proposals will be accepted through midnight EST on November 16, 2015. The proposal review process is expected to be complete by mid-February 2016. Allocations awarded through the GLCPC will be available for use beginning April 1, 2016 and will expire one year from the time of award.

To make your allocation request, please submit a written proposal (see format below), not to exceed 5 pages (in PDF format) to

For more information, visit

U-M presence at SC15 conference features demonstrations, talks, tutorial — visit us at booth 2103

University of Michigan researchers and staff are planning demonstrations, talks and a “Parallel Computing 101” tutorial as part of the University’s presence at the Supercomputing 15 conference Nov. 15-20 at the Austin Convention Center in Austin, Texas. Below is a summary of U-M activities. (Times are local to Austin.)

  • U-M’s booth is #2103 (map), and will be open starting the evening of Monday, Nov. 16 through the afternoon of Thursday, Nov. 19 (see exhibit hours). The U-M booth will feature three demonstrations: SDN Optimized High-Performance Data Transfer Systems for Exascale Science, LHCONE Point2point Service with Data Transfer Nodes, and Lyuba the Mammoth: Collaborative Exploration of Volumetric Data. See descriptions below for details. As in previous years, U-M and Michigan State University are sharing a booth.
    The following talks are also scheduled for the U-M booth:

    • Azher Mughal, Caltech: “Programming OpenFlow Flows for Fun (and Scientific Profit),” 11:30 a.m., Wednesday, Nov. 18
    • Shawn McKee, University of Michigan, “Simple Infrastructure to Exploit 100G Wide-Area Networks for Data Intensive Science,” 1:30 p.m., Tuesday, Nov. 17; and “OSiRIS: Open Storage Research InfraStructure,” 11:30 a.m., Thursday, Nov. 19.
    • Kaushik De, University of Texas at Arlington: “PANDA: Update on the ATLAS Global Workflow System and High Performance Networks,” 1 p.m., Wednesday, Nov. 18
  • U-M professors Quentin Stout (Electrical Engineering and Computer Science) and Christiane Jablonowski (Climate and Space Sciences and Engineering) will teach a tutorial titled “Parallel Computing 101” on Sunday, Nov. 15. Stout has attended every Supercomputing conference since the series began in 1988, and has taught this tutorial 18 times.
  • Matt Britt, the HPC Operations Programming Manager for U-M’s Advanced Research Computing – Technology Services, will speak at Adaptive Computing’s booth (#833) at 2:30 p.m., Wednesday, Nov. 18, discussing the tools ARC-TS is using to enable new types of utilization (Nitro) and to gain deeper insight into its HPC resources (the ELK stack, Graphite, and Grafana).
  • Sharon Broude Geva, U-M Director of Advanced Research Computing, is leading a Birds of a Feather session titled “Strategies for Academic HPC Centers,” from 5:30 – 7 p.m., Tuesday, Nov. 17. She will also be a panelist in the Women in HPC workshop, in a session titled “Improving diversity at Supercomputing,” from 11:20 a.m. to noon, Friday, Nov. 20.
  • Amy Liebowitz, a network planning analyst at U-M Information and Technology Services, is one of five women chosen to receive funding under the “Women in IT Networking at SC (WINS)” program. The program is a collaboration among the University Corporation for Atmospheric Research (UCAR), the Department of Energy’s Energy Sciences Network (ESnet), and the Keystone Initiative for Network Based Education and Research (KINBER), and pays a week of expenses for women to attend the conference and work on the SCinet networking system. This is Liebowitz’s first time at the conference; she said she planned to attend for a week, but the funding allowed her to stay for two weeks.


Title: SDN Optimized High-Performance Data Transfer Systems for Exascale Science
Booth: California Institute of Technology / CACR #1248, University of Michigan #2103, Stanford University #2009, OCC #749, Vanderbilt #271, Dell #1009, and Echostreams #582
Description: The next generation of major science programs faces unprecedented challenges in harnessing the wealth of knowledge hidden in Exabytes of globally distributed scientific data. Researchers from Caltech, FIU, Stanford, Univ of Michigan, Vanderbilt, UCSD, UNESP, and other partner teams have come together to meet these challenges by leveraging recent major advances in software-defined and Terabit/sec networks, workflow optimization methodologies, and state-of-the-art long-distance data transfer methods. This demonstration focuses on network path-building and flow optimizations using SDN and intelligent traffic engineering techniques, built on top of a 100G OpenFlow ring at the show site and connected to remote sites including the Pacific Research Platform (PRP). Remote sites will be interconnected using dedicated WAN paths provisioned using NSI dynamic circuits. The demonstrations include (1) the use of Open vSwitch (OVS) to extend the wide-area dynamic circuits storage to storage, with stable shaped flows at any level up to wire speed, (2) a pair of data transfer nodes (DTNs) designed for an aggregate 400Gbps flow through 100GE switches from Dell, Inventec and Mellanox, and (3) the use of Named Data Networking (NDN) to distribute and cache large high energy physics and climate science datasets.

Title: LHCONE Point2point Service with Data Transfer Nodes
Booth: California Institute of Technology #1248, Univ of Michigan #2103, Vanderbilt University #271
Description: LHCONE (LHC Open Network Environment) is a globally distributed, specialized environment in which the large volumes of LHC data are transferred among different international LHC Tier (data center and analysis) sites. To date, these transfers have been conducted over the LHC Optical Private Network (LHCOPN, dedicated high-capacity circuits between LHC Tier1 data centers) and via LHCONE, currently based on L2+VRF services. The LHCONE Point2Point Service aims to provide future-proof ways of networking for the LHC – e.g., by providing support for OpenFlow. This demonstration will show how this goal can be accomplished, using dynamic path provisioning and at least one Data Transfer Node (DTN) in the US connected to at least one DTN in Europe, transferring LHC data between tiers. It will show a network services model that matches the requirements of LHC high energy physics research with emerging capabilities for programmable networking, and will integrate programmable networking techniques, including the Network Service Interface (NSI), a protocol defined within the Open Grid Forum standards organization. Multiple LHC sites will transfer LHC data through DTNs. DTNs are edge nodes designed specifically to optimize high-performance data transport; they have no other function. The DTNs will be connected by layer 2 circuits created through dynamic requests via NSI.

Title: Lyuba the Mammoth: Collaborative Exploration of Volumetric Data
Booth: University of Michigan #2103
Description: This demonstration depicts Lyuba, a mammoth calf unearthed in 2007 after 50,000 years in Siberia. It is thought that Lyuba died of asphyxiation after falling into the mud hole that ultimately preserved her. Her specimen is considered the best-preserved mammoth mummy in the world, and is currently on display in the Shemanovsky Museum and Exhibition Center in Salekhard, Russia.
University of Michigan Professor Daniel Fisher and his colleagues at the U-M Museum of Paleontology arranged to have the mummy scanned using X-ray computed tomography in Ford Motor Company’s Non-destructive Evaluation Laboratory. A color map was then applied to the density data by Research Museum Collection Manager Adam Rountrey to reveal the internal anatomical structures. This data was then provided to the UM3D Lab, a service of the Digital Media Commons and U-M Library, as an image stack for interactive volumetric visualization. The stack comprises 1,132 JPEG image slices with 762×700 pixel resolution per slice. Each of the resulting voxels is 1 mm cubed.
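
As a rough sketch of how such a slice stack becomes a volume (illustrative only, not the 3D Lab’s actual code), the C fragment below lays the decoded slices back to back in one array and indexes it by (x, y, z); the single 8-bit density value per voxel is our simplifying assumption, and JPEG decoding is omitted.

    #include <stdlib.h>

    /* Illustrative voxel indexing for a stack of decoded image slices.
       Dimensions follow the Lyuba dataset: 762 x 700 pixels per slice,
       1,132 slices, 1 mm per voxel. The 8-bit samples are an assumption
       made for this sketch. */
    enum { NX = 762, NY = 700, NZ = 1132 };

    static unsigned char voxel(const unsigned char *volume, int x, int y, int z)
    {
        /* Slice z begins at offset z*NX*NY; each row of a slice is NX wide. */
        return volume[(size_t)z * NX * NY + (size_t)y * NX + (size_t)x];
    }

    int main(void)
    {
        unsigned char *volume = calloc((size_t)NX * NY * NZ, 1);
        if (!volume)
            return 1;
        /* A real viewer would fill 'volume' from the decoded slices and then
           sample it along arbitrary cutting planes; because the voxels are
           1 mm cubes, the index (x, y, z) is also a position in millimeters. */
        unsigned char center = voxel(volume, NX / 2, NY / 2, NZ / 2);
        (void)center;
        free(volume);
        return 0;
    }
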
When this data is brought into the 3D Lab’s Jugular software, it allows the user to interactively slice through the volume by manipulating a series of hexagonal planes. For this demo, users at SC15 in Austin, Texas, can occupy the same virtual space as another user situated in the immersive virtual reality MIDEN in Ann Arbor, Michigan. Via a Kinect sensor in Austin, a 3D mesh of the user will be projected into the MIDEN alongside Lyuba, allowing for simultaneous interaction with and exploration of the data.
NOTE: The demonstration can also be seen at the 3D Lab in the Duderstadt Center on U-M’s North Campus in Ann Arbor (Ann Arbor times listed in parentheses).
Monday, Nov. 16, 8 p.m. (9 p.m.)
Tuesday, Nov. 17, 10:30 a.m., 12:30 p.m., 2:30 p.m., 4:30 p.m. (11:30 a.m., 1:30 p.m., 3:30 p.m., 5:30 p.m.)
Wednesday, Nov. 18, 10:30 a.m., 11:30 a.m., 2:30 p.m., 4:30 p.m. (11:30 a.m., 12:30 p.m., 3:30 p.m., 5:30 p.m.)
Thursday, Nov. 19, 10:30 a.m., 12:30 p.m., 2:30 p.m., (11:30 a.m., 1:30 p.m., 3:30 p.m.)

Ann Arbor Data Dive — Nov. 14

The annual Ann Arbor Data Dive is scheduled for Nov. 14 at North Quad, 105 S. State St.

According to its website, the Data Dive is about empowering people to use data; connecting experts and professionals with novices and students; and providing nonprofits and social service organizations the tools to work with data.

The daylong event is open and free to students and community members. Stories and datasets for local nonprofits are provided, and the attendees take on the rest. Data Divers are free to collaborate and work on the data in whatever way they choose. In the end, everyone gets hands-on experience with real data, and the nonprofits receive all of the ideas and results.


8:30am-9:00am Light breakfast and check-in
9:00am-10:00am Introductions to Data Dive and our 2015 clients
10:00am-1:00pm Work time
1:00pm-2:00pm Lunch is available
2:00pm-5:00pm Work time
5:00pm-6:00pm Wrap up data work/create presentation
6:00pm-6:30pm Snacks are available
6:30pm-7:30pm Presentations
7:30pm-9:00pm Post-Dive informal outings

For more information and to register, visit

Earth and Environmental Sciences lecture: Yihe Huang, Stanford University, “Characterizing Earthquakes and Fault Mechanics at Various Scales” — Nov. 10

Rapid improvement in seismic data acquisition and computational modeling capability provides an unprecedented opportunity to gain new insights into the physics of earthquakes. More detailed observations and more realistic models show that earthquakes interact with fault mechanics at a wide range of scales. In this talk, I will explore the features of both small and large earthquakes and investigate their physical processes. By studying M-1 to M4 earthquakes associated with fluid injection in the oil and gas industry, we find that fluid migration leaves significant signatures in the magnitude-frequency distributions of induced earthquakes. For M4 to M8 earthquakes, ruptures in damaged fault zones can potentially explain the discrepancy between observed and theoretically predicted rupture speeds. For M8+ earthquakes, the geometry of the subduction wedge affects whether earthquake rupture can reach the trench and cause unusually large slip at shallow depths. In each case, the integration of seismic observations and numerical models enables us to identify the fundamental processes, make physical predictions, and reduce the non-uniqueness of the interpretation. Understanding how earthquakes interact with fault mechanics will be an important step forward in developing physics-based tools for assessing earthquake hazard and reducing earthquake risk.

MICDE Seminar: Jeffrey Hittinger, Lawrence Livermore National Lab, “Why Exascale Computing Will Be Slightly Less Disruptive Than the Comet that Killed the Dinosaurs” — Nov. 9

Jeffrey Hittinger, Computational Scientist and Group Leader in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory, will speak on campus Nov. 9 as part of the MICDE Seminar Series. His talk is titled “Why Exascale Computing Will Be Slightly Less Disruptive Than the Comet that Killed the Dinosaurs.”

Workshop: Exploring Your Data with R — Nov. 6

The Center for Statistical Consultation and Research (CSCAR) is offering a workshop on R.

Time/Date: 1 – 3 p.m., Friday, Nov. 6

Location: MLB Room 2001A

Size: Limited to 24

Registration: Call 734-764-7828

Description: R is an extremely powerful tool for data modeling, visualization, and general programming. In many practical applications of statistics, the vast majority of time is spent preparing the data for eventual analysis. However, this is also where many practitioners who use R have relatively little training. In recent years, a variety of packages have become available to make data wrangling, summarizing, generation, and other common operations more straightforward and easier to read for future use (e.g., via piping and clearer syntax). In addition, some newer visualization packages work with these approaches, allowing one to go quite seamlessly from raw data to interactive graphics. This workshop will introduce participants to a handful of tools that can make their data exploration and analytical workflow more streamlined and reproducible.

Prerequisites: Some basic R experience is assumed.

Instructor: Michael Clark, Statistical Consultant, CSCAR. (Email with any questions.)

Fee: None.

Flux User Meetup — Mon., Nov. 2, School of Public Health

Users of high performance computing resources are invited to meet Flux operators and support staff in person at an upcoming user meeting:

  • Monday, Nov. 2, 1 – 5 p.m., 2601 School of Public Health Building I (1415 Washington Heights)

There is no set agenda; come at any time and stay as long as you please. You can come and talk about your use of any sort of computational resource: Flux, Hadoop, XSEDE, Amazon, or others.

Ask any questions you may have. The Flux staff will work with you on your specific projects, or just show you new things that can help you optimize your research.

This is also a good time to meet other researchers doing similar work.

This is open to anyone interested; it is not limited to Flux users.

Examples of potential topics:

  • What ARC-TS services are available, and how do I access them?
  • How do I make the most of PBS and learn the features specific to my work?
  • I want to do X; do you have software capable of it?
  • What is special about GPU/Xeon Phi/Accelerators?
  • Are there resources for people without budgets?
  • I want to apply for grant X, but it has certain limitations. What support can ARC-TS provide?
  • I want to learn more about the compiler and debugging.
  • I want to learn more about performance tuning; can you look at my code with me?
  • Etc.

MICDE Seminar: Jim Chelikowsky, University of Texas, on “Addressing Dirac’s Challenge: Practical Quantum Mechanics for Materials” — Oct. 30

Jim Chelikowsky, Director of the ICES Center for Computational Materials, and professor of Physics, Chemical Engineering, and Chemistry and Biochemistry at the University of Texas at Austin, will be on campus as part of the Michigan Institute for Computational Discovery and Engineering Seminar Series. Students in the MICDE Graduate Certificate program are encouraged to attend his talk.

Time/Date: 10:30 a.m., Friday, Oct. 30

Location: 1571 GG Brown

Title: “Addressing Dirac’s Challenge: Practical Quantum Mechanics for Materials”

Abstract: After the invention of quantum mechanics, P. A. M. Dirac made the following observation: “The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems…” The creation of “approximate practical methods” in response to Dirac’s challenge has included the one electron picture, density functional theory and the pseudopotential concept. The combination of such concepts in conjunction with contemporary computational platforms, and new algorithms, offer the possibility of predicting properties of materials solely from knowledge of the atomic species present. I will give an overview of progress in this field with an emphasis on materials applications at the nanoscale.

Bio: Jim Chelikowsky is Director of the ICES Center for Computational Materials, and a professor of Physics, Chemical Engineering, and Chemistry and Biochemistry at the University of Texas at Austin. He obtained a BS degree, Summa Cum Laude, in physics from Kansas State University in 1970 and a PhD degree in physics from the University of California at Berkeley in 1975, where he held a National Science Foundation fellowship. He was an assistant professor at the University of Oregon from 1978-1980.  He went to the University of Minnesota in 1987 as a professor within the Department of Chemical Engineering and Materials Science. He was named an Institute of Technology Distinguished Professor at Minnesota in 2001. He assumed his current position as the W.A. “Tex” Moncrief, Jr. Chair of Computational Materials and professor in the Departments of Physics, Chemical Engineering, and Chemistry and Biochemistry in January 2005. He has also worked at Bell Laboratories, and at Exxon Research and Engineering Corporate Research Science Laboratories.

His research has made significant contributions within the field of computational materials science.   His work has focused on the optical and dielectric properties of semiconductors, surface and interfacial phenomena in solids, point and extended defects in electronic materials, pressure induced amorphization in silicates and disordered systems, clusters and nano-regime systems, diffusion and microstructure of liquids, and the development of high performance algorithms to predict the properties of materials.  He has published over 370 papers, including 5 monographs.