Title, authors, keywords, and abstract for each of the posters in the competition:
A Treecode-Accelerated Boundary Integral Poisson-Boltzmann Solver for Electrostatics of Solvated Biomolecules
Weihua Geng, Robert Krasny
computational biophysics, protein solvation
We present a treecode-accelerated boundary integral (TABI) solver for electrostatics of solvated biomolecules described by the linear Poisson-Boltzmann equation. The method employs a well-conditioned boundary integral formulation for the electrostatic potential and its normal derivative on the molecular surface. The surface is triangulated and the integral equations are discretized by centroid collocation. The linear system is solved by GMRES iteration and the matrix-vector product is carried out by a Cartesian treecode which reduces the cost from O(N^2) to O(N log N), where N is the number of faces in the surface triangulation. The TABI solver is applied to compute the solvation energy in two cases, the Kirkwood sphere and a solvated protein. We present the error, CPU time, and memory usage, and compare results for the Poisson-Boltzmann and Poisson equations. For the protein test case, we compare TABI results with those obtained using the grid-based APBS code, and we also present parallel TABI simulations using up to eight processors. We find that the TABI solver exhibits good serial and parallel performance combined with relatively simple implementation and efficient memory usage.
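The GMRES-plus-fast-matvec structure described above can be sketched in a few lines. Below, a SciPy `LinearOperator` stands in for the matrix-free treecode matvec; the dense toy matrix and the `treecode_matvec` name are illustrative assumptions, not the TABI implementation, where a Cartesian treecode would approximate far-field interactions hierarchically instead of forming the matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
N = 200  # toy stand-in for the number of surface triangles

# Toy well-conditioned dense system standing in for the discretized
# boundary integral operator.
A = np.eye(N) + 0.01 * rng.standard_normal((N, N))
b = rng.standard_normal(N)

def treecode_matvec(x):
    # Placeholder: a real treecode would cluster far-field interactions
    # with Taylor expansions here, reducing the cost to O(N log N).
    return A @ x

op = LinearOperator((N, N), matvec=treecode_matvec)
x, info = gmres(op, b)          # info == 0 signals convergence
print(np.linalg.norm(A @ x - b))  # small residual
```

Because the operator is well conditioned, GMRES converges in a handful of iterations; only the matvec is ever needed, which is what makes the treecode acceleration possible.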
The Credits that Count: Credit and Risk in the Student Loan Market
Katharina Best, Jussi Keppo
Higher education finance, econometrics, credit, R, two-stage least squares regression
Using a two-stage least squares model, we build a macroeconomic model of supply and demand for US higher education. We find that credit factors (e.g. student loan amounts and household debt) are primary drivers of demand, along with college education benefits (e.g. relative earnings and employment level). Tuition prices and debt levels are highly correlated, suggesting that students respond to higher tuition prices by borrowing.
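The two-stage least squares procedure can be illustrated on simulated data. The variable names (`tuition`, `enroll`), the instrument, and all coefficients below are invented for illustration and are not taken from the study; the point is only the mechanics of the two stages.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Simulated data: tuition (endogenous) is driven by an instrument z
# (a hypothetical supply shifter) and by the same unobserved shock u
# that also moves enrollment demand.
z = rng.standard_normal(n)            # instrument
u = rng.standard_normal(n)            # unobserved shock
tuition = 1.0 * z + 0.8 * u + rng.standard_normal(n)
enroll = -2.0 * tuition + 2.0 * u + rng.standard_normal(n)  # true effect: -2

def add_const(x):
    return np.column_stack([np.ones(len(x)), x])

# Stage 1: regress the endogenous regressor on the instrument.
g1, *_ = np.linalg.lstsq(add_const(z), tuition, rcond=None)
tuition_hat = add_const(z) @ g1

# Stage 2: regress the outcome on the fitted values.
g2, *_ = np.linalg.lstsq(add_const(tuition_hat), enroll, rcond=None)
print(g2[1])  # close to the true effect of -2; naive OLS would be biased
```

Naive OLS of `enroll` on `tuition` would be biased toward zero here because the shock `u` enters both equations; the instrument purges that correlation.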
Far-from-equilibrium self-assembly of nanocolloids
Nguyen H.P. Nguyen, Eric Jankowski, Sharon C. Glotzer
Self assembly, nanoscience, swarms, colloids, self-propelled, far from equilibrium, molecular dynamics, GPU
Swarms of self-propelled particles exhibit complex behavior that can arise from simple models, with large changes in swarm behavior resulting from small changes in model parameters. We investigate the steady-state swarms formed by self-propelled Morse particles in three dimensions using molecular dynamics simulations optimized for GPUs. We find that a variety of swarms of different overall shape assemble spontaneously, and that for certain Morse potential parameters at most two competing structures are observed. We report a rich “phase diagram” of athermal swarm structures observed across a broad range of interaction parameters. Unlike the structures formed in equilibrium self-assembly, we find that the probability of forming a self-propelled swarm can be biased by the choice of initial conditions. We investigate how thermal noise influences swarm formation and demonstrate ways it can be exploited to reconfigure one swarm into another. Our findings validate and extend previous observations of self-propelled Morse swarms and highlight open questions for predictive theories of non-equilibrium self-assembly.
Poster 4: (People’s Choice Award Winner)
Social History of the Muslim World in the Digital Age
Islamic Studies, History, Text-mining
Over the centuries, Muslim historians wrote hundreds of chronicles and biographical dictionaries, which became both a blessing and a curse for scholars of the pre-modern Muslim world. They are a blessing because they are a gold mine of information on the social, political, religious, and cultural history of the Muslim world. Yet their very advantage — the practically limitless abundance of historical data — is also the greatest impediment to scholarly attempts to study them. This abundance of data, combined with a number of technical difficulties in working with the Arabic script, is the main reason why very few studies have been produced since the 1970s, when scholars first voiced their awareness of the value of these sources. Several studies in the 1970s and 1980s used analog databases for the analysis of extracted data; the few studies that appeared in the 1990s used relational databases for that purpose. However, both approaches proved inefficient and, by and large, the number of quantitative studies of these sources came to naught.
Now that most of these sources are available in a fully searchable text format, a different approach can be taken to the analysis of these medieval books. In my dissertation research, I analyze a number of biographical dictionaries and chronicles using Python text-mining scripts, methods from corpus linguistics, statistical exploratory analysis, and GIS visualizations. My poster will show my advances in both method and content on the data from the largest of my sources: Ta’rikh al-islam, a 52-volume “History of Islam” written by the Damascene scholar al-Dhahabi in the 14th century CE, which covers seven hundred years of Islamic history and includes approximately 30,000 biographies.
Analyzing how many times different cities are mentioned in biographies of different periods, I am exploring how the Muslim world grew and developed over almost seven centuries from ~650 to 1300 CE. A series of geographical maps, which my poster will feature, is the most effective way to display these complex developments and show how the prominence of different regions of the Islamic empire changed over time.
Although the approximately 3 million words of this particular source hardly constitute “big data” in scientific terms, for historians, who do not habitually rely on text-mining methods in their research, this text is “scary big data”.
Thermal Conductivity of Polymers Using Molecular Dynamics Simulation
Abhishek Kumar, Veera Sundararaghavan
Molecular Dynamics, Thermal Conductivity, Green Kubo
Molecular dynamics simulations are performed to find the thermal conductivity of epoxy-amine cross-linked polymer networks and PMMA. We performed molecular dynamics (MD) simulations on several variations of these two polymer structures and found thermal conductivities (k) comparable to those obtained in experiments. The atomistic simulations consistently gave an upper-bound value for k, which makes them a good alternative to experiments when estimating k for a new polymer. This is a supporting tool for ongoing work on Integrated Computational Manufacturing Engineering, where we are interested in calculating the thermal and mechanical properties of new materials, especially polymers, using virtual experiments such as MD.
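A common MD route to thermal conductivity, and presumably the one behind the Green-Kubo keyword above, integrates the heat-flux autocorrelation function. The sketch below applies the Green-Kubo integrand to a synthetic flux signal with a known correlation time; the physical prefactor V/(3 k_B T^2) is omitted, and the Ornstein-Uhlenbeck "flux" is an invented stand-in for real MD output.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, nsteps = 1e-3, 200_000           # time step and trajectory length (toy units)

# Synthetic heat-flux component: an Ornstein-Uhlenbeck process standing
# in for J_x(t) from an MD run, with unit variance and correlation time tau.
tau = 0.05
a = np.exp(-dt / tau)
s = np.sqrt(1 - a * a)
J = np.empty(nsteps)
J[0] = 0.0
for i in range(1, nsteps):
    J[i] = a * J[i - 1] + s * rng.standard_normal()

# Green-Kubo: k is proportional to the time integral of <J(0) J(t)>,
# computed here via an FFT-based autocorrelation.
nlag = 2000
f = np.fft.rfft(J - J.mean(), n=2 * nsteps)
acf = np.fft.irfft(f * np.conj(f))[:nlag] / (nsteps - np.arange(nlag))
integral = acf.sum() * dt            # analytic answer: var(J) * tau = 0.05
print(integral)
```

For the OU process the integral converges to var(J)·tau, which lets the numerics be checked against a known answer before a real flux trace is substituted.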
Measuring Innovation with Directed Acyclic Graphs
Russell J. Funk, Jason Owen-Smith
Network science, economic sociology, corporate strategy
BACKGROUND:
Driven by theoretical and substantive concerns, network scientists have sought for decades to develop methods for identifying radically innovative discoveries in large corpora of documents like scientific papers (Price, 1965; Wuchty, Jones, & Uzzi, 2007; Newman, 2009), patented inventions (Fleming & Sorenson, 2001; Wuchty et al., 2007), or blog posts (Adar & Adamic, 2005). Although details vary with the context, most existing quantitative measures of innovation evaluate discoveries along two dimensions.
First, innovations vary in their impact—the extent to which they are used. While the true impact of a discovery is difficult to measure, citations to papers and patents that report new findings and inventions offer a widely used proxy (Owen-Smith & Powell, 2003; Newman, 2009). Second, innovations differ in their distinctiveness—the extent to which they depart from existing knowledge. Researchers often measure distinctiveness by identifying patents that bridge disconnected technology classes or papers that contain rare keyword pairs (Fleming & Sorenson, 2001).
Prior efforts to measure the importance of innovations by using impact and distinctiveness have enabled important work about scientific and technological change. However, existing measures fail to capture the hugely important distinction between discoveries that are valuable because they complement and therefore increase the use of existing ideas, and those whose value derives from rendering existing ideas obsolete. By measuring only magnitude of use, impact studies miss how an innovation is used in relation to existing knowledge. Standard impact measures, for example, cannot distinguish between two highly cited papers, one of which establishes a new field and the other of which energizes an existing field by solving a central problem. Similarly, studies that focus on distinctiveness are limited because they measure properties of innovations at their introduction, and neglect the extent and nature of their future use.
We propose a new approach to the measurement of innovation that takes advantage of the growing availability of large network datasets and computational resources necessary for evaluating them. Our measure distinguishes between innovations that enhance and innovations that erode the value of knowledge they build on. For substantive and methodological reasons, we limit our attention to technological innovations embodied in patents. However, the citation-based approach we present is in principle appropriate for any directed graph where ties unfold over time such as links among blogs and citations among scientific papers or court cases.
We approach the problem of identifying radical discoveries by distinguishing conceptually between competency-enhancing (CE) and competency-destroying (CD) innovations (Tushman & Anderson, 1986). CE innovations increase the value of existing methods or skills. For example, smartphone applications that are tied to specific retailers like Amazon.com are competency enhancing because they make those retailers’ technologies easier to use. In contrast, CD innovations render existing methods and skills obsolete by introducing order-of-magnitude performance improvements. For example, the invention of recombinant DNA rendered earlier techniques for drug discovery obsolete.
Our measure exploits features of patents granted by the US Patent & Trademark Office. Patent data are useful methodologically because patents must cite all patented inventions (prior art) on which they build. Unlike journal references, reference lists in patent applications are reviewed by specialized examiners to ensure completeness. Similarly, applicants avoid unnecessary citations, which weaken a patent’s enforceability.
The core idea captured mathematically in our measure involves looking at how future citations to a focal patent make use of the prior art cited by the focal patent. The intuition behind this idea is that citations to the prior art listed by the focal patent should decrease after the introduction of a CD innovation, because the innovation entails a break with past ways of thinking. By contrast, CE innovations should be frequently cited together with their prior art and thus increase citations to technologies they build on.
Our measure has several attractive properties. First, it is structural in that it accounts for citations that are influenced by, but do not involve, the focal discovery. Second, it is dynamic in that it accounts for variations in effects across the life of a discovery. Third, the measure is continuous, and therefore recognizes that the effects innovations exert on existing technologies are usually a matter of degree. Fourth, the measure is valenced in that it distinguishes between discoveries that enhance the impact of prior technologies and those that render them obsolete.
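A minimal version of the intuition in the preceding paragraphs can be coded directly: score each later patent +1 if it cites the focal patent without its prior art (consistent with competency destruction), -1 if it cites the focal patent together with its prior art (competency enhancement), and average over all patents touching either. This is a simplification for illustration, not the authors' exact measure, and all patent identifiers are invented.

```python
# Toy citation graph: each later patent lists what it cites.
focal = "F"
prior_art = {"A", "B"}               # patents that F itself cites

later_patents = {
    "P1": {"F"},                     # cites the focal patent only
    "P2": {"F", "A"},                # cites focal together with its prior art
    "P3": {"A"},                     # cites prior art only
    "P4": {"F"},
}

def cd_score(citations):
    """+1 per patent citing only the focal patent, -1 per patent citing
    the focal patent together with its prior art, 0 otherwise; averaged
    over all patents citing the focal patent or its prior art."""
    total, n = 0, 0
    for cited in citations.values():
        f = focal in cited
        b = bool(prior_art & cited)
        if f or b:
            n += 1
            total += 1 if f and not b else -1 if f and b else 0
    return total / n

print(cd_score(later_patents))  # 0.25: mildly competency-destroying
```

Because the score is an average over continuing citation behavior, it is structural, dynamic, continuous, and valenced in exactly the senses listed above.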
We validate the measure by testing its discriminative abilities on the 0.25% most cited patents granted between 1975 and 2010 (N=10,000 patents), and the distinct pathways that link them with their prior art and all respective forward citations (N=269,663,060 pathways). We then use the measure to bring new insights to debates over the relationship between patenting experience and quality at US universities using a 1981–2005 panel. These substantive results have implications for understanding how industry and government ties influence academic R&D.
In addition to informing debates over university patenting, the measures we present have broader implications for network science and information retrieval. For instance, the measures could be adapted to capture the evolving link structure of the World Wide Web, where new pages will both cite and be cited by others. Rather than measuring importance in terms of competence destruction or enhancement, these measures might offer insight into political and cultural questions involving polarization in the blogosphere (Adamic & Glance, 2005), the evolution of management fads, the visibility of music artists and genres, or the critical success of films.
Adamic, L.A., E. Adar. 2003. Friends and Neighbors on the Web. Social Networks. 25:211-230.
Adar, E., L.A. Adamic. 2005. Tracking Information Epidemics in Blogspace. Web Intelligence.
Fleming, L., O. Sorenson. 2001. Technology as a Complex Adaptive System. Research Policy. 30:1019-1039.
Newman, M.E.J. 2009. The First-Mover Advantage in Scientific Publication. Europhysics Letters. 86:68001.
Owen-Smith, J., W.W. Powell. 2003. The Expanding Role of University Patenting in the Life Sciences. Research Policy. 32:1695-1711.
Price, D.J.deS. 1965. Networks of Scientific Papers. Science. 149:510-515.
Tushman, M.L., P. Anderson. 1986. Technological Discontinuities and Organizational Environments. Administrative Science Quarterly. 31:439-465.
Wuchty, S., B.F. Jones, B. Uzzi. 2007. The Increasing Dominance of Teams in Production of Knowledge. Science. 316:1036-1039.
Predictive Computational Methods for Charge-Transfer States in Organic Photovoltaic Materials
Heidi Phillips, Shaohui Zheng, Eitan Geva, Barry Dunietz
Computational Chemistry, Organic Optoelectronics, Time-Dependent Density Functional Theory, Range-Separated Hybrid Functional
A fundamental understanding of charge-transfer processes and charge separation is necessary for the efficient development and design of organic photovoltaic devices. Computational approaches provide fundamental insights into these processes at the atomistic level and serve as cost-effective avenues to guide the synthesis of novel photovoltaic materials. Conventional computational methods are known to fail in accurately characterizing charge-transfer states in molecular systems, predicting energies significantly lower than experimental values.
In order to address these failures, we implement a range-separated hybrid (RSH) functional approach within density functional theory. The RSH approach was benchmarked using a simple organic system and the more complex silsesquioxane molecules, which are currently investigated as potential candidates for photovoltaic applications. For the simple organic system, the RSH approach successfully reproduced empirical energies and trends where the conventional methods had previously failed.
In order to extend the approach to more complex silsesquioxane molecules, a computational protocol was implemented to account for the effect of complex environments on charge transfer state energies, for example the effect of solvent molecules. The protocol was tested and validated by comparing the charge transfer state energies to experimental measurements on silsesquioxane molecules. The charge transfer state within the silsesquioxane system was identified as charge transfer between molecules attached to the silsesquioxane cage, instead of between the molecules and the cage itself. This method provides a way to understand CT within complex environments for molecules used in photovoltaic applications, and is currently being extended to several other experimentally relevant systems, including pure organic phosphorescent materials.
Network-Mediated Collaboration Tools and Writing Pedagogy
Network-Mediated Collaboration Tools; Pedagogy; Writing; English Language & Literature
Collaborative, student-centered pedagogy and a process-oriented approach to argumentative and persuasive writing are key components of effective undergraduate writing instruction. My project focuses on innovative uses of network-mediated collaboration tools available at the University of Michigan, including Google Docs, Google Drive and compatible advanced research applications. Drawing primarily upon examples from two first-year writing courses, English 124 and English 125, I demonstrate how network-mediated collaboration tools can contribute to positive student learning outcomes during successive phases of the writing process, including brainstorming ideas, developing a thesis, crafting and assessing supporting arguments, revising essays and participating in collaborative peer review. I then explore the potential of these tools for furthering pedagogical goals in upper-level literature and writing courses, especially those with an interdisciplinary or multidisciplinary research component. I conclude that social participation mediated by cyberinfrastructure can facilitate constructive collaboration and increase student engagement in the writing process, both inside and outside the classroom.
Building Disease – Small Molecule Networks with Cytoscape
Pervis Fly, Terry Weymouth, Alla Karnovsky
We have created a plugin for the open-source tool Cytoscape to visualize the mapping of small molecules to diseases. The mapping data were collected from Metab2MeSH (http://metab2mesh.ncibi.org), a web application that allows a user to determine which compounds are associated with which Medical Subject Headings (MeSH) terms and vice versa. Metab2MeSH uses a statistical approach to find significant associations between compounds and MeSH terms (Sartor et al., Bioinformatics. 2012 May 15;28(10):1408-10). Our plugin allows users to build networks from this information. Specifically, we focused on the subset of MeSH terms that describe diseases. The networks one can view are: 1) a compound-disease network showing which compounds are shared between two or more diseases and vice versa; 2) a network with compounds as nodes and edge thickness determined by the number of diseases shared between them; 3) a network with diseases as nodes and edge thickness determined by the number of compounds shared between them.
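The third network type described above (diseases as nodes, edge thickness from shared compounds) reduces to counting co-occurrences. A minimal sketch with invented compound-disease associations follows; real input would come from Metab2MeSH, and the specific compound and disease names are illustrative only.

```python
from itertools import combinations

# Toy compound -> disease associations (stand-in for Metab2MeSH output).
assoc = {
    "glucose": {"Diabetes Mellitus", "Obesity"},
    "cholesterol": {"Obesity", "Atherosclerosis"},
    "urate": {"Gout"},
}

# Disease-disease network: nodes are diseases, edge weight is the number
# of compounds the two diseases share (mapped to edge thickness).
weights = {}
for diseases in assoc.values():
    for d1, d2 in combinations(sorted(diseases), 2):
        weights[(d1, d2)] = weights.get((d1, d2), 0) + 1

print(weights)
# {('Diabetes Mellitus', 'Obesity'): 1, ('Atherosclerosis', 'Obesity'): 1}
```

The compound-compound network is the same computation with the roles of compounds and diseases swapped.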
A split-step solution to the Gray-Scott equations
Matthew Warnez, Benson Muite
Numerical methods, high-order accuracy, computational efficiency
The Gray-Scott equations are a pair of nonlinear partial differential equations that model the reaction-diffusion process of two chemical reactants. In this study, we will work to create a split-step numerical method to solve the Gray-Scott equations in one, two, and three dimensions. We will also derive a pseudo-implicit model, which provides a well-known basis for comparing the split-step method in both efficiency and accuracy. Finally, we will work to create and validate several sixth- and eighth-order accurate split-step solutions.
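A split-step treatment of a reaction-diffusion system typically alternates an exact spectral diffusion step with an explicit reaction step. Below is a minimal first-order 1D sketch under that assumption; the parameter values are generic Gray-Scott choices, not taken from the poster, and the high-order variants mentioned above would compose such steps more elaborately.

```python
import numpy as np

# 1D Gray-Scott with simple splitting: exact diffusion in Fourier space,
# then an explicit Euler step on the local reaction kinetics.
n, L = 256, 2.5
Du, Dv, F, k = 2e-5, 1e-5, 0.04, 0.06
dt, nsteps = 0.5, 2000

x = np.linspace(0, L, n, endpoint=False)
u = 1.0 - 0.5 * np.exp(-100 * (x - L / 2) ** 2)   # perturbed steady state
v = 0.25 * np.exp(-100 * (x - L / 2) ** 2)
kx = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)

for _ in range(nsteps):
    # Diffusion step: exact solution of u_t = D u_xx in Fourier space.
    u = np.fft.irfft(np.fft.rfft(u) * np.exp(-Du * kx**2 * dt), n)
    v = np.fft.irfft(np.fft.rfft(v) * np.exp(-Dv * kx**2 * dt), n)
    # Reaction step: u_t = -u v^2 + F(1-u), v_t = u v^2 - (F+k) v.
    uvv = u * v * v
    u, v = u + dt * (-uvv + F * (1 - u)), v + dt * (uvv - (F + k) * v)

print(u.min(), v.max())
```

Because the stiff diffusion operator is handled exactly, the time step is limited only by the mild reaction kinetics, which is the main appeal of the splitting.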
Numerical Studies of the Klein-Gordon Equation in a Periodic Setting
Albert Liu, Brian Leu, Parth Sheth
Klein-Gordon, Visualization
In contemporary physics research, there is much interest in modeling quantum interactions. The Klein-Gordon equation is a wave equation useful for such purposes. We investigate the equation by simulating solutions under various initial conditions, with solutions that tend to zero at infinity being of special interest. The primary site used to perform these simulations was Trestles at SDSC, and we also studied the performance increase from running jobs on supercomputing resources by performing a scaling study relating the number of cores and nodes to performance. In addition to investigating the Klein-Gordon equation, another important goal of our project was to provide an undergraduate perspective on supercomputing. Undergraduate engagement in the field of high-performance computing remains disappointingly low. Drawing on our experience with supercomputing resources, we aim to suggest new ways of enhancing student outreach and engagement.
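For reference, the linear Klein-Gordon equation u_tt = u_xx - m^2 u on a periodic domain can be integrated with a standard leapfrog scheme in a few lines. This toy 1D setup is an assumption for illustration, not the authors' code, but it matches the periodic setting named in the title.

```python
import numpy as np

# 1D Klein-Gordon u_tt = u_xx - m^2 u on a periodic domain,
# integrated with a standard second-order leapfrog scheme.
n, L, m = 512, 2 * np.pi, 1.0
dx = L / n
dt = 0.5 * dx                     # well inside the CFL stability limit
x = np.arange(n) * dx

u_prev = np.exp(-10 * (x - np.pi) ** 2)   # localized pulse, decays away
u = u_prev.copy()                          # zero initial velocity

def lap(f):
    # Periodic second-difference Laplacian via np.roll.
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for _ in range(2000):
    u_next = 2 * u - u_prev + dt**2 * (lap(u) - m**2 * u)
    u_prev, u = u, u_next

print(np.abs(u).max())  # stays bounded: linear equation, stable scheme
```

Parallel scaling studies like the one described would partition the grid across nodes, with only the `np.roll` neighbor exchange requiring communication.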
Computational Drug Discovery for Rap GTPases
Andrew Kalenkiewicz, Barry J. Grant
Bioinformatics, computational medicine, computational biology, molecular simulation, computational drug discovery
Ras-related protein (Rap) is a small GTPase that plays an important regulatory role in cell adhesion mechanisms, as well as in the well-known MAPK/ERK intracellular signaling pathway. As a member of the Ras oncogene family, abnormal Rap expression is known to be associated with certain cancers, making it an attractive target for the design of new chemotherapeutic agents. Here we report the development and application of a molecular simulation and bioinformatics approach to the discovery of novel druggable pockets and potential allosteric inhibitors of Rap. Several high-resolution Rap crystal structures have been solved (PDB codes: 1C1Y, 1GUA and 3KUC), but these structures represent only a small fraction of potential Rap conformations. Using molecular dynamics simulations, three different protein trajectories were generated, each using a different crystal structure as a starting point. Through principal component analysis, an ensemble of 66 distinct Rap conformations was extracted from the simulation results. Small-molecule fragment mapping revealed an assortment of potential small-molecule binding pockets distal from the main nucleotide-binding site. The utility of these pockets was investigated via small-molecule docking of the NCI diversity set. The most attractive candidate binders for these pockets are currently being selected for experimental characterization of their inhibitory activity on Rap signaling pathways. We note that results from the related Ras and Rho systems indicate that the predicted allosteric sites are viable targets for the development and optimization of new drugs (1, 2).
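The trajectory-PCA step described above (extracting dominant conformational motions from MD frames) is commonly done via SVD of the mean-centered coordinate matrix. A sketch on synthetic "trajectory" data follows; the frame counts, modes, and noise levels are invented, and a real analysis would first align frames to remove rigid-body motion.

```python
import numpy as np

rng = np.random.default_rng(3)
nframes, natoms = 500, 30

# Synthetic trajectory: a mean structure plus motion along two collective
# modes plus small noise (stand-in for aligned MD coordinate frames).
mean = rng.standard_normal(3 * natoms)
mode1 = rng.standard_normal(3 * natoms); mode1 /= np.linalg.norm(mode1)
mode2 = rng.standard_normal(3 * natoms); mode2 /= np.linalg.norm(mode2)
traj = (mean
        + np.outer(3.0 * np.sin(np.linspace(0, 20, nframes)), mode1)
        + np.outer(1.0 * rng.standard_normal(nframes), mode2)
        + 0.05 * rng.standard_normal((nframes, 3 * natoms)))

# PCA via SVD of the mean-centered coordinate matrix: singular vectors
# are collective motions, squared singular values their variance share.
X = traj - traj.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var = s**2 / np.sum(s**2)
print(var[:3])  # the first two components dominate by construction
```

Projecting frames onto the leading components and sampling along them is one way an ensemble of distinct conformations can be extracted from the trajectories.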
1. Grant BJ, Lukman S, Hocker HJ, Sayyah J, Brown JH, McCammon JA (2011) Novel Allosteric Sites on Ras for Lead Generation. PLoS ONE 6: e25711.
2. Ortiz-Sanchez JM, Nichols SE, Sayyah J, Brown JH, McCammon JA, Grant BJ (2012) Identification of Potential Small Molecule Binding Pockets on Rho Family GTPases. PLoS ONE 7: e40809.
Predictive Self-assembly of Polyhedra into Complex Structures
Pablo F. Damasceno, Michael Engel, Sharon C. Glotzer
Self-Assembly, Computational Chemistry
Ordering nanoparticles into specific patterns is a central goal of materials design, since the organization of building blocks and the resulting properties are closely tied together. Self-assembly (SA) constitutes an important fabrication approach when high yield and scalability are needed, and is a possible route to achieving materials with properties such as photonic band gaps, chirality, mesophase behavior, or even cloaking from the bottom up. One recent approach to enlarging the variety of structures achievable via SA is the use of anisotropic building blocks. We have performed a systematic investigation that allowed us to delineate principles relating the shape of these building blocks to the structures assembled purely from entropy. Finally, we have defined ways of predicting characteristics of the phase behavior for arbitrary convex particles not covered in our study.
Vibrational Energy Relaxation of Cyanide Ion in Aqueous Solution
Surma Talapatra, Eitan Geva
Vibrational energy relaxation, flux, openmpi, sander
Understanding the rates and mechanisms of energy transfer in condensed phase chemical reactions is a crucial problem in chemical dynamics. Vibrational energy relaxation (VER) of molecules in solution is important as it provides direct information about the solute-solvent energy exchange process. In systems like the cyanide ion in aqueous solution, VER exhibits significant quantum effects that arise from the high vibrational frequency of the C-N bond and the strong interactions of the polar ionic solute with the solvent water. A recent experimental study by Hamm et al. measured the vibrational relaxation time using IR pump-IR probe spectroscopy for a few isotopomers of the cyanide ion in water. We have calculated the VER rate constant for the same isotopomers using classical molecular dynamics simulations with a polarizable force field. Our calculations are based on Landau-Teller theory, in which the VER rate constant is given in terms of the Fourier transform, at the transition frequency, of the autocorrelation function of the fluctuating force exerted by the bath modes on the relaxing mode. Quantum effects are taken into account using the harmonic/Schofield quantum correction factor. The relaxation times we obtained are in very good agreement with the experimentally measured values. We also examined the contributions of the Coulombic and Lennard-Jones parts of the potential to the relaxation rate constant. We suggest that ion pairing of cyanide with the counterion is a significant contributor to the relaxation process, as indicated by the large electrostatic contribution to the relaxation rate constant.
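The Landau-Teller quantity described above (the Fourier transform of the force autocorrelation function evaluated at the transition frequency) can be demonstrated on a synthetic force signal with a known analytic answer. The exponentially correlated "force" below is an invented stand-in for MD output, and the quantum correction factor and all physical prefactors are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, nsteps = 1e-3, 100_000        # time step and trajectory length (toy units)

# Synthetic fluctuating force on the relaxing mode: an exponentially
# correlated (Ornstein-Uhlenbeck) signal with unit variance.
tau = 0.02
a = np.exp(-dt / tau)
Fr = np.empty(nsteps)
Fr[0] = 0.0
for i in range(1, nsteps):
    Fr[i] = a * Fr[i - 1] + np.sqrt(1 - a * a) * rng.standard_normal()

def spectral_density(F, w, nlag=4000):
    """One-sided cosine transform of the force ACF at frequency w,
    the quantity the Landau-Teller rate is proportional to."""
    g = np.fft.rfft(F - F.mean(), n=2 * len(F))
    acf = np.fft.irfft(g * np.conj(g))[:nlag] / (len(F) - np.arange(nlag))
    t = np.arange(nlag) * dt
    return (np.cos(w * t) * acf).sum() * dt

# For this OU force the exact answer is var * tau / (1 + (w*tau)^2).
w = 50.0
sd = spectral_density(Fr, w)
print(sd)   # statistical estimate of 0.02 / (1 + 1.0) = 0.01
```

Checking the numerics against this closed-form answer before substituting the real MD force is a useful sanity test, since the spectral density at a high transition frequency is notoriously noise-sensitive.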
Poster 15: (Second Place Refereed Poster Award)
A Natural User Interface for 3D Environments
Eric Maslowski, Theodore Hall, Rachael Miller
The input systems that we use today, such as joysticks and computer mice, do not provide a natural mode of interaction with complex information. This problem is amplified when we use these two-dimensional input systems to navigate within a three-dimensional space, as in the University of Michigan 3D Lab’s Michigan Immersive Digital Experience Nexus (MIDEN), an immersive virtual reality space. In an effort to improve interactivity in the MIDEN, the Microsoft Kinect has been applied as a way of representing the physical body in a virtual space. By analyzing the data received from the Kinect, we have created a real-time, digital model of the body. This body represents an avatar that corresponds to the user’s location in space, allowing them to interact with virtual objects. Because the MIDEN offers the user perspective and depth perception, interaction feels more natural than maneuvering an avatar on a screen; the user can reach out and directly “touch” objects. As a supplement to physical interaction, a gesture-based user interface provides the user greater control in simulations (e.g. navigation, selection). By using the hands rather than other, more restrictive input systems, the experience becomes more immersive and the user can focus on their data analysis and understanding.
Data-Driven Digital Humanities Studies
Topic modeling, scholarly communication, blogs
This poster presents my dissertation work-in-progress investigating the use of social media in scholarly communication. The digital humanities (DH) have emerged as a focal point for debates about the impact of information technology in the humanities. While DH has its roots in the computational processing of text, the landscape today is far richer and more complicated than early practitioners of “humanities computing” could have ever imagined. DH embodies transformative methods of inquiry, radically new kinds of research objects, and a potentially destabilizing shift in scholarly publishing and communications. My poster will present my ongoing dissertation work studying the use of blogs in scholarly communication within the digital humanities. My study is a data-driven analysis of digital humanities blogs using a mixed-methodological approach combining topic modeling with grounded theory. This mixture of methods enables the study of a vast corpus through computation while also rigorously interpreting the meanings of generated topical categories. The purpose of my study is twofold. First, it seeks to construct a topical landscape of informal digital humanities communication, answering the question: what are digital humanities scholars talking about on their blogs? Second, this study aims to inform our understanding of scholarly communication with a set of analytical categories grounded in empirical analysis of the computational models and original texts. Some preliminary results from a pilot study of DH blogs will be presented, as well as a thorough description of the study design.
Studying Properties of Galaxy Filaments with Parallel Computing on Flux
Yuanyuan Zhang, Joerg Dietrich, Timothy McKay
The large-scale structure of the Universe is dominated by a filamentary “cosmic web”. Galaxy clusters lie at the junctions of filaments, which provide the paths along which accretion into the clusters occurs. Most galaxies reside in filaments, and they may provide the environment in which galaxies transform from star-forming to passively evolving systems. Individual filaments are difficult to identify due to their low density contrast, and their properties and evolution remain largely unstudied.
This project investigates the general properties and redshift evolution of filaments by stacking pairs of galaxy clusters in SDSS data. Since filaments are likely to form between cluster pairs, and the probability of filament connection depends on cluster mass and cluster separation, we stack the galaxy populations along the lines connecting selected cluster pairs in the gmBCG cluster catalog. Because of the large amount of galaxy and galaxy cluster data, this algorithm is not feasible on desktop PCs. With parallel computing on Flux to realize this method, we are able to observe a significant galaxy overdensity, which we associate with the filament population, and to analyze the properties of these filament galaxies as a function of redshift.
Poster 18: (First Place Refereed Poster Award)
Using Dynamical Properties to Measure the Masses of Galaxy Clusters
Daniel Gifford, Christopher Miller, Craig Harrison, Nicholas Kern
Galaxy Clusters, Astronomy, Simulation
We calculate the masses of galaxy clusters in the Millennium simulation based on a direct measurement of their Newtonian gravitational potential. By using as few as 100 tracers (particles or sub-halos) within the cluster velocity-radius phase-space, we can accurately (zero bias) and precisely (to within 10-15%) measure their gravitational potential masses. The potential mass is unbiased only after accounting for the ellipticity of the halos. Using semi-analytic catalogs, we show how observational techniques can add small biases and increase the statistical uncertainties of measured cluster potential masses (e.g., in fiber or slit-mask multi-object spectroscopy). This study would not be possible without access to an HPC system such as FLUX.
Simulation of shape-driven assembly
Sam Nola, Sharon C. Glotzer
Self-assembly, Monte Carlo, Computational Materials Science
This study investigates the kinetics of entropy-driven assembly in systems of hard faceted nanoparticles. This understanding will facilitate our ability to self-assemble nanostructures. Real systems have complex energetic interactions at varying length scales, making it hard to isolate the effect of particle shape. We are interested in isolating the effects of entropy, due only to the shape of the particles. Computer simulations facilitate this study by allowing us to consider an idealized system with no energetic interactions, and Monte Carlo simulations are particularly suited to simulating hard interactions.
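The "hard interactions" Monte Carlo mentioned above reduces to a Metropolis scheme in which any move creating no overlap is accepted, since there is no energy scale at all: the dynamics are driven purely by entropy. A minimal 2D hard-disk sketch follows; the system size, density, and move size are arbitrary illustrative choices, not the study's parameters (which involve faceted 3D shapes).

```python
import numpy as np

rng = np.random.default_rng(5)

# Hard disks in a periodic box: every non-overlapping move is accepted.
n, L, sigma = 40, 10.0, 1.0        # disks, box edge, disk diameter
pos = rng.uniform(0, L, (n, 2))

def overlaps(pos, i, trial):
    # Check trial position of disk i against all others with the
    # minimum-image convention for periodic boundaries.
    d = pos - trial
    d -= L * np.round(d / L)
    r2 = np.einsum("ij,ij->i", d, d)
    r2[i] = np.inf                 # ignore self
    return bool(np.any(r2 < sigma**2))

nmoves = 20_000
accepted = 0
for _ in range(nmoves):
    i = rng.integers(n)
    trial = (pos[i] + rng.uniform(-0.3, 0.3, 2)) % L
    if not overlaps(pos, i, trial):   # hard-core Metropolis rule
        pos[i] = trial
        accepted += 1

print(accepted / nmoves)           # acceptance ratio at this density
```

Replacing the disk-overlap test with a polyhedron-overlap test is conceptually all that is needed to study shape-driven (entropic) assembly of faceted particles, which is why Monte Carlo is particularly suited to hard interactions.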
An IK1-Mediated Excitability Sink in the Posterior Left Atrial Wall Attracts Paroxysmal Atrial Fibrillation Drivers to the Pulmonary Veins: A Numerical Study
Conrado J. Calvo, MSc, Makarand Deo, PhD, Sharon Zlochiver, PhD, José Millet, PhD, Omer Berenfeld, PhD
Cardiac electrophysiology, Atrial fibrillation, Rotors, Excitable media, Parabolic simulations
Background: The mechanisms underlying the maintenance of paroxysmal atrial fibrillation (AF) by high frequency rotors near or at the pulmonary veins (PVs) are not clear. As IK1 plays a major role in rotor dynamics, we surmised that heterogeneous IK1 distribution, with lower density in the PVs than in the left atrium (LA), is also a major ionic mechanism for the preferential localization and termination of rotors in the PVs.
Methods: Experimentally observed heterogeneities in the conductances of IK1, IKs, IKr, Ito, and ICaL between the LA and the PV were fitted and incorporated into 2D and pseudo-3D cylindrical and funnel-shaped numerical models constructed to represent the anatomy of the PV-LA junction (PV-LAJ). The Courtemanche-Kneller voltage kinetics for human atrial cells were implemented, and three conditions were modeled to test our hypothesis and unravel drifting mechanisms: 1) all currents varied spatially; 2) all currents except IK1; 3) only IK1.
Results: Spatial gradients in ionic conductances across the PV-LAJ were sufficient to reproduce the experimentally observed shorter action potential duration (APD), less negative resting membrane potential (MDP), slower conduction velocity, and shorter rotor wavelength in the PV region than in the LA. More importantly, rotors initiated in model 1 drifted toward the PV and stabilized in the region of shortest APD and lowest excitability. When the IK1 gradient was excluded (model 2), the APD difference across the transition was maintained, but the drift direction reversed toward the slower LA region. However, in model 3 the spatial APD difference was also reversed, yet the rotor drifted from the LA to the PV. Sensitivity analysis with physiological variations in kinetics in all the models revealed the role of IK1 in creating an excitability sink responsible for attracting rotors to the PV-LAJ.
Conclusion: Consistent with experimental and clinical AF studies, simulations in an ionically heterogeneous model of the PV-LAJ showed rotor attraction toward the PV. Our simulations suggest that IK1 heterogeneity is dominant over other currents in determining the drift direction through its strong effect on the excitability gradient. These results enhance our understanding of the complex dynamics of rotors and AF.
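The poster's simulations use detailed human atrial ionic models on PV-LA geometries; as a much-reduced sketch of the underlying parabolic (reaction-diffusion) computation, a FitzHugh-Nagumo cable with explicit Euler time stepping (a generic excitable-media stand-in, not the Courtemanche-Kneller formulation, with made-up parameters) could look like:

```python
def fhn_cable(nx=200, nt=6000, dx=0.5, dt=0.05, D=1.0, eps=0.08, a=0.7, b=0.8):
    """Explicit-Euler FitzHugh-Nagumo cable: dv/dt = v - v^3/3 - w + D*v_xx,
    dw/dt = eps*(v + a - b*w). A generic excitable medium, not an atrial model.
    Returns the peak voltage reached at the cable midpoint."""
    v = [-1.2] * nx            # start near the resting state
    w = [-0.625] * nx
    for i in range(10):        # suprathreshold stimulus at the left end
        v[i] = 1.0
    mid_peak = v[nx // 2]
    inv_dx2 = 1.0 / dx ** 2
    for _ in range(nt):
        lap = [0.0] * nx       # discrete Laplacian from start-of-step voltages
        for i in range(1, nx - 1):
            lap[i] = (v[i - 1] - 2.0 * v[i] + v[i + 1]) * inv_dx2
        lap[0] = 2.0 * (v[1] - v[0]) * inv_dx2      # no-flux boundaries
        lap[-1] = 2.0 * (v[-2] - v[-1]) * inv_dx2
        for i in range(nx):
            dv = v[i] - v[i] ** 3 / 3.0 - w[i] + D * lap[i]
            dw = eps * (v[i] + a - b * w[i])
            v[i] += dt * dv
            w[i] += dt * dw
        mid_peak = max(mid_peak, v[nx // 2])
    return mid_peak

peak = fhn_cable()  # the pulse propagates and depolarizes the midpoint
```

Making a parameter such as D or the excitability vary with position, as the poster does for IK1 and the other conductances, is what turns this kind of model into a testbed for drift across a heterogeneous junction.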
Synthetic Kp Index From Global Magnetohydrodynamics
Daniel Welling, Nicholas Perlongo, Jennifer Gannon, Yiqun Yu, and Aaron Ridley
Kp is an important real-time indicator of geomagnetic activity as well as a necessary input for models that have forecasting potential. This poster describes a new, synthetic Kp index derived from the BATS-R-US global magnetohydrodynamic (MHD) model. Enabled by high-performance computing, this index is obtained through virtual magnetometers that capture disturbances from global MHD currents, field-aligned currents mapped between the inner boundary of the MHD code and the ionosphere model, and Hall and Pedersen currents in the ionosphere. To validate the synthetic values, they are compared against the observed Kp for several storm events. Agreement varies but is overall positive. Examples of how the synthetic Kp and the local K values used in its derivation can be used to monitor local activity are also demonstrated.
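For readers unfamiliar with the index, each (real or virtual) magnetometer's local K value comes from the largest horizontal-field disturbance in a 3-hour window, mapped through a quasi-logarithmic threshold table; a minimal sketch (the thresholds below are the standard Niemegk-style table for a 500 nT K9 limit, and the plain averaging is a stand-in for the full Ks standardization actually used for Kp) might be:

```python
import bisect

# Lower limits (nT) for K = 0..9 at a 500 nT K9 limit; real stations
# scale this table by their own K9 limit (illustrative values).
K_LIMITS = [0, 5, 10, 20, 40, 70, 120, 200, 330, 500]

def local_k(max_disturbance_nt):
    """Map the largest horizontal-field disturbance in a 3-hour window
    (after removing the quiet-day curve) onto the 0-9 quasi-log K scale."""
    return bisect.bisect_right(K_LIMITS, max_disturbance_nt) - 1

def synthetic_kp(station_ks):
    """Crude planetary index: average the virtual-magnetometer K values
    (the operational Kp uses standardized Ks values reported in thirds)."""
    return round(sum(station_ks) / len(station_ks))

# e.g. a 45 nT disturbance falls in the 40-70 nT band, giving K = 4
k = local_k(45.0)
```

The same mapping applied to the MHD virtual magnetometers is what makes the model output directly comparable to the observed index.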
Computational Challenges in Representing Multiscale Interactions in the Earth’s Climate System
Derek J. Posselt, Erika L. Roesler, David M. Wright, Gregory Tierney, and Bruce Fryxell
High performance computing, numerical simulation, climate system, uncertainty quantification
Earth’s climate system is inherently complex, and is composed of multiple components that nonlinearly interact across spatial scales that span 13 orders of magnitude (10^-6 to 10^7 meters) and temporal scales ranging from microseconds to centuries. Advances in computational resources, including processing speed, memory and disk space, and visualization are currently making it possible to model weather and climate processes with greater fidelity than ever before. In some cases (in particular, the fluid motions of atmosphere and ocean), increases in computational capacity have made it possible to directly simulate all relevant scales simultaneously. In addition, innovative uses of ensembles and other probabilistic methods are facilitating a more complete characterization of the inherent variability in the system and the uncertainties in models.
This poster presents a brief overview of climate simulation and highlights recent climate system research focused on the topics of (1) representing scale interactions in the Earth’s atmosphere, and (2) the use of ensembles to characterize and quantify uncertainty in models and in the climate system itself. In the process, the simultaneous need for high performance computing, data management, and distributed visualization is emphasized.
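The perturbed-parameter ensemble idea can be illustrated with a toy sketch (a zero-dimensional energy-balance model with an assumed albedo uncertainty and an assumed effective emissivity, not any of the climate models discussed): each ensemble member samples the uncertain parameter, and the spread of the responses quantifies the induced uncertainty.

```python
import random

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0             # solar constant, W m^-2

def equilibrium_temp(albedo, emissivity=0.612):
    """Zero-dimensional energy balance: absorbed solar flux equals emitted
    longwave flux, S0*(1 - a)/4 = eps*sigma*T^4, solved for T (kelvin)."""
    return (S0 * (1.0 - albedo) / (4.0 * emissivity * SIGMA)) ** 0.25

def ensemble_spread(n=100, seed=0):
    """Perturbed-parameter ensemble: sample the (uncertain) planetary albedo
    and report the mean and standard deviation of the temperature response."""
    rng = random.Random(seed)
    temps = [equilibrium_temp(rng.gauss(0.30, 0.01)) for _ in range(n)]
    mean = sum(temps) / n
    std = (sum((t - mean) ** 2 for t in temps) / (n - 1)) ** 0.5
    return mean, std

mean_T, std_T = ensemble_spread()
```

For a full climate model each member is an expensive simulation rather than a one-line formula, which is why ensemble-based uncertainty quantification is one of the drivers of the HPC and data-management needs the poster emphasizes.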