The Applied Math department hosts seminars on subjects of general interest to both researchers and students in mathematics, science, and related fields. All are welcome to attend these talks. This page lists only upcoming seminars, but the events page contains a complete listing of all AMath events. To receive email notification of upcoming seminars, subscribe to the Amath-seminars mailing list.
Development of a discontinuous Galerkin coastal ocean model for tsunamis and storm surges
Frank Giraldo and Shiva Gopalakrishnan, Naval Postgraduate School
Tuesday, June 14, 2011, Guggenheim 415L, 3:00pm
There has been much interest recently in the modeling of tsunamis. Equally important is the modeling of storm surges caused by hurricanes. In this talk, we will describe our approach for building a tsunami/storm-surge model on unstructured triangular grids using the discontinuous Galerkin method. We present model verification results using simple 1D and 2D wetting and drying problems and validate our code using tide gauge data from the 2004 Indian Ocean tsunami. We will end the talk with our approach to building a model capable of simulating storm surges.
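For orientation, coastal models of this type are typically built on the two-dimensional nonlinear shallow water equations with a bathymetry source term; the abstract does not state the exact system used, so the following standard form is an assumption:

\[
\begin{aligned}
h_t + (hu)_x + (hv)_y &= 0,\\
(hu)_t + \bigl(hu^2 + \tfrac{1}{2}gh^2\bigr)_x + (huv)_y &= -g h\, b_x,\\
(hv)_t + (huv)_x + \bigl(hv^2 + \tfrac{1}{2}gh^2\bigr)_y &= -g h\, b_y,
\end{aligned}
\]

where h is the water depth, (u, v) the depth-averaged velocity, g the gravitational acceleration, and b the bathymetry. Wetting and drying corresponds to treating the limit h \to 0 robustly at the moving shoreline.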
How to Manage $60B Through a Credit Crisis and Not Get Fired
George Zinn, Microsoft
Friday, May 13, 2011, Loew Hall 202, 4:00pm
Adaptive Methods for Multiscale and Multiphysics Problems in Biophysics
Michael Holst, University of California, San Diego
Wednesday, May 11, 2011, Smith 404, 1:30pm
In this lecture, we give an overview of several related projects based at UCSD involving the design and analysis of high-resolution, high-fidelity mathematical and numerical modeling techniques for solvation and diffusion phenomena in biophysics. We first outline some new theoretical results on the robust numerical discretization of the Poisson-Boltzmann equation using adaptive finite element techniques. We then give an analysis of a coupled solvation-large-deformation nonlinear elasticity model, describe an iterative method for its simulation, and give some convergence results. We then consider reaction diffusion models such as the Poisson-Nernst-Planck (PNP) and Smoluchowski-Poisson-Boltzmann (SPB) models, and describe our recent work on the development of robust simulation tools using modern geometric modeling techniques and adaptive finite element methods.
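As background, the nonlinear Poisson-Boltzmann equation referenced above is commonly written in the following standard form (included here for orientation):

\[
-\nabla \cdot \bigl(\epsilon(x)\, \nabla u(x)\bigr) + \bar{\kappa}^2(x) \sinh u(x) = f(x),
\]

where u is the dimensionless electrostatic potential, \epsilon the spatially varying dielectric coefficient, \bar{\kappa}^2 the modified Debye-Huckel screening parameter, and f the fixed (singular) charge distribution of the biomolecule. The singular charges and the jump in \epsilon across the molecular surface are what make robust adaptive discretization nontrivial.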
Market Madness and Microstructure: The warnings in the data
Stephen F. Elston, Quantia Analytics, LLC
Friday, May 6, 2011, Loew Hall 202, 4:00pm
The flash crash of May 6, 2010 presented traders with nearly unprecedented market conditions. From about 14:40 to 15:00 EDT that day, the Dow Jones Industrial Average plunged nearly 1000 points and then quickly returned to nearly its original level. Within a matter of minutes, several trillion dollars of equity value was wiped out across an impressive list of large-cap stocks. This talk explores the market microstructure and the events leading up to the flash crash. Using the example of a single large-cap US stock, Accenture plc (symbol: ACN), we explore the following points: (1) a convergence of several microstructure conditions preceded the flash crash; (2) each of these microstructure conditions also occurs frequently under normal trading conditions, is detectable, and provides actionable information; (3) clean and normalized market data are required to understand market microstructure events; (4) in fragmented markets, clean market data and analytics are as important to successful electronic trading as low data latency. This is joint work with Dr. Michel G. Debiche.
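As a toy illustration of point (2), detectable microstructure conditions can be screened with simple rolling statistics. The sketch below is hypothetical: the column names (spread, trade_volume), window, and threshold are invented for the example and are not the speakers' actual detection criteria.

import pandas as pd

def flag_anomalies(ticks: pd.DataFrame, window: int = 600, z: float = 4.0) -> pd.DataFrame:
    # Flag ticks where a quantity deviates from its rolling mean by
    # more than z rolling standard deviations.
    out = ticks.copy()
    for col in ("spread", "trade_volume"):
        mu = out[col].rolling(window).mean()
        sd = out[col].rolling(window).std()
        out[col + "_alert"] = (out[col] - mu).abs() > z * sd
    return out

Real detection work would, as point (3) stresses, also depend on careful cleaning and normalization of the raw feed.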
Data-driven uncertainty quantification, stochastic data reduction and stochastic multiscale modeling for complex systems
Guang Lin, Pacific Northwest National Laboratory
Thursday, April 21, 2011, Guggenheim 218, 4:00pm
Uncertainty plays an important role in quantifying the performance of complex systems and needs to be treated as a core element in their modeling, simulation, and optimization. In this talk, a new formulation for stochastic data reduction and uncertainty quantification will be discussed, with extensions to different fields of mechanics and to dynamical systems. An integrated simulation framework will be presented that performs stochastic data reduction on large-scale noisy data sets, quantifies the uncertainties across scales, and establishes "error bars" in numerical simulations. In particular, stochastic formulations based on Galerkin and collocation versions of generalized Polynomial Chaos (gPC), adaptive ANOVA decomposition, stochastic proper orthogonal decomposition, and gPC-accelerated Kalman filter techniques will be discussed in some detail. Several uncertainty quantification applications in aerodynamic and power systems will be presented to illustrate the main idea of our approach. In catalytic reactor applications there is often a need to accurately model multiscale reactive transport across several orders of magnitude in space and time scales. A multiscale model in both time and space can overcome this difficulty and provide a unified description of reactive transport in a catalytic reactor from the nanoscale up to larger scales. We propose a multiscale formalism based upon a hybrid model that combines kinetic Monte Carlo (KMC) with a continuum model; thermal diffusion and mass transport of the different species are solved in the continuum model. A non-iterative coupling of the different-scale models will be presented, which makes the approach efficient and amenable to complex problems.
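To fix ideas, the generalized Polynomial Chaos representation underlying both the Galerkin and collocation versions expands the solution in polynomials that are orthonormal with respect to the distribution of the random inputs \xi (a standard formulation, stated here for context):

\[
u(x, t; \xi) \approx \sum_{k=0}^{P} \hat{u}_k(x, t)\, \Phi_k(\xi), \qquad \mathbb{E}\bigl[\Phi_j(\xi)\, \Phi_k(\xi)\bigr] = \delta_{jk}.
\]

The Galerkin version solves a coupled system for the coefficients \hat{u}_k, while the collocation version recovers them from independent deterministic solves at quadrature nodes in \xi-space.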
A stable second-order scheme for highly compressible magnetohydrodynamics
Knut Waagan, University of Maryland
Thursday, April 21, 2011, Guggenheim 415L, 1:30pm
Ideal magnetohydrodynamics (MHD) is a widely used fluid model for astrophysical plasma. The dynamics often involve shocks, large density fluctuations and vast scale ranges, making numerically stable simulations challenging. The ideal MHD model takes the form of a system of hyperbolic conservation (or balance) laws plus a restriction that the magnetic field has zero divergence. We present stable finite volume schemes for ideal MHD based on techniques of entropy stability, positivity and well-balancing. The schemes have been applied to chromospheric waves and interstellar turbulence.
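Schematically, ideal MHD takes the standard form (stated here for orientation):

\[
\frac{\partial U}{\partial t} + \nabla \cdot F(U) = S(U), \qquad U = (\rho,\ \rho\mathbf{u},\ \mathbf{B},\ E)^{\mathsf{T}}, \qquad \nabla \cdot \mathbf{B} = 0,
\]

where \rho is the density, \mathbf{u} the velocity, \mathbf{B} the magnetic field, and E the total energy; S(U) vanishes for pure conservation laws and carries, for example, gravitational terms in the balance-law case relevant to well-balancing. The divergence constraint, together with positivity of density and pressure, is what the finite volume scheme must maintain discretely.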
Calculating GUE random matrix distributions
Sheehan Olver, Oxford University
Thursday, April 14, 2011, Guggenheim 415L, 11:00am
In random matrix theory, the Gaussian unitary ensemble (GUE) plays a very important role in describing generic phenomena. In general, statistics for GUE can be expressed in terms of Fredholm determinants whose kernels depend on orthogonal polynomials with general weights. We can efficiently and accurately evaluate such orthogonal polynomials—and thus the associated Fredholm determinants—by expressing them as Riemann–Hilbert problems. This formulation depends on the equilibrium measure, which can itself be readily calculated using a straightforward Newton iteration.
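As a concrete, if much more pedestrian, illustration of the objects involved (plain quadrature rather than the Riemann-Hilbert formulation of the talk): the probability that bulk-scaled GUE has no eigenvalue in (-s, s) is the Fredholm determinant of the sine kernel on that interval, which a Nystrom-type discretization approximates in a few lines.

import numpy as np

def gap_probability(s, n=40):
    # Gauss-Legendre nodes/weights on (-1, 1), mapped to (-s, s).
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x, w = s * nodes, s * weights
    # Sine kernel K(x, y) = sin(pi (x - y)) / (pi (x - y));
    # np.sinc(t) is sin(pi t)/(pi t) with the correct limit at t = 0.
    K = np.sinc(np.subtract.outer(x, x))
    sw = np.sqrt(w)
    # det(I - K) on L^2(-s, s) ~ finite determinant at the nodes.
    return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])

print(gap_probability(0.5))  # P(no bulk-scaled eigenvalue in (-s, s))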
Geometric flow for biomolecular solvation
Nathan Baker, Pacific Northwest National Laboratory
Thursday, April 14, 2011, Guggenheim 415L, 10:00am
Implicit solvent models are important components of modern biomolecular simulation methodology due to their efficiency and dramatic reduction of dimensionality. However, such models are often constructed in an ad hoc manner with an arbitrary decomposition and specification of the polar and nonpolar components. In this talk, we review current implicit solvent models and suggest a new free energy functional which combines both polar and nonpolar solvation terms in a common self-consistent framework. Upon variation, this new free energy functional yields the traditional Poisson-Boltzmann equation as well as a new geometric flow equation. We describe numerical methods for solving these equations and comment on future research directions in this area.
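For orientation only (the talk's functional may differ in its details), geometric flows of this type generically evolve a level-set function S describing the solute-solvent interface by a mean-curvature-type law,

\[
\frac{\partial S}{\partial t} = |\nabla S| \left[ \nabla \cdot \left( \gamma\, \frac{\nabla S}{|\nabla S|} \right) + V \right],
\]

where \gamma is a surface tension coefficient and V collects the pressure, dispersion, and electrostatic terms produced by varying the coupled free energy functional; the interface relaxes until surface energy and the other solvation forces balance.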
Reflection of internal waves in the time domain
Alberto Scotti, University of North Carolina
Thursday, March 3, 2011, Applied Physics Building, 2:30pm
Numerical Modeling of Reactive Transport in Porous Media
Emily Ryan, Pacific Northwest National Laboratory
Thursday, February 24, 2011, Guggenheim 218, 4:00pm
The study of reactive transport in porous media applies to many areas of science and engineering, including problems in the energy sciences such as electrochemical devices, radioactive contaminants in the subsurface, and carbon capture technologies. The multi-physics of these systems occurs at various spatial and temporal scales, and understanding the systems requires computational modeling at each of these scales. In this talk I will discuss multi-scale modeling efforts to investigate high temperature fuel cells, contaminants in the subsurface, and post-combustion carbon capture technologies. In particular I will focus on a pore-scale model of reactive transport in porous media, which uses the smoothed particle hydrodynamics (SPH) method to discretely model reactive transport at the pore scale. SPH is a Lagrangian, particle-based method that uses the particles as interpolation points to discretize and solve the flow and transport equations in the porous media. Its Lagrangian framework allows for easy implementation of complex chemistry and physics at interfaces and can readily handle complex geometries such as the porous microstructures considered in this work. In the presentation I will discuss the application of the pore-scale model to two different reactive transport problems: one in the electrodes of high temperature fuel cells and the other in the porous subsurface. The pore-scale reactive transport model has been used to investigate degradation in the air electrode of high temperature fuel cells and to understand the physical mechanisms behind that degradation. The model has also been applied to the reactive transport of contaminants in the subsurface, such as hexavalent uranium at the Hanford site, where it has been used to investigate the effects of the Damköhler and Péclet numbers on reactive transport; in addition, a hybrid model in fractured porous media has been used to assess the accuracy of Darcy-scale models in predicting mass and species distributions in a reactive system.
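A minimal sketch of the SPH interpolation idea follows; this is a toy 1D illustration of how particles serve as interpolation points, not the reactive-transport code described in the talk.

import numpy as np

def cubic_spline_kernel(r, h):
    # Standard 1D cubic spline (M4) smoothing kernel.
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w

def sph_interpolate(x_eval, x_p, f_p, m_p, rho_p, h):
    # SPH interpolant: f(x) ~ sum_j (m_j / rho_j) f_j W(x - x_j, h).
    W = cubic_spline_kernel(x_eval[:, None] - x_p[None, :], h)
    return W @ (m_p / rho_p * f_p)

# Toy usage: reconstruct sin(2 pi x) from evenly spaced particles.
x_p = np.linspace(0.0, 1.0, 101)
dx = x_p[1] - x_p[0]
f_p = np.sin(2.0 * np.pi * x_p)
rho_p = np.ones_like(x_p)   # unit density for the toy problem
m_p = rho_p * dx            # particle mass = density * volume
print(sph_interpolate(np.array([0.25]), x_p, f_p, m_p, rho_p, h=2.0 * dx))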
Eulerian Codes, LANL
Bob Robey, Los Alamos National Laboratory
Tuesday, June 8, 2010, Guggenheim 218, 3:30pm
The Eulerian Codes group is just one of the many groups at Los Alamos National Laboratory that are facing significant technical challenges in their future work plans. For us, these challenges include exascale computing, improved fidelity of physics, new numerical methods, and innovative experimental work.
A key to successfully accomplishing the goals on our roadmap is the talent of the research scientists in our organization. For this we need top-notch scientists willing to tackle hard challenges.
Just as important is renewing our collaboration with the academic community in pursuit of breakthroughs in scientific capabilities.
Layered Ocean Circulation Models and Multiple Time Scales
Robert L. Higdon, Department of Mathematics, Oregon State University
Thursday, May 27, 2010, Guggenheim 218, 4:00pm
In a layered ocean model, the vertical coordinate is a quantity related to density, and a vertical discretization amounts to dividing the fluid into water masses having distinct physical properties. This talk will begin with an overview of this kind of model and will then describe some issues related to multiple time scales. In an ocean circulation model the fastest motions are typically external gravity waves, and it is commonplace to split the fast and slow motions into separate subsystems that are solved by different techniques. In connection with this time-splitting process, I will outline some work related to numerical stability, time-stepping, and conservation of mass, and I will also mention some issues with spatial discretization that have recently become apparent.
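The scale separation that motivates the splitting is visible in the characteristic wave speeds (standard estimates, included here for context):

\[
c_{\text{ext}} \approx \sqrt{gH}, \qquad c_{\text{int}} \approx \sqrt{g'\, h}, \qquad g' = g\, \frac{\Delta\rho}{\rho_0},
\]

where H is the full ocean depth, h a layer thickness, and \Delta\rho/\rho_0 the fractional density difference between layers. Since g' \ll g, external gravity waves travel at roughly 200 m/s in the deep ocean while internal motions move at a few m/s, so the fast barotropic subsystem is advanced with many short (or implicit) steps per long baroclinic step.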
What's Coming (Maybe) in High Performance Computing
Rob Schreiber, Exascale Computing Lab, HP Labs
Thursday, April 29, 2010, Guggenheim 220, 4:00pm
For a few years now we have been focused on the multicore processor and its impact on HPC. That has distracted us from some issues that I argue are more important: getting enough memory bandwidth, getting enough network bandwidth, getting enough storage bandwidth, tolerating hardware and software failures, and finding more productive programming environments.
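A quick machine-balance estimate (illustrative numbers, not taken from the talk) shows why bandwidth tops this list:

\[
\text{balance} = \frac{\text{memory bandwidth}}{\text{peak flop rate}} \approx \frac{100~\text{GB/s}}{1~\text{TF/s}} = 0.1~\text{bytes/flop},
\]

so a kernel needing even one byte of memory traffic per floating-point operation runs at roughly a tenth of peak on such a node, regardless of how many cores it has.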
We have recently completed a study of promising HPC research directions. In this talk, I will focus on memory, storage, interconnect, and programming, and discuss some new technologies, specifically photonics and non-volatile memory, that may help open some of the bandwidth bottlenecks we now encounter. I will also talk about new programming models that may make clusters less formidable for the programmer.