Research Homepage of Erik Schnetter

Welcome!

My research interests lie in computational science: using computers as tools to solve scientific and engineering problems. This requires not only correctly (and efficiently) implemented physics models, but also tools to build complete applications around these models.

I am a research staff member at the Perimeter Institute for Theoretical Physics in Waterloo (Canada), where I lead Research Technologies projects, and I am also an adjunct professor in the Department of Physics at the University of Guelph. In addition, I am affiliated with the Center for Computation & Technology at Louisiana State University in Baton Rouge (USA).

In high performance computing (HPC), I research ways to harness the computing power of current (and future) HPC systems and to make that power available to end users and programmers, so that these systems can be applied to solving scientific and engineering problems. Unfortunately, such systems are notoriously difficult to use, and their architectures and programming models change as hardware advances and becomes more powerful.

I use software frameworks as a vehicle to implement ideas, test them in realistic environments, and ensure they work together. Frameworks allow application scientists to create large, complex multi-physics applications by coupling independently developed modules. It is important to find abstractions that lead to an overall modular structure while permitting efficient couplings between modules, and to maintain clear boundaries between the application-science parts and the high-performance-computing parts.

In addition to the above, I have a long-standing interest in relativistic astrophysics, and I maintain close collaborations with researchers at Louisiana State University, Caltech, and the Albert-Einstein-Institut in Germany. In these collaborations I study compact objects such as black holes and neutron stars, as well as core-collapse supernovae.



Projects and Collaborations

Carpet is a mesh refinement infrastructure for the Cactus framework. It supports adaptive mesh refinement (AMR) and multiple grid patches, is parallelised using MPI, and runs on most existing computer architectures. It also supports hybrid parallelisation combining MPI and OpenMP, suited to modern multi-core architectures. Carpet is mature and scales to more than 10k processors.
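To illustrate the idea behind block-structured mesh refinement, here is a minimal one-dimensional sketch of prolongation (interpolating a coarse grid onto a finer patch) and restriction (copying fine-grid data back). This is illustrative only and reflects none of Carpet's actual API: a real AMR infrastructure additionally handles multiple dimensions, ghost zones, time interpolation, and MPI distribution.

```python
# Sketch of one block-structured mesh-refinement step in 1D.
# All function names and grid sizes here are hypothetical.

def make_coarse_grid(n, h):
    """Coarse grid: n points with spacing h, data initialised to x^2."""
    xs = [i * h for i in range(n)]
    return xs, [x * x for x in xs]

def prolong(coarse_xs, coarse_u, i0, i1, factor=2):
    """Create a fine patch covering coarse cells [i0, i1) by linear
    interpolation, with a refinement factor of 2."""
    fine_xs, fine_u = [], []
    for i in range(i0, i1):
        for j in range(factor):
            t = j / factor
            fine_xs.append(coarse_xs[i] + t * (coarse_xs[i + 1] - coarse_xs[i]))
            fine_u.append((1 - t) * coarse_u[i] + t * coarse_u[i + 1])
    return fine_xs, fine_u

def restrict(coarse_u, fine_u, i0, factor=2):
    """Copy fine-grid values back to the coincident coarse points."""
    for k in range(0, len(fine_u), factor):
        coarse_u[i0 + k // factor] = fine_u[k]

xs, u = make_coarse_grid(9, 0.25)   # coarse grid covering [0, 2]
fxs, fu = prolong(xs, u, 2, 6)      # refine the middle region
restrict(u, fu, 2)                  # synchronise fine data back
```

In a production code the prolongation and restriction operators are higher order and conservative, and patches at different refinement levels also advance with different time step sizes (subcycling in time).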

Kranc is an automated code generation system that creates complete Cactus modules from equations written in Mathematica. It expands tensorial expressions, discretises derivatives with high-order finite differences, and generates the code. Automated code generation is, in a certain sense, the equivalent of using libraries of efficient solvers, since no such libraries exist for explicit, stencil-based codes.
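As a small, hedged illustration of the kind of stencil such a generator emits (not Kranc's actual output), here is the standard fourth-order central finite difference for a first derivative, together with a convergence check:

```python
import math

def d1_4th(f, x, h):
    """Fourth-order-accurate central finite difference for f'(x):
    the standard 5-point stencil with coefficients (-1, 8, 0, -8, 1)/12h."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

# Convergence check: halving h should shrink the error roughly 16x (O(h^4)).
err_h  = abs(d1_4th(math.sin, 1.0, 0.10) - math.cos(1.0))
err_h2 = abs(d1_4th(math.sin, 1.0, 0.05) - math.cos(1.0))
order = math.log(err_h / err_h2, 2)   # measured convergence order, close to 4
```

Generating such stencils automatically, for every derivative of every tensor component in the evolution equations, is exactly the repetitive, error-prone work that code generation removes from the application scientist.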

The Einstein Toolkit is a collection of Cactus thorns (modules) for relativistic astrophysics, optimized and supported for numerical relativists studying the physics of black holes, neutron stars, and gravitational waves. The Cactus group at the CCT maintains a set of core thorns that ensure interoperability. Many people and groups worldwide have contributed to the Einstein Toolkit over the years; today it contains a set of high-quality evolution methods, initial data solvers, apparent and event horizon finders, and wave extraction methods.