What Rowan Learned From the NVIDIA Atomistic Simulation Summit

by Corin Wagen and Spencer Schneider · Oct 9, 2025

This week, we had the pleasure of attending the NVIDIA Atomistic Simulation Summit, a two-day event held at NVIDIA's headquarters in Santa Clara. The summit brought together about a hundred researchers and engineers working at the frontier of atomistic simulation, from application developers like us to library builders and people working on low-level compute hardware. In keeping with the current state of the field, a lot of talks discussed neural network potentials (NNPs): how to train them, how to efficiently run inference, and how they could be used to great effect in practical simulations.

Contemplating existence inside NVIDIA's Endeavor building.

There were talks on a wide variety of interesting topics. To name just a few:

(There were many other excellent talks, but a full list would make this blog post a bit too long.)

Rowan's Presentation

Rowan was also invited to give a presentation to the NVIDIA audience. We talked about our work using NNPs and other modern computational techniques to drive real-world impact for problems in drug discovery and materials science. Our perspective is a bit different from that of most attendees, since we work directly with many customers on interesting scientific problems, so we tried to bring some "field notes" on how these models are being used right now (and where they aren't).

Corin's talk at NVIDIA.

We also did our best to state what we felt were controversial or underrated truths in the field. Here are a few:

What We Learned

While we don't feel we can share any of the exciting unpublished results we saw (sorry), here are some of our higher-level takeaways from the conference.

  1. Bridging the gap between nanoscale and mesoscale is incredibly important. A number of talks discussed how to use atom-scale insight and featurization to model mesoscale phenomena like solid–liquid interfaces, complex polymers in solution, and gas–solid reactions. These systems often contain over 100,000 atoms and are complicated to prepare, simulate, and analyze. Still, that complexity is often unavoidable: the interactions between various mixture components and phases are part of what makes these systems interesting, and they cannot easily be reproduced in simpler model systems.

  2. Interoperability matters. Both for AI agents and for human scientists, being able to quickly and smoothly combine results from many different underlying packages and scientific paradigms is key to building practically useful models. The best science rarely comes from a single model. Instead, skilled practitioners understand the strengths and limits of different approaches and combine them to get state-of-the-art results. Since very few scientific codes prioritize interoperability, getting the engineering right can be very difficult.
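To make the interoperability point concrete, here's a minimal sketch of the kind of common interface that lets results from different backends be combined or swapped. The engine classes and the priority scheme are entirely hypothetical; no real package's API is implied.

```python
# Hypothetical sketch: two stand-in engines (a "NNP" and a classical force
# field, both faked with constant per-atom energies) exposed behind one
# common interface so a caller can mix, compare, or swap them freely.
from dataclasses import dataclass


@dataclass
class Result:
    energy: float  # eV, illustrative value only
    source: str    # which engine produced it


class NNPEngine:
    def compute(self, atoms):
        return Result(energy=-1.23 * len(atoms), source="nnp")


class ForceFieldEngine:
    def compute(self, atoms):
        return Result(energy=-1.10 * len(atoms), source="ff")


def best_estimate(atoms, engines):
    """Run every available engine and pick a result by a simple priority
    order (prefer the NNP when present, fall back to the force field)."""
    priority = {"nnp": 0, "ff": 1}
    results = [engine.compute(atoms) for engine in engines]
    return min(results, key=lambda r: priority[r.source])


atoms = ["C", "H", "H", "H", "H"]  # stand-in for a methane structure
r = best_estimate(atoms, [ForceFieldEngine(), NNPEngine()])
print(r.source, round(r.energy, 2))  # nnp -6.15
```

The point of the pattern is that the caller never sees engine-specific details; in practice, getting real codes to agree on even this much shared interface is where the engineering effort goes.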

  3. GPUs are getting bigger. Modern GPUs have enormous memory and parallel throughput, and many tasks struggle to use that capacity efficiently: running NNPs on small organic molecules, for instance, doesn't come close to saturating an H100 or H200. This leads us to our next point…

  4. Batching matters. GPUs are designed for parallel execution; when using a GPU, it's almost always more efficient to run calculations in parallel where possible. Unfortunately, this creates complex software-design challenges. Batching can happen at many levels: kernel batching, library-level batching, application-level batching, and so on. An MD code might be forced to decide whether to batch at a high level (running multiple replicates at once) or a lower level (batching neighbor-list construction or force evaluations), and these choices are decidedly non-trivial from an architecture perspective.
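As a toy illustration of application-level batching, the sketch below pads several small systems into one array so a single vectorized call replaces a loop of per-molecule calls. It uses plain NumPy, and the "energy function" is a placeholder, not a real NNP.

```python
# Toy sketch of application-level batching: pad variable-size "molecules"
# to a common atom count, then evaluate a stand-in per-atom energy in one
# vectorized call instead of one call per molecule.
import numpy as np


def pad_batch(coords_list):
    """Stack variable-size (n_i, 3) coordinate arrays into one padded
    (batch, n_max, 3) array plus a boolean mask marking real atoms."""
    n_max = max(len(c) for c in coords_list)
    batch = np.zeros((len(coords_list), n_max, 3))
    mask = np.zeros((len(coords_list), n_max), dtype=bool)
    for i, c in enumerate(coords_list):
        batch[i, : len(c)] = c
        mask[i, : len(c)] = True
    return batch, mask


def batched_energy(batch, mask):
    """Placeholder 'potential': sum of squared distances from the origin,
    counting only real (unmasked) atoms in each system."""
    per_atom = (batch ** 2).sum(axis=-1)   # (batch, n_max)
    return (per_atom * mask).sum(axis=-1)  # (batch,)


mols = [np.ones((2, 3)), np.ones((5, 3))]  # a 2-atom and a 5-atom system
batch, mask = pad_batch(mols)
print(batched_energy(batch, mask))  # [ 6. 15.]
```

Padding wastes some compute on masked-out atoms, which is exactly the kind of trade-off (padding vs. packing vs. per-system dispatch) that makes batched architectures non-trivial to design.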

  5. Integrating ML into C++ applications is difficult. There's no simple way to integrate a PyTorch model into an existing scientific package written in C++. TorchScript has historically been one of the more popular routes, but TorchScript is now deprecated, and TorchScript models cannot (as far as we know) take advantage of libraries like cuEquivariance. (Its replacement, torch.export(), is not yet stable enough for production usage.) Simple architectures can be rewritten wholesale in C++, but this quickly becomes burdensome. Calling Python from C++ can work, but it creates all sorts of communication headaches. To our knowledge, no satisfactory solution exists here.

  6. Many old scientific packages will have to adapt to the GPU age. Most of the venerable molecular dynamics and quantum chemistry packages were not written with modern hardware in mind. It can be very difficult to get legacy C++ or FORTRAN code to work well on new GPUs, and considerable effort is needed to migrate these code bases—in many cases, wholesale rewrites may prove easier. This is painful but likely necessary for the field to mature.

  7. The optimal architecture for NNPs remains unclear. There have been many contentious debates in the literature about NNP architectures: some authors use explicit long-range physical forces and rotational equivariance; others prefer to learn the physics implicitly through a highly expressive equivariant architecture; and still others allow models to learn equivariance from the data itself. Despite a plethora of studies, no clear conclusion about which approach is best yet exists. Recent work from Aditi Krishnapriyan suggests that a simple transformer can even suffice in place of a graph neural network (vide supra), a result we frankly find shocking. (Some of our team has written about these issues before: see Corin's blog post on long-range forces in NNPs.)
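To illustrate what rotational equivariance demands of a force model: rotating the input geometry should rotate the predicted forces identically. The NumPy sketch below checks this for a toy pairwise spring model, which satisfies the property by construction; a learned NNP must either build it in architecturally or learn it from data.

```python
# Equivariance check: for an equivariant force model F and rotation R,
# F(x @ R.T) should equal F(x) @ R.T. The toy pairwise model below is
# equivariant by construction; real NNPs are where the debate lies.
import numpy as np


def spring_forces(coords):
    """Toy pairwise model: each atom is pulled toward every other atom."""
    diff = coords[None, :, :] - coords[:, None, :]  # (n, n, 3) displacements
    return diff.sum(axis=1)                         # (n, 3) net forces


rng = np.random.default_rng(0)
coords = rng.normal(size=(4, 3))  # a random 4-atom geometry

# Build a random orthogonal (rotation/reflection) matrix via QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

forces_then_rotate = spring_forces(coords) @ q.T
rotate_then_forces = spring_forces(coords @ q.T)
print(np.allclose(forces_then_rotate, rotate_then_forces))  # True
```

Architectures that are not equivariant by construction would fail this test at initialization and must close the gap through training, which is one axis of the design debate above.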

We're happy to be a part of the NVIDIA ecosystem, and we're grateful for all the work that the NVIDIA team is doing to advance open science and help translate advances in computing hardware to the simulation ecosystem. Looking forward to future summits!
