Computational Mathematics and Machine Learning

Hybrid

1 - 5 November 2021

Venue: Lorentz Center@Oort


Machine learning using neural networks is revolutionizing our daily lives, for example by automating complex tasks such as speech recognition. It is also finding its way into the simulation of phenomena in physics, chemistry, astronomy and biology. For such scientific applications, a better understanding of neural networks is essential in order to construct efficient, tailor-made networks that exploit properties of the underlying scientific problems. The resulting deeper understanding of neural networks from a mathematical, physical and astronomical perspective is vital for future developments in this rapidly evolving field.

Neural network-based as well as tree-based machine learning (ML) has shown very impressive successes on a variety of tasks in traditional artificial intelligence, including classifying images, generating new images such as (fake) human faces, and playing sophisticated games such as Go. ML has also become a central tool for data analysis, with applications in, e.g., health and the environment, and it is promising for spatial and spatiotemporal modelling and uncertainty quantification. All of this is made possible by the ability of modern machine learning techniques to accurately approximate high-dimensional functions. This opens up new possibilities for addressing problems that suffer from the "curse of dimensionality" (CoD): as the dimensionality grows, the computational cost grows exponentially. The CoD has been an essential obstacle for the scientific community for a very long time.
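To make the CoD concrete: a tensor-product grid with n points per dimension contains n^d points in total, so the cost of any grid-based method grows exponentially in the dimension d. A back-of-the-envelope sketch (illustrative only, not from the workshop materials):

```python
# Curse of dimensionality: a tensor-product grid with n points per
# dimension contains n**d points in total.
def tensor_grid_size(n: int, d: int) -> int:
    """Number of nodes in a d-dimensional grid with n nodes per axis."""
    return n ** d

for d in (1, 3, 10, 100):
    print(f"d = {d:3d}: {tensor_grid_size(10, d):.3e} grid points")
# With just 10 points per axis, d = 100 already needs 1e100 nodes --
# more than the number of atoms in the observable universe.
```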

Take, for example, the problem of solving partial differential equations (PDEs) numerically. With traditional numerical methods such as finite difference, finite element and spectral methods, we can now routinely solve PDEs in three spatial dimensions and time. Most of the PDEs currently studied in computational mathematics belong to this category. Well-known examples include the Poisson equation, the Maxwell equations, the Euler equations, the Navier-Stokes equations, and the PDEs of linear elasticity. Sparse grids can extend our ability to handle PDEs to, say, 8 to 10 dimensions. This allows us to address problems such as the Boltzmann equation for simple molecules. But we are totally lost when faced with PDEs in higher dimensions. This makes it essentially impossible to solve the Fokker-Planck or Boltzmann equations for complex molecules, the many-body Schrödinger equation, or the Hamilton-Jacobi-Bellman equations for realistic control problems.
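For reference, here is what such a traditional method looks like in its simplest form: a second-order finite-difference discretization of the 1D Poisson problem -u'' = f on (0, 1) with homogeneous Dirichlet boundary conditions. This is a textbook sketch, not taken from the workshop materials:

```python
import numpy as np

# Second-order finite differences for -u''(x) = f(x) on (0, 1),
# with boundary conditions u(0) = u(1) = 0.
n = 99                      # number of interior grid points
h = 1.0 / (n + 1)           # mesh width
x = np.linspace(h, 1 - h, n)

# Tridiagonal matrix for (-u_{i-1} + 2 u_i - u_{i+1}) / h**2
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

f = np.pi**2 * np.sin(np.pi * x)   # right-hand side; exact solution sin(pi x)
u = np.linalg.solve(A, f)          # discrete solution

print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))  # O(h**2)
```

The cost of this approach is manageable in one, two or three dimensions, but the grid underlying it is exactly what the CoD rules out in high dimensions.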

This is exactly where machine learning can help. Indeed, machine learning-based numerical algorithms for solving high-dimensional PDEs and control problems have been one of the most exciting new developments in scientific computing in recent years, and this has opened up a host of new possibilities for computational mathematics.
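A key reason such methods can evade the CoD is that they replace grids by sampling. The following sketch uses the Feynman-Kac representation of the d-dimensional heat equation together with plain Monte Carlo; deep-BSDE-type neural solvers build on the same stochastic representation to handle nonlinear PDEs. The terminal condition g below is a hypothetical choice, picked only for illustration:

```python
import numpy as np

# Feynman-Kac / Monte Carlo for the d-dimensional heat equation
#   u_t + 0.5 * Laplacian(u) = 0,   u(T, x) = g(x),
# whose solution is u(t, x) = E[ g(x + W_{T-t}) ] with W a Brownian motion.
rng = np.random.default_rng(0)
d, T, n_samples = 100, 1.0, 200_000

def g(x):
    # hypothetical terminal condition, chosen only for illustration
    return np.exp(-np.sum(x**2, axis=-1) / (2 * d))

x0 = np.zeros(d)                                      # evaluate u(0, 0)
W_T = np.sqrt(T) * rng.standard_normal((n_samples, d))  # W_T ~ N(0, T*I)
u_mc = g(x0 + W_T).mean()
print(f"u(0, 0) in d = {d}: {u_mc:.4f}")
# Cost scales like n_samples * d -- linear, not exponential, in d. Deep
# BSDE-type neural solvers extend this sampling idea to nonlinear PDEs.
```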

Solving PDEs is just the tip of the iceberg. There are many other problems for which the CoD is the main obstacle, including the classical many-body problem, turbulence and multi-scale modelling. Can machine learning help for these problems? More generally, can we extend the success of machine learning beyond traditional AI?

Other challenges arise when starting from the area of computational science and engineering (CSE). Recently, the following two concepts have gained importance in computational science: (i) machine learning (in particular neural networks) and (ii) structure-preserving (mimetic or invariant-conserving) computing for mathematical models in physics, chemistry, astronomy, biology and more. While neural networks are very strong as high-dimensional universal function approximators, as argued above, they require enormous datasets for training and tend to perform poorly outside the range of the training data. Structure-preserving methods, on the other hand, are strong in providing accurate solutions to complex mathematical models from science.
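As a concrete illustration of what "structure-preserving" means, compare explicit Euler with symplectic Euler on the harmonic oscillator H(q, p) = (q² + p²)/2: the former lets the energy drift without bound, while the latter preserves the symplectic structure and keeps the energy bounded for all time. This is a standard textbook example, included here only for illustration:

```python
# Harmonic oscillator H(q, p) = 0.5 * (q**2 + p**2).
# Explicit Euler multiplies the energy by (1 + dt**2) every step,
# while symplectic Euler preserves a nearby "modified" energy.
dt, n_steps = 0.1, 1000
q_e, p_e = 1.0, 0.0          # explicit Euler state
q_s, p_s = 1.0, 0.0          # symplectic Euler state

for _ in range(n_steps):
    q_e, p_e = q_e + dt * p_e, p_e - dt * q_e   # explicit Euler step
    p_s = p_s - dt * q_s                        # symplectic Euler:
    q_s = q_s + dt * p_s                        #   update p, then q

H = lambda q, p: 0.5 * (q**2 + p**2)
print("explicit Euler:   H =", H(q_e, p_e))   # ~ 0.5 * (1 + dt**2)**n_steps
print("symplectic Euler: H =", H(q_s, p_s))   # stays close to 0.5
```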

The foregoing stimulates research to better understand neural networks, so as to enable the design of highly efficient, tailor-made neural networks built on top of, and interwoven with, structure-preserving properties of the underlying science problems that can serve as the simplified models mentioned above. This is largely unexplored terrain, and it will lead to novel types of machine learning that are much more effective and have a much lower need for abundant data. This is important, as neural network-based machine learning has also acquired the reputation of being a set of tricks rather than a set of systematic scientific principles. Its performance depends sensitively on the values of the hyperparameters, such as the network widths and depths, the initialization, the learning rates, etc. Indeed, just a few years ago, parameter tuning was considered very much an art. Even now, this is still the case for some tasks. A natural question is therefore: can we understand these subtleties and propose better machine learning models whose performance is more robust? An important development in this direction is Physics-Informed Neural Networks (PINNs), as proposed by Raissi, Perdikaris and Karniadakis.
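A minimal PINN-style sketch for the same 1D Poisson model problem as above: the network is trained to minimize the PDE residual at random collocation points plus a boundary penalty. This is a schematic PyTorch example of the general idea, not the formulation from any specific workshop contribution:

```python
import math
import torch

# PINN sketch for -u''(x) = pi**2 * sin(pi * x) on (0, 1), u(0) = u(1) = 0.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)       # random collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - math.pi**2 * torch.sin(math.pi * x)   # PDE residual
    xb = torch.tensor([[0.0], [1.0]])                # boundary points
    loss = (residual**2).mean() + (net(xb)**2).mean()  # residual + BC penalty
    opt.zero_grad(); loss.backward(); opt.step()

x_test = torch.linspace(0, 1, 101).unsqueeze(1)
err = (net(x_test) - torch.sin(math.pi * x_test)).abs().max()
print("max error:", err.item())
```

Note that, unlike the finite-difference scheme, nothing here depends on a grid: the collocation points are sampled, which is what makes the same recipe applicable in high dimensions.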

AIM

The aim of this workshop is to formulate a plan for future developments within the area of computational science and engineering (CSE) that make use of machine learning techniques. We will discuss the impact that machine learning has already made, or will make, on computational mathematics, and how ideas from computational mathematics, particularly numerical analysis, can be used to help understand and better formulate machine learning models. In the annex, the state of the art is described in more detail. The questions are: which research directions are most promising? What should we concentrate on? How can we combine physics-based and data-based techniques? Can we formulate joint projects? Or perhaps a joint organisation for the discussion and dissemination of new developments?

In this workshop, we will address the following two important questions: (1) How has machine learning already impacted, and how will it further impact, computational mathematics, scientific computing and computational science? (2) How can computational mathematics, particularly numerical analysis, impact machine learning? To accomplish the aforementioned aim, we will review in this workshop what has been learned on these two issues.

We will discuss some of the most important progress that has been made on these two issues, and where new developments should take place. This workshop will be considered a success if we have been able to put things into a perspective that helps to integrate machine learning with computational mathematics, and if we have produced, at the end of the workshop, a sound plan for future research directions in several of the areas mentioned in Section 4. We will identify the most promising research directions and networking activities, and build new collaborations between participants.


For more information, see the PDF "Computational Mathematics and Machine Learning" of the workshop under "Workshop files".


