Applied Computational Sciences (ACOS) symposium 2018

Photos

Click here for a photo gallery of the symposium of 2018. If for privacy reasons you would like a photo removed, you can send an email to acos@nwo.nl.

Symposium

The Dutch national research funding agency NWO, the Center for Computational Energy Research (CCER) and Shell are joining forces to initiate an annual Applied Computational Sciences (ACOS) symposium. The symposium will take place on 10 October 2018 at DIFFER, Eindhoven.

The goal of this annual symposium is to bring together the Dutch research community, both academic and industrial researchers, in applied computational science. The symposium is intended for all scientists developing and using computational methods to contribute to solving industrial and societal challenges, and is aimed at sharing and discussing developments in the field whilst building an enduring community of practice of Dutch applied computational science researchers.

Each year the ACOS symposium will focus on a single relevant theme. For 2018, the theme of the inaugural event will be Applied Machine Learning. This theme will sharpen our view of the rapidly evolving field of data-driven methods. It encompasses subjects as diverse as high-throughput screening in computational materials science, and surrogate models and machine learning for real-time processing and control.

Keynote speakers

The keynote speakers for ACOS18 are: Gábor Csányi (University of Cambridge), Laura Filion (Utrecht University), Bert Kappen (Radboud Universiteit Nijmegen) and Joeri van Leeuwen (ASTRON / University of Amsterdam).

Gábor Csányi

Gábor Csányi (University of Cambridge)

Gábor Csányi has a B.A. in mathematics from the University of Cambridge and a PhD in computational condensed matter physics from the Massachusetts Institute of Technology. After postdoctoral training under Mike Payne in the Cavendish Laboratory, he is now professor of molecular modelling in the Engineering Laboratory in Cambridge. His interests centre on computational methods for atomic-scale simulation. His early work was on concurrent multiscale (QM/MM) simulation of solids, with particular application to brittle fracture. More recently he has been one of the instigators of the use of machine learning methods in molecular simulation, particularly molecular dynamics. He is also interested in the computational statistical mechanics of materials, e.g. the prediction of phase diagrams from first principles. He is a recipient of the F. W. Bessel Award of the Humboldt Foundation.

Abstract

A new dawn of interatomic potentials
I will show our recent work on data-driven interatomic potentials. The goal of this research programme is to construct analytic functions that accurately reproduce the Born-Oppenheimer potential energy surface of condensed-phase materials. Much progress has been made by a growing number of groups over the last few years, mostly by borrowing approaches and attitudes from the field of machine learning, even though the mathematical context is rather different. We have published accurate potentials for carbon, silicon, boron, tungsten and iron that cover a wide range of atomic environments, and potentials for many other materials have been published by other groups. These potentials are beginning to be used in materials science applications.
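
To make the idea concrete, here is a minimal, hypothetical sketch (in Python, with scikit-learn) of a data-driven potential: a regression model that maps a fixed-length descriptor of an atomic structure to an energy, trained on reference energies. The descriptors and energies below are random placeholders standing in for real structural descriptors and DFT data; this is not the GAP code itself.

```python
# Minimal sketch of a data-driven interatomic potential (illustrative only;
# not the actual GAP implementation). Assumes each structure is summarized by
# a fixed-length descriptor vector and labelled with a reference DFT energy.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Placeholder training data: descriptors of atomic structures and their
# reference (e.g. DFT) energies. In practice these would come from a real
# descriptor such as SOAP evaluated on real structures.
X_train = rng.normal(size=(200, 10))       # 200 structures, 10-dim descriptor
y_train = np.sin(X_train).sum(axis=1)      # surrogate "energies" for the demo

# Gaussian-process regression: a smooth interpolation of the potential energy
# surface with a built-in noise model for the reference data.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X_train, y_train)

# Predict the energy (and an uncertainty estimate) for a new configuration.
X_new = rng.normal(size=(1, 10))
energy, std = gp.predict(X_new, return_std=True)
print(f"predicted energy: {energy[0]:.3f} +/- {std[0]:.3f}")
```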

Laura Filion

Laura Filion (Utrecht University)

Laura Filion has a master's degree in physics from McMaster University, Canada, and a PhD from Utrecht University, the Netherlands. After working as a postdoc at the University of Cambridge, UK, she moved back to Utrecht University, where she currently works as an assistant professor in soft condensed matter. Her research focuses on using classical statistical physics and computer simulations to examine the self-assembly of colloidal particles, both in and out of equilibrium. In particular, she has explored a range of entropy-driven phase transitions, developed new methods to predict crystal structures, examined the nucleation of crystalline phases, explored crystal defects in colloidal crystals, and studied motility-induced phase separation in active particles. Recently, she has started to explore how machine learning techniques can aid in the study of soft matter systems.

Abstract

Machine learning in soft matter science

Developments in machine learning have opened the door to entirely new methods for studying phase transitions, thanks to their ability to identify complex patterns in many-body systems extremely efficiently. Applications of machine learning techniques range from the use of neural networks to develop order parameters for complex crystal structures, to locating phase transitions in spin systems and pinpointing weak spots in colloidal glasses. The rapid emergence of multiple applications of machine learning in statistical mechanics and materials science demonstrates that these techniques are destined to become an important tool for soft matter physics. In this talk, I will briefly review some of the recent applications of machine learning to soft matter systems, and show how we have used supervised learning methods to develop order parameters for complex binary crystal phases.
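
As an illustration of the supervised-learning idea only (not the speaker's actual method), the sketch below trains a small neural-network classifier to map per-particle structural features, stand-ins for bond-order parameters, to a phase label, so that the predicted class probabilities can serve as a learned order parameter. All features and labels are synthetic placeholders.

```python
# Illustrative sketch: learn a per-particle order parameter by classifying
# local structural features into crystal phases. Features and labels are
# synthetic placeholders, not data from actual colloidal simulations.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Placeholder data: per-particle feature vectors (stand-ins for Steinhardt
# bond-order parameters q4, q6, ...) labelled by the phase they were sampled
# from (0 = fluid, 1 = FCC-like, 2 = BCC-like).
n_per_phase = 500
features, labels = [], []
for phase in range(3):
    centre = rng.normal(size=6)
    features.append(centre + 0.3 * rng.normal(size=(n_per_phase, 6)))
    labels.append(np.full(n_per_phase, phase))
X = np.vstack(features)
y = np.concatenate(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# The predicted class probabilities act as a learned, per-particle order
# parameter for distinguishing the phases.
print("test accuracy:", clf.score(X_test, y_test))
```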

Bert Kappen

Bert Kappen (Radboud Universiteit Nijmegen)

Bert Kappen completed his PhD in theoretical particle physics in 1987 at Rockefeller University in New York. From 1987 until 1989 he worked as a scientist at the Philips Research Laboratories in Eindhoven, the Netherlands. Since 1989 he has been conducting research on neural networks, Bayesian machine learning, stochastic control theory and computational neuroscience. His research has made significant contributions to approximate inference in machine learning using methods from statistical physics, and he has pioneered the field of path integral control methods for solving large non-linear stochastic optimal control problems and their relation to statistical physics. In 1997, his research was awarded the prestigious national PIONIER research subsidy from NWO/STW. He has been an associate professor at Radboud University since 1997 and a full professor since 2004. In 2005 he was Miller Visiting Professor at the University of California, Berkeley. Since 2009 he has been honorary faculty at UCL's Gatsby Computational Neuroscience Unit in London. In 1998 he co-founded the company Smart Research, which commercializes applications of neural networks and machine learning. Smart Research has developed forensic software for DNA matching used by the Dutch Forensic Institute (e.g. for the MH17 plane crash over Ukraine in 2014), Interpol, the Vietnamese government for the analysis of victims of the Vietnam war, and the Australian police force. He is director of the Dutch Foundation for Neural Networks (SNN), which coordinates research on neural networks and machine learning in the Netherlands through the Machine Learning Platform.

Abstract

Artificial Intelligence: the machine learning revolution
Over the last 10 years, artificial intelligence has been revolutionized by deep learning and modern computer architectures. In this talk I will present an overview of some of the activities in this field. I will then focus on the control problem: how a robot can learn to compute actions for complex tasks, and how neural networks can help here. Finally, I will discuss the problem of energy use by modern computer systems. Here, too, we can learn from the brain, since it is much more energy efficient. I will outline some current research to build neural circuits at the atomic level.

Joeri van Leeuwen

Joeri van Leeuwen (ASTRON / University of Amsterdam)

Joeri van Leeuwen is an astronomer at ASTRON, the Netherlands Institute for Radio Astronomy, and the Principal Investigator of one of the largest data generators in the Netherlands, the renewed Westerbork Radio Telescope. He investigates transient phenomena in the Universe, aiming to understand the gargantuan densities and magnetic fields of neutron stars, and the explosions that occur near pulsars and black holes. Van Leeuwen tackles these high-profile open questions through the design, execution and interpretation of dedicated radio-astronomical experiments, as shown by, for example, his six Nature/Science papers in the last five years. He has set up and carried out several transient searches that pushed the state of the art over the entire chain of instrumentation innovation: from improved front-end electronics to new algorithms for high-performance computing and for data-driven astronomy. His innovation and results have been recognized with a series of fellowships, grants and prizes, including the ERC Consolidator Grant and the NWO Vici grant.

Currently, van Leeuwen leads the five-year, all-sky survey with Apertif, the new wide-field, high-speed radio cameras that are revolutionizing astronomy. His hybrid FPGA-GPU supercomputer automatically alerts the global community, in real time, when a cosmic explosion goes off.

Abstract

Real-time Machine Learning in Astronomy
Astronomy studies a Universe that is vast and mostly empty, yet still contains highly interesting objects. Very similarly, astronomical datasets are massive and sparse, but sprinkled with interesting bits. Finding those significant signals continues to be the challenge. This is especially the case when searching for the bright, millisecond-duration radio pulses that were recently discovered to appear all over the sky. Some of these are emitted by pulsars in our Galaxy; others must be broadcast by enigmatic, exceedingly powerful sources, detectable out as far as halfway to the edge of the visible Universe. Finding such short, bursty signals at very low signal-to-noise, and distinguishing them from man-made radio frequency interference, is currently the main problem in increasing our understanding of the underlying sources. We use a range of machine learning algorithms designed to identify astrophysical signals in a strong interference environment, optimized to process the large data volumes generated by the new generation of aperture- and focal-plane-array radio telescopes such as Apertif and LOFAR. We find cases in which ensemble classifiers comprised of decision trees work fast, and well. In other cases, a deep learning network running on our GPU supercomputer shows better recall and fewer false positives. These algorithms currently power our survey for pulsars and fast astronomical transients with Apertif, sifting 24/7, in real time, through more data than the entire internet of the Netherlands.
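
As an illustrative sketch of the decision-tree-ensemble step only, the code below trains a random forest to separate candidate pulses from interference using a handful of invented candidate features. It shows the general technique; the real pipeline, its features and its data are not reproduced here.

```python
# Illustrative sketch of an ensemble (decision-tree) classifier separating
# candidate astrophysical pulses from radio-frequency interference (RFI).
# The candidate features and labels are invented placeholders, not real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Placeholder candidate features, e.g. signal-to-noise ratio, pulse width,
# dispersion measure, and a spectral-flatness statistic.
n_candidates = 2000
X = rng.normal(size=(n_candidates, 4))
# Synthetic rule standing in for real labels: "astrophysical" if the first
# two features are jointly large, otherwise "RFI".
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# An ensemble of decision trees is fast to evaluate, which makes it suitable
# for real-time triage of large candidate streams.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```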

Contributed talks

Jeannot Trampert
Rapid probabilistic seismic source inversion using machine learning
Determining seismic sources rapidly is an essential component of earthquake early warning (EEW). Ideally, this characterisation should be probabilistic and use a physically complete theoretical framework. However, implementing these ideals with standard techniques carries significant computational costs, making it infeasible for EEW applications: conventional probabilistic inversions involve running many thousands of forward simulations after the data have been obtained (posterior sampling), so all computational costs are incurred after the event has occurred. Here, we demonstrate a new approach, based on prior sampling, which is feasible for EEW applications. All forward simulations are conducted in advance, and a neural network algorithm is used to assimilate information about the relationship between source and data. Once observations from an earthquake become available, they can be used to infer probability density functions for the seismic source parameters within milliseconds.
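
To illustrate the prior-sampling idea (not the authors' implementation), the toy sketch below runs a made-up forward model over samples from the prior ahead of time, trains a neural network to map synthetic observations back to source parameters, and then inverts a new observation with a single cheap forward pass. For brevity the sketch returns point estimates rather than the full probability density functions described above.

```python
# Toy sketch of "prior sampling" for rapid source inversion: all expensive
# forward simulations happen offline; inversion at event time is one cheap
# network evaluation. The forward model below is an invented stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def toy_forward_model(sources):
    """Placeholder for an expensive physics-based simulation:
    maps source parameters to synthetic waveform features."""
    return np.column_stack([np.sin(sources[:, :1] * 3) + sources[:, 1:2],
                            np.cos(sources[:, 1:2] * 2) - sources[:, :1],
                            sources.sum(axis=1, keepdims=True)])

# Offline phase: sample sources from the prior and simulate their data.
sources = rng.uniform(-1, 1, size=(5000, 2))   # e.g. location, magnitude
data = toy_forward_model(sources) + 0.05 * rng.normal(size=(5000, 3))

# Train the inverse map data -> source parameters; this cost is incurred
# before any earthquake occurs.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(data, sources)

# Online phase: a new observation is inverted in milliseconds.
new_obs = toy_forward_model(np.array([[0.2, -0.4]]))
print("estimated source parameters:", net.predict(new_obs))
```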


Jonathan Citrin
Fast and accurate fusion plasma turbulence modelling with neural networks
Plasma energy losses due to turbulent transport in magnetically confined fusion plasmas are one of the limiting factors for achieving viable fusion energy. Optimisation of reactor design demands predictive turbulence calculations that are both accurate and tractable. We outline a pathway to circumvent these conflicting constraints and develop a fast first-principles-based turbulence model. This pathway comprises a calculation hierarchy: high-fidelity nonlinear gyrokinetic simulation, validated by experiments; a reduced (quasilinear) turbulence model, itself verified against nonlinear gyrokinetics; and the application of neural networks to "learn" the reduced turbulence model. HPC-scale computing is applied to generate extensive datasets of reduced-model predictions. Regression with feed-forward neural networks trained on these datasets then provides a surrogate model of similar accuracy that is orders of magnitude faster, greatly facilitating scenario optimisation and control-oriented applications.
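
A schematic sketch of the final, surrogate-model step (an assumed setup, not the speakers' implementation): a feed-forward neural network is fitted to a precomputed dataset of reduced-model outputs, after which the turbulent flux can be evaluated at a tiny fraction of the original cost. The inputs and the synthetic "flux" below are placeholders.

```python
# Schematic sketch of a neural-network surrogate for a reduced turbulence
# model. The dataset here is synthetic; a real dataset would come from large
# HPC scans of the quasilinear model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Placeholder dataset: dimensionless plasma parameters (inputs) and the
# turbulent heat flux predicted by the reduced model (output), with a simple
# threshold behaviour standing in for the turbulence onset.
X = rng.uniform(0, 1, size=(20000, 5))
y = np.maximum(0.0, X[:, 0] * 10 - 3) * (1 + X[:, 1])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         early_stopping=True, random_state=0)
surrogate.fit(X_train, y_train)

# The trained surrogate reproduces the reduced model as a cheap function
# call, which is what enables scenario optimisation and real-time control.
print("R^2 on held-out data:", surrogate.score(X_test, y_test))
```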

Paul Gelderblom
Applying machine learning in Shell
The oil and gas industry is seeing unprecedented data growth. Data sets (for example in seismic monitoring) are reaching petabyte scale, and in a few years, with IoT-based sensor networks, our data sets will reach the exabyte level. Most of the current semi-automated workflows do not scale to this size: we need machine learning to use the new data in a meaningful way. Often, machine-learning-based workflows will replace current workflows that are based on a combination of human interpretation and physics-based models. We will show an example where deep learning is used to accelerate seismic interpretation. In other cases, new, valuable information is created from existing data sources. We will show how, in our Predictive Maintenance workflow, we can now predict the failure of a pump or valve before the event, which avoids expensive unplanned maintenance and plant shutdowns. The monitoring data was mostly already there, but it was not combined and the volume was overwhelming.
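
As a generic illustration of the predictive-maintenance idea only (not Shell's workflow), the sketch below trains a gradient-boosted classifier to flag an upcoming equipment failure from features extracted from monitoring windows. Features, labels and thresholds are invented placeholders.

```python
# Generic illustration of predictive maintenance: learn to flag an upcoming
# pump/valve failure from windows of sensor readings. All features and labels
# are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(5)

# Placeholder features per monitoring window, e.g. mean vibration, vibration
# variance, bearing temperature, discharge pressure.
n_windows = 5000
X = rng.normal(size=(n_windows, 4))
# Synthetic ground truth: "failure soon" when vibration variance and
# temperature are both elevated.
y = ((X[:, 1] > 0.8) & (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```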

Deadlines and registration

  • Abstracts for pitches/posters and contributed talks can no longer be submitted; the deadline was 10 September, 12.00 hrs.
  • Participant registration: due to the large number of participants, registration was closed early. It is no longer possible to register.

Programme Committee

The programme committee consists of:
Vianney Koelman - TU/e / CCER, chair
Peter Bobbert - TU/e
Suleyman Er - DIFFER
Johan Mentink - RU
Shuxia Tao - TU/e
Federico Toschi - TU/e
Jaap van der Vegt - UT

Costs

ACOS18 attendance is free of charge. However, registration is required. If needed, a cap will be applied to the maximum number of registrations.

Location

ACOS18 will take place at the Dutch Institute for Fundamental Energy Research (DIFFER) in Eindhoven, the Netherlands.
Address: De Zaale 20, 5612 AJ, Eindhoven.
Website: www.differ.nl

More information

For further information, please contact the symposium organizers via acos@nwo.nl.


Contact

dr. Maria Sovago | +31 (0)30 600 13 68 | +31 (0)61 489 36 72 | m.sovago@nwo.nl