Book of Abstracts
Incompressible magnetohydrodynamics (MHD) models are relevant for low Lundquist number liquid metals, high Lundquist number, large guide field fusion plasmas, and low Mach number compressible flows. Due to the complexity of these flows, simulation is crucial for understanding the dynamics of electrically conducting fluids in the presence of electromagnetic fields. In this work, we propose an embedded-hybridized discontinuous Galerkin (E-HDG) method for solving the incompressible resistive MHD equations.
In particular, the E-HDG method is computationally cheaper than the corresponding HDG method. The benefit is even more significant in the three-dimensional setting. Furthermore, a specific choice of the approximation spaces guarantees that the proposed method is H(div)-conforming, meaning that the velocity and magnetic fields are pointwise divergence-free. We implement our approach in the open-source library MFEM and validate our method using manufactured solutions. The results indicate that the convergence rates in the L^2 norm for the velocity and magnetic fields are optimal, and the divergence error can be reduced to machine zero. Furthermore, the proposed E-HDG discretization is pressure robust.
Algebraically stabilized finite element discretizations of scalar steady-state convection-diffusion-reaction equations often provide accurate approximate solutions satisfying the discrete maximum principle (DMP). However, it was observed that a deterioration of the accuracy and convergence rates may occur for some problems if meshes without local symmetries are used. We investigate these phenomena both numerically and analytically, and the findings are used to design a new algebraic stabilization called the Symmetrized Monotone Upwind-type Algebraically Stabilized (SMUAS) method. The SMUAS method is linearity preserving and satisfies the DMP on arbitrary simplicial meshes. Moreover, we present numerical results indicating that the SMUAS method leads to optimal convergence rates on general simplicial meshes.
Biot's model represents the coupling between mechanical deformation and fluid flow in a porous medium. The numerical simulation of this multiphysics problem has become a topic of increasing importance due to its wide range of applications. Thus, appropriate discretization techniques and efficient solution methods have to be investigated.
In particular, nonphysical oscillations might appear in the numerical solution of Biot's model when standard finite element discretizations are used. In this work, we address the issue of these non-physical oscillations by proposing a new stabilization scheme. The new scheme allows us to iterate the fluid and mechanics problems in a fashion similar to the well-known fixed-stress split method. We will present numerical results illustrating the robust behavior of both the stabilization and the iterative solver with respect to the physical and discretization parameters of the model.
Despite significant recent advancements, the understanding of 3D glioblastoma invasion patterns in the brain remains incomplete. A particular role in the collective migration of the glioblastoma cells is played by the distribution of both major brain fibres and collagen fibres present at the tumour site. To address this aspect, in this talk we present our recent advances in this direction, focusing on the development of a 3D multiscale moving-boundary modelling and computational framework for glioblastoma invasion.
In this talk we discuss some issues related to root water uptake in the unsaturated water flow framework.
First we discuss different optimal control strategies for enhancing the efficiency of water consumption while at the same time ensuring suitable root water uptake and minimizing water percolation beneath the root zone. The characterization of the optimal control in terms of a suitable relation with the adjoint state of the optimality conditions is then used to develop numerical simulations in different hydrological settings.
Afterwards, we focus on a memory term introduced to account for water stress in the root water uptake function of Richards' equation.
In this work, we derive a model for a deformable porous medium with a growing interface and with phase change, to model eco-hydro-mechanical problems in which there is a continuous deposition of porous substrate on the surface and simultaneous decay and phase change between solid and fluid. The model is then simplified for one-dimensional scenarios, or in multiple dimensions under small deformations, leading to a tractable set of equations. The time and length scales of the problem are discussed, and its limiting behaviour is explored with the help of numerical simulations. Applications to environmental and manufacturing problems are discussed together with an overview of the split/partitioned numerical discretisation procedure for this coupled problem.
In this work, we investigate numerical methods for nonlinear parabolic equations coupled to other ordinary differential equations and/or evolution problems. The diffusion coefficient of the first equation is allowed to vanish (degenerate diffusion) at zero concentration and to become singular at full concentration. Such systems model many different physical situations, from the flow of multiple fluids in porous media (possibly compressible or even hysteretic) and permafrost melting to the nutrient-dependent growth of biofilms. We first propose an implicit (semi) time-discretization for the system which is well-posed, consistent, positivity preserving, and satisfies a maximum principle. Then an iterative linearization strategy is proposed to solve the nonlinear time-discrete problems which combines the features of the Newton method and the L-scheme, i.e., a modified L-scheme. The linearization scheme is shown to be globally convergent (even in degenerate and singular cases) provided a Hölder continuity result holds for the time-discrete solution. Moreover, it converges linearly in the non-degenerate case with a contraction rate proportional to a power of the time-step size. Finally, guaranteed a posteriori error estimators are derived that enable us to choose optimal parameters in each time-step and iteration, making the convergence of the scheme fast and robust. Numerical results are presented corroborating our claims. The performance of the combined scheme is compared to more conventional schemes, which reveals that it is significantly more stable than the Newton method and of comparable speed.
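For orientation, a schematic form of such an iteration (the notation here is illustrative and not taken verbatim from the underlying work): for a time-discrete problem of the form b(u^n) - \tau \nabla\cdot(D(u^n)\nabla u^n) = b(u^{n-1}) + \tau f, one linearization step reads
\[ L_i\,(u^{n,i+1} - u^{n,i}) + b(u^{n,i}) - \tau\,\nabla\cdot\bigl(D(u^{n,i})\,\nabla u^{n,i+1}\bigr) = b(u^{n-1}) + \tau f, \]
where the constant choice L_i = L \ge \sup b' recovers the classical L-scheme, while letting L_i follow b'(u^{n,i}) up to a time-step-dependent perturbation moves the iteration towards the Newton method; the contraction rate then scales with a power of the time-step size, as stated above.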
Sugar maple trees have an astounding ability to generate a large positive stem pressure in response to daily freeze/thaw cycles during the maple syrup harvest season. I will present a mathematical model that resolves a long-standing controversy in the tree biology literature over precisely which bio-physical mechanisms are responsible for this phenomenon. Our model incorporates special features of the cellular microstructure of maple sapwood and treats the sap as a two-phase (gas + liquid) mixture whose dynamics are governed by the combined effects of porous media flow, phase change, gas dissolution and osmosis. The governing differential equations on the microscale are then averaged through the process of periodic homogenization to obtain a macroscale equation for heat transport within the tree stem. Numerical simulations of this multi-scale model are then compared with experimental measurements, demonstrating that we are able to capture the essential features of pressure variations observed in actual maple trees.
We present a numerical scheme for a stochastic Stefan problem. After briefly discussing the convergence result, we tackle the question of efficient strategies for solving the nonlinear equation associated with this scheme. We consider several approaches, from a standard Newton technique to linearised solvers. The advantage of the latter is that they use the same matrix for each time step, each nonlinear iteration and each realisation of the Brownian motion. As a consequence, the system can be factorised once and for all and quickly solved at each iteration. The drawback is slower convergence. We compare the computational efficiency of each solver, in the deterministic and stochastic cases. We also discuss optimal choices of the linearisation parameters.
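To illustrate the computational advantage of the linearised solvers, here is a minimal sketch (hypothetical SciPy code, not the authors' implementation; the matrix A and the right-hand-side routine stand in for the actual Stefan discretisation): the system matrix is factorised once and the factorisation is reused across time steps, nonlinear iterations and Brownian realisations, whereas a Newton solver would reassemble and refactorise its Jacobian at every iteration.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def linearised_solver(A, rhs_fun, u0, n_steps, n_iter, L=1.0):
        """Fixed-matrix (L-scheme-type) iteration: factorise L*I + A once, reuse it."""
        n = A.shape[0]
        M = sp.identity(n, format="csc")
        lu = spla.splu((L * M + A).tocsc())   # single factorisation
        u = u0.copy()
        for step in range(n_steps):
            for it in range(n_iter):
                b = rhs_fun(step, u) + L * u  # linearised right-hand side
                u = lu.solve(b)               # only cheap triangular solves per iteration
        return u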
This is a joint work of Jerome Droniou, Muhammad Awais Khan and Sorin Pop.
The basic idea of two-dimensional image segmentation is to minimize an energy defined on plane curves.
The gradient flow method is a typical approach to achieving this minimization.
In such a flow, the gradient generally depends on the chosen norm, and taking the L^2-norm is the standard choice. Recently, a Sobolev gradient flow has also been studied. In this talk, we report a comparison of several image segmentation techniques and apply them to track a flame/smoldering front in combustion.
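As background for this comparison (a standard relation, not a result specific to the talk): if g denotes the L^2 gradient of the segmentation energy, the corresponding Sobolev (H^1-type) gradient g_S is obtained by solving
\[ (I - \lambda\,\Delta)\, g_S = g, \qquad \lambda > 0, \]
so the Sobolev gradient flow can be viewed as a preconditioned, smoother version of the standard L^2 gradient flow.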
We consider a nonlinear poroelasticity model that describes the quasi-static mechanical behaviour of a fluid-saturated porous medium whose permeability depends on the divergence of the displacement. Such nonlinear models are typically used to study biological structures like tissues, organs, cartilage and bones, which are known for a nonlinear dependence of their permeability/hydraulic conductivity on solid dilation.
We formulate the fixed-stress split method for the iterative solution of the coupled problem under investigation and prove its linear convergence for sufficiently small time steps under the assumption that the hydraulic conductivity is a strictly positive, bounded and Lipschitz-continuous function.
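For orientation, a schematic form of the fixed-stress iteration for the classical linear Biot system (the nonlinear-permeability variant analysed in the talk modifies this accordingly): at each time step, given the k-th iterate, one first solves the flow problem with an added stabilisation term,
\[ \frac{1}{M}\,\partial_t p^{k+1} + \beta\,\partial_t\bigl(p^{k+1} - p^{k}\bigr) + \alpha\,\partial_t\bigl(\nabla\cdot u^{k}\bigr) - \nabla\cdot\bigl(\kappa\,\nabla p^{k+1}\bigr) = g, \]
and then the mechanics problem -\nabla\cdot\sigma(u^{k+1}) + \alpha\,\nabla p^{k+1} = f with the updated pressure; the stabilisation parameter \beta is classically chosen proportional to \alpha^2/K_dr, with K_dr the drained bulk modulus.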
The Hamilton-Jacobi-Bellman equation arising from the optimal portfolio selection problem is studied by means of the maximal monotone operator method. The existence and uniqueness of a solution to the Cauchy problem for the nonlinear parabolic partial integro-differential equation in an abstract setting are investigated by using the Banach fixed-point theorem, the Fourier transform, and the monotone operators technique.
We propose a level set method to reconstruct unknown surfaces from a point cloud, without assuming that the connections between points are known. The formulation of the problem follows the variational one described in [3] and the numerical method is based on the Semi-Lagrangian scheme of E. Carlini and R. Ferretti presented in [1]. The main novelty of this work consists in the use of a local multi-linear interpolator for the update of the solution, instead of a global one, with the aim of saving on the computational cost. To ensure a good behaviour of the solution and finally obtain a signed distance function as a result, a reinitialization step based on [2] is performed. Special attention has been paid to the computational cost of the algorithm. Localization and fast algorithms are developed, resulting in faster reconstruction and thus the opportunity to easily improve the resolution. Numerical tests in two and three dimensions are presented to show how the model recovers concave features and details, depending on the resolution and on the balancing between the advective term and the curvature constraint.
This is joint work with Matteo Semplice.
References
[1] Carlini, E., Ferretti, R.: A semi-Lagrangian scheme with radial basis approximation for surface reconstruction. Comput. Vis. Sci. (2017)
[2] Hartmann, D., Meinke, M., Schröder, W.: Differential equation based constrained reinitialization for level set methods. J. Comput. Phys. (2008)
[3] Zhao, H.K., Osher, S., Merriman, B., Kang, M.: Implicit and nonparametric shape reconstruction from unorganized data using a variational level set method. Comp. Vision and Image Understanding (2000)
The seminar builds on the work carried out initially with several collaborators (Y. Gorsse, A. De Brauer, E. Abbate, G. Puppo, A. Fondanèche, M. Bergmann) in which the aim was essentially to design numerical schemes for multimaterial interfaces, first for high Mach numbers and then in the incompressible limit. The central idea of these works was to combine an ad hoc numerical model discretized on a Cartesian or hierarchical mesh with an implicitly modeled interface. In the above-mentioned succession of works, it has been possible to move from explicit to linear implicit integration methods that overcome the limitation on time scales imposed by acoustic waves, at the cost of introducing auxiliary integration variables. In this perspective, I will present recent work in collaboration with A. Thomann and G. Puppo on the possibility of further simplifying multimaterial models in arbitrary Mach regimes through a semi-implicit approach and a relaxation that does not require the use of auxiliary variables.
Integrodifferential problems arise in many practical situations where the solution of the governing PDE depends on its history. This is usually modelled by a Volterra operator. Such operators also include time-fractional derivatives, which have been very much in fashion in recent decades. Variable-order fractional calculus represents a new direction of research. There are still many open questions in this area, because the use of known techniques is very restricted. We try to point out the main difficulties and we also state some open problems.
We present a new kind of weighted essentially nonoscillatory (WENO) finite element schemes for hyperbolic conservation laws and systems. In contrast to WENO-based slope limiting for discontinuous Galerkin methods, our approach uses a reconstruction-based smoothness sensor to blend the numerical viscosity operators of high- and low-order stabilization terms. The so-defined hybrid approximation introduces low-order nonlinear diffusion in the vicinity of shocks, while preserving the high-order accuracy of the baseline discretization in regions where the exact solution is smooth. The underlying reconstruction procedure performs Hermite interpolation on stencils consisting of a mesh cell and its von Neumann neighbors. The amount of numerical dissipation depends on the relative differences between partial derivatives of reconstructed candidate polynomials and those of the consistent finite element approximation. All derivatives are taken into account by the employed smoothness sensor. For finite element approximations using polynomials of degree p, we prove that the consistency error of the proposed nonlinear stabilization is of order p + 1/2. This estimate is optimal for general meshes. For uniform meshes and smooth exact solutions, the experimentally observed rate of convergence is as high as p + 1.
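Schematically (with notation introduced here only for illustration), the hybrid stabilisation acts on each cell e through a blended viscosity operator
\[ d_e = \gamma_e\, d_e^{\mathrm{low}} + (1 - \gamma_e)\, d_e^{\mathrm{high}}, \qquad \gamma_e \in [0,1], \]
where the reconstruction-based smoothness sensor drives \gamma_e towards 1 near shocks, activating the low-order nonlinear diffusion, and towards 0 in smooth regions, where the high-order baseline discretization is retained.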
This is joint work with Joshua Vedral (TU Dortmund University).
We consider the approximation of the problem -\nabla\cdot(\Lambda\nabla u) = f with homogeneous Dirichlet boundary conditions, a tensor field \Lambda bounded from below and above (without further regularity), and a right-hand side f being a measure or belonging to L^1. Since a weak solution to the continuous problem is not unique in general, an entropy sense is considered. We first review the theoretical and numerical results available in the literature, with a particular discussion of the choice between two-point flux approximations on particular grids and more general variational approximations on simplicial grids. We then show how nonlinear approximations enjoy certain convergence properties. The most accurate results are nevertheless obtained with the schemes for which the convergence properties are only weak.
This is joint work with Dmitri Kuzmin (TU Dortmund University) and Hennes Hajduk (University of Oslo).
The algebraic flux correction (AFC) schemes presented in this work constrain a standard continuous finite element discretization of a nonlinear hyperbolic problem to satisfy relevant maximum principles and entropy stability conditions. The desired properties are enforced by applying a limiter to antidiffusive fluxes that represent the difference between the high-order baseline scheme and a property-preserving approximation of Lax--Friedrichs type. In the first step of the limiting procedure, the given target fluxes are adjusted in a way that guarantees preservation of local and/or global bounds. In the second step, additional limiting is performed, if necessary, to ensure the validity of fully discrete and/or semi-discrete entropy inequalities. The limiter-based entropy fixes considered in this work are applicable to finite element discretizations of scalar hyperbolic equations and systems alike. The underlying inequality constraints are formulated using Tadmor's entropy stability theory.
The proposed limiters impose entropy-conservative or entropy-dissipative bounds on the rate of entropy production by antidiffusive fluxes and Runge--Kutta (RK) time discretizations. Two versions of the fully discrete entropy fix are developed for this purpose. To motivate the use of limiter-based entropy fixes, we prove a finite element version of the Lax--Wendroff theorem and perform numerical studies for standard test problems. In our numerical experiments, entropy-dissipative schemes converge to correct weak solutions of scalar conservation laws, of the Euler equations, and of the shallow water equations.
This contribution discusses introducing child-related benefits into pension system models and their advantages and disadvantages. The model with child-related pension benefits dependent on the average wage is examined concerning the effects of the child factor on individual fertility and private savings. Subsequently, we estimate the size of the child factor in the current setting of Slovakia's pension system and several other alternatives. Finally, the optimal setting of the above pension system model is presented and compared with the presented alternatives. We show that the current setting of the pension system can be brought closer to the optimum by, for example, more generous awarding of personal wage points for raising children.
The presentation describes the modeling of freezing and thawing in a fully saturated porous medium with a general geometry of pores. The research is motivated by climate change inducing thawing of permanently frozen land with further environmental impact. During the phase transition the grains remain intact but participate in the heat transfer and in the mechanical interaction due to the difference in specific volumes of the liquid and solid phases. Freezing and thawing inside the porous medium are accompanied by complex processes affected by the material composition, micro-scale interfaces between phases within the medium, bulk properties of the phases present, and ambient physical conditions. The volumetric change of the liquid present in pores subjected to phase change conditions is a crucial phenomenon. We treat the phase transition at the microscale to reflect the generic inhomogeneity of the volume occupied by the freezing porous medium. The micro-scale model describes mechanical, thermal, and phase change processes within a small sample of a porous medium. The phase change is described in the Lagrangian framework by means of the energy, Navier, and phase-field equations. The current distribution of the phases is obtained by the phase-field equation coupled to the multi-physics of the problem. The model provides the spatio-temporal distribution of the primary variables, the resulting forces exerted on grain surfaces by the change in specific volume due to the phase transition, and possibly the mean values of the key quantities useful for upscaling. We discuss crystal formation in the pores and surficial phenomena accompanying the process of phase transition which can be responsible for secondary vertical transport of the liquid phase. The role of the model is demonstrated on several computational studies which follow the published results (e.g., in Žák, Beneš, and Illangasekare, Comput. Methods Appl. Mech. Engrg. 414, 2023, 116166).
We are developing a Virtual Eye and in silico therapies for the treatment of retinal diseases like age-related macular degeneration. The underlying model consists of a pharmacological model for the drug, which we extend by including spatial resolution, coupled with models describing the physiology of the vitreous medium. The resulting model is a system of partial differential equations, convection-diffusion-reaction equations coupled with Darcy-, Navier-Stokes- or viscoelastic Burgers flow. We are performing long-term three-dimensional Finite Element simulations and with specific output functionals we can perform accurate and efficient treatment testing, calculate the optimal injection position, perform drug comparison, and quantify the effectiveness of the therapy.
We present mathematical models for drugs binding to nuclear receptors such as the pregnane X receptor, a hepatocyte expressed nuclear receptor which is responsible for the regulation of xenobiotic, glucose and lipid metabolism as well as bile acid homeostasis. Because many ligand-activated nuclear receptor proteins inhibit the transcription of their own gene, corresponding reaction-diffusion models exhibit oscillatory behavior for particular parameter value ranges. Combining in vitro data and diffusion estimates based on biophysical properties of nuclear receptors and the chaperones they bind to, we predict when oscillatory behavior can be expected in human hepatocytes. We discuss the benefits of reaction-diffusion models for this application as opposed to time-delay models where spatial heterogeneity is discarded.
The shallow water equations (SWE) are a commonly used model to study tsunamis, tides, and coastal ocean circulation.
There exist various numerical methods to discretize them. Discontinuous Galerkin (DG) methods are receiving much attention in this field due to many favorable properties, including being locally conservative, supporting high-order approximation spaces, being robust for problems with shocks and discontinuities, and naturally supporting h- and p-adaptivity.
We present a new formulation of the SWE that is suitable for integration without employing quadrature rules.
Additionally, we present a p-adaptive DG discretization, including a parameter-free adaptivity indicator that incorporates slope limiting and aims to find a good balance between solution accuracy and degrees of freedom used throughout the simulation.
Numerical results and comparisons to indicators from the literature will be presented.
Subcell limiting strategies for discontinuous Galerkin spectral element methods do not provably satisfy a semi-discrete cell entropy inequality. In this work, we introduce a subcell limiting strategy that satisfies a semi-discrete cell entropy inequality by formulating the limiting factors as solutions to an optimization problem whose exact solution can be computed via a greedy algorithm. We also discuss how the proposed subcell limiting strategy can be modified to accommodate general convex constraints. Numerical experiments suggest that the proposed limiting strategy preserves high-order accuracy for smooth solutions and converges to entropic solutions.
In this presentation, we propose a reaction-diffusion type model of self-propelled objects. Our model can describe both deforming self-propelled objects, such as droplets, and solid motions, such as camphor disks, by adjusting parameters. We also derive an interface model by taking the singular limit. Numerical experiments are performed to compare the behavior of the two models.
This paper focuses on pricing options on a market with two risky assets and one risk-free asset. To this end, the Black-Scholes model and European options, which can be exercised only at maturity, are used. Finding the fair price of such a European option requires solving a partial differential equation with two spatial variables. Finite differences are used for this equation. For one-dimensional equations, finite differences commonly lead to a tridiagonal system, which can be solved at a computational cost of O(n), where n is the number of discrete points. Since the problems
here are two-dimensional, the Alternating Direction Implicit (ADI) and Alternating Direction Explicit (ADE) methods are used to reduce the computational cost. The cost remains of the order of the number of discrete points, which is the advantage of these methods. Moreover, these methods enjoy acceptable stability. Though ADI and ADE are comparable and easy to implement, evaluating these methods for option pricing indicates that the ADI method is sensitive to discontinuities or non-differentiability, which are common properties of payoff functions.
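For reference, the Peaceman-Rachford ADI splitting for a generic two-dimensional problem u_t = A_x u + A_y u (the two-asset Black-Scholes operator additionally contains a mixed-derivative term, usually treated explicitly) advances one time step in two half-steps,
\[ \Bigl(I - \tfrac{\Delta t}{2} A_x\Bigr) u^{n+1/2} = \Bigl(I + \tfrac{\Delta t}{2} A_y\Bigr) u^{n}, \qquad \Bigl(I - \tfrac{\Delta t}{2} A_y\Bigr) u^{n+1} = \Bigl(I + \tfrac{\Delta t}{2} A_x\Bigr) u^{n+1/2}, \]
so that each half-step only requires tridiagonal solves along one coordinate direction, keeping the cost linear in the number of grid points.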
The presented research investigates the progression of water freezing in a complex three-dimensional geometry that can be considered a macro-scale saturated porous medium. The porous structure used represents a model of a container filled with spherical glass beads. It has been created artificially by simulating spheres gradually falling into the container and organizing themselves due to gravity and mutual collisions. For this purpose, an original model has been developed, based on particle interaction forces decreasing exponentially with distance. This model allows for accurate simulation of collisions with a coefficient of restitution anywhere between zero and one. The implementation is notably simple, as it employs a readily available high-order ODE solver with time step adaptivity, which resolves the moments of collision in a natural way. For partially elastic spheres, the solution leads to a stationary state without jitter. For the modeling of the freezing process, two approaches have been utilized: the phase field approach has been compared to the simpler solution of the heat equation in a heterogeneous medium, with artificial focusing of latent heat release based on temperature. Both models have been implemented using a hybrid parallel (OpenMP+MPI) algorithm based on the finite volume method and the Runge-Kutta-Merson explicit solver with adaptive time stepping. Results comparing both phase transition modeling approaches, together with the respective computational costs and parallel speedup, are presented.
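The following is a minimal sketch of the kind of particle model described above (purely illustrative Python; the force constants, the damping term that controls the coefficient of restitution, and the solver used in the actual study are those of the referenced work and are not reproduced here). Two spheres fall under gravity and interact with the floor and with each other through a repulsive force decaying exponentially with the gap between surfaces, integrated by an adaptive high-order ODE solver.

    import numpy as np
    from scipy.integrate import solve_ivp

    R, m, g = 0.01, 1e-3, 9.81        # sphere radius [m], mass [kg], gravity
    k, lam = 50.0, 2e3                # repulsion strength and decay rate (illustrative values)

    def rhs(t, y):
        """y = [z1, v1, z2, v2]: heights and vertical velocities of two spheres."""
        z1, v1, z2, v2 = y
        f_floor = k * np.exp(-lam * (z1 - R))            # floor acts on the lower sphere
        f_pair = k * np.exp(-lam * (abs(z2 - z1) - 2 * R)) * np.sign(z1 - z2)
        a1 = -g + (f_floor + f_pair) / m
        a2 = -g - f_pair / m                             # Newton's third law
        return [v1, a1, v2, a2]

    sol = solve_ivp(rhs, (0.0, 1.0), [0.05, 0.0, 0.10, 0.0],
                    method="DOP853", rtol=1e-8, atol=1e-10)  # adaptive high-order solver
    print(sol.y[0, -1], sol.y[2, -1])                         # final heights of the two spheres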
This contribution presents an alternative way of estimating the blood damage hemolysis index using the viscoelastic stretch tensor. This tensor is obtained by solving a set of coupled partial differential equations describing the rheology of a shear-thinning viscoelastic fluid. The main advantage of the chosen approach is that the already calculated and known stretch tensor is simply reused and postprocessed to assess the local deformation of red blood cells, which can in certain cases lead to degradation of the cell membranes and consequent release of hemoglobin. The algorithm for solving the viscoelastic stretch tensor equations and the hemolysis index postprocessing will be outlined and discussed in detail. Some initial numerical simulations for an axisymmetric idealized stenosed blood vessel, obtained using an in-house finite-volume solver, will be presented.
Model order reduction for parameterized partial differential equations is a very active research area that has seen tremendous development in recent years from both theoretical and application perspectives.
A particularly promising approach is the reduced basis method, which relies on the approximation of the solution manifold of a parameterized system by tailored low-dimensional approximation spaces that are spanned by suitably selected particular solutions, called snapshots. With speedups that can reach several orders of magnitude, reduced basis methods enable high-fidelity real-time simulations for certain problem classes and dramatically reduce the computational costs in many-query applications.
While the "online efficiency" of these model reduction methods is very convincing for problems with a rapid decay of the Kolmogorov n-width,
there are still major drawbacks and limitations. Most importantly, the construction of the reduced system in a so-called "offline phase" is
extremely CPU-time and memory consuming for large scale systems.
For practical applications, it is thus necessary to derive model reduction techniques that do not rely on a classical offline/online splitting but allow for more flexibility in the usage of computational resources.
In this talk we focus on learning based reduction methods in the context of PDE constrained optimization and inverse problems and evaluate their overall efficiency.
We discuss learning strategies, such as adaptive enrichment, as well as a combination of reduced order models with machine learning approaches in the context of time-dependent problems. Concepts of rigorous certification and convergence will be presented, as well as numerical experiments that demonstrate the efficiency of the proposed approaches.
Despite the significant breakthrough of machine learning in science, naively solving differential equations using deep learning usually leads to a lack of interpretability, poor generalization, and large training data requirements. Although some machine learning approaches, such as physics-informed neural networks, have been shown to overcome the above challenges, a slight change in the underlying parameters governing the differential equation may require retraining the model. Therefore, in this study, we employ the physics-informed DeepONet (PI-DeepONet) to approximate the solution operator of a fully nonlinear partial differential equation arising from finance. PI-DeepONet incorporates known physics into the neural network, which consists of a deep neural network that learns the solution of the PDE and an operator network that enforces the PDE at each iteration. We consider a fully nonlinear Hamilton--Jacobi--Bellman (HJB) equation arising from a stochastic optimization problem, where the goal of an investor is to maximize the conditional expected value of the terminal utility of a portfolio. The fully nonlinear HJB equation is first transformed into a quasilinear parabolic equation using the Riccati transform. Then, the solution of the transformed quasilinear equation is approximated using PI-DeepONet.
Data envelopment analysis (DEA) is a non-parametric technique for relative efficiency evaluation. It formulates models in the form of optimisation problems where the objective function can be interpreted as an efficiency measure and the optimal value is the efficiency score. Path-based DEA models represent a subclass of models where the efficiency score is found by following a parametric path running towards the boundary of the technology set. In this presentation we focus on models where the parametric path is characterised by a direction vector defined using the anti-ideal point. We show that these models possess several desirable properties and are applicable even in the presence of negative data. We also formulate the modification of the models for environmental evaluation, where the undesirable outputs are taken into account. The results are demonstrated on numerical examples.
In quantum mechanics, the Rosen-Zener model represents a two-level quantum system. Its generalization to multiple degenerate sets of states leads to a non-autonomous linear system of ordinary differential equations (ODEs). We propose a new method for computing the solution operator of this system of ODEs. This new method is based on a recently introduced expression of the solution in terms of an infinite matrix equation, which can be efficiently approximated by truncation, fixed point iterations, and low-rank approximation. This expression is possible thanks to the so-called $\star$-product approach for linear ODEs. The numerical experiments show that the new method's computing time seems to scale linearly with the model's size.
The solution of initial value problems with the classic approach of neural forms relies on a suitable neural network parameter optimisation. The neural forms employ a trial solution containing a neural network term. An extension of this method, called domain segmentation, splits the solution domain in order to subsequently solve the neural forms on equidistant subdomains. Depending on the machine learning task, different optimisation methods, e.g., Adam and quasi-Newton BFGS, may be more beneficial for a specific problem. In this paper, we investigate differences between first-order (Adam) and second-order (BFGS) optimisation for solving three different initial value problems using the neural forms approach with domain segmentation. The incorporated neural network term is additionally tested in a normalised and a non-normalised version. An adaptive mechanism is employed to ensure finishing the optimisation with a suitable minimum in each subdomain.
We propose two algorithms to solve a high-order segmentation model resulting from a compactness term involving a squared sum of pairwise potentials. Existing algorithms suffer from the drawback of being computationally costly, and they struggle to reach a good local minimum because many parameters need to be fine-tuned. To address these problems, we use the Fenchel-Legendre transformation to rewrite the model as a min-max problem and propose a simple primal-dual optimization algorithm, PD-TD. Moreover, we relax the constraint on the solution and propose another algorithm, PD-STD, which achieves better performance. Based on the variational interpretation of the softmax layer, the PD-STD algorithm can be combined with DCNNs to integrate the spatial prior into the DCNNs and guarantee that the outputs of the DCNNs have compact shape. Our methods are faster and more powerful, as evidenced by several image segmentation experiments. In particular, we applied our method to the popular DeepLabV3 and the state-of-the-art IrisParseNet image segmentation networks; the results show that our method can achieve greater mIoU, Dice, and compactness scores on a noisy iris dataset.
In this talk we study numerical approximations to the incompressible Navier-Stokes equations by means of proper orthogonal decomposition (POD)
methods. It is known that in case the set of snapshots is based on finite differences or time derivatives, pointwise-in-time error bounds can be proved. In this talk we show how to get second-order-in-time estimates in both cases. In particular, we prove that using snapshots based on first-order time derivatives we can obtain second-order estimates in time.
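A minimal sketch of assembling a POD basis from snapshots augmented with first-order difference quotients (generic, illustrative Python; the talk is concerned with the resulting error bounds rather than with this construction itself):

    import numpy as np

    def pod_basis(snapshots, dt, r):
        """snapshots: array (n_dof, n_time); returns the first r POD modes and singular values."""
        dq = (snapshots[:, 1:] - snapshots[:, :-1]) / dt     # first-order difference quotients
        S = np.hstack([snapshots, dq])                        # augmented snapshot matrix
        U, sigma, _ = np.linalg.svd(S, full_matrices=False)   # POD = truncated SVD
        return U[:, :r], sigma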
We consider parametrized linear-quadratic optimal control problems and provide their online-efficient solutions by combining greedy reduced basis methods and machine learning algorithms. To this end, we first extend the greedy control algorithm, which builds a reduced basis for the manifold of optimal final time adjoint states, to the setting where the objective functional consists of a penalty term measuring the deviation from a desired state and a term describing the control energy. Afterwards, we apply machine learning surrogates to accelerate the online evaluation of the reduced model. The error estimates proven for the greedy procedure are further transferred to the machine learning models and thus allow for efficient a posteriori error certification. Numerical examples highlight the potential of the proposed methodology.
To tackle parametric partial differential equations with highly heterogeneous coefficients, we propose an adaptive localized basis construction procedure based on both offline training and online enrichment. First, in the offline phase, a set of problem-adapted local basis functions is precomputed. Next, in the online phase, we use a localized residual-based a posteriori error estimator to investigate the accuracy of the reduced solution for any given new parameter. As the error estimator is localized, we can exploit it to adaptively enrich the reduced solution space locally where needed. The approach thus guarantees the accuracy of reduced solutions given any possibly insufficient reduced basis that was constructed during the offline phase. Numerical experiments demonstrate the efficiency of the proposed method.
In data envelopment analysis (DEA) each model for efficiency evaluation can be formulated in two forms - the envelopment and the multiplier form. These two forms are in a primal-dual relationship, which makes it possible to establish links between the primal and dual optimal solutions through the optimality conditions and to provide an economic interpretation of the models. In this presentation we derive the multiplier form for the general class of path-based DEA models. It is known that path-based models do not, in general, project onto the efficient frontier. We use the envelopment and the multiplier models to form a single-stage optimisation procedure for finding an efficient target. We also extend the single-stage approach to environmental evaluation and demonstrate the results on numerical examples.
Co-authors: Mária Trnovská, Margaréta Halická
We propose a new method for remap of material volume fractions, densities, and specific internal energies in the context of multi-material compressible ALE hydrodynamics. The remap is based on advection in pseudotime. As the volume fraction approach may diffuse materials over many mesh elements, we introduce a sharpening modification on PDE level. We explain the effects of the modification and how it produces results that are still conservative for volume, mass, internal energy, and bounded with respect to the volume fraction, density, and specific internal energy. The latter involves FCT-type methods. The second major contribution, in addition to sharpening, is that all remap methods avoid assembly of global matrices. This avoids data motion and enables higher computational efficiency.
Performed under the auspices of the U.S. Department of Energy under Contract DE-AC52-07NA27344 (LLNL-ABS-847808).
This contribution discusses our findings on fitting material parameters for rock masses. These parameters characterize rock after the tunnel excavation process. We utilize measurements from the Tunnel Sealing Experiment (TSX) in Canada. The adopted model is poroelastic, featuring a nonlinear relationship between stresses and permeability, implemented using the Flow123d library. The Bayesian inversion of the unknown parameters is carried out with the SurrDAMH library. The Delayed Acceptance Metropolis-Hastings (DAMH) algorithm is at the heart of our approach, where a surrogate model accelerates the sampling process. Specifically, we employ a Neural Network as the surrogate model, which is dynamically refined during the sampling process using newly acquired data.
In this paper, an improvement of the Frankot and Chellappa method is presented, with the help of which surfaces can be reconstructed from normal vectors, some of which are uncertain or ambiguous. Since it is often not possible to compute the surface analytically, numerical methods must be used. This leads to the Poisson equation, the solution of which can be estimated numerically using various approaches such as finite differences. The modified method leads to better results in terms of accuracy, noise suppression and gap reconstruction. In fact, a fuzzy extension is presented for a discrete sine transform method that is fast, accurate and nearly robust to noisy data and can be used for surface reconstruction. To solve the fuzzy Poisson equation with a fuzzy function as the right-hand side, a fuzzy discrete sine transform is introduced. In addition, we use the F-transform as a filter in order to remove noise after solving the fuzzy Poisson equation.
We use the general framework of summation-by-parts operators to construct structure-preserving numerical methods for some nonlinear dispersive systems of shallow water models including the coupled Benjamin-Bona-Mahony (BBM) equations and a recently proposed model by Svärd and Kalisch (2023). In particular, we focus on dispersive models inheriting the energy/entropy functional of the classical hyperbolic shallow water equations and discuss how to preserve the energy for dispersive systems. To obtain fully-discrete energy/entropy-stable schemes, we employ the relaxation approach. We present improved numerical properties of our schemes in some test cases.
The motion of curves is of great importance in scientific and engineering problems. The paper presents results of the numerical solution of the evolution law for the constrained mean-curvature flow. This law originates in the theory of phase transitions for crystalline materials and describes the evolution of closed embedded curves with constant enclosed area. It is reformulated by means of the direct method into a system of degenerate parabolic partial differential equations for the curve parametrization. This system is solved numerically and several computational studies are presented as well.
During this talk, I will present one approach currently being developed to perform parallel-in-time (PinT) integration for numerical weather prediction. It is based on Spectral Deferred Correction (SDC), a time-integration method that iteratively computes the stages of a fully implicit collocation method using a preconditioned iteration. SDC allows one to generate a variety of methods with arbitrary order of accuracy. While there are several parameters that can be used to optimize an SDC method, the main one is the choice of preconditioner. In particular, one can build diagonal SDC preconditioners, either to improve convergence speed or numerical stability for larger time-step sizes. One important aspect of those diagonal preconditioners is that they allow the computations for SDC iterations to be performed in parallel. I will present some recent results on building optimized diagonal preconditioners for a split implicit-explicit formulation of SDC and show their application to test problems based on the shallow water equations.
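In its standard form (written here generically), one SDC sweep for u' = f(u) on the collocation nodes reads
\[ u^{k+1} = u_0 \mathbf{1} + \Delta t\,(Q - Q_\Delta)\, f(u^{k}) + \Delta t\, Q_\Delta\, f(u^{k+1}), \]
where Q is the dense collocation (quadrature) matrix and Q_\Delta the preconditioner; if Q_\Delta is chosen diagonal, the implicit solves at the individual collocation nodes decouple and can be performed in parallel, which is the property exploited in the work described above.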
The numerical approach for solving the fixed gravimetric boundary value problem based on the finite element method with mapped infinite elements is developed and implemented. In this approach, the 3D semi-infinite domain outside the Earth is bounded by a triangular discretization of the whole Earth's surface and extends to infinity. The BVP then consists of the Laplace equation for the unknown disturbing potential which holds in the domain, the oblique derivative boundary condition given directly at the computational nodes on the Earth's surface, and regularity of the disturbing potential at infinity. In the experiments, the convergence of the proposed numerical scheme to the exact solution is tested, and the numerical study then focuses on a reconstruction of the harmonic function above the Earth's topography.
We propose a novel shape optimization approach to address the problem of identifying an obstacle immersed in a fluid within a larger bounded domain, using boundary measurements on the accessible surface. The fluid motion under consideration is governed by the Stokes equations.
In particular, we delve into the inverse problem of identifying the obstacle, employing tools from shape optimization through the coupled complex boundary method. The main idea of this method is to transform the over-specified problem into a complex boundary value problem, incorporating a complex Robin boundary condition derived from coupling the Dirichlet and Neumann boundary conditions along the accessible boundary. To reconstruct the unknown boundary, we optimize the cost function constructed based on the imaginary part of the solution across the entire domain.
Then, we calculate the shape gradient of this cost function to iteratively solve the optimization problem using a Sobolev gradient descent algorithm. We illustrate the feasibility of the method by performing numerical experiments in both two and three spatial dimensions.
Variational models represent a classical tool, with an established theoretical background, for solving a variety of inverse problems in imaging. One of their drawbacks is the absence of guidelines for selecting their parameters. Bilevel optimization can exploit the basic ideas of machine learning to train a task-dependent variational model using a limited amount of data and computational cost. In this talk we discuss a strategy to develop an unsupervised loss function for one of these bilevel optimization problems. Without using a ground truth, the loss function needs to be able to quantify directly how good an image is to a human eye. For this reason, we present and discuss an MSSIM predictor based on a differentiable version of the features of BRISQUE, a no-reference image quality measure [Mittal, Moorthy, Bovik. "No-Reference Image Quality Assessment in the Spatial Domain", 2012]. In order to reduce the uncertainty of the score predictor on unseen images, we perform a data augmentation procedure to enrich the dataset used to train it. The model is tested on standard imaging problems, and although some of its elements are task-dependent, the framework is easily transposable to many other applications.
Adaptive Finite Element Method (AFEM) for solving numerical PDEs is a well-established technique that generates a sequence of locally refined meshes together with a sequence of associated algebraic systems. Typically, the approximation computed on the previous mesh is interpolated to the new one as an initial guess for an iterative solver. Then, however, the algebraic error (represented as a function in the solution domain) can be dominated by smooth components, yielding slow convergence of the solver. We also illustrate numerically that the algebraic error can affect the local error indicators in such a way that the mesh may be improperly refined in regions with small discretization error.
We present a machine learning technique based on physics-informed neural networks [1] to predict the electrostatic field in a grounded box for free placement of the charge-carrying rod. The simulations are obtained from the Ansys ElectroMagnetics suite [2] and are then passed to a PyTorch automatic differentiation and machine learning tool. The use of an elaborate loss function taking into account the potential and electric field on the boundary and within the box allows the required number of simulations to be kept to a minimum [3]. The machine learning model achieves a speed-up factor of 600, while keeping the mean error below 3%.
[1] 'Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations', M. Raissi, P. Perdikaris and G.E. Karniadakis, Journal of Computational Physics, 2019.
[2] 'A Robust Maxwell Formulation for All Frequencies' R. Hiptmair, F. Kramer and J. Ostrowski, IEEE Transactions on Magnetics, 2008.
[3] 'On Splitting Training and Validation Set: A Comparative Study of Cross-Validation, Bootstrap and Systematic Sampling for Estimating the Generalization Performance of Supervised Learning', Y. Xu and R. Goodacre, Journal of Analysis and Testing, 2018.
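As a generic illustration of the kind of physics-informed loss described above (a sketch with placeholder network size and weights, for a plain potential problem with Dirichlet boundary data; it is not the actual Ansys/PyTorch pipeline of this work):

    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )

    def pinn_loss(x_int, x_bnd, u_bnd, w_bnd=10.0):
        """Residual of Laplace's equation inside the box plus boundary data mismatch."""
        x_int = x_int.clone().requires_grad_(True)
        u = model(x_int)
        grad_u = torch.autograd.grad(u.sum(), x_int, create_graph=True)[0]
        lap = 0.0
        for i in range(2):  # sum of second derivatives: u_xx + u_yy
            lap = lap + torch.autograd.grad(grad_u[:, i].sum(), x_int,
                                            create_graph=True)[0][:, i]
        pde_term = (lap ** 2).mean()
        bnd_term = ((model(x_bnd).squeeze() - u_bnd) ** 2).mean()
        return pde_term + w_bnd * bnd_term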
In this talk we introduce a model order reduction approach for an advection-reaction problem with parametrized reaction function. Such problems arise e.g. in reactive flow problems where the transport field is given as the solution of an additional (non-parametric) equation. For our approach we utilize an ultraweak formulation with `optimal' test space which allows for a symmetric reformulation of the problem in terms of a dual solution. Classic model order reduction techniques can then be applied to the space of dual solutions which also immediately gives a reduced primal space. We show that the necessary computations do not require the reconstruction of any primal solutions and can instead be performed entirely on the space of dual solutions. We prove exponential convergence of the Kolmogorov N-width and show that a greedy algorithm produces quasi-optimal approximation spaces for both the primal and the dual solution space. Numerical experiments based on the benchmark problem of a catalytic filter confirm the applicability of the proposed method.
In this talk I will present a Model Order Reduction approach for transport dominated problems, in the non-parametrized and in the parametrized setting, with a particular focus on the Sod shock tube problem in 1D and the double Mach reflection (DMR) problem in 2D. The approach is based on the definition of suitable deformation maps from the physical domain into itself: these maps are obtained by means of an optimization procedure. Once the map is found, a standard POD on the modified snapshots is performed. For the online phase, an Artificial Neural Network approach is used to compute the coefficients of the online solution.
We extend the monolithic convex limiting (MCL) methodology to nodal discontinuous Galerkin spectral element methods (DGSEM). The use of Legendre–Gauss–Lobatto (LGL) quadrature endows collocated DGSEM space discretizations of nonlinear hyperbolic problems with properties that greatly simplify the design of invariant domain preserving high-resolution schemes. Compared to many other continuous and discontinuous Galerkin method variants, a particular advantage of the LGL spectral operator is the availability of a natural decomposition into a compatible subcell flux discretization. Representing a high-order spatial semidiscretization in terms of intermediate states, we perform flux limiting in a manner that keeps these states and the results of Runge–Kutta stages in convex invariant domains. Additionally, local bounds may be imposed on scalar quantities of interest. In contrast to limiting approaches based on predictor-corrector algorithms, our MCL procedure for LGL-DGSEM yields nonlinear flux approximations that are independent of the time-step size and can be further modified to enforce entropy stability. To demonstrate the robustness of MCL/DGSEM schemes for the compressible Euler equations, we run simulations for challenging setups featuring strong shocks, steep density gradients and vortex-dominated flows.
The local stretching of viscoelastic fluids can be evaluated from the eigenvalues of the conformation tensor, which represents, in a certain sense, the continuous-level molecular deformation of these types of fluids. Although from the physical point of view the conformation tensor is (symmetric and) positive-definite, in numerical simulations of viscoelastic fluid flows at larger values of the Weissenberg number the eigenvalues can become negative. This numerical problem, known as the High Weissenberg Number Problem (HWNP), causes instability of the numerical solution and remains an extremely challenging issue. In some cases, the HWNP can be avoided (or at least postponed to higher Weissenberg numbers) by adding an artificial stress diffusion term to the transport equation for the viscoelastic tensor. Such numerical diffusion can however result in excessive smoothing of the solution, which then becomes unphysical. In this work we present an alternative approach based on a localized stabilization by added artificial diffusion that is only applied in the critical regions where the numerical instabilities can occur. This local approach makes it possible to avoid issues with over-smoothing the numerical solution, which is then more accurate and closer to the true solution of the original non-diffusive problem. To illustrate the proposed algorithms, some simulations obtained by applying the finite element method using the FreeFem++ library in a two-dimensional axisymmetric corrugated domain will be presented.
In this talk a general strategy to design explicit, implicit and semi-implicit well-balanced high-order finite volume numerical methods for 1D nonlinear hyperbolic systems of balance laws will be presented. These methods are obtained by extending a general strategy based on the computation of local steady states. Different applications will be shown and some further developments will be discussed.
In contemporary bioinformatics and systems biology, Flux Balance Analysis (FBA) remains a pivotal computational tool for predicting metabolic flux distributions across genome-scale networks. However, current methods for FBA are computationally demanding and unoptimized, resulting in flux distributions that include thermodynamically infeasible loops. Given the expanding complexity of biological datasets and the accompanying demand for computational efficiency, there arises a pronounced need for advanced high-performance computational methodologies in FBA. Hence, the present research introduces a novel approach to FBA by employing dual-form Anderson Acceleration (AA) integrated with loopless constraints for enhanced computational efficiency and biological feasibility. Utilizing dual-form AA in tandem with the Alternating Direction Method of Multipliers (ADMM), the approach strategically enhances computations in extensive metabolic analyses, facilitating the precise determination of flux distributions. By further integrating loopless constraints, the methodology eliminates the challenges posed by biologically infeasible cyclic pathways, which often persist as latent limitations in conventional FBA techniques. The BiGG dataset from UCSD was used for experimentation and validation. Through leveraging dual-form AA with loopless constraints, the novel approach enhances computational efficiency for the linear-programming based FBA problem while improving biological relevance by ensuring biological plausibility, enhancing the interpretation of large-scale genomic data and its translational implications in biotechnology and medicine.
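For context, a generic (type-II) Anderson acceleration update for a fixed-point map g looks as follows (illustrative Python only; the dual-form variant combined with ADMM and loopless constraints used in this work is considerably more involved):

    import numpy as np

    def anderson(g, x0, m=5, n_iter=50):
        """Type-II Anderson acceleration of the fixed-point iteration x <- g(x)."""
        x = x0.copy()
        G, F = [], []                      # histories of g(x) and residuals g(x) - x
        for _ in range(n_iter):
            gx = g(x)
            f = gx - x
            G.append(gx); F.append(f)
            if len(F) > m + 1:
                G.pop(0); F.pop(0)
            if len(F) == 1:
                x = gx                     # plain Picard step on the first iteration
                continue
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma            # accelerated update
        return x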
The Steklov eigenvalue problem is a boundary value problem in which the eigenvalues are obtained from the boundary, and it has been extensively studied for the Laplacian. We impose this boundary condition on the Helmholtz equation with a real wave number for curves having a smooth and 2π-periodic parametrization in the plane. After reformulating this problem into a boundary integral equation, we implement a numerical method with the help of layer potentials approximated using quadratures having at least exponential convergence. On discretizing, we obtain a generalized eigenvalue problem for the Steklov-Helmholtz eigenvalues and eigendensities, where the eigenfunctions are reconstructed using the single layer potential. We observe exponential convergence of the numerically computed Steklov-Helmholtz eigenvalues and show the construction of the eigenfunctions. A Weyl-type law is observed for the spectrum and we verify some known properties of this problem. We also perform experiments with weight functions on the boundary and pose a shape optimization problem for boundaries of fixed length with the verification of some preliminary results. The numerical scheme is tested on a variety of domains of genus 0 and 1, and the layer potentials that we have built can be used in other problems where our assumptions are met.
The new regulation in the Slovak pension system has imposed a default investment strategy and individual guarantees. In the past, the guarantee was related to a rolling 10-year performance of the entire guaranteed bond fund. The new system has brought individual guarantees for each saver, where the pension management companies guarantee that each saver will get back at least the sum of the contributions. We simulate the guaranteed and default non-guaranteed funds in the upcoming years to examine how the volatility (risk) of the fund is connected with the utility of the saver and the pension company, to compare the previous guarantees with the newly imposed ones, and to compare two payout schemes: immediate withdrawal and the default payout scheme defined in the legislation.
Nonlocal dispersal equations are used to model the dynamics of populations having a long-range dispersal strategy. This model of spatial spread is obtained by replacing the Laplacian in the usual reaction-diffusion equation with an integral operator leading to an integro-differential equation. In this talk, we will discuss our study on employing such integro-differential equations to model the persistence dynamics of a two-stage structured population. Our studies show that the principal spectrum point of the linearization of the model at the trivial solution solely determines the persistence and extinction of the species.
The basic question of necessary and sufficient conditions on triangular finite element meshes to guarantee convergence of the method remains unanswered to this day. As it turns out, the standard maximum angle condition is only sufficient, but by no means necessary, for the finite element method to converge. In fact, finite element meshes can contain many arbitrarily degenerate elements and still exhibit optimal convergence rates. In this talk we will focus on more general sufficient, as well as necessary, conditions for finite element convergence of various orders. This seemingly academic topic has recently found a practical application in the newly developed X-Mesh method of Moës, Remacle, et al., which uses extremely deformed meshes to follow sharp interfaces.
The K-SVD algorithm is a generalization of the K-means clustering process and allows us to produce an overcomplete dictionary of prototype atoms.
In the application of this algorithm, we noticed some problems with gray value preservation. We now want to analyse the algorithm to gain a deeper understanding of its conservation properties.
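For orientation, a minimal sketch of one K-SVD iteration (OMP sparse coding followed by the SVD-based atom update) on synthetic data might look as follows; the data sizes and sparsity level are hypothetical, and the sketch is not tied to the conservation analysis discussed above.

# Minimal sketch of one K-SVD iteration on toy data (hypothetical sizes and sparsity).
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(1)
dim, n_atoms, n_signals, sparsity = 16, 32, 500, 3
Y = rng.standard_normal((dim, n_signals))            # training signals (columns)
D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                       # normalized initial dictionary

# 1) sparse coding step (orthogonal matching pursuit)
Gamma = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)

# 2) dictionary update: atom-by-atom rank-1 SVD of the restricted residual
for k in range(n_atoms):
    used = np.nonzero(Gamma[k, :])[0]
    if used.size == 0:
        continue
    E_k = Y[:, used] - D @ Gamma[:, used] + np.outer(D[:, k], Gamma[k, used])
    U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
    D[:, k] = U[:, 0]
    Gamma[k, used] = s[0] * Vt[0, :]

print("residual after one sweep:", np.linalg.norm(Y - D @ Gamma))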
The finite volume method (FVM) is used to solve the altimetry-gravimetry boundary-value problem. The 3D computational domain between an ellipsoidal approximation of the Earth's surface and an artificial upper boundary chosen at an altitude of 200 km is discretized with a high horizontal resolution of 1 x 1 arc min. This leads to large-scale parallel computations with enormous memory requirements, which are reduced by domain decomposition methods, namely the additive Schwarz method. The altimetry-derived marine gravity data are evaluated as the first, second, or higher derivatives of the disturbing potential obtained in the finite volumes. The process of preparing the Dirichlet boundary conditions over oceans/seas has a crucial impact on the achieved accuracy. It is based on nonlinear filtering of the geopotential generated on a mean sea surface from the GRACE/GOCE-based satellite-only global geopotential models.
We analyze the convergence of GMRES-type solvers that combine sketching and recycling for a sequence of PDE-based problems. The combination of sketching and recycling leads to iterative solvers that converge rapidly and have (relatively) low cost per iteration. We provide convergence results for recycling an approximate invariant subspace and show how sketching leads to a modified projection for the recycling with only a slightly reduced rate of convergence.
This is joint work with Fei Xue (Clemson University).
Optimised Schwarz methods use Fourier analysis to find transmission conditions between subdomains that provide faster convergence than standard Schwarz methods. However, this requires significant upfront analysis of the operator and may not be straightforward for all problems. This work presents a black-box method for adaptively optimising the transmission conditions, which is equivalent to GMRES.
The work to be presented in this talk focuses on an accelerated global-in-time Oseen solver, which highly exploits the augmented Lagrangian methodology to improve the convergence behavior of the Schur complement iteration. The main idea of the solution strategy is to combine the individual linear systems of equations at each time step into a single all-at-once saddle point problem. By elimination of all velocity unknowns, the resulting pressure Schur complement (PSC) equation can be solved efficiently on modern hardware architectures using a space-time multigrid algorithm and customized preconditioners for smoothing purposes. However, the accuracy of these PSC preconditioners deteriorates as the Reynolds number increases and, hence, causes convergence issues of the overall multigrid scheme.
To improve the robustness of the solution strategy and accelerate its convergence behavior, the augmented Lagrangian approach is exploited in a global-in-time fashion by modifying the velocity system matrix in a strongly consistent manner. While the introduced discrete grad-div stabilization does not modify the solution of the discretized Oseen equations, the accuracy of the adapted PSC preconditioners drastically improves and, hence, guarantees a rapid convergence. If the stabilization parameter is chosen sufficiently large, the coarse grid acceleration of the multigrid algorithm may not even be worth the effort. This strategy comes at the cost that the involved auxiliary problem for the velocity field becomes ill conditioned so that standard iterative solution strategies are no longer efficient. This calls for highly specialized multigrid solvers, which are based on modified intergrid transfer operators and block diagonal preconditioners.
The potential of the proposed overall solution strategy is discussed in several numerical studies. Benefits and drawbacks of the approach are summarized and the influence of the stabilization parameter on the PSC iteration and the involved velocity solver is illustrated.
Due to recent breakthroughs in the realm of scientific machine learning, harnessing AI capabilities for challenging endeavors such as intricate physics simulations and predictions has become feasible. In this talk, we will introduce the burgeoning toolkit within the domain of PhysicsML and offer actionable insights on its application in technology advancement.
The amalgamation of computational fluid dynamics (CFD) with artificial intelligence (AI) represents a revolutionary paradigm shift that is reshaping the landscape of technology development. This abstract delves into the transformative influence of AI-powered fluid dynamics on the efficiency, precision, and innovative aspects of fluid behavior analysis. Through the incorporation of AI methodologies, including machine learning and deep neural networks, within CFD simulations, this approach expedites computational processes, elevates precision levels, and illuminates previously concealed insights within the realm of fluid dynamics. Employing real-world illustrations and ongoing research endeavors, this abstract underscores the profound potential of AI-driven fluid dynamics across diverse industries, spanning aerospace to renewable energy, steering the course toward more intelligent and advanced technological solutions.
In this work, we present a piece-wise symplectic model reduction (MOR) method for Hamiltonian systems on quadratically embedded manifolds. For Hamiltonian systems which suffer from slowly decaying Kolmogorov N-widths, linear symplectic MOR methods, e.g. proper symplectic decomposition (PSD), can be potentially inefficient. The recently proposed quadratic manifold cotangent lift (QMCL) is a symplectic MOR method that achieves higher accuracy than PSD by adding quadratic terms. In this paper, we improve the efficiency of the QMCL by introducing the truncated quadratic projection. In addition, we propose a piece-wise symplectic MOR approach. First, the nonlinear symplectic projection is approximated by a piece-wise linear symplectic projection, then the symplectic Galerkin projection is applied piece-wisely to obtain a series of reduced-order Hamiltonian systems. For polynomial Hamiltonian systems, the variation of the Hamiltonian of ROMs can be bounded by a pre-given tolerance if the average vector field method is used. In the numerical example, the accuracy and energy-preservation of the proposed algorithm are demonstrated.
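As background, the linear cotangent-lift construction of a PSD basis from snapshot data can be sketched in a few lines; the quadratic manifold cotangent lift discussed above extends this construction with quadratic terms. The snapshot matrices and reduced dimension below are hypothetical.

# Sketch of the linear PSD cotangent-lift basis from snapshot data.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 100, 40, 6
Q = rng.standard_normal((n, m))       # position snapshots
P = rng.standard_normal((n, m))       # momentum snapshots

Phi, _, _ = np.linalg.svd(np.hstack([Q, P]), full_matrices=False)
Phi = Phi[:, :k]                      # POD basis of the extended snapshot matrix

# block-diagonal symplectic basis A = diag(Phi, Phi); A^T J_{2n} A = J_{2k} holds
A = np.block([[Phi, np.zeros((n, k))],
              [np.zeros((n, k)), Phi]])
print("basis orthonormal:", np.allclose(A.T @ A, np.eye(2 * k)))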
We propose a reconstruction-based a posteriori error estimate for linear advection problems in one space dimension. In our framework, a stable variational ultra-weak formulation is adopted, and the equivalence of the L2-norm of the error with the dual graph norm of the residual is established. This dual norm is shown to be localizable over vertex-based patch subdomains of the computational domain under the condition of the orthogonality of the residual to the piecewise affine hat functions. We show that this condition is valid for some well-known numerical methods including continuous/discontinuous Petrov–Galerkin and discontinuous Galerkin methods. We prove that this reconstruction provides a guaranteed upper bound on the L2 error. Moreover, up to a generic constant, it also gives local lower bounds on the L2 error, where the constant only depends on the mesh shape-regularity. This, in particular, leads to robustness of our estimates with respect to the polynomial degree. All the above properties are verified in a series of numerical experiments, additionally leading to asymptotic exactness. Motivated by these results, we finally propose a heuristic extension of our methodology to advection-reaction problems and any space dimension.
The training of neural networks is a highly non-convex optimization problem, which is usually attacked by means of first-order methods that in practice show long training times and a strong dependency on hyper-parameters. In this talk, we discuss the construction and design of significantly more efficient training methods, inspired by multi-level and domain decomposition methods. The resulting training methods can be embedded into convergence control strategies, such as trust region, thereby giving rise to highly efficient and parallel training methods. We present different approaches for the underlying multi-level or domain decompositions and discuss the properties of the resulting training methods by providing illustrative examples.
Computational modelling of the behaviour of composite materials and structures is based on principles of classical thermodynamics, supplied by appropriate constitutive relations. In the case of quasi-brittle materials, such as ceramics, rocks, or building composites including concrete, the initiation and development of micro-fractured zones, up to the formation of systems of macroscopic cracks, can be handled using nonlocal smeared damage factors, respecting different behaviour in tension and compression, non-scalar in general anisotropic configurations, whose determination from well-designed experiments is needed. This contribution shows the mathematical background of such a modelling approach, with references to computational experiments.
In tasks where many similar parameter identification problems must be solved, such as quality control or nondestructive testing, inversion methods based on computationally expensive forward models are impractical. Leveraging the power of machine learning by replacing the forward model with cost-effective surrogate models, such as Gaussian process regression, is attractive.
While in such an online-offline decomposition the computational resources available for generating training data are large, they are still bounded. We consider the design-of-computer-experiments problem of selecting a suitable number of training data points, their positions in the parameter space, and their forward-model simulation accuracy for a given computational budget.
Based on work models for the simulation accuracy given some tolerance, and accuracy models for the parameter identification error given some surrogate model accuracy, we consider a theoretically backed greedy strategy for a close to optimal choice of the computational design. We show a significant efficiency improvement over a priori designs such as Latin hypercube or low-discrepancy sequences, as well as over position-adaptive designs, on several numerical examples, demonstrating the importance of taking simulation accuracy into account for the efficient generation of simulated training data. We also apply the proposed method to problems in optical metrology, reconstructing geometric parameters of nanophysical structures, and in civil engineering, reconstructing material parameters. The experimental results demonstrate both the efficiency of the setup and the effectiveness of the surrogate model, allowing for successful reconstruction with a high degree of accuracy.
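A schematic offline/online workflow with a Gaussian process surrogate might be sketched as follows; the forward model, training design, and measured value are hypothetical, and the greedy, accuracy-aware design strategy described above is not reproduced here.

# Schematic offline/online surrogate workflow: GPR replaces an expensive forward model.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forward_model(theta):            # stand-in for an expensive simulation
    return np.sin(3.0 * theta) + 0.1 * theta**2

# offline: generate training data on a simple design and fit the surrogate
theta_train = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
y_train = forward_model(theta_train).ravel()
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(theta_train, y_train)

# online: identify the parameter that matches a measured value
y_measured = forward_model(np.array([[1.234]])).item()
misfit = lambda t: (gpr.predict(np.array([[t]]))[0] - y_measured) ** 2
theta_hat = minimize_scalar(misfit, bounds=(0.0, 2.0), method="bounded").x
print("identified parameter:", theta_hat)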
Partial differential equations involving high contrast and oscillating coefficients are common in scientific or industrial applications. Numerical approximation of these PDEs is a challenging task that can be addressed by homogenisation, multi-scale finite element analysis or multi-level domain decomposition. For linear problems, the multi-scale finite element method (MsFEM) and domain decomposition (DD) methods are well established and some viable extensions to non-linear PDEs are known. However, some features of these methods seem to be intrinsically based on linearity. In particular, traditional MsFEM methods and, to some extent, domain decomposition methods rely on the reuse of computations. For example, the stiffness matrix of the MsFEM method can be computed once, while being used for several right-hand sides, or as part of an iterative multigrid algorithm. Similarly, iterative DD substructuring methods could benefit from the precomputed LU decomposition of local matrices. In both methods, this amounts to pre-assembling the local linear Dirichlet-to-Neumann (DtN) operators. In this talk, we revisit traditional MsFEM and DD with machine learning tools; the extension to nonlinear problems is achieved by learning local nonlinear Dirichlet-to-Neumann maps. The data for training the surrogate DNN are generated by local finite element calculations. The resulting learning-based multi-scale method is tested on a set of model nonlinear PDEs involving the p-Laplacian and degenerate nonlinear diffusion.
Ionic models are among the most extensively studied dynamical systems in biomathematics, as they play a crucial role in modeling electrophysiology at the cellular scale. For example, they are a key component in cardiac modeling because they account for the excitability of the cellular membrane and are responsible for the action potential. In 3-dimensional biological models, they significantly contribute to computational time. Therefore, it is important to develop alternative techniques that minimize their impact on the global solution time. In this presentation, we will introduce a strategy for exploiting Operator Learning techniques, such as DeepONet and Fourier Neural Operator (FNO), to solve these systems more efficiently. Specifically, we will compare the accuracy of the trained model with the one solved numerically and discuss the capabilities of these architectures in reconstructing the desired dynamics.
We present a novel method for passivation of linear time-invariant dynamical systems. The method is based on the spectral factorization of the Popov function. In more detail, we fit a spectral factor to frequency data of the system. This is done by solving a constrained linear system of equations. From the spectral factor, we construct a passive realization of the system by modifying the output matrix and, optionally, the feedthrough matrix.
The method can be easily extended to identifying a passive system from frequency data by combining it with, for example, the Loewner framework or vector fitting. We present several numerical examples to demonstrate the power of the method.
I will describe two algorithms for an optimal control problem governed by the wave equation. One is called inherited, and relies on the structure of the underlying optimality system. The other is inspired by Dirichlet-Neumann algorithms for steady problems. We provide a new tool of analysis, using trigonometric matrix calculus. We also show numerical experiments and pose new open questions.
In this talk, we will discuss a novel approach to enhance the training of physics-informed neural networks (PINNs). In our method, we introduce nonlinear additive and multiplicative preconditioning strategies specifically designed for the widely used L-BFGS optimizer. These advanced preconditioners are constructed by leveraging the Schwarz domain-decomposition framework, where we decompose the network's parameters in a layer-wise manner.
Through numerical experiments, we will demonstrate that both the additive and multiplicative preconditioners substantially enhance the convergence of the standard L-BFGS optimizer. Notably, these preconditioners also yield more accurate solutions to the underlying partial differential equations.
Neural networks are increasingly used in scientific computing. Indeed, once trained, they can approximate highly complex, non-linear, and high dimensional functions with significantly reduced computational overhead compared to traditional simulation codes based on finite-difference methods. However, unlike conventional simulations, whose error can be controlled, neural networks are statistical, data-driven models, for which no approximation error guarantee can be inherently provided. This limitation hinders the use of neural networks on par with finite-element-based simulation codes in scientific computing. In this presentation, we show how to leverage the Lipschitz property of Lipschitz neural networks to establish strict post-training, instance-dependent error bounds given a set of validation points. We show how to derive error bounds using Voronoï diagrams for a Lipschitz neural network approximating a K-Lipschitz function by taking advantage of recent parallel algorithms. Yet, in most scientific computing applications, the Lipschitz constant of the target function remains unknown. Therefore, we explore strategies to adapt and extend these bounds to the case of an unknown Lipschitz constant and illustrate them on simple physical simulation test cases.
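The flavour of such instance-dependent bounds can be sketched as follows: for Lipschitz constants L_f and L_N and any validation point v, the triangle inequality gives |f(x) - N(x)| <= |f(v) - N(v)| + (L_f + L_N) ||x - v||. The target function, the stand-in "network", and the constants below are hypothetical, and the Voronoï-based machinery of the talk is replaced by a simple nearest-neighbour query.

# Sketch of a post-training, instance-dependent error bound for a Lipschitz surrogate.
import numpy as np
from scipy.spatial import cKDTree

f = lambda X: np.sin(X).sum(axis=1)          # target, Lipschitz with L_f = sqrt(d)
N = lambda X: np.sin(X).sum(axis=1) * 0.98   # stand-in "network", L_N = 0.98*sqrt(d)
d = 2
L_f, L_N = np.sqrt(d), 0.98 * np.sqrt(d)

rng = np.random.default_rng(3)
V = rng.uniform(-1, 1, size=(2000, d))       # validation points
val_err = np.abs(f(V) - N(V))                # validation residuals
tree = cKDTree(V)

X = rng.uniform(-1, 1, size=(5, d))          # query points
dist, idx = tree.query(X)                    # nearest validation point for each query
bound = val_err[idx] + (L_f + L_N) * dist    # triangle-inequality bound
true_err = np.abs(f(X) - N(X))
print("bound holds:", np.all(true_err <= bound + 1e-12))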
In this talk we propose a new deep learning approach to solve parametric partial differential equations (PDEs) approximately. In particular, we introduce a new strategy to design specific artificial neural network (ANN) architectures in conjunction with specific ANN initialization schemes which are tailor-made for the particular PDE under consideration. In the proposed approach we combine efficient classical numerical approximation techniques such as higher-order Runge-Kutta schemes with sophisticated deep (operator) learning methodologies such as the recently introduced Fourier neural operators. Specifically, we introduce customized adaptions of existing standard ANN architectures together with specialized initializations so that, at initialization, the ANNs closely mimic an efficient classical numerical algorithm for the considered approximation problem. To show the potential of the proposed approach we numerically test it in the case of some parametric semilinear heat PDEs.
We present new algorithms with a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, we use the Lagrange multiplier approach to derive a coupled forward-backward optimality system, which can then be solved using a time domain decomposition. In the case of a spatial decomposition, there is only one standard way to apply these algorithms. However, due to the forward-backward structure of the optimality system, different variants can be found for these algorithms. We show their convergence behavior and the optimal relaxation parameter. Our analysis reveals that the most natural algorithms sometimes are only good smoothers, and there are better choices which lead to efficient solvers.
In this talk, we will introduce a novel approach for constructing robust, fully well-balanced numerical methods for one-dimensional shallow water models, including the Saint-Venant system of shallow water equations, shallow water flows in channels with arbitrary cross-section, and the two-layer shallow water equations. The solution is globally continuous and described by a combination of point and average values. The point and average values will then be evolved by the conservative and primitive formulations of the same PDE, respectively. We show how to combine the conservative and primitive formulations of the studied hyperbolic system in a natural way and demonstrate how to deal with both the conservative and primitive forms of PDEs in a well-balanced manner. The developed schemes are capable of exactly preserving both the still-water and moving-water equilibria. We demonstrate the behavior of the proposed new scheme on several challenging examples.
In this work, we propose a novel differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs) with differentiable coefficients, where the deep neural network (DNN) models are trained not only on the inputs and labels but also on the differentials of the corresponding labels. This is motivated by the fact that differential deep learning can provide an efficient approximation of the labels and their derivatives with respect to inputs. The BSDE is reformulated as a differential deep learning problem by using Malliavin calculus. By applying the Malliavin derivative to the BSDE, the Malliavin derivatives of the BSDE solution themselves satisfy a BSDE, resulting in a system of BSDEs. Such a formulation also requires the estimation of the Hessian matrix of the solution. All the integrals within this system are discretized by using the Euler-Maruyama method. Subsequently, DNNs are employed to approximate the unknown solution of the discretized BSDE system by estimating the solution, its gradient, and the Hessian matrix. The DNN parameters are backwardly optimized at each time step by minimizing a loss function defined as a weighted sum of the dynamics of the discretized BSDE system. An error analysis is carried out to show the convergence of the proposed algorithm. Various numerical experiments up to 50 dimensions are provided to demonstrate the high efficiency of our method, not only for the solution but also for its gradient and the Hessian matrix. Our proposed algorithm holds promise for applications in pricing and hedging financial derivatives in high-dimensional settings.
A high-resolution compact implicit numerical scheme is used for hyperbolic systems of balance laws such as the shallow water equations with topography. The scheme is required to be well-balanced for the various equilibria for its successful application.
In this talk, we discuss a probabilistic method called self-organized gradient percolation as an alternative approach to the numerical simulation of non-reactive impregnation phenomena in porous media. Instead of solving the Richards equation using a finite element or finite volume method, we model the propagation of the saturation in the medium by the flow direction through a cluster using the gradient percolation method and discretized time. Using a specific probability defined at every time step together with a convolution operator, we grow a cluster of black squares. We then apply an interpolation-index method to obtain the macroscopic saturation. Several 2D numerical examples show the interest of our approach. This is a joint work with C. B. Trang, A. Batakis, T. Sayet, E. Blond and E. de Bilbao.
A novel approach to numerically solving scalar conservation laws in two-dimensional space through a compact implicit discretization method is presented. We introduce a second-order accurate numerical scheme based on the finite volume method, with a compact implicit form of the discretization stencil which simplifies the solution of the resulting algebraic systems. The fast sweeping method is used to find the solution.
We present numerical experiments for the linear advection equation and the nonlinear Burgers equation. To avoid unphysical oscillations in the case of nonsmooth solutions, we incorporate a variant of Weighted Essentially Non-Oscillatory (WENO) approximations.
The contribution presents an overview of numerical methods and mathematical models designed for the NaturaSat software. The application allows botanists, environmentalists, and nature conservationists across Europe to explore protected areas of Natura 2000 habitats using Sentinel-2 optical data. The presented methods are designed for accurate area identification through semi-automatic and automatic segmentation of European protected habitats, enabling the monitoring of their spatio-temporal distribution and quality.
Many parallel-in-time (PinT) paradigms have been developed in the last decades to efficiently solve time-dependent partial differential equations (PDEs). In this talk we focus on the ParaDiag scheme, whose most distinctive feature is the explicit diagonalization of the matrix stemming from the adopted time integrator. However, certain classes of time integrators lead to a discrete operator which is not diagonalizable. For instance, this is the case of Backward Differentiation Formulas (BDFs) like the backward Euler scheme. With the aim of overcoming such a drawback, different approaches have been developed in the literature. In this talk, we illustrate a novel technique. In particular, we show how to exploit the possible circulant-plus-low-rank structure of the discrete time integrator to design a new, successful implementation of the ParaDiag paradigm. Notice that this peculiar structure arises in many different families of time integrators. The efficiency of our original scheme is displayed by several numerical examples.
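The algebraic building block exploited here can be illustrated in isolation: a circulant matrix is diagonalized by the discrete Fourier transform, so circulant systems (and circulant-plus-low-rank systems, via a low-rank update formula) can be solved with FFTs. The small circulant below is an arbitrary example.

# Building block behind ParaDiag-type methods: circulant matrices are diagonalized
# by the DFT, so circulant systems can be solved with FFTs (illustrative example).
import numpy as np
from scipy.linalg import circulant

c = np.array([3.0, -1.0, 0.0, 0.0, -1.0])   # first column of the circulant
C = circulant(c)
b = np.arange(1.0, 6.0)

# eigenvalues of C are the DFT of its first column; solve C x = b in Fourier space
lam = np.fft.fft(c)
x = np.fft.ifft(np.fft.fft(b) / lam).real

print("max residual:", np.max(np.abs(C @ x - b)))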
Usually, time-dependent evolution equations are solved time step by time step. In each step, a system of equations corresponding to the spatial discretization has to be solved. Such methods can be parallelized only in space, but the size of the spatial problem limits the strong scaling behavior. The use of multigrid methods, which treat multiple time steps in an all-at-once system, can significantly improve parallel scaling compared to a time-stepping application with geometric multigrid solvers. Due to the improved communication pattern between the parallel processes, this is also true when using a time-simultaneous multigrid method without temporal parallelization.
Here, we numerically analyze how multigrid waveform relaxation methods and space-time multigrid methods behave for convection-diffusion-reaction equations found in fluid flow problems as the problems become increasingly transport dominated. Some global-in-time solution strategies of the Oseen equations benefit from an Augmented Lagrangian (AL) acceleration but in each linear velocity subproblem a modified PDE has to be solved.
Even in a time-sequential setting, special smoothers and transfer operators are required to efficiently handle the resulting ill-conditioned systems. We analyze how the convergence behavior of these space-time multigrid methods is affected by the AL approach. Furthermore, we consider problems with space- and time-dependent diffusion and convection parameters, that arise in global-in-time solution strategies of the Navier-Stokes equations with non-Newtonian fluids.
This research focuses on innovative, unconventional implicit finite difference schemes derived using the inverse Lax-Wendroff procedure. This approach results in numerical stencils spanning multiple time levels and aims to eliminate non-physical oscillations.
Restricted Additive Schwarz (RAS) is a highly successful algebraic Schwarz method, to which the abstract Schwarz framework from the Additive Schwarz method does not apply. We show first that a natural interpretation of RAS at the continuous level allows one to identify the iterates of RAS with the parallel Schwarz method of Lions, and thus the historical maximum principle proofs of Lions apply to RAS, leading to a complete convergence proof for RAS. This interpretation holds, provided that the restriction matrices in RAS are chosen in a classical way to obtain geometric subdomains represented by the computational mesh. Motivated by recent studies in multigrid methods for DG discretizations when using point or cell block-Jacobi smoothers, we then investigate what happens when one violates geometric conformity in RAS and cuts spectral elements with its restriction matrices. We show that in this case, RAS uses new transmission conditions, different from the classical Dirichlet ones, and we present an interpretation of these transmission conditions at the continuous level.
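A minimal algebraic sketch of the RAS iteration on a 1D Poisson problem, with two overlapping subdomains and a restricted (non-overlapping) prolongation, may read as follows; the problem size and overlap are arbitrary illustrative choices.

# Minimal algebraic sketch of RAS for a 1D Poisson problem with two subdomains.
import numpy as np

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
b = np.ones(n)
x = np.zeros(n)

ovl = [np.arange(0, 12), np.arange(8, 20)]       # overlapping subdomains
res = [np.arange(0, 10), np.arange(10, 20)]      # non-overlapping partition

for it in range(20):
    r = b - A @ x
    dx = np.zeros(n)
    for o, s in zip(ovl, res):
        Ai = A[np.ix_(o, o)]
        yi = np.linalg.solve(Ai, r[o])           # local solve on the overlapping block
        keep = np.isin(o, s)                     # restricted prolongation: keep only
        dx[o[keep]] += yi[keep]                  # the non-overlapping part
    x += dx

print("residual after 20 RAS iterations:", np.linalg.norm(b - A @ x))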
The Dual Weighted Residual (DWR) Method has attracted researchers' interest in many fields of application problems since it was introduced by Becker and Rannacher at the turn of the last millennium. With regard to an efficient numerical approximation of the underlying model problem, the DWR approach yields an a posteriori error estimator measured in goal quantities of physical interest, that can be used for adaptive mesh refinement in space and time. Here, we apply this goal-oriented error control combined with residual-based stabilization techniques to convection-dominated (transport) problems. We consider challenges regarding this type of model problems as well as the practical realization of the underlying approach. The performance properties of the space-time adaptive algorithm are studied by means of well-known benchmarks for convection-dominated problems and examples of physical relevance. We show robustness and computational efficiency results and demonstrate the importance of stabilization in a strongly convection-dominated case. Furthermore, we give insight into the application of the DWR approach to nonstationary incompressible flow problems in combination with efficient iterative solver technologies using a flexible geometric multigrid preconditioner.
Poroelasticity describes the interaction between fluid flow and elastic deformations within a porous medium. In several applications in the context of environmental sustainability, such as geothermal energy production and CO2 sequestration, temperature plays a key role in the description of the physical phenomena. Moreover, the thermo-poroelastic model finds application in the context of seismicity and induced seismicity.
In order to correctly describe these processes, the differential problem should also take into account the influence of the temperature, leading to a fully-coupled thermo-poroelastic (TPE) system of equations. To correctly describe the seismic case, the study of the fully-dynamic TPE problem is necessary. We present and analyze a high-order discontinuous Galerkin method on polygonal and polyhedral grids (PolyDG) for the wave propagation model in TPE media. For the semi-discrete problem, we derive stability and $hp$-version error estimates in suitable energy norms. The fully-discrete scheme is then obtained by coupling the space discretization with an implicit Newmark-$\beta$ time integration scheme.
Finally, we report a wide set of numerical simulations, both for verification of the theoretical estimates and for examples of physical interest, such as the case where the medium is heterogeneous. A comparison with the results of the poroelastic model is provided too, highlighting the differences between the predictive capabilities of the two models.
In this talk, the eikonal equation is used as the boundary condition when advective or normal flow equations in the level-set formulation are numerically solved on polyhedral meshes. Since a common choice of the initial condition in the level set method is a signed distance function, the eikonal equation on the boundary is compatible with the initial state. Enforcing the eikonal equation on the boundary over time effectively eliminates the inflow boundary condition that is typically necessary in the transport equation. For the chosen examples, the numerical results confirm that the eikonal boundary condition provides accuracy and robustness in the evolution of the surface comparable to the exact Dirichlet boundary condition.
In this talk, a cell-centered finite volume method is used to solve the G-equation on polyhedral meshes in three-dimensional space, that is, a general type of level-set equation including advective, normal, and mean curvature flow motions. Numerical experiments quantitatively show that a time step size proportional to the average size of the computational cells is sufficient to obtain second-order convergence in space and time for smooth solutions of the general level set equation. A qualitative comparison is presented for a nontrivial example, comparing numerical results obtained with hexahedral and polyhedral meshes. Finally, an example of numerically solving the G-equation used in the combustion literature is given.
Dynamics of hydrodynamically unstable premixed flames are studied.
The nonlinear hydrodynamic model and the Sivashinsky equation describing the dynamics of flame fronts in premixed combustion are considered to extract the intrinsic nature of nonlinear flame morphology through numerics and bifurcation theory.
In this talk, we summarize our observations from the bifurcation analysis of asymptotic premixed flame profiles over wide ranges of two parameters: the Markstein number, describing the diffusion properties of chemical compounds as well as controlling the hydrodynamic instability caused by thermal expansion, and the gravitational parameter, representing the ratio of buoyancy to inertial forces.
If time permits, we will also provide similar arguments on flame bifurcation structures taking other types of physicochemical interactions into account.
This talk is based on joint work with Prof. Moshe Matalon (University of Illinois at Urbana-Champaign).
We propose a mathematical model to achieve automatic cell tracking of macrophages in time-lapse data of a zebrafish. First, we design a segmentation method employing space-time filtering, local Otsu's thresholding, and subjective surface segmentation (SUBSURF). Based on the segmented images, partial trajectories are first extracted where segmented cells overlap in time. Finally, the extracted trajectories are linked by considering their direction of movement. The automatic tracking achieved 97.4% accuracy for macrophage data under challenging conditions: feeble fluorescent intensity, irregular shapes, and the motion of macrophages. Furthermore, we will show the method's applications in analyzing macrophage behaviors.
The multigrid reduction in time (MGRIT) method is a parallel multigrid-in-time solver designed to be as non-intrusive as possible and take advantage of existing simulation codes and techniques. This has worked well for parabolic equations, but parallel-in-time methods for advection-dominated hyperbolic problems have proven difficult to develop. In previous work, we demonstrated the effectiveness of a modified semi-Lagrangian coarse-grid operator for speeding up the parallel solution of high-order discretizations of variable-wave-speed linear advection problems in both 1D and 2D. We have also recently extended this technique for solving nonlinear hyperbolic conservation laws, including the inviscid Burgers and Buckley-Leverett equations. In this talk, we will present further developments for solving linear and nonlinear systems of hyperbolic PDEs such as the acoustic equations, shallow water equations, and Euler equations.
Sparse linear iterative solvers are a fundamental building block of large-scale parallel-in-time methods. In many of these simulations, the majority of the time is spent in matrix power kernels (MPK), which compute the product between a power of a sparse matrix A and a dense vector x, i.e., A^{p}x. Current state-of-the-art implementations perform MPK by calling repeated back-to-back sparse matrix-vector multiplications (SpMV), which requires streaming the large matrix A from main memory p times. Using RACE, we can accelerate the MPK computations by keeping parts of the matrix A in cache for successive SpMV calls. RACE uses a level-based approach to achieve this: levels are constructed using breadth-first search on the graph related to the underlying sparse matrix. These levels are then used to implement cache blocking of the matrix elements for high spatial and temporal reuse. The approach is highly efficient and achieves performance levels of 50-100 GF/s on a single modern Intel or AMD multicore chip, providing speedups of typically 2x - 4x compared to a highly optimized classical SpMV implementation. After demonstrating RACE's cache blocking approach, the talk sheds light on the application of the cache-blocked MPK kernels in iterative solvers. We discuss the benefit of integrating the RACE library into the Trilinos framework and demonstrate the speedups achieved in communication-avoiding s-step Krylov solvers, polynomial preconditioners, and algebraic multigrid (AMG) preconditioners.
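For reference, the baseline matrix power kernel that the cache-blocked, level-based variant is compared against is simply p back-to-back SpMVs; a sketch with SciPy (matrix and power chosen arbitrarily) is given below.

# Baseline matrix power kernel: p back-to-back SpMVs with scipy.sparse.
import numpy as np
import scipy.sparse as sp

n, p = 100_000, 4
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
x = np.ones(n)

vectors = [x]
for _ in range(p):                 # the matrix A is streamed from memory p times
    vectors.append(A @ vectors[-1])

print("computed x, A x, ..., A^p x:", len(vectors), "vectors")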
In this talk, we present a class of multi-level algorithms for unconstrained nonlinear optimization, which does not require the evaluation of the objective function. The choice of avoiding the evaluation of the objective function is intended to make the algorithms less sensitive to noise, while the multi-level feature aims at reducing their computational cost. The proposed algorithmic class encompasses several novel methods as well as the well-known AdaGrad method as a specific (single-level) instance. The global convergence rate of the proposed algorithms is analyzed and their numerical behavior in the presence of noise is illustrated in the context of training deep neural networks for supervised learning applications. Comparison with the state-of-the-art optimizers is also performed, demonstrating encouraging performance.
We propose a new class of very high-order compact schemes in time based on the technology of structural equations combined with the physical ones. We present the method for linear and non-linear ODE problems and assess its accuracy and stability. Then, we adapt the methodology to hyperbolic problems where solutions are assumed to be smooth enough. To this end, we provide very accurate and stable methods with high spectral resolution that dramatically reduce the dispersion of the solution.
The design of shock-capturing methods still heavily relies on human intuition, particularly in nonlinear elements like smoothness indicators and weighting functions. In this context, machine learning (ML) has emerged as a robust tool within Computational Fluid Dynamics (CFD), offering advancements in simulation accuracy, efficiency, and automation. In this work, we introduce some possibilities to diminish the reliance on human intuition in the design of WENO schemes by using both supervised and unsupervised learning techniques. The proposed WENO5 schemes improve the performance of the conventional WENO5-JS scheme.
In numerous applications, the reliability of a numerical approximation hinges not only on accuracy but also on the preservation of crucial structural properties inherent to the equation and its physical meaning. These properties may include positivity or long-time behaviour of the solutions, as well as the existence of specific steady states. Here, we focus on the approximation of advection-diffusion equations, which are a basic building block for many dissipative systems.
We introduce an arbitrary-order (in space) nonlinear scheme based on the Hybrid High-Order (HHO) methodology, which can handle unconditionally anisotropic diffusion and general polyhedral meshes. The method is designed to provide positive solutions while ensuring correct discrete long-time behaviour. The key feature of the scheme lies in the discrete-level preservation of an entropy structure, mimicking its continuous counterpart.
Numerical evidence is presented to point out the robustness of our method, as well as to investigate its usefulness with respect to low-order schemes.
Cardiac modeling for precision cardiology is an emerging technology in clinical practice. Thanks to the sophistication of state-of-the-art electrophysiology models, it is possible to tailor treatments to patient characteristics, thus improving the therapeutic outcome. Patient-specific modeling requires a deep integration of clinical data into existing models. This aspect is, however, not straightforward. Cardiac models are computationally expensive, with several patient-specific parameters of difficult identification. Clinical data is scarce, multi-modal, and sparse in space-time. Thus, neither purely data-driven nor model-driven approaches are optimal in the digital twinning process.
In this talk, we will address the issue using a physics-informed strategy. The first problem consists of recovering the conduction properties (conductivity tensor, early activation sites, and Purkinje network) in the heart, starting from sparse electric recordings collected by clinicians. The second application concerns atrial fibrillation inducibility in a complex anatomical model of human atria. We propose a multi-fidelity classifier that learns the inducibility map on a manifold. Finally, we generalize the classifier so that it does not require any new simulation when the anatomy changes.
Minimal paths have long been used as an interactive tool to find contours or to segment tubular and tree structures, such as vessels in medical images. These minimal paths correspond to minimal geodesics according to some relevant metric defined on the image domain. The geodesic distance and geodesic paths can be computed by solving the eikonal equation with the fast and efficient Fast Marching method. We will present various applications to image segmentation.
We take a look at the equations governing the unconstrained steady motion of a droplet in an unbounded fluid reservoir. We look for suitable ways to linearize such systems in order to analyze them both analytically and numerically.
It is reported (O. Zik et al., PRL 1998) that combustion in a narrow channel shows a variety of char patterns depending on the airflow rate. As the airflow rate decreases gradually, the pattern changes from planar combustion through wavy combustion to fingering combustion. Our research is motivated by the question of why the combustion pattern changes depending on the airflow rate. In this talk, we consider a mathematical model which describes the combustion experiment. In particular, we numerically study the instability of the planar combustion wave, which is the onset of the variety of char patterns.
Originally, the method of spectral deferred corrections (SDC) was developed by Dutt et al. in 2000 for solving ordinary differential equations (ODEs) and partial differential equations (PDEs). SDC is a high-order method of iterative nature in which a series of correction equations is solved to improve the solution in each iteration by using spectral quadrature. It can be written as a preconditioned Richardson method, where a preconditioner accelerates the convergence.
The idea of SDC was applied to differential-algebraic equations (DAEs) by Minion et al. in 2007. DAEs naturally arise in many fields such as power system simulation and chemical engineering. The differential part describes the dynamics of the system, while the algebraic equations provide conditions on the components and make sure that the dynamics follow certain laws. However, the mixture of differentiation and integration in solving DAEs is a difficult act for numerical solvers. For stiff ODEs, DAEs represent their stiff limit. Thus, explicit solvers cannot find a solution in the foreseeable future due to the very small time step sizes required. Implicit solvers, on the other hand, have proven very successful in solving DAEs and also provide a certain order of accuracy.
In this talk, SDC for solving DAEs is presented. Since for semi-explicit DAEs the algebraic part can be excluded from the integration process, a semi-explicit version of SDC for DAEs will also be explained. Numerical experiments show their performance in terms of accuracy and costs. Further, first steps toward parallelization across the method are made by applying SDC with different diagonal preconditioners to DAEs, as was already done in [Speck, 2018] for solving ODEs and PDEs.
Fluorescence microscopy is a fundamental tool to investigate biological structures. Acquisitions are affected by blurring, arising from light diffraction, and by noise, making the reconstruction of fine-scale details challenging. In recent years, several variational optimisation methods have been formulated in the continuous setting of measure spaces. Off-the-grid approaches are endowed with a solid mathematical theory and very good performance in applications for reconstructing point sources.
Most of these approaches consider an $\ell_2$ data term usually coupled with the TV norm (BLASSO problem) of the unknown measure.
To better model the presence of a Poisson photon-counting noise process, we consider a non-quadratic data fidelity term, i.e. the Kullback-Leibler divergence, and a non-negativity constraint.
For the numerical computation of the solution, we adapt the Sliding Frank-Wolfe algorithm to this scenario and perform a grid search for the regularisation parameter to compare the results obtained with the two data terms.
We then propose a homotopy strategy for the automatic selection of the regularisation parameter, provided that an estimate of the noise level is known, which gives good-quality reconstructions and significantly speeds up the algorithm. We present some comparative results in 1D and 2D and show a 3D real-data reconstruction obtained with Homotopy-SFW with the KL divergence.
This talk will discuss the application of the hybrid high-order (HHO) methodology to the approximation of (three-dimensional) eddy current problems, as they can be encountered in the non-destructive testing of conductive materials. HHO methods hinge on hybrid polynomial unknowns, which are attached to both the cells and the faces of the mesh. They have several assets: (i) they are applicable on general polyhedral meshes, (ii) they allow for an arbitrarily large approximation order, and (iii) they are amenable to static condensation, thus allowing for a reduced computational burden. The well-posedness of HHO schemes in electromagnetics crucially hinges on hybrid Weber inequalities, whose statement strongly depends on the topology of the problem at hand. We will discuss in this talk how such discrete functional inequalities can be established in all generality.
We present an innovative approach to optimizing multigrid methods for singularly perturbed convection-diffusion equations employing space and time parallelization techniques. We discretize in space and time using continuous linear finite elements and linear one-step methods, respectively. This yields a global linear system of equations spanning all time steps. A geometric multigrid method with temporal coarsening is applied to this global system. Our approach subdivides the time domain into intervals, allowing independent smoothing for each subinterval. For smoothing purposes, the degrees of freedom are sophisticatedly arranged such that the time steps are blocked for each spatial unknown within these subintervals. The resulting spatial problems are solved numerically using the time-simultaneous multigrid method, with space-only coarsening closely related to multigrid waveform relaxation. Utilizing this technique additionally enables a high degree of parallelization in space. In numerical experiments, we optimize both space and time parallelization strategies to efficiently solve the global problem. The advantages of both multigrid methods are exploited by choosing the subintervals and thus the number of blocked time steps. This approach achieves a high degree of parallelization in both space and time, akin to space-time multigrid algorithms in its most refined state.
This is joint work with Stefan Turek (TU Dortmund University).
The development of multicellular organisms is a dynamic process in which cells divide, rearrange, and interpret molecular signals to adopt specific cell fates. This results in the emergence of gene expression patterns, which later on give rise to different body parts and organs. We still lack a full understanding of how these patterns can emerge in a precise and reproducible way during embryonic development. In this talk I will present theoretical and computational strategies to study pattern formation in the developing spinal cord. The cell-based description of growing tissue that is consistent with high-resolution images of the spinal cord neuroepithelium can potentially be utilized in designing optimal growth conditions for neuroregenerative therapies.
Experimental observations of flame spread of a thin solid within a narrow channel are presented with an emphasis on quantifying the unstable behaviors. A thin paper was horizontally placed in a channel, with oxidizer gas flowing up and down along the paper. The paper was ignited at its upstream edge, and subsequent flame spread behaviors were observed. The experiments were conducted under various conditions of oxygen flow rates and channel heights. At lower flow rates, it was noted that the flame front exhibited a breakup phenomenon, resulting in the propagation of smaller flames. Additionally, two distinct behaviors were identified: one involving the random motion of separated, smaller flames, and the other featuring the sustained movement of the flame(s) in a consistent direction. Utilizing image analysis techniques, the observed behaviors were quantified. Based on the obtained data, the physical distinctions between these modes will be discussed.
(joint work with Keisuke Suzuki (TUT), Tsuneyoshi Matsuoka (TUT), Hirofumi Izuhara (University of Miyazaki), Kazunori Kuwana (Tokyo University of Science ), Yuji Nakamura (TUT))
We propose a way to maintain strong consistency and facilitate error analysis in the context of dissipation-based WENO stabilization for continuous and discontinuous Galerkin discretizations of hyperbolic problems. Following Kuzmin and Vedral (J. Comput. Phys. 487:112153, 2023) and Vedral (arXiv preprint arXiv:2309.12019), we use WENO shock detectors to determine appropriate amounts of low-order artificial viscosity. The key novelty of our approach lies in the use of residual-based weights for candidate polynomials of a WENO reconstruction. The shock capturing terms of our stabilized Galerkin methods vanish if residuals do. This enables us to preserve the Galerkin orthogonality property and achieve improved accuracy compared to weakly consistent alternatives. Moreover, our methodology is applicable to convection-diffusion-reaction (CDR) equations. For strongly consistent Galerkin-WENO discretizations of CDR problems, we perform rigorous theoretical studies and derive a priori error estimates. Additionally, we demonstrate the stability and accuracy of the proposed method through one- and two-dimensional numerical experiments for hyperbolic conservation laws and systems thereof. The numerical results for representative test problems are superior to those obtained with traditional WENO schemes, particularly in scenarios involving shocks and steep gradients.
A common task in inverse problems and imaging is finding a solution that is sparse, in the sense that most of its components vanish. In the framework of compressed sensing, general results guaranteeing exact recovery have been proven. In practice, sparse solutions are often computed by combining $\ell_1$-penalized least squares optimization with an appropriate numerical scheme to accomplish the task. A computationally efficient alternative for finding sparse solutions to linear inverse problems is provided by Bayesian hierarchical models, in which the sparsity is encoded by defining a conditionally Gaussian prior model with the prior parameter obeying a generalized gamma distribution. An iterative alternating sequential (IAS) algorithm has been demonstrated to lead to a computationally efficient scheme, and combined with Krylov subspace iterations with an early termination condition, the approach is particularly well suited for large scale problems. Here, we will discuss two hybrid versions of the original IAS that first exploit the global convergence associated with gamma hyperpriors to arrive in a neighborhood of the unique minimizer, then adopt a generalized gamma hyperprior that promotes sparsity more strongly. The proposed algorithms will be tested on traditional imaging applications and on problems whose solution admits a sparse coding in an overcomplete system such as composite frames.
Limited-Angle Computed Tomography is a significant example of an ill-posed imaging problem, in which one aims at reconstructing the inner properties of a body by illuminating it via X-rays, from a restricted wedge of angles.
Model-based regularization techniques for such inverse problems often leverage prior knowledge on the exact solution, such as sparsity with respect to a suitable representation; on the other hand, in recent years, many data-driven methods have been developed, based on statistical learning techniques. Hybrid strategies aim at blending these two approaches, making the best out of the two worlds, i.e., providing satisfying numerical results and sound theoretical guarantees.
In this context, we recently proposed a convolutional neural network designed for sparsity-promoting regularization of ill-posed inverse problems. The key idea motivating the proposed architecture is to unroll an iterative algorithm for sparsity promotion, introducing a learnable correction on the forward operator. By employing a wavelet representation of the signals, we can represent the learned correction as a (suitably designed) convolutional layer, and by microlocal analysis, we can interpret it as a pseudodifferential operator, motivating the name of our architecture: PsiDONet.
In this talk, I will present some recent extensions of the project, in particular at the interface between unrolling and Plug-and-Play techniques, which also allows learning the proximal operator of the unfolded scheme, hence allowing for a data-driven sparse representation. Particular effort is also put into the efficient formulation of such an algorithm: a drastic reduction of the number of learnable parameters is achieved both by a compact representation of the learnable correction and by introducing an extrapolation strategy in the unrolled scheme.
This is a joint project with T. A. Bubba (University of Bath), S. Mukherjee (IIT Kharagpur), and A. Sebastiani (University of Bologna).
The freezing of water in porous media depends on various material characteristics such as pore size, particle distribution, etc. In this contribution we model the thawing and freezing thermal processes of a soil sample (fine sand with relatively low porosity and water) using the finite element method (FEM). Mathematically speaking, we study a time-dependent model of a two-dimensional two-phase system. We also study the sensitivity of this model to various parameters such as material properties or initial and boundary conditions. This contributes to a deeper understanding of freezing and thawing, which greatly affect the permafrost and attract considerable attention. This topic also has a number of applications in infrastructure, agriculture, and technology. Our finite element model is compared to available experimental data and implications are discussed.
In this talk, I construct and analyze the decay to equilibrium of a finite volume scheme for a 1D nonlinear kinetic relaxation model describing a recombination-generation reaction of two species, proposed in [Neumann, Schmeiser, KRM 2016]. The study is based on the adaptation of the L² hypocoercivity method of [Dolbeault, Mouhot, Schmeiser, Trans. Amer. Math. Soc. 2015] for the discretization of the linearized problem. Then, we establish a local result for the discrete nonlinear system. As in the continuous framework, this requires maximum principle estimates, which necessitates the use of monotone numerical fluxes.
This is a joint work with Tino Laidin (Univ. Lille) and Thomas Rey (Univ. Nice).
We discuss the motion of closed non-intersecting space curves driven by curvature in the binormal and normal directions, coupled with an advection-diffusion equation for a scalar quantity defined on the curve. We formulate the general motion law in space in the binormal and normal directions by curvature and mention some known analytical properties. The finite-volume scheme allows us to solve both the advection-diffusion problem defined on the curve and the motion of the curve itself. The numerical scheme is stabilized by a tangential velocity redistributing the discretization nodes. We demonstrate the behavior of the solution in several computational studies combining the motion in the normal and binormal velocity with the evolution of the scalar quantity.
The Kuramoto--Sivashinsky equation is well known as a mathematical model describing the interfacial dynamics of combustion phenomena. In this talk, we focus on flame front evolution on an accordion-folded paper and report that the equation captures the qualitative properties well.
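For illustration only, the interfacial dynamics modelled by the Kuramoto--Sivashinsky equation can be reproduced with a simple semi-implicit Fourier solver; the domain size, resolution, and time step below are arbitrary and unrelated to the folded-paper experiments.

# Semi-implicit Fourier solver for u_t + u u_x + u_xx + u_xxxx = 0 on a periodic domain.
import numpy as np

L, n, dt, steps = 32.0 * np.pi, 256, 0.01, 5000
x = L * np.arange(n) / n
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # wavenumbers
u = np.cos(x / 16.0) * (1.0 + np.sin(x / 16.0))  # smooth initial datum

u_hat = np.fft.fft(u)
lin = k**2 - k**4                                # Fourier symbol of -(d_xx + d_xxxx)
for _ in range(steps):
    nonlin = -0.5j * k * np.fft.fft(np.real(np.fft.ifft(u_hat)) ** 2)
    u_hat = (u_hat + dt * nonlin) / (1.0 - dt * lin)   # implicit linear, explicit nonlinear

u = np.real(np.fft.ifft(u_hat))
print("cellular flame-front profile, amplitude range:", u.min(), u.max())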
This talk is concerned with representation issues associated with the numerical solution of a unified mathematical model of continuum mechanics, due to Godunov, Peshkov, and Romenski, which can describe ideal fluids, viscous fluids and elastoplastic solids as special cases of a general continuum. The different regimes are characterized solely by the choice of material parameters and the resulting PDE system is of hyperbolic nature, with clearly defined finite wave speeds, in contrast to the standard formulation of viscous fluids via the Navier--Stokes equations.
The description of such a general continuum hinges on the evolution of a matrix-valued field called distortion, which is a generalization of the inverse deformation gradient in solid mechanics. In the fluid regime, this quantity can no longer be recovered as a gradient of displacements and encodes very rich information, in particular due to the different orientations that ideal fluid parcels can be found in. The fine features of the distortion field can be challenging (or outright impossible) to resolve with standard well-tested Finite Volume methods. Degenerate situations are routinely encountered where unphysical states are generated simply as a result of taking a convex combination of two data points.
We show how changing to an alternative representation of the same distortion field, obtained via polar decomposition, can be used to solve such discretization issues.
Instead of the original PDE system, one can evolve the rotational and stretch components of the distortion matrix separately, which allows the description of the rotational components through a quaternion-valued partial differential equation. We discuss the peculiarities of quaternion PDEs and some of the discretization strategies that they enable. We present numerical examples of high-Reynolds-number simulations which could not be carried out with the previous formulation of the model.
Stochastic Runge-Kutta methods are a popular class of numerical methods used to solve stochastic differential equations. While strongly convergent methods approximate single solution paths accurately but typically have very low convergence order, weakly convergent methods are tailored to approximating the expected value of a functional of the solution at a given time and can achieve higher order, typically order two. In this talk, we introduce some new stochastic Runge-Kutta methods that are weakly convergent of order two while fulfilling less restrictive order conditions than previously known methods, thus allowing for methods with fewer stages than seen previously.
The contents of this talk are joint work with Adrien Laurent (INRIA Rennes) and Anne Kværnø (NTNU Trondheim).
For problems in image processing and many other fields, a large class of effective neural networks has encoder-decoder-based architectures. Although these networks have achieved impressive performance, mathematical explanations of their architectures are still underdeveloped. In this talk, we study the encoder-decoder-based network architecture from the algorithmic perspective and provide a mathematical explanation. We use the two-phase Potts model for image segmentation as an example for our explanations. We associate the segmentation problem with a control problem in the continuous setting. Then, the continuous control model is discretized in time by an operator splitting scheme, the PottsMGNet, and in space by the multigrid method. We show that the resulting discrete PottsMGNet is equivalent to an encoder-decoder-based network. With minor modifications, it is shown that a number of the popular encoder-decoder-based neural networks are just instances of the proposed PottsMGNet. By incorporating soft-threshold dynamics into the PottsMGNet as a regularizer, the PottsMGNet is shown to be robust with respect to network parameters such as width and depth, and achieves remarkable performance on datasets with very large noise. In nearly all our experiments, the new network performs as well as or better than existing networks for image segmentation in terms of accuracy and Dice score.
This talk is based on a joint work with Prof Hao Liu (HKBU) and Prof Raymond Chan (HK CityU).
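For orientation, a standard form of the two-phase Potts segmentation model referred to above (the precise variant used in the talk may differ) is

min_{Ω_1 ⊆ Ω}  ∫_{Ω_1} f_1(x) dx + ∫_{Ω \ Ω_1} f_2(x) dx + λ Per(Ω_1),

where f_1 and f_2 are data fidelity terms and λ > 0 weights the perimeter regularization; a functional of this type is the starting point for the continuous control formulation described above.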
Evolution equipped animals with versatile visual systems in order to enable perception and understanding of the surrounding world. We know that humans are very good at detecting and correctly grouping various important features in visual data and are often able to interpret seemingly random videos acquired by imaging dynamic scenes with highly noisy sensors such as ultrasound imaging. Remarkably, this happens even though perception completely fails when the same information is presented at a slow rate, frame by frame, rather than in a video sequence. We undertook a study of this property of surprising dynamic perception, with the first goal of proposing a new detection and spatio-temporal grouping algorithm for such signals when, per frame, the information on the objects of interest is both random and sparse and embedded in random noise. The algorithm is based on the succession of temporal integration and spatial statistical tests of unlikeliness, the so-called a-contrario framework. The algorithm not only manages to handle such signals but exhibits a striking similarity in its performance to the perception of human observers, as witnessed by a series of psychophysical experiments on image and video data. This fact leads us to see in our algorithm a simple computational Gestalt model of human perception with only two parameters: the time integration and the visual angle for candidate shapes to be detected.
The talk is based on joint research with Thomas Dages and Michael Lindenbaum.
We present two finite volume approaches for modeling the diffusion of charged particles in constrained geometries (typically crystals) using a degenerate Poisson-Nernst-Planck system with cross-diffusion and volume filling. Both methods utilize a two-point flux approximation and belong to the framework of exponentially fitted schemes. The only difference between the two is the selection of a Stolarsky mean for the drift term originating from a self-consistent electric potential. The first version of the scheme uses a geometric mean and is an extension of the square-root approximation (SQRA) scheme. The second scheme utilizes an inverse logarithmic mean to create a generalized version of the Scharfetter-Gummel (SG) scheme. Both approaches ensure the decay of a discrete free energy. Classical numerical analysis results -- existence of a discrete solution, convergence of the scheme as the grid size and the time step go to 0 -- follow. Numerical simulations show that both schemes are effective for moderate Debye lengths, with the Scharfetter-Gummel scheme demonstrating greater robustness in the small Debye length limit.
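To make the role of the Stolarsky mean concrete, consider a scalar Nernst-Planck flux J = -D(∂_x u + u ∂_x ψ) with potential ψ. A two-point exponential-fitting flux between neighbouring cells K and L can then be written, schematically and up to sign and scaling conventions, as

F_{KL} = (D/h) (u_K e^{ψ_K} - u_L e^{ψ_L}) / M(e^{ψ_K}, e^{ψ_L}),

where M is a mean of the Boltzmann weights: the geometric mean M(a,b) = sqrt(ab) yields an SQRA-type flux, while the logarithmic mean M(a,b) = (a-b)/(log a - log b) recovers the Scharfetter-Gummel flux.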
New pricing methods for Renewable Energy Certificates (RECs) and associated derivative products are presented. Starting from a system of forward-backward stochastic differential equations (FBSDEs) and using Itô's lemma, we pose a semilinear PDE arising from the consideration of two stochastic factors: the accumulated green certificates sold by an authorized generator and the natural logarithm of the renewable electricity generation rate. One main novelty of the work comes from the numerical treatment of the nonlinearity in the first-order derivative term. The nonlinear advection term is formulated in terms of a maximal monotone operator. Semi-Lagrangian techniques are used for the time discretization of the PDE and combined with either finite difference schemes or finite element methods for the spatial discretization. PDE models for European derivatives on RECs are also posed, mathematically analyzed and numerically solved.
Attention is focused on damage occurrence in heterogeneous materials, especially on the issue of modelling crack formation and propagation.
The XFEM (extended finite element method) and its suggested modifications are presented. The main effort is devoted to fibre composites and the search for new methods for their more accurate modelling, especially close to stress concentrators. Next, some numerical problems associated with the XFEM are discussed.
The protein actin is found in essentially all eukaryotic cells. In the polymerized form of actin filaments it is an important part of the cytoskeleton which, among other functions, regulates cell morphology and cell motility. In particular, the crawling movement of various cell types is driven by the polymerization of actin filaments. In this presentation two mathematical models for actin-driven cell motility will be presented, with an emphasis on their discretization and simulation algorithms.
A prototypical motility organelle is the lamellipodium, a flat cell protrusion supported by a dynamic actin filament network. The activity and mechanical stability of the network are regulated by a plethora of other proteins. The Filament Based Lamellipodium Model (FBLM) is a modelling framework based on these molecular interactions, but upscaled to a two-dimensional anisotropic continuum model. Besides various modeling aspects, a simulation approach will be presented, including a discretization based on an anisotropic finite element space.
Whereas lamellipodium-based motility strongly relies on adhesion-induced friction between the cell and its environment, leukocytes (white blood cells) are known to be able to move through confined environments also in the absence of adhesion. An attempt to find a theoretical explanation of this phenomenon will be presented. It uses a two-dimensional model for the evolution of the cell boundary and, in an extended version, also of the boundary of the nucleus. Again a simulation procedure will be presented, with a discretization of the dynamics of the nucleus boundary based on an approach by Mikula and Sevcovic (2004), where the movement of boundary points is split into tangential and normal components.
The numerical solution of time-dependent partial differential equations is computationally demanding. With today's modern computers, the time-to-solution can be decreased through massive parallelization, which is traditionally done in the spatial dimensions. In addition, parallelization in time has received increasing interest in recent years to overcome scaling limits. Time-parallel methods like Parareal rely on a computationally cheap coarse integrator to propagate information forward in time, while a parallelizable expensive fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using lower resolution, reduced order or a simplified model. As machine learning-based methods for approximating PDEs are becoming more and more successful, in this talk, we propose to use physics-informed neural operators as coarse propagators instead. We demonstrate that Parareal with such a coarse propagator provides better speedup than a numerical coarse propagator. We show that moving the neural operator coarse propagator to a GPU while running the numerical fine propagator on the CPU further improves Parareal's single-node performance, and investigate space-time parallel scaling.
This is joint work with Abdul Qadir Ibrahim and Daniel Ruprecht.
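For context, the standard Parareal iteration alluded to above combines the coarse propagator G and the fine propagator F through the correction

U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k),

so that the expensive fine solves F(U_n^k) over all time slices can be performed in parallel while only the cheap coarse sweep remains serial; in the setting of this talk, G is replaced by a physics-informed neural operator.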
We consider the use of multipreconditioning, which allows for multiple preconditioners to be applied in parallel, on high-frequency Helmholtz problems. Typical applications present challenging sparse linear systems which are complex non-Hermitian and, due to the pollution effect, either very large or else still large but under-resolved in terms of the physics. These factors make finding general purpose, efficient and scalable solvers difficult and no one approach has become the clear method of choice.
In this work we take inspiration from domain decomposition strategies and, in particular, sweeping methods which have gained notable interest for their ability to yield nearly-linear asymptotic complexity and which can also be favourable for high-frequency problems. While successful approaches exist, such as those based on higher-order interface conditions, perfectly matched layers (PMLs), or complex tracking of wave fronts, they can often be quite involved or tedious to implement. We investigate the use of simple sweeping techniques applied in different directions which can then be incorporated in parallel into a multipreconditioned GMRES strategy. Each sweep consists of solving smaller Helmholtz problems, which need not be done to high accuracy, for which we can use the same strategy recursively or else any other effective Helmholtz solver; this allows the potential for a matrix-free approach. Numerical results will demonstrate the strengths of our overall solver strategy.
Crystalline mean curvature flow is a model of the growth of crystals. It is formally a gradient flow of an anisotropic surface energy with a convex polytope as its optimal shape. The evolving surface develops corners and flat features (edges and facets) that are usually preserved in the evolution, but they can also break and bend. We discuss a notion of viscosity solutions of the level set formulation and its wellposedness, and a numerical method based on this formulation in two and three dimensions. This talk is based on joint works with Yoshikazu Giga (U. of Tokyo), Inwon Kim (UCLA) and Dohyun Kwon (U. of Seoul).
Numerical simulation of fluid dynamics plays an important role in a wide range of modern industrial and real-life applications, such as vehicle engineering, engine design, aerospace or weather forecasting. In the past decades, great effort has been put into the development of efficient and accurate numerical methods.
We will present convergence results for stable and consistent approximations of a compressible fluid flow model obtained by a finite volume scheme belonging to the class of so-called invariant domain preserving, or structure preserving, methods. Our approach to the numerical analysis of the underlying nonlinear equations, in the spirit of the Lax Equivalence Principle 'stability + consistency = convergence' (originally stated for linear problems), is based on dissipative solutions, the weak-strong uniqueness principle, K-convergence and related concepts.
This talk reports on our first experiences in using techniques from machine learning for the numerical solution of convection-dominated convection-diffusion problems. First, the question is studied whether a feed-forward neural network (multilayer perceptron), which is trained on the basis of standard limiters for discontinuous Galerkin discretizations, is able to limit spurious oscillations as well. In the second part of the talk, various loss functionals for physics-informed neural networks (PINNs) are studied that are novel in the context of PINNs and are especially designed for convection-dominated convection-diffusion problems.
In many applications based on hyperbolic systems, one is interested in tracking particular phenomena. This means that the time step should be tailored to the speed of the phenomena one would like to follow, instead of being determined by stability conditions. This consideration is the main motivation for studying implicit schemes.
Implicit schemes for hyperbolic systems have been studied especially in the very stiff regime characterized by low Mach numbers. This particular regime has prompted the development of Asymptotic Preserving Low Mach numerical schemes, which have been applied to many models of interest with great success. Another field of interest is that of implicit schemes which only require knowledge of a particular time scale of interest. The difficulties arising in this context are quite interesting, and provide insight into the effect of the time integrators we use.
In this talk, I will discuss the construction of implicit schemes, in the low Mach case and in the new paradigm of implicit schemes for conservation laws.
Standard spectral methods for solving differential equations are based on approximating target functions in the linear span of basis sets. These methods enjoy convergence guarantees and are very popular in fields such as quantum physics. However, they suffer from the curse of dimensionality which limits their applicability.
I present our recent work, where we enhance standard basis sets of L^2 via composition with invertible neural networks and show that the resulting families are dense under mild conditions on the networks. This construct is then used to solve molecular time-independent Schrödinger equations. Our results demonstrate an increase in accuracy of several orders of magnitude over standard spectral methods.
References:
[1] Y. Saleh, A. Iske, A. Yachmenev, and J. Küpper, "Augmenting basis sets by normalizing flows", Proc. Appl. Math. Mech. 23, e202200239 (2023).
[2] Y. Saleh, Á. F. Corral, A. Iske, J. Küpper, and A. Yachmenev, "Computing excited states of molecules using normalizing flows", (2023), arXiv: 2308.16468 [physics].
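One natural way to realize such an augmentation, stated here only as an illustration (see [1, 2] for the authors' construction), is to compose a fixed orthonormal basis (φ_i) of L^2 with an invertible map f represented by a normalizing flow,

ψ_i(x) = |det Df(x)|^{1/2} φ_i(f(x)),

which, by the change-of-variables formula, is again an orthonormal family whose span inherits density from that of (φ_i), while the parameters of f can be optimized for the problem at hand.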
In the talk, we study different constitutive relations for (non-)Newtonian fluids. In particular, we introduce implicit relations which allow us not only to unify the approach to various constitutive equations but also to describe a much broader class of relations. Moreover, we also study the boundary conditions in a similar way and explain possible applications. We will present results of collaborations with M. Bulíček, J. Málek; P. A. Gazca Orozco, F. Gmeineder, T. Tscherpel.
This investigation applies computational fluid dynamics simulations to support the design development of an impaction-based particle collector. This collector aims at submicron aerosol sampling in the supersonic free stream radially alongside a sounding rocket body in the mesosphere (altitude of 85 km). The overall goal of the development process is to collect mesospheric aerosols for physico-chemical analyses to gain further insights into upper-atmosphere processes, e.g. aerosol particles from meteoric ablation and their potential impact on noctilucent cloud (NLC) formation. However, sampling particles at these heights and analyzing them is only possible at considerable cost and with great effort using sounding rockets, which is why only a sparse database is available so far. The development process and efficiency analyses are based on numerical simulations performed with the software COMSOL Multiphysics®, where the simulation workflow is divided into two independent studies: First, with the CAD Import Module, the detailed rocket geometry (as far as relevant for the particle collection efficiency) is implemented into the model. Then, the supersonic flow field surrounding the rocket under varying flight attitudes is simulated with the High Mach Number Laminar Flow Interface by solving the Navier-Stokes equations for compressible fluids, whereby a steady-state solution is obtained. Of particular interest for the design and arrangement of the probe collector are (a) the evaluation of the evolving flow field around the sounding rocket at the free stream Mach number of Ma = 1.75 (at 85 km altitude) and (b) the localization as well as the impact of the occurring shock wave on the particle sampling. Furthermore, the thickness of the boundary layer around the rocket body is investigated to ensure particle sampling well away from perturbing influences. Additionally, for modeling the particle trajectories and thus the aerosol impactions on the probe collector, Particle Tracing for Fluid Flow is utilized, where Newton's second law is applied. For this purpose, a preliminary investigation of possible particle forces and their magnitudes is performed and the corresponding forces (drag and Brownian force) are considered in the model. With the final collector design, the number of impacted particles on the collector surfaces is analyzed in a parameter study for aerosols of 1.2 nm diameter at various number concentrations. From this, the potential sampling efficiency of the probe collector is estimated. In conclusion, impactions onto the designated collector surfaces are highly probable according to the simulation results. Moreover, our COMSOL® model can be validated against measurement results from the planned future rocket flight.
In this study, we consider particle-laden flows on an inclined plane under the effect of gravity. Previous experimental works have noted that a particle-rich ridge is generated near the contact line. We investigate the bump structure observed in particle-rich ridges in terms of Lax shock waves in the mathematical theory of conservation laws, explicitly considering the effect of particles with nontrivial radii on the morphology of particle-laden flows. We also extract the dependence of the bump structure on the radius and concentration of the particles.
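For reference, for a scalar conservation law u_t + f(u)_x = 0 a discontinuity connecting states u_L and u_R propagates with the Rankine-Hugoniot speed

s = (f(u_R) - f(u_L)) / (u_R - u_L),

and is an admissible Lax shock when f'(u_R) < s < f'(u_L); this is the classical notion in terms of which the bump at the particle-rich ridge is interpreted above.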
In microscopy and other imaging techniques, the acquired measurement, together with some information about the imaging system and the object captured, enables us to unravel pertinent information of interest, e.g. a clean image. Accurate reconstruction of this underlying information requires the use of specialized reconstruction techniques. This presentation examines reconstruction techniques in different microscopy modalities, incorporating both conventional model-based methods and their integration with modern machine-learning approaches. Concrete examples with real data will be shown during the talk.
We propose a two-stage variational model for the additive decomposition of images into piecewise constant, smooth, textured and white noise components. The challenging separation of noise from texture is successfully achieved by including a normalized whiteness constraint in the model, and the selection of the regularization parameters is performed based on a novel multi-parameter cross-correlation principle. The two resulting minimization problems are efficiently solved by means of the alternating directions method of multipliers. Numerical results show the potentiality of the proposed model for the decomposition of textured images corrupted by several kinds of additive white noises.
The frequencies of a vibrating planar membrane correspond to the Dirichlet Laplacian eigenvalues on the planar domain. In 1966, Mark Kac posed the famous question "Can One Hear the Shape of a Drum?". In other words: is the shape of a planar domain uniquely determined by its Dirichlet Laplacian spectrum? In 1992, Gordon, Webb, and Wolpert constructed a pair of non-isometric domains with identical Laplacian eigenvalues, answering Kac's question in the negative. In this talk, we present a perspective on the inverse eigenvalue problem as an ill-posed inverse problem, and present some preliminary work in applying Bayesian inference techniques to this question. In addition, we will present the perturbation theory for Dirichlet Laplacian eigenvalues, as well as our modifications to the state-of-the-art algorithm for computing Dirichlet Laplacian eigenvalues: Trefethen and Betcke's improved Method of Particular Solutions.
Kernel methods are a class of versatile tools for machine learning, numerical approximation and scientific computing. Classical kernel methods, which use a fixed a priori kernel, do not allow for feature learning, which makes them less suitable for high-dimensional real-world tasks. In this talk, we review so-called two-layered kernels, which already allow for some feature learning. Based on insights from these two-layered kernels, we subsequently introduce deep kernels that leverage invertible neural networks, discuss their optimization and showcase their use for challenging tasks from machine learning and scientific computing.
Joint work with: Armin Iske.
References
[1] T. Wenzel, F. Marchetti, and E. Perracchione. Data-driven kernel designs for optimized greedy schemes: A machine learning perspective. arXiv preprint arXiv:2301.08047, 2023. Accepted for publication in SISC.
The main objective of the presentation is to introduce a novel method for PDE-based supervised deep learning data classification, in our case, the classification of Natura 2000 protected habitats using Sentinel-2 satellite optical data. The Natural Numerical Network [1] is based on the numerical solution of nonlinear partial differential equations of forward-backward diffusion type on a semi-complete directed graph [2]. Due to the model nonlinearity, for time discretization, we use a semi-implicit scheme, and for discretization of the graph diffusion, we use the finite volume method principles, i.e. the mass balance of diffused quantity in graph vertices. In the Natural Numerical Network, the forward diffusion causes the movement of points in a feature space toward each other and thus clustering of desired vertices. The opposite effect, keeping the points away from each other, is caused by a slight backward diffusion and gives a repulsion of vertices belonging to different clusters. The Natural Numerical Network contains a few parameters that are optimized in the learning phase of the method. After learning parameters and optimizing the topology of the network graph, the classification of Natura 2000 habitats is performed. A relevancy map for each habitat is created as a tool for validating the classification, and then it is used for automatic identification of Natura 2000 habitat areas in satellite images as well as for finding new protected habitat appearances by using satellite imagery.
This is a joint work with K. Mikula, M. Kollár, M. Ambroz, J. Šibík and M. Šibíková.
[1] Karol Mikula, Michal Kollár, Aneta A. Ožvat, Martin Ambroz, Lucia Čahojová, Ivan Jarolímek, Jozef Šibík, Mária Šibíková, Natural numerical networks for Natura 2000 habitats classification by satellite images, Applied Mathematical Modelling, Volume 116 (2023) pp. 209-235
[2] Karol Mikula, Michal Kollár, Aneta A. Ožvat, Lucia Čahojová, Mária Šibíková, Natural numerical networks on directed graphs in satellite image classification, L. Calatroni et al. (Eds.): Scale Space and Variational Methods in Computer Vision. SSVM 2023. Lecture Notes in Computer Science, Vol. 14009, pp. 339–351, Springer, Cham, 2023
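Purely as a schematic illustration of the finite volume principle mentioned above (not the exact scheme of [1, 2]), a semi-implicit diffusion step for the feature vector X_v at a vertex v of the network graph can be written as the vertex mass balance

(X_v^{n+1} - X_v^n) / τ = Σ_{w ∈ N(v)} ε_{vw}^n (X_w^{n+1} - X_v^{n+1}),

where the coefficients ε_{vw}^n are evaluated from the previous time step; positive coefficients produce the forward (clustering) diffusion, while slightly negative ones produce the backward (repulsive) diffusion between points belonging to different clusters.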
In this talk we discuss recent progress in using randomization for solving linear systems of equations and eigenvalue problems. We first present randomized versions of processes for orthogonalizing a set of vectors and their usage in the Arnoldi iteration. We then present associated Krylov subspace methods for solving large-scale linear systems of equations and eigenvalue problems. The new methods retain the numerical stability of classical Krylov methods while reducing communication and being more efficient on modern massively parallel computers.
We consider a nonlinear, possibly degenerate parabolic equation, modelling e.g. unsaturated flow in a porous medium, or biofilm growth. Implicit time stepping methods are popular due to their stability, allowing one to avoid severe restrictions on the time step. This leads to nonlinear, time-discrete elliptic equations, for which linear iterative schemes are needed to approximate the solution. Due to the degeneracy, the Newton scheme may fail to converge unless very small time steps are used, possibly after applying a regularisation step.
In this talk, we present an iterative scheme that does not require any regularization, for which linear convergence can be proved rigorously under a mild restriction (if any) on the time step. This convergence is obtained at the level of the elliptic problem, so it is not restricted to any particular spatial discretisation or mesh. Moreover, it features improved convergence behavior, being linearly convergent in cases where the Newton scheme diverges, and requiring at most a comparable number of iterations whenever the Newton scheme does converge.
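A well-known linearization of this robust type, named here only for illustration (the scheme presented in the talk may differ), is the L-scheme: for the backward Euler step of ∂_t b(u) - ∇·(k ∇u) = f and a constant L ≥ sup b', the iterates u^{n,j} satisfy

L (u^{n,j+1} - u^{n,j}) + b(u^{n,j}) - b(u^{n-1}) - τ ∇·(k ∇u^{n,j+1}) = τ f,

a linear elliptic problem per iteration whose contraction rate is essentially independent of the spatial mesh and only mildly dependent on the time step τ.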
In this talk, we will present recent results on numerical methods for some representative hyperbolic problems that are used in level set methods for tracking interfaces and/or in models based on conservation and balance laws. Our goal is to offer numerical schemes that exhibit higher order and high resolution accuracy with superior stability properties and possess convenient properties for the algebraic systems that need to be solved. The primary tool for deriving these schemes is the Lax-Wendroff procedure, which is not utilized in its standard direct form but in a partial or inverse form. Consequently, we are able to present compact implicit and semi-implicit numerical schemes, enabling applications of efficient iterative solvers such as the deferred correction method and the fast sweeping method. Several examples will be provided to demonstrate the potential of these methods.
We describe a parallel and quasi-explicit Discontinuous Galerkin (DG) kinetic scheme for solving systems of balance laws. The solver is CFL-less (i.e., the CFL number can be arbitrary) and has the complexity of an explicit scheme. It can be applied to any hyperbolic system of balance laws. In this work, we assess the performance of the scheme in the particular cases of the three-dimensional wave equation and of Maxwell's equations. We measure the benefit of the unconditional stability by performing experiments with very large CFL numbers. In addition, the parallel possibilities of the method are investigated, and a real-life simulation is presented.
In this talk, I will consider a nonlocal version of the Shigesada-Kawasaki-Teramoto (SKT) cross-diffusion system, which arises in population dynamics. This system has entropy dissipation properties on which one can rely to design a robust and convergent numerical scheme for its numerical simulation. In terms of numerical analysis, I will present discrete compactness techniques, entropy-dissipation estimates and a new adaptation of the so-called duality estimates for parabolic equations in Laplacian form. I will also present numerical experiments illustrating the influence of the nonlocality in the system: on convergence properties, as an approximation of the local system, and on the development of diffusive instabilities. This is joint work with Maxime Herda (Inria Lille).
In this talk, we focus on the numerical resolution of partial differential equations in the context of image processing and 3D printing, for a couple of problems. The first is related to the resolution of the image segmentation problem through a high-order accurate scheme based on the level-set method. In this approach, the evolving curve is described as the 0-level set of a representation function, but we modify the velocity that drives the curve to the boundary of the object in order to obtain a new velocity with additional properties that are extremely useful for developing a more stable high-order approximation at a small additional cost.
After that, we talk about some peculiar issues related to the 3D printing of an object. In particular, having in mind 3D printers based on fused deposition modeling (FDM) technology that create a solid object layer by layer, starting from the lowest one, we show how to build object-dependent infill structures that save material while keeping the desired rigidity and printability, by solving an eikonal equation. The level sets of the solution correspond to the desired infill structure, which, by construction, follows the boundary of the object.
In this contribution, we present a Lagrangian approach to forest fire modelling. The fire perimeter is represented by a 3D discrete curve on a surface. Our mathematical model is based on the empirical laws of fire spread influenced by the fuel, wind, terrain slope, and shape of the fire perimeter with respect to the topography (geodesic and normal curvatures). The motion of the fire perimeter is governed by an intrinsic advection-diffusion equation. To obtain the numerical solution, we use a semi-implicit scheme to discretize the curvature term, and the so-called inflow-implicit/outflow-explicit approach combined with the implicit upwind scheme is applied to the advection term. A fast treatment of topological changes (splitting and merging of the curves) is also incorporated and is briefly described.
The propagation model is applied to artificial and real experiments. To adapt our model to wildfire conditions, we tune the model parameters with the Hausdorff distance as a criterion. Using data assimilation, we are able to estimate the normal velocity of the fire front (rate of spread), dominant wind direction and selected model parameters.
Combustion problems are usually described by a set of Navier-Stokes equations (NSE) together with advection-diffusion-reaction equations (ADRE) for scalar quantities. The Lattice Boltzmann Method (LBM) is able to solve NSEs together with ADREs for passive scalars more or less efficiently. We use the LBM to solve a system of equations related to simplified models of combustion; for example, we neglect radiation and assume ideal gases. The talk will briefly tackle the 3D version of the LBM-LES solver based on a cumulant kernel for the NSE, together with a central-moment-based kernel for ADREs. A few examples of gas and solid combustion simulated by the LBM will be presented, and a small review of recent efforts on the LBM's applicability in combustion simulations will also be part of the talk.
In this talk, we present a novel concept for the determination of the Earth's gravity field by solving the nonlinear satellite-fixed geodetic boundary value problem. Interestingly, the model formulation contains a generalized eikonal boundary condition for the norm of the gradient of the gravity potential on the Earth's surface. The nonlinear satellite-fixed geodetic boundary value problem formulation consists of a solution of the Laplace equation defined in a 3D bounded domain outside the Earth, the nonlinear eikonal-type boundary condition prescribed on the Earth's surface, and the Dirichlet BC given on a spherical boundary placed approximately at the altitude of a satellite mission and on four additional side boundaries. Our solution is based on an iterative approach in which, in every iteration, we solve an oblique derivative boundary value problem for the disturbing potential by the finite element method. We will present the mathematical formulation and numerical scheme, and discuss numerical results.
The jet marching method (JMM) solves the eikonal equation by marching its jet using semi-Lagrangian updates (i.e., local raytracing). Marching jets allows a compact Hermite interpolation-based scheme, further enabling the use of paraxial raytracing for marching the amplitude of the associated geometric optic wave. We use the JMM to solve high-frequency wave problems on unstructured meshes, with applications motivated by computational room acoustics.
Shadowed regions on airless bodies receive energy in the form of scattered light, and the availability of increasingly large shape models of planets calls for the development of fast algorithms for modeling surface temperatures, relevant for future manned or robotic missions. We present a fast algorithm for solving the radiosity integral equation in the context of modeling thermal radiation or Lambertian illumination. This involves the hierarchical, block low-rank factorization of a large matrix of view factors. Unlike the dense kernel matrices which are often compressed using these methods, the view factor matrix is dense with O(N^2) nonzero entries, but with a significant fraction of zeros due to ray occlusion. We are able to assemble this matrix in roughly O(N^2) time; afterwards, matrix-vector multiplies take approximately O(N) time, and storage costs are also O(N). This allows us to solve long-running time-dependent problems using O(N) space, each step requiring only O(N) FLOPs. Numerical examples for a permanently shadowed crater at the Lunar south pole and for comet 67P/Churyumov-Gerasimenko will be presented.
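Schematically, after discretization over N surface elements the radiosity equation takes the form

(I - R F) B = E,

where B collects the element radiosities, E the emitted or directly received radiation, R is the diagonal matrix of albedos, and F is the N×N view factor matrix whose entries vanish for mutually occluded element pairs; the hierarchical block low-rank factorization is applied to F so that each application of F costs roughly O(N) operations.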
When analyzing macrophage trajectories, we often have to deal with noisy data due to the random motion of the cells and possible imperfections in cell center detection. To smooth these trajectories, we present a mathematical model and numerical method based on an evolving open plane curve approach in the Lagrangian formulation. The model for trajectory smoothing allows us to distinguish between the directional and the random parts of the trajectories and to extract the smoothed velocity vectors. We present an analysis of the behavior of the random parts of the trajectories based on the mean squared displacement; the result of our analysis shows that the random parts obey a subdiffusion process. On the other hand, we use the smoothed velocity vectors as a prescribed Dirichlet boundary condition for the reconstruction of the velocity vector field driving the macrophages to the wound.
This talk aims at presenting a subcell monolithic DG/FV convex property-preserving scheme for solving systems of conservation laws on 2D unstructured grids. It is known that the discontinuous Galerkin (DG) method needs some sort of nonlinear limiting to avoid spurious oscillations or nonlinear instabilities which may lead to the crash of the code. The main idea motivating the present work is to improve the robustness of DG schemes, while preserving as much as possible their high accuracy and very precise subcell resolution. To do so, a convex blending of a high-order DG and a first-order finite volume (FV) scheme is performed locally at the subcell scale where it is needed. To this end, we first prove that it is possible to rewrite the DG scheme as a subcell FV scheme on a subgrid provided with some specific numerical fluxes referred to as DG reconstructed fluxes. Then, the monolithic DG/FV scheme is defined as follows: to each face of each subcell are assigned two fluxes, a first-order FV one and a high-order reconstructed one, which are in the end blended in a convex way. The goal is then to determine, through analysis, optimal blending coefficients to achieve the desired properties (for instance positivity, non-oscillatory behavior, entropy inequalities) while preserving the high accuracy of the scheme. Numerical results on various types of problems will be presented to assess the very good performance of the designed method.
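In schematic terms, on each subcell face the resulting flux is the convex combination

F = θ F^{FV} + (1 - θ) F^{DG},  with θ ∈ [0, 1],

where F^{FV} is the robust first-order finite volume flux, F^{DG} is the high-order DG reconstructed flux, and the local blending coefficient θ is the quantity to be determined: θ = 0 retains the full DG accuracy in smooth regions, while θ = 1 enforces the first-order, property-preserving update in troubled subcells.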
This talk addresses multi-domain strategies to numerically solve linear and nonlinear elliptic problems on non-periodic, perforated domains. With the domains representing realistic urban geometries and containing numerous polygonal perforations, our main goal is to model floods in urban areas. For the linear model, we introduce a novel coarse space that is spanned by locally discrete harmonic basis functions that are piecewise polynomial along subdomain boundaries. We combine the coarse approximation with local subdomain solves in a two-level domain decomposition method. For the nonlinear Diffusive Wave model, we present nonlinear preconditioning techniques that allow us to significantly reduce iteration counts when compared to Newton's method. These nonlinear preconditioning techniques use the aforementioned coarse space and provide scalability with respect to the number of subdomains.
This paper describes a novel subface flux-based Finite Volume (FV) method for discretizing multi-dimensional hyperbolic systems of conservation laws on general unstructured grids. The subface flux numerical approximation relies on the notion of a simple Eulerian Riemann solver. The Eulerian Riemann solver is constructed from its Lagrangian counterpart by means of the Lagrange-to-Euler mapping. This systematic procedure ensures the transfer of good properties such as positivity preservation and entropy stability. In this framework, conservativity and entropy stability are no longer locally face-based but result respectively from a node-based vectorial equation and a scalar inequality. The corresponding multi-dimensional FV scheme is characterized by an explicit time step condition ensuring positivity preservation and entropy stability. The application to gas dynamics provides an original multi-dimensional conservative and entropy-stable FV scheme wherein the numerical fluxes are computed through a nodal solver which is similar to the one designed for Lagrangian hydrodynamics. The robustness and the accuracy of this novel FV scheme are assessed through various numerical tests in 2D and 3D on unstructured meshes for demanding hypersonic flows. We observe its insensitivity to the numerical pathologies that plague classical face-based contact discontinuity preserving FV formulations.
The GMRES algorithm of Saad and Schultz (1986) is an iterative method for approximately solving linear systems Ax = b, with initial guess x_0 and residual r_0 = b − Ax_0. The algorithm employs the Arnoldi process to generate the Krylov basis vectors (the columns of V_k). It is well known that this process can be viewed as a QR factorization of the matrix B_k = [ r_0, AV_k ] at each iteration. Despite an O(u)κ(B_k) loss of orthogonality, for unit roundoff u and condition number κ(B_k), the modified Gram-Schmidt formulation was shown to be backward stable in the seminal paper by Paige et al. (2006). We present an iterated Gauss-Seidel formulation of the GMRES algorithm (IGS-GMRES) based on the ideas of Ruhe (1983) and Swirydowicz et al. (2020). IGS-GMRES maintains orthogonality to the level O(u)κ(B_k) or O(ε), depending on the choice of one or two iterations. For two Gauss-Seidel iterations, the computed Krylov basis vectors remain orthogonal to working precision and the smallest singular value of V_k remains close to one. The resulting GMRES method is thus backward stable. We show that IGS-GMRES can be implemented with only two synchronization points per iteration, making it relevant to large-scale parallel computing environments.
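For reference, with the Arnoldi relation A V_k = V_{k+1} H_k, where H_k is the (k+1)×k upper Hessenberg matrix, the GMRES iterate is x_k = x_0 + V_k y_k with

y_k = argmin_y || β e_1 − H_k y ||_2,  β = ||r_0||_2,

so the quality of the computed solution hinges on how well the columns of V_k (equivalently, the QR factors of B_k = [ r_0, AV_k ]) retain orthogonality in finite precision, which is exactly what the iterated Gauss-Seidel formulation controls.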
Methods of time-parallel time integration have gained increasing interest in the last decade due to the advent of massively parallel computers. We will give an introduction to the challenges of modern hardware and the problems arising for efficient numerical methods. Constantly increasing parallelism and vectorisation are only accessible with suitable numerical algorithms. One particular approach to introducing sufficient parallel structure into the numerical problem is to use time-parallel (PinT) methods. The time direction then yields additional structure and increases the computational load so as to take advantage of the available hardware. By introducing the main challenges and ideas we will set the stage for the talks in this mini-symposium, ranging from the mathematical aspects of PinT to computer science aspects in HPC.
In many applications we face a sequence of linear systems and obtaining a solid preconditioner for each of them separately is often computationally prohibitive. Unfortunately, the naive approach of calculating a powerful preconditioner for the first system and using it ``as is'' also for the subsequent ones often leaves a lot to be desired. We work with the so-called preconditioner maps to bridge this gap and make efficient use of the first preconditioner also for the following systems, using the hierarchical matrix formats along the way and proposing the so-called Hierarchical Approximate Maps for preconditioner updating. We analyze the resulting preconditioners and numerically investigate further simplifications to enhance efficiency. This is a joint work with Eric de Sturler.
During wound healing, macrophages often show directional and random motion. We assume that the drift of macrophage motion toward the wound is caused by the danger signal gradient and thus, although we do not know the danger signal gradient explicitly, we use the drift part of the macrophage motion to reconstruct the corresponding velocity vector field. The prescribed Dirichlet conditions are the sparsely distributed smoothed velocity vectors obtained from the smoothing of the macrophage trajectories. The method consists of applying the Laplace equation to the two components of the vectors and to the vector lengths with the prescribed Dirichlet conditions.
In computational inverse problems, the forward model operator H encodes the information related to the geometry and the physical properties of the acquisition process. In many applications, H may be poorly known. This problem is particularly relevant in microscopy or astronomical imaging, where the data can be subject to the action of an unknown point spread function, or in computed tomography, where the view angles used in the acquisition phase may be known with a certain degree of uncertainty. In this talk, we propose a framework for the calibration of the point spread function in imaging inverse problems. Our approach is based on the solution of a bilevel learning problem, which estimates the blur kernel by solving a nested optimization problem where a variational model acts as a constraint. Here, a closed-form expression for the gradient of the upper-level loss is derived, enabling the use of a line-search based proximal gradient method in the outer steps of the bilevel algorithm, combined with spectral step length selection rules. A residual whiteness principle is also exploited for the automatic selection of the regularization parameter in the lower-level variational model. Numerical tests on different imaging problems will be presented to validate the proposed numerical scheme.
The segmentation of the aorta is crucial for many medical image analysis tasks such as the diagnosis of large vessel vasculitis. In this work, we present the extraction of a path inside the aorta from CT data. We use fast marching and 3D path tracking. The CT aorta data are complex, and it is a challenging task to keep the path inside the biggest vessel. We simplify the problem by automatic detection of key points in the descending aorta (localized according to the liver centroid), and in the aortic arch and the ascending aorta respectively (according to the carina). The key points are found using a specific combination and modification of several segmentation approaches (thresholding, labeling), morphological operations (dilation, erosion), image filtering and pattern recognition (circular Hough transform). We then build the path step by step in each part of the aorta using the key points.
The work deals with the multichannel curve segmentation of planar point clouds, scanned by a terrestrial laser scanner (TLS), which usually represent walls of buildings. Information derived from the point cloud can be used for quality checks of a building (e.g. wall flatness or mutual perpendicularity of walls) or for the creation of virtual models of buildings. Therefore, segmentation of the planar point cloud into subsets representing the surface of the wall and other structures (e.g. doors, electrical outlets) is desired. We describe a mathematical model of evolving planar curves, which is used for the segmentation of the point cloud. The evolution of the curves is controlled by the properties of the scanned walls, such as colour and intensity. We discretize the model using the finite volume method and the semi-implicit Inflow-Implicit/Outflow-Explicit (IIOE) scheme. We demonstrate the functionality of the segmentation on real data.
The affine morphological scale space model is investigated from a numerical point of view. Numerical schemes of Crank-Nicolson type and finite volume methodology are presented. Numerical experiments concerning accuracy, experimental order of convergence, effectiveness and the affine invariance property of the proposed schemes are presented.
The assessment of volumetric blood flow in bypass grafts stands as a pivotal parameter indicative of the overall success of surgical intervention. In cases where blood flux diminishes significantly, the efficacy of the bypass may be compromised, potentially leading to failure. In this poster, we present a novel bypass graft simulation tool (currently under development). This tool comprises three integral components: segmentation of X-ray coronary angiography, bypass graft planning, and simulation of blood flow, alongside an anastomosis optimization module. Our presentation deals primarily with the blood flow modeling aspect, which is performed by the Lattice Boltzmann Method. We conduct simulations on three scenarios involving the left coronary artery -- healthy, stenosed and stenosed with a bypass -- and verify our findings against those obtained from SimVascular, a specialized software package for medical image data segmentation and patient-specific blood flow simulation and analysis.
Modified proportioning with gradient projections (MPGP) is an active set method employing the conjugate gradient method to minimize a quadratic objective function on the free set, gradient projection to expand the active set, and proportioning to reduce the active set. Preconditioners are widely used to accelerate the solution of systems of linear equations. However, applying preconditioners is not straightforward in the case of conjugate gradient-based algorithms for constrained quadratic programming because preconditioning transforms the variables and consequently converts the constraints into more general ones, e.g., bound constraints into linear inequality constraints. Preconditioning applied to the submatrix of the Hessian given by the free set (preconditioning in face) is a successful strategy that does not transform the constrained variables. However, the preconditioner has to be recomputed every time the free set changes. We propose a new strategy to integrate preconditioning into MPGP, avoiding both the transformation of the constrained variables and the recomputation of the preconditioner based on the free set.
We propose a geometric interpretation and limiting of one-step, semi-implicit, fully discrete schemes for the one-dimensional linear advection equation with positive constant speed. The schemes are based on the unconditionally stable, first-order implicit upwind scheme. The high-order numerical fluxes are expressed as the first-order implicit upwind flux plus a high-order correction term. To interpret the semi-implicit schemes geometrically, we translate the values from the new time level back to the old time level along characteristics. The numerical fluxes are then computed by a polynomial reconstruction using old and new values. To avoid oscillations, we suggest limiting the high-order correction terms by enforcing bounds on the updated cell value. In comparison, for fully explicit schemes, if we bound the updated cell value by the upwind pair taken from old values, we can derive well-known second-order TVD schemes. These explicit schemes are stable up to CFL 1. In the case of semi-implicit schemes, the bounds are taken from both new and old values, resulting in semi-implicit TVD schemes, which can be unconditionally stable. Our formulation of the limiter is general in the sense that it can be applied to arbitrary high-order correction terms.
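Schematically, for u_t + a u_x = 0 with a > 0 and ν = a Δt / Δx, the underlying first-order implicit upwind scheme and the flux splitting described above read

u_i^{n+1} = u_i^n - ν (u_i^{n+1} - u_{i-1}^{n+1}),   F_{i+1/2} = a u_i^{n+1} + a C_{i+1/2},

where C_{i+1/2} is the high-order correction term obtained from the polynomial reconstruction using old and new values; it is this correction that the limiter bounds so that the updated cell value stays within the prescribed bounds.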
For the Poisson equation posed in a planar domain containing a large number of polygonal perforations, we propose a low-dimensional approximation space based on a coarse polygonal partitioning of the domain. Similarly to other multi-scale numerical methods, this coarse space is spanned by basis functions that are locally discrete harmonic. We provide an error estimate in the energy norm that only depends on the regularity of the solution over the edges of the coarse skeleton. For a specific edge refinement procedure, this estimate allows us to establish superconvergence of the method, even if the true solution has low general regularity. Combined with the Restricted Additive Schwarz method, the proposed coarse space leads to an efficient two-level iterative linear solver which achieves the fine-scale finite element error in a few iterations. Numerical experiments showcase the use of this coarse space on test cases involving singular solutions and realistic urban geometries.