1 Introduction
Strong industrial development over the last century has led to a significant increase in public demand for different types of energy and, as a consequence, to an enormous increase in demand for natural resources. The subsurface is being used as a storage site for carbon dioxide (CO2), nuclear waste or energy. In order to ensure efficient, safe and sustainable resource management, our society needs a better understanding of and improved predictive capabilities for subsurface problems. In particular, the ability to predict how the subsurface will react to planned interventions is indispensable. However, subsurface flow and transport phenomena are complex and nonlinear. Moreover, most subsurface systems are dominated by uncertainty: external driving forces and material properties are observable only to a limited extent and at high cost. Overall, this leads to an inherent uncertainty in all modeling endeavors and in model-based predictions or decision support.
1.1 Modeling carbon dioxide storage
Great research efforts have been directed towards understanding the processes of geological CO2 storage (GCS). GCS is currently being discussed intensively as an interim technology with high potential for mitigating CO2 emissions (e.g. IPCCSpecial). It comprises capturing CO2 at industrial facilities, compressing it into a liquid or supercritical state and disposing of it in deep underground formations. The multiphase flow and transport processes involved are strongly nonlinear. They include phase changes in the region of the critical point, effects such as gravity-induced fingering and convective mixing, as well as geochemical and geomechanical processes. In order to describe the space-time evolution of injected CO2 plumes and to investigate possible failure mechanisms of subsurface CO2 storage, (semi-)analytical solutions have been derived in Nordbotten2005. A study that compares various simplifying semi-analytical models with complex numerical simulation tools was performed in Ebigbo2007. The analysis in Birkholzer2009 focused on the effects of large-scale CO2 leakage through low-permeability layers. Changes in pressure due to migration of fluids into the Above Zone Monitoring Interval of a geologic CO2 storage site were studied in namhata2016probabilistic. These studies are cited here merely to provide a few examples. More detailed reviews are provided in, e.g., Bench; Ebigbo2007; IPCCSpecial. The current status of CO2 storage in deep saline aquifers, with emphasis on modeling approaches and practical simulations, is presented in celia2015status. However, modeling underground CO2 storage involves uncertainty Hansson2009 due to the limited knowledge of subsurface properties (porosity, permeability, etc.), uncertainty in the physical conceptualization, uncertainty in boundary conditions and also human subjectivity in data interpretation OladNowak_AWR2010. Thus, quantification of uncertainty plays a key role in the development of CO2 storage as a large-scale interim solution.
1.2 Uncertainty quantification
The main challenge in uncertainty quantification (UQ) is that brute-force stochastic simulation techniques (e.g. Helton2003) are infeasible for large-scale problems. Attempts to speed up uncertainty quantification can be subdivided into two principal ways: (1) developing analytical solutions, semi-analytical solutions, conceptual simplifications, etc.; or (2) accelerating the forward modeling itself, e.g., using surrogate forward models such as response surfaces, emulators, metamodels, reduced-order models, etc. The current paper focuses on the second way. A reasonably fast and attractive approach to quantify uncertainty in CO2 storage was pioneered in Olad_al_CG2009 via polynomial chaos expansion (PCE). This approach has been further exploited during the last years. However, there are other promising alternatives, such as kernel methods and sparse grids, that are discussed and employed in our current study. The polynomial chaos expansion gained its popularity during the last decades due to its massive reduction of computational costs in uncertainty quantification, see e.g. foo_pcm_JCP2010; Ghanem_Spanos_1991_SFEM_book; Lin2009; zhang2016evaluation. The key idea of PCE theory was established by Wiener Wiener1938 and consists of projecting a full-complexity model onto orthogonal or orthonormal polynomial bases over the parameter space. Intrusive and non-intrusive approaches can be applied to estimate the involved projection integral in order to determine the form of the PCE. The non-intrusive approaches can be applied directly to the system of governing equations without any changes in simulation codes, whereas the intrusive approach demands a rearrangement of the governing equations.
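As a one-dimensional illustration of this projection idea, the following sketch computes Hermite PCE coefficients non-intrusively for a toy model y(w) = exp(w) with a standard normal parameter (an assumed example, not the benchmark model), and reads mean and variance off the coefficients:

```python
import math

import numpy as np
from numpy.polynomial import hermite_e as H  # probabilists' Hermite polynomials

# Toy model y(w) = exp(w), w ~ N(0, 1); exact mean exp(1/2), variance e^2 - e.
model = np.exp
order = 6

# Non-intrusive projection: c_n = E[y He_n] / E[He_n^2] with E[He_n^2] = n!,
# the expectation evaluated by Gauss-Hermite quadrature.
nodes, weights = H.hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize to the N(0, 1) measure
coeffs = np.array([
    np.sum(weights * model(nodes) * H.hermeval(nodes, [0.0] * n + [1.0]))
    / math.factorial(n)
    for n in range(order + 1)
])

# Statistical moments follow directly from the coefficients (orthogonality).
pce_mean = coeffs[0]
pce_var = sum(coeffs[n] ** 2 * math.factorial(n) for n in range(1, order + 1))
```

The truncation order controls how much of the variance is captured; here six terms already recover it to three digits.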
Non-intrusive approaches like sparse quadrature Keese2003 and the probabilistic collocation method (Isukapalli1998; Li2007) have been applied to complex and computationally demanding applications. PCE was combined with sparse integration rules blatman2008sparse, and an optimal sampling rule for PCE was proposed in sinsbeckoptimal. The adaptive multi-element polynomial chaos approach MR2151997 was used to ensure flexibility in treating the input distribution. A generalization of classical PCE was introduced in OladNowak_RESS2012 as arbitrary polynomial chaos (aPC); it provides a highly parsimonious and yet purely data-driven description of uncertainty. A recent extension to sparse approximation via the moment-based aPC was presented in ahlfeld2016samba, and a multi-element aPC was introduced in alkhateeb2017data. Additionally, a stochastic model calibration framework was developed elsheikh2014efficient; OladClasNow_CG2013; Olad_EP2013 for CO2 storage based on strict Bayesian principles combined with aPC. Not only the approximation of models via various expansions, but also the sampling of the parameter space is a challenging procedure when the parameter space is high-dimensional. Sampling is directly addressed via adaptive sparse grid techniques in the literature Franzelin16Data; Jakeman11Characterization; Ma09adaptive.
Sparse grids construct a potentially high-dimensional surrogate model using Archimedes' hierarchical idea for quadrature. Each degree of freedom adds the difference between the current approximation and the true solution at the corresponding grid point to the approximation. In contrast to global PCE techniques, for example, each degree of freedom has a local effect and the approximation does not suffer from the Gibbs phenomenon, even for basis functions of high polynomial order Bungartz98Finite. Moreover, highly efficient and parallel implementations for the construction and the evaluation of the sparse grid approximation are available Pfander16New; Pflueger10Spatially. Sparse grids are very flexible and, hence, attractive for a large variety of applications that arise in the context of uncertainty quantification: density estimation Franzelin16Data; Peherstorfer13Model, optimization Valentin16Hierarchical, etc.

As an alternative to polynomial or grid-based representations of the original physical model, other functions or kernels can be used. Kernel methods are well-established techniques that have found broad application in applied mathematics Wendland2005 and machine learning SS02. They are employed, e.g., for function approximation, classification and regression. Since they are capable of working with meshless, i.e., scattered data in very high dimension, such methods are particularly attractive for the construction of surrogate models, where no restriction at all is imposed on the arbitrary location of the input data. In this context, greedy methods DeMarchi2005; SchWen2000 have the additional advantage of providing sparse and thus fast-to-evaluate surrogate models SH2017a; Wirtz2015a, while having provable error bounds and convergence rates SH16b; Wirtz2013.

The most well-known intrusive approach is the stochastic Galerkin technique, which originated from structural mechanics Ghanem_Spanos_1991_SFEM_book and has been applied in studies modeling uncertainties in flow problems (see e.g. Ghanem_Spanos_1991_SFEM_book; Matthies2005). Several authors have applied stochastic Galerkin methods to hyperbolic problems. Apart from the hyperbolicity of the stochastic Galerkin system MR2501693, extensions to multi-element or multi-wavelet based stochastic discretizations MR1199538; MR2151997 and also adaptivity for the multi-wavelet discretization were provided tryoen2012. The multi-element based hybrid stochastic Galerkin (HSG) discretization used in this work and related stochastic adaptivity methods were introduced in Barth201611; Buerger2012. The application of HSG to two-phase flow problems in two spatial dimensions was addressed in knr2012 and extended to hyperbolic-elliptic systems in Koeppel2017. In Pettersson2016367, further improvements of intrusive stochastic Galerkin methods were suggested for the multi-wavelet discretization.
1.3 Scope of the paper
This work studies uncertainty quantification for CO2 storage using the modeling approaches discussed above. It seeks to offer a comparison that could be useful for further development, considering uncertainty of boundary conditions, uncertainty of the conceptual model definition and uncertainty of material properties. Section 2 describes the physical model and Section 3 presents the case study setup employed for the analysis. The key ideas of arbitrary polynomial chaos expansion, spatially adaptive sparse grids, kernel greedy interpolation and hybrid stochastic Galerkin are briefly described in Section 4, which also demonstrates the performance of the introduced approaches against a reference solution. All of these methods are of a different nature and have their origins in different research areas. Nevertheless, we expect to identify similarities in their performance. Additionally, Section 5 presents the comparison between the methods in terms of precision and corresponding computational effort.
We would like to invite the scientific community to participate and follow up on this work by comparing and evaluating other available methods in this field based on the presented benchmark. Therefore, we provide the corresponding input data and result files as well as the executables of the deterministic code in uqBenchmarkData .
2 Physical problem formulation
We consider a multiphase flow problem in porous media, where CO2 is injected into a deep aquifer and then spreads in a geological formation. This leads to a pressure buildup and a plume evolution. In the current paper we consider a relatively simple model based on a benchmark problem defined by Class et al. Bench, and reduce it by considering the radial flow in the vicinity of the injection well to illustrate the performance of different methods for uncertainty quantification. The simplicity of the physical model is solely motivated by the high computational demand of our reference statistics based on Monte Carlo simulations, which we use for validation purposes. We assume that fluid properties such as density and viscosity are constant, all processes are isothermal, CO2 and brine are two separate and immiscible phases, mutual dissolution is neglected, the formation is isotropic, rigid and chemically inert, and capillary pressure is negligible. In the following we describe the deterministic base model in more detail.
The initial conditions in the fully saturated domain include a hydrostatic pressure distribution which depends on the brine density. The aquifer is initially filled with brine, and CO2 is injected at a constant rate at the center of the domain. The lateral boundary conditions are constant Dirichlet conditions and equal to the initial conditions. All other boundaries are no-flow boundaries. All relevant parameters used for the simulation are given in Table 1.
Parameter | Value
--- | ---
CO2 density | 479 kg/m^3
Brine density | 1045 kg/m^3
CO2 viscosity | 3.950e-5 Pa s
Brine viscosity | 2.535e-4 Pa s
Aquifer permeability | 2e-14 m^2
Porosity | 0.15
Brine residual saturation | 0.2
CO2 residual saturation | 0.05
Injection well radius | 0.15 m
Injection rate | 8.87 kg/s (1600 m^3/d)
Dimension of model domain | m
Simulation time | 100 days
Saturation on the left boundary | 
Injection pressure | 320 bar
Pressure at the right boundary | 300 bar
Mean mobility value | (Pa s)^-1
For time $t \in (0, T]$, domain $x \in D$ and phase saturations $S_\alpha = S_\alpha(x, t)$, the well-known two-phase flow equations obtained from mass balances of both fluid phases and the multiphase version of Darcy's law can be reformulated by means of the fractional flow formulation Nordbotten2011, given by the following system of equations

$\phi \, \partial_t S_w + \nabla \cdot ( f(S_w) \, u ) = q_w$,   (1)

$u = - \lambda(S_w) \, K \, \nabla p$,   (2)

$\nabla \cdot u = q_w + q_n$,   (3)

$f(S_w) = \lambda_w(S_w) / \lambda(S_w), \quad \lambda(S_w) = \lambda_w(S_w) + \lambda_n(S_w)$,   (4)

where the subscript $w$ stands for the brine (water) phase and $n$ for the CO2-rich (gas) phase, respectively, and the absolute permeability $K$, porosity $\phi$ and the sources/sinks $q_w, q_n$ are given parameters. Combined with the constraint $S_w + S_n = 1$, the primary variables of the system (1)-(3) are the phase saturation $S_w$, the total velocity $u$ and the global pressure $p$. The fractional flow function $f$ and the mean mobility function $\lambda$ are nonlinear functions of the saturation $S_w$. Both are defined via the mobilities $\lambda_\alpha = k_{r\alpha} / \mu_\alpha$, $\alpha \in \{w, n\}$, with dynamic viscosities $\mu_\alpha$ and the relative permeabilities $k_{rw}$ and $k_{rn}$ given by

$k_{rw}(S_w) = \hat S^2$,   (5)

$k_{rn}(S_w) = (1 - \hat S)^2$.   (6)

Moreover, $\hat S = (S_w - S_{wr}) / (1 - S_{wr} - S_{nr})$ is the effective saturation, where $S_{\alpha r}$ denotes the residual saturation of fluid phase $\alpha$. Insertion of (2) in (3) yields

$- \nabla \cdot ( \lambda(S_w) \, K \, \nabla p ) = q_w + q_n$.   (7)
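The mobility and fractional-flow construction above can be sketched numerically as follows; the quadratic relative permeabilities and the viscosity magnitudes are illustrative assumptions for this sketch, while the residual saturations follow Table 1:

```python
import numpy as np

# Illustrative mobility / fractional-flow functions. The quadratic relative
# permeabilities and viscosity magnitudes are assumptions for this sketch,
# not the benchmark's exact definitions.
MU_W = 2.535e-4   # brine viscosity [Pa s] (assumed scale)
MU_N = 3.950e-5   # CO2 viscosity [Pa s] (assumed scale)
S_WR, S_NR = 0.2, 0.05  # residual saturations (Table 1)

def effective_saturation(s_n):
    """Effective CO2 saturation rescaled by the residual saturations."""
    return (s_n - S_NR) / (1.0 - S_WR - S_NR)

def fractional_flow(s_n):
    """f_n(S_n) = lambda_n / (lambda_w + lambda_n) with quadratic rel. perms."""
    se = np.clip(effective_saturation(np.asarray(s_n, dtype=float)), 0.0, 1.0)
    lam_n = se ** 2 / MU_N          # CO2 mobility k_rn / mu_n
    lam_w = (1.0 - se) ** 2 / MU_W  # brine mobility k_rw / mu_w
    return lam_n / (lam_n + lam_w)
```

By construction, the fractional flow runs monotonically from 0 at the CO2 residual saturation to 1 at the brine residual saturation.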
2.1 Radial flow equations
We consider radial flow in the vicinity of the injection well in a homogeneous reservoir with scalar absolute permeability $K$. Hence, the governing equation for the pressure (7) can be written in the following form

$- \frac{1}{r} \, \partial_r \left( r \, \lambda(S_w) \, K \, \partial_r p \right) = q$,   (8)

where $r$ is the radial coordinate and $q$ controls the injection rate in the well. Since only CO2 is injected, i.e. $q_w = 0$, equation (8) can be integrated as

$u(r) = - \lambda(S_w) \, K \, \partial_r p = \frac{a}{r}$   (9)

with constant $a > 0$. The solution of equation (8) can be written in the closed analytical form

$p(r) = p_I - \frac{a}{\bar\lambda \, K} \, \ln\left( r / r_{well} \right)$   (10)

with injection pressure $p_I$ and mean mobility $\bar\lambda$; the constant $a$ is determined from the injection rate using the parameters in Table 1. We reformulate equation (1) for the gas phase, using the radial coordinate system and (9), to obtain

$\phi \, \partial_t S_n + \frac{a}{r} \, \partial_r f_n(S_n) = 0$,   (11)

where $f_n = \lambda_n / \lambda$ denotes the fractional flow function of the gas phase. Because the velocity field is fixed by (9), equation (11) does not depend on the absolute permeability, as the porous medium is assumed to be homogeneous.
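A minimal sketch of such a logarithmic radial pressure profile, assuming a constant mean mobility and purely illustrative parameter values (the mobility, integration constant and pressures below are placeholders, not the benchmark values):

```python
import numpy as np

# Sketch of a closed-form radial pressure profile as in Eq. (10), assuming a
# logarithmic solution with constant mean mobility LAM_BAR. All numerical
# values here are placeholders for illustration.
LAM_BAR = 1.0e4    # mean mobility [1/(Pa s)] (assumed)
K = 2.0e-14        # absolute permeability [m^2]
R_WELL = 0.15      # injection well radius [m]
P_INJ = 320.0e5    # injection pressure [Pa]
A = 1.0e-6         # integration constant of Eq. (9) (assumed)

def pressure(r):
    """p(r) = p_I - a / (lam_bar * K) * ln(r / r_well)."""
    return P_INJ - A / (LAM_BAR * K) * np.log(r / R_WELL)

def radial_flux(r, h=1e-3):
    """r * lam_bar * K * dp/dr, which Eq. (9) predicts to be the constant -a."""
    dpdr = (pressure(r + h) - pressure(r - h)) / (2.0 * h)
    return r * LAM_BAR * K * dpdr
```

The second function checks the defining property of the solution: the radial Darcy flux per unit angle is constant, so equation (8) is satisfied away from the well.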
2.2 Hyperbolic solver
In order to discretize the hyperbolic transport equation (11) in the physical space, we apply a semi-discrete central-upwind finite volume scheme introduced in NUM:NUM20049. Central-upwind schemes are typically characterized by robustness and high accuracy up to second order. In contrast to, e.g., Godunov-type solvers leveque1992, where analytical knowledge about the front propagation is essential, central-upwind schemes only require information about propagation speeds. By construction, the artificial viscosity inherent to the scheme is adapted to the discrete solution and thus leads to lower numerical dissipation compared to other schemes such as Lax-Friedrichs leveque1992. For the temporal discretization, the Runge-Kutta method of second order is applied.
Let $r_j = r_{well} + (j - 1/2)\,\Delta r$, $j = 1, \ldots, N$, with $N$ the number of elements and $\Delta r$ the uniform, radial mesh size in the physical space. For the sake of brevity we denote the unknown saturation by $S$ and its cell averages by $S_j(t)$. Then the semi-discrete scheme reads

$\frac{d}{dt} S_j(t) = - \frac{H_{j+1/2}(t) - H_{j-1/2}(t)}{\Delta r}$,   (12)

and the numerical flux function is given by

$H_{j+1/2} = \frac{a^+_{j+1/2} F(S^-_{j+1/2}) - a^-_{j+1/2} F(S^+_{j+1/2})}{a^+_{j+1/2} - a^-_{j+1/2}} + \frac{a^+_{j+1/2} \, a^-_{j+1/2}}{a^+_{j+1/2} - a^-_{j+1/2}} \left( S^+_{j+1/2} - S^-_{j+1/2} \right)$,

with the cell averages $S_j$, the right- and left-sided local speeds $a^\pm_{j+1/2}$, and the piecewise linear reconstructions $S^\pm_{j+1/2}$ at the interface points; here $F$ denotes the flux function of equation (11). The initial values of the cell averages are computed by averaging the initial data over the cells. Note that the CFL condition and the local speeds depend on the derivative of the flux function $F$. For a more detailed description of the used finite volume scheme we refer to NUM:NUM20049.
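A much-simplified variant of such a scheme for equation (11) can be sketched as follows. For brevity this sketch is first order (a Rusanov-type flux with a global speed bound replaces the central-upwind flux with piecewise-linear reconstruction), and the fractional flow and all parameter values are illustrative assumptions:

```python
import numpy as np

# First-order sketch of a Rusanov-type finite volume scheme with Heun's RK2
# time stepping for phi * dS/dt + (a/r) * d f(S)/dr = 0 (cf. Eq. (11)).
# The flux f and all parameter values are assumptions for illustration; the
# paper's scheme additionally uses piecewise-linear reconstruction.
def f(s):
    return s ** 2 / (s ** 2 + (1.0 - s) ** 2)

SPEED = 2.0  # global bound on |f'(s)|, attained at s = 1/2

def rhs(s, r_centers, dr, a_const, phi):
    # ghost cells: injection (S = 1) at the well, outflow on the right
    se = np.concatenate(([1.0], s, [s[-1]]))
    sl, sr = se[:-1], se[1:]
    flux = 0.5 * (f(sl) + f(sr)) - 0.5 * SPEED * (sr - sl)
    return -(a_const / (phi * r_centers)) * (flux[1:] - flux[:-1]) / dr

def solve(n_cells=100, t_end=0.3, dt=1.0e-3, a_const=1.0, phi=0.15):
    edges = np.linspace(0.5, 10.0, n_cells + 1)
    r_centers = 0.5 * (edges[:-1] + edges[1:])
    dr = edges[1] - edges[0]
    s = np.zeros(n_cells)
    for _ in range(round(t_end / dt)):
        s_star = s + dt * rhs(s, r_centers, dr, a_const, phi)   # Heun / RK2
        s = 0.5 * (s + s_star + dt * rhs(s_star, r_centers, dr, a_const, phi))
    return r_centers, s
```

Under the stated CFL restriction the scheme is monotone, so the computed saturation stays in the physical range while the injection front propagates outward.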
3 Benchmark case study setup: modeling parameters and quantity of interest
In our benchmark case, we analyze the joint effect of various sources of uncertainty. Typically, the following types of uncertainty can occur during the reservoir screening stage: uncertainty of boundary conditions, uncertainty of conceptual model definition and uncertainty of material properties.
We consider the uncertainty of boundary conditions via the injection rate $q$. The reservoir pressure can thus be seen as a function of the injection rate,

$p_I = p_I( q(\omega_1) )$,   (13)

where $q(\omega_1)$ denotes the randomized injection rate and $\omega_1$ the corresponding random variable. Conceptual model uncertainty is introduced via uncertainty in the relative permeability definitions $k_{rw}$ and $k_{rn}$ (see Court2012134), which we extend to

$k_{rw}(S_w; \omega_2) = \hat S^{\,\omega_2}$,   (14)

$k_{rn}(S_w; \omega_2) = (1 - \hat S)^{\omega_2}$,   (15)

with random variable $\omega_2$. Generally, variations of the relative permeability degree have a strong impact on the fractional flow function. Uncertainty of material properties is represented via uncertainty of the reservoir porosity. In the current study, we have aligned the distribution of the reservoir porosity with data from the U.S. National Petroleum Council Public Database (see also Kopp2009). Thus, the reservoir porosity can be written in the form $\phi = \phi(\omega_3)$ with random variable $\omega_3$.
The uncertain parameters represent the input parameters of equation (11) and can be written as an $N$-dimensional random vector $\omega = (\omega_1, \ldots, \omega_N)$, with $N = 3$ for the current case study. We assume that each random variable $\omega_i$ ($i = 1, \ldots, N$) is independent and defined on the probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a sample space with $\sigma$-algebra $\mathcal{F}$ and probability measure $P$. The distributions are chosen to reflect the situation of site screening, where site-specific data and data that allow a detailed description of the injection strategy, fluid properties and geology are not yet available. In this stage, one has to resort to databases and expert elicitation that represent properties of supposedly similar sites as prior knowledge. From the random vector $\omega$, we generate a set of 10,000 samples, denoted by $\Xi$, to construct a statistical reference solution for the moments of the quantity of interest, and to construct a data-driven framework for the methods under consideration. Fig. 1 shows univariate histograms of $\Xi$. We stress that the data set is deployed by all methods without prior knowledge of the distribution.
This study quantifies stochastic characteristics of the flow using the mean value and standard deviation of the saturation as functions of space and time. Fig. 2 shows the statistical reference solution for the mean value and standard deviation of the saturation at the end of the simulation period, based on the set of samples $\Xi$. Apart from the global influence of the uncertain parameters on the output statistics, Fig. 2 also illustrates the individual impact of each analyzed parameter. One can observe that the uncertainty in the degree of the relative permeability does not influence the dynamics of the saturation significantly. The injection rate and the reservoir porosity are the main causes of uncertainty in the saturation.
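The construction of such pointwise reference statistics can be sketched with a toy stand-in for the saturation samples. The real benchmark pushes the 10,000 parameter samples through the flow solver; the step-shaped profiles and the uniform distribution below are invented purely to illustrate the mean/standard-deviation computation:

```python
import numpy as np

# Toy stand-in for the Monte Carlo reference statistics: step-shaped
# saturation fields with a randomly placed front (invented for illustration,
# not the benchmark physics or distributions).
rng = np.random.default_rng(0)
n_samples, n_cells = 10_000, 250
r = np.linspace(0.15, 100.0, n_cells)

omega = rng.uniform(0.8, 1.2, size=(n_samples, 1))   # e.g. scaled injection rate
front = 20.0 * omega                                 # random plume front position
samples = (r[None, :] < front).astype(float)         # step-shaped saturation fields

mean = samples.mean(axis=0)   # pointwise sample mean of the saturation
std = samples.std(axis=0)     # pointwise sample standard deviation
```

As in Fig. 2, the standard deviation concentrates around the uncertain front position, while regions far behind or ahead of the plume show no spread.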
4 Uncertainty quantification methods
In this section we briefly introduce four different methods for uncertainty quantification and discuss some of their properties. We compare them with the data-based statistics generated by Monte Carlo sampling for the mean value and standard deviation of the quantity of interest (black line in Fig. 2).
4.1 Nonintrusive arbitrary polynomial chaos expansion
We briefly introduce the arbitrary polynomial chaos (aPC) technique, which is employed to construct a global response surface that captures the dependence of the model on the data set. We consider the space- and time-dependent model response of the saturation $S$. Following Wiener Wiener1938, the dependence of the model output on all input parameters is expressed via projection onto a multivariate polynomial basis (see e.g. Ghanem_Spanos_1991_SFEM_book), such that the model output can be approximated by the polynomial chaos expansion

$S(x, t, \omega) \approx \sum_{i=1}^{M} c_i(x, t) \, \Phi_i(\omega)$,   (16)

where $M$ is the number of multivariate polynomial basis functions $\Phi_i$ (see e.g. Ghanem_Spanos_1991_SFEM_book) with corresponding coefficients $c_i$. $M$ depends on the total number of input parameters $N$ and on the order $d$ of the polynomial representation: $M = (N + d)! / (N! \, d!)$. The coefficients $c_i(x, t)$ in equation (16) quantify the dependence of the model response on the input parameters for each desired point in space $x$ and time $t$.
We follow a recent generalization of the polynomial chaos expansion known as arbitrary polynomial chaos (aPC). The aPC technique adapts to arbitrary probability distribution shapes of the input parameters, which can be inferred from limited data through a few statistical moments OladNowak_RESS2012. The necessity to adapt to arbitrary distributions in practical tasks is discussed in more detail in OladNowak_AWR2010. Thus, we explore a highly parsimonious and purely data-driven description of uncertainty via aPC and directly incorporate the available data set illustrated in Fig. 1, without any use of exact forms of probability density functions. For that, we compute the raw statistical moments of the realizations and then construct the orthonormal polynomial basis of order $d$ according to the matrix equation introduced in OladNowak_RESS2012. Note that the orthonormal basis can also be obtained via recursive relations (see Chapter 22 of Abramowitz1965), via Gram-Schmidt orthogonalization (see Witteveen2007) or via the Stieltjes procedure Stieltjes1884. The polynomial representation in equation (16) is fully defined via the unknown expansion coefficients $c_i$. These coefficients can be determined using intrusive or non-intrusive approaches. The intrusive approach requires a manipulation of the governing equations and will be discussed in Section 4.4 via hybrid stochastic Galerkin. In the current Section 4.1 we follow the non-intrusive way, where no modifications of the system of governing equations are required.
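One possible moment-based construction of such an orthonormal basis can be sketched via the Cholesky factor of the empirical moment (Gram) matrix. This is one of several equivalent routes (moment matrix, Gram-Schmidt, Stieltjes) and not necessarily the exact matrix equation of the cited reference:

```python
import numpy as np

def apc_basis(data, order):
    """Orthonormal polynomial basis w.r.t. the empirical measure of `data`,
    built from its raw sample moments via the Cholesky factor of the moment
    (Gram) matrix; row n holds the monomial coefficients of the degree-n
    polynomial. One of several equivalent data-driven constructions."""
    moments = np.array([np.mean(data ** k) for k in range(2 * order + 1)])
    gram = moments[np.add.outer(np.arange(order + 1), np.arange(order + 1))]
    # gram = L L^T, so C = inv(L) gives C gram C^T = I (orthonormality)
    return np.linalg.inv(np.linalg.cholesky(gram))

# Demo on a raw data set, mimicking the data-driven setting of the paper.
rng = np.random.default_rng(42)
data = rng.uniform(-1.0, 1.0, 2000)
C = apc_basis(data, order=3)
basis_at_data = np.vander(data, 4, increasing=True) @ C.T  # column n = P_n(data)
```

Evaluating the basis on the data and averaging products of columns reproduces the identity matrix, i.e. the polynomials are orthonormal with respect to the sample distribution itself.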
As the computationally cheapest version, we apply the non-intrusive probabilistic collocation method (PCM) Li2007; Olad_al_CG2009. The method is based on a minimal set of model evaluations, each with a defined set of model parameters (called collocation points) that is related to the roots of the polynomial basis via optimal integration theory Villadsen1978. Fig. 3 shows the mean and standard deviation of the CO2 saturation estimated via the aPC expansion based on the probabilistic collocation method, together with the statistical reference solution. As expected, the strong discontinuity of the original physical model introduced by the CO2 displacement front poses challenges for the global polynomial representation. Nevertheless, the estimation of the mean value is acceptable. However, increasing the expansion order does not necessarily lead to an improvement of the results, especially for the variance estimation. Hence, a moderate expansion order can be seen as an adequate compromise between accuracy and computational effort. Additionally, a high expansion order is only justified if accompanied by reliable statistical information, because incomplete statistical information limits the utility of polynomial chaos expansions oladyshkin2018incomplete.

As a computationally very demanding alternative to the probabilistic collocation method, we also employ the least-squares collocation method (e.g. Moritz1978) for constructing the expansion coefficients on a full tensor grid of collocation points. Fig. 3 also shows the mean and standard deviation of the CO2 saturation estimated via the aPC expansion based on the least-squares collocation method against the statistical reference solution. The least-squares collocation method based on the full tensor (FT) grid helps to overcome the typical oscillation problem of polynomials for high-order expansions. However, due to the curse of dimensionality for tensor grids, this approach entails an extremely high computational effort if more than a single parameter is of interest.
4.2 Spatially adaptive sparse grids
In this section we introduce regular sparse grids according to Zenger91Sparse. We follow the approach of higher-order basis functions that was presented in Bungartz98Finite and extended in Pflueger10Spatially with proper extrapolation schemes. Furthermore, we present the concept of spatially adaptive refinement following Pflueger12Spatially and provide refinement criteria in the context of data-driven uncertainty quantification Franzelin15Non; Ma09adaptive.
Let $l = (l_1, \ldots, l_d)$ be a multi-index with $l_j \ge 1$ and dimensionality $d$. We define a level-index set for some $l$ as

$I_l = \{ (l, i) : 1 \le i_j \le 2^{l_j} - 1, \ i_j \text{ odd}, \ 1 \le j \le d \}$,   (17)

that defines grid points located at $x_{l,i} = (i_1 2^{-l_1}, \ldots, i_d 2^{-l_d})$. The multivariate basis functions $\varphi_{l,i}$ are centered at the grid points and are defined as the tensor product of one-dimensional, local polynomials,

$\varphi_{l,i}(x) = \prod_{j=1}^{d} \varphi_{l_j, i_j}(x_j)$,   (18)

where $\varphi_{l_j, i_j}$ is the one-dimensional basis function of level $l_j$ and index $i_j$ with polynomial degree $p_j$ in direction $j$.

Sparse grids use these functions to form a hierarchical basis in order to overcome the curse of dimensionality to some extent. The level-index sets define a unique set of hierarchical increment spaces $W_l$ that add the difference between the approximation on smaller levels (component-wise) and the actual level. Due to this hierarchical character, one can sort the increment spaces according to their benefit for the approximation of functions from various function spaces and leave out the less important ones. If the contribution of each $W_l$ to the approximation is measured with respect to the $L^2$ norm, then the optimal sparse grid level-index set is obtained as

$V_n = \bigcup_{|l|_1 \le n + d - 1} I_l$,   (19)

with $n$ being the regular level of the grid. A regular sparse grid function that approximates the model output is written as

$S(x) \approx \sum_{(l,i) \in V_n} \alpha_{l,i} \, \varphi_{l,i}(x)$,   (20)

where the $\alpha_{l,i}$ are called hierarchical coefficients.
The number of grid points is significantly reduced compared to a full grid with the same spatial resolution in each direction. At the same time, the interpolation error of a sparse grid differs only by a logarithmic factor from that of a full grid and is, hence, only slightly worse Bungartz98Finite; Zenger91Sparse.
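The level-index construction (17) and (19) can be enumerated directly. The sketch below lists the level-index pairs of a regular, boundaryless sparse grid and illustrates the reduction in grid points compared to a full grid:

```python
from itertools import product

def regular_sparse_grid(level, dim):
    """Level-index pairs (l, i) of a regular sparse grid without boundary
    points: levels l_j >= 1 with |l|_1 <= level + dim - 1 and odd indices
    i_j in {1, ..., 2^{l_j} - 1}, cf. (17) and (19)."""
    points = []
    for lvl in product(range(1, level + 1), repeat=dim):
        if sum(lvl) <= level + dim - 1:
            for idx in product(*(range(1, 2 ** l, 2) for l in lvl)):
                points.append((lvl, idx))
    return points

grid = regular_sparse_grid(level=3, dim=2)
# grid point coordinates x_{l,i} = i * 2^{-l} per dimension
coords = {tuple(i / 2 ** l for l, i in zip(lvl, idx)) for lvl, idx in grid}
```

For level 3 in two dimensions this yields 17 hierarchical points, while the corresponding full grid has (2^3 - 1)^2 = 49 interior points; the gap widens quickly with dimension.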
One can interpret such a regular sparse grid as the result of an a-priori adaptivity. Spatially adaptive sparse grids add a second level of refinement: grid points are added iteratively where the local error of the approximation is largest. Refinement criteria estimate these local errors with respect to some target quantity. In this paper we use a weighted refinement method and enforce balancing Bungartz03Multivariate. It defines a ranking for all level-index pairs, weighting the hierarchical surplus of each grid point with the probability density at that point,

$r(l, i) = |\alpha_{l,i}| \, p(x_{l,i}), \quad (l, i) \in V$,   (21)

where $V$ is an adaptive sparse grid index set. To refine, we add all the hierarchical successors of $(l, i)$,

$(l + e_j, i'), \quad i'_j \in \{ 2 i_j - 1, \, 2 i_j + 1 \}, \ 1 \le j \le d$,   (22)

with $e_j$ the $j$-th unit vector, that are not yet part of $V$, starting with the pair that has the largest rank.
To describe the uncertainty, we use a sparse grid probability density function Franzelin16Data based on the input data to approximate the joint input density. This way, we can start with a purely data-driven description and arbitrary densities, without any need for derived analytical forms or independence of the respective probability functions.
For this model problem we distinguish two types of sparse grids: the first variant spends grid points directly at the boundary (Pflueger10Spatially, p. 15), to which we refer as aSG-b. As a second variant, we consider modified piecewise polynomial basis functions (Pflueger10Spatially, p. 24) with linear extrapolation, which we denote by aSG-m. Both sparse grid surrogates are constructed as follows:

Step 1: Start with a regular sparse grid of small level.

Step 2: Rank the current grid points according to the refinement criterion (21) and mark the highest-ranked ones for refinement.

Step 3: Make sure that all hierarchical ancestors of each grid point exist and that each grid point has either none or two hierarchical successors in each direction.

Step 4: Run the model problem at the new grid points and construct the new interpolant.

Step 5: Continue with Step 2 until a maximum number of grid points is reached.
Sample results for the expectation value and the variance of the model problem are shown in Fig. 4. For small grids, both quantities estimated with the boundary variant aSG-b differ significantly from the statistical reference value: most of the grid points are located at the boundary of the domain, which makes this approach infeasible for problems with a small computational budget. This problem can be solved with the modified basis aSG-m, for which we observe a better approximation of the expectation value and the variance already for smaller grids. Nevertheless, the accuracy of the moments with respect to the reference solution increases with increasing grid size in both cases.
4.3 Kernel greedy interpolation
To start, we emphasize that we work here with the discretization of the saturation provided by the hyperbolic solver: we understand the model response as a function $S$ mapping the uncertain parameters $\omega$ to the spatial discretization consisting of 250 cells at the final time. Kernel-based approximation methods construct a surrogate of $S$ based on a set of input parameters and the corresponding outputs computed by the solver. The surrogate model can then be rapidly evaluated on the large set of input parameters of the reference solution, and the mean and variance are calculated as the mean and variance of these evaluations.
We give only a brief overview of kernel methods and refer to Wendland2005 for a detailed treatment. The saturation is approximated as a vector-valued linear combination

$s_n(\omega) = \sum_{k=1}^{n} K(\omega, x_k) \, c_k$,   (23)

where $K$ is a symmetric kernel function, the $x_k$ are the centers, and the coefficient vectors $c_k$ are determined by imposing interpolation conditions on $s_n$, i.e., (23) is exact when evaluated at the centers. These conditions result in a linear system, which has a unique solution whenever the kernel function is chosen to be strictly positive definite, i.e., for any choice of pairwise distinct points the kernel matrix is positive definite. Therefore, it is possible to construct an approximation (23) for arbitrary sample locations and input and output dimensions. In practice, orders of hundreds for input and output dimensions are realistic.

We use in the following a Wendland kernel Wendland1995a, which is a compactly supported radial kernel of polynomial type, where the radius of the support is controlled by a shape parameter $\varepsilon$. Each kernel is associated with a native Hilbert space $H$, which is in this case a Sobolev space.
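A minimal kernel interpolation sketch with a compactly supported Wendland C2 kernel follows; the two-dimensional toy data, target function and shape parameter are assumptions for illustration, not the benchmark configuration:

```python
import numpy as np

def wendland_c2(x, y, shape=1.0):
    """Compactly supported Wendland C^2 kernel (positive definite up to
    dimension 3): phi(r) = (1 - shape*r)_+^4 * (4*shape*r + 1)."""
    r = shape * np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

# Toy interpolation problem in d = 2 with an assumed shape parameter.
rng = np.random.default_rng(1)
centers = rng.uniform(size=(40, 2))
target = lambda x: np.sin(2.0 * np.pi * x[:, 0]) * x[:, 1]

kernel_matrix = wendland_c2(centers, centers, shape=0.5)
coef = np.linalg.solve(kernel_matrix, target(centers))  # interpolation conditions

def surrogate(x_new):
    return wendland_c2(x_new, centers, shape=0.5) @ coef
```

Because the kernel is strictly positive definite, the linear system is uniquely solvable for any set of pairwise distinct, scattered centers, which is what makes the approach meshless.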
The quality of the kernel approximation in (23) depends on both the choice of the kernel and the set of centers. Given a large set of possible sample locations, we want to select a subset $X_n$ of $n$ centers such that the surrogate based on $X_n$ is as good as the one based on the full set, while $n$ is as small as possible. This ensures that the evaluation of the approximation (23) is as fast as possible, so that it can be used efficiently in surrogate modeling.
For this, we apply VKOGA DeMarchi2005; Wirtz2013 (Vectorial Kernel Orthogonal Greedy Algorithm with P-greedy selection) to iteratively generate a nested sequence of center sets $X_1 \subset X_2 \subset \ldots$. We give a rough outline of the motivation and structure of VKOGA and refer to the aforementioned references for further details. The interpolation error can be bounded by means of the power function $P_n$ as

$| S(\omega) - s_n(\omega) | \le P_n(\omega) \, \| S \|_H$,   (24)

where the norm on the left-hand side is the maximal absolute value of all entries of the vector. Observe that, when the kernel is understood as a covariance function, $P_n$ is precisely the expected prediction error of the corresponding simple kriging, as discussed in scheuerer2013.

Motivated by (24), we choose the centers iteratively by adding the point where the power function is largest to the previous set of centers. Using the Newton basis Muller2009 of the approximation space, the power function can be updated efficiently. Once a sufficient set $X_n$ is generated, the coefficients can be computed by solving the system (23) with interpolation conditions restricted to the points of $X_n$. This method requires only the knowledge of a sampling of the input space to select the few sampling points, and the solver is run only on the parameters in $X_n$. We remark that it has recently been proven that VKOGA, although simple, has a quasi-optimal convergence order in Sobolev spaces SH16b, i.e., it gives the same approximation order as full interpolation.
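The greedy selection step can be sketched as follows. This implements the generic P-greedy rule with a power-function update through the Newton basis, as an illustration of the idea rather than the VKOGA package itself; the candidate grid and shape parameter are assumed:

```python
import numpy as np

def wendland_c2(a, b, shape=1.0):
    r = shape * np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

def p_greedy(cand, n_centers, shape=1.0):
    """P-greedy center selection: repeatedly add the candidate where the
    power function is largest, updating the squared power function through
    the Newton basis (illustration of a VKOGA-type selection rule)."""
    m = len(cand)
    V = np.zeros((m, n_centers))  # Newton basis evaluated on all candidates
    p2 = np.ones(m)               # squared power function; here k(x, x) = 1
    chosen = []
    for k in range(n_centers):
        j = int(np.argmax(p2))    # candidate with the largest power value
        chosen.append(j)
        col = wendland_c2(cand, cand[j:j + 1], shape)[:, 0] - V[:, :k] @ V[j, :k]
        V[:, k] = col / np.sqrt(p2[j])
        p2 = np.maximum(p2 - V[:, k] ** 2, 0.0)  # P_n^2 = P_{n-1}^2 - v_n^2
    return chosen, p2
```

No model evaluations are needed during this selection: only kernel evaluations on the candidate set, which is why the expensive solver runs can be restricted to the few selected centers afterwards.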
In the present setting, the surrogate model is constructed on the set defined as the convex hull of the reference data points. We run VKOGA starting from a fine discretization of this set, obtained by intersecting it with a uniform grid of equally spaced points in the minimum box enclosing it. The resulting candidate set contains a large number of data points. We remark again that this set is used only to perform the greedy optimization, and no function evaluation (i.e., no model run) on it is needed at this stage. Since in this setting the number of model runs is the crucial computational constraint, we avoid selecting the kernel width parameter via validation, and instead train different models for several values of the shape parameter $\varepsilon$.

Each run of the algorithm selects incremental sets of point locations of increasing size. On these sets of input parameters, the full model is evaluated and for each of them the interpolant is computed. The resulting models are denoted as VKOGA, where the lower index is the number of model runs and the upper index is the kernel shape parameter. Sample results are shown in Fig. 5 for selected values of the shape parameter and an increasing number of model runs. Although the surrogates are not accurate when using only a few model runs, it is evident that an increase in the number of runs leads to a satisfactory convergence of the approximate mean and standard deviation to the reference ones. Indeed, the surrogates obtained with an intermediate number of model runs are already close to the reference solution, with some oscillations around the exact values; for the largest number of runs, the reference and the surrogates are almost equal. A more quantitative error analysis is discussed in the next sections.
4.4 Hybrid stochastic Galerkin
Contrary to the previous methods, the hybrid stochastic Galerkin (HSG) approach is an intrusive method: it changes the deterministic system by means of the polynomial chaos expansion and a multi-element decomposition of the stochastic space and hence does not construct a surrogate. We briefly summarize the discretization following Buerger2012 ; Koeppel2017 . For the sake of brevity we again use .
If all sources of uncertainty are considered, cf. Section 3, the radial transport equation (11) yields the following randomized partial differential equation
(25) 
For the multi-element discretization we decompose the stochastic domain into stochastic elements with refinement level . For reasons of readability we describe the discretization based on the domain . By rescaling, the method can be easily extended to arbitrary domains. Let be a set of indices and with multi-index . Then we define the dimensional stochastic element with support , for , . Furthermore let be the space of piecewise polynomial functions on each stochastic element with maximal polynomial order . The space is spanned by the multivariate polynomials
with , , and the truncated polynomial chaos polynomials . For the latter we use Legendre polynomials. The polynomials satisfy an orthonormality relation, where denotes the Kronecker delta symbol for , and . The finite number of basis functions is given by . Based on these considerations the projection of a random field , is obtained by
(26) 
with deterministic coefficients , for and . We note that without the multi-element decomposition the expansion (26) would be similar to (16). For more details concerning the convergence of for , we refer to MR1199538 . Moreover, we refer to Buerger2012 ; Koeppel2017 for a more detailed description of the HSG method.
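As a quick illustration of the normalization used here, Legendre polynomials rescaled by a factor sqrt(2n+1) are orthonormal with respect to the uniform density on [-1, 1]. The following sketch (our own, not part of the benchmark code) verifies this numerically:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def normalized_legendre(n, x):
    # P_n rescaled so that E[L_m(xi) L_n(xi)] = delta_mn for xi ~ U(-1, 1)
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * leg.legval(x, c)

# verify orthonormality with Gauss-Legendre quadrature;
# the factor 1/2 is the uniform density on [-1, 1]
x, w = leg.leggauss(20)
gram = np.array([[np.sum(0.5 * w * normalized_legendre(m, x) * normalized_legendre(n, x))
                  for n in range(5)] for m in range(5)])
```

The Gram matrix `gram` is the identity up to machine precision, which is exactly the orthonormality relation stated above (the 20-node rule is exact for the polynomial products involved).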
We apply the stochastic discretization to (25) by replacing the unknown random field, i.e. the saturation , with its HSG projection and by testing the equation with the HSG basis functions . Then we obtain a partly decoupled system for the deterministic coefficients which reads
(27) 
with initial values . On we impose deterministic boundary conditions.
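The multi-element decomposition underlying the system above can be sketched in a few lines. This is an illustrative toy (our own names), assuming the stochastic domain has already been rescaled to the unit cube:

```python
import itertools
import numpy as np

def stochastic_elements(dim, level):
    """Decompose [0, 1]**dim into 2**(dim*level) congruent stochastic elements.

    Each element is returned as a (lower_corner, upper_corner) pair; a local
    polynomial chaos basis is then set up on every element separately.
    """
    h = 2.0 ** -level
    elements = []
    for idx in itertools.product(range(2 ** level), repeat=dim):
        lo = h * np.array(idx, dtype=float)
        elements.append((lo, lo + h))
    return elements
```

The exponential growth of the number of elements with the dimension `dim` is visible directly in the loop, which foreshadows the curse-of-dimensionality discussion in Section 5.5.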
Mean and standard deviation are then computed in the post-processing. In contrast to the common SG/HSG ansatz, in this work the simulations are performed with Legendre polynomials representing a uniform distribution even if the actual distribution is different. In view of data-driven UQ this guarantees that every point is uniformly approximated. Instead of using the coefficients of (26) directly, mean and standard deviation are obtained by a reconstruction using the parameters provided in the data set . This is possible since equation (26) allows computing the approximation of the unknown random field for an arbitrary parameter set. This approach is likely to come at the expense of some accuracy; on the other hand, we obtain mean and standard deviation for arbitrary input data without rerunning the simulation. Fig. 6 shows the results of the HSG method compared with the reference solution. An increase of leads to a greater improvement of the results than an increase of alone. Since in the test case the value of the perturbed porosity has a significant influence on the dynamics of the system, some of the shown simulations are decoupled into two parts w.r.t. the value of . By merging the decoupled results during the post-processing, the accuracy can be improved. Even though the time steps on different stochastic elements differ from each other, the time step size on each stochastic element of the considered problem does not change during the whole computation time. Therefore, we improved the computational efficiency significantly by distributing the stochastic elements over several Message Passing Interface (MPI) ranks according to the expected number of time steps.
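The reconstruction step can be illustrated as follows: an expansion fitted under the uniform assumption is simply re-evaluated at the actual parameter samples, and empirical moments are taken. This is a hypothetical one-dimensional sketch with an invented example distribution, not the HSG implementation:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def reconstruct_moments(coeffs, theta):
    """Mean/std of a Legendre expansion (fitted assuming U(-1, 1))
    re-evaluated at the actual parameter samples theta."""
    vals = leg.legval(theta, coeffs)   # evaluate the expansion pointwise
    return vals.mean(), vals.std(ddof=1)

# hypothetical expansion u(theta) = 1 + 0.5*theta; the actual samples follow
# a skewed law on [-1, 1] instead of the uniform one used for fitting
coeffs = np.array([1.0, 0.5])
theta = 2.0 * np.random.default_rng(1).beta(2, 5, 100_000) - 1.0
mean, std = reconstruct_moments(coeffs, theta)
```

Because the expansion can be evaluated at arbitrary parameter values, changing the input distribution only changes the samples `theta`, with no additional model runs, which is the trade-off described above.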
5 Discussion
This section discusses the results obtained by the four different methods in more detail and compares them with respect to five different criteria. The results are visualized in Fig. 7 to facilitate reading. Moreover, we report in Table 2 a summary of the discussion to provide the reader with an immediate comparison of the strengths and weaknesses of the different techniques. However, we emphasize that the comparison needs to be interpreted with caution due to the inherent differences between the methods.
5.1 Usage for UQ in geosciences
The polynomial chaos expansion is a well-known technique in the field of uncertainty quantification, and its generalization to arbitrary distributions has already been applied to CO2 storage problems. Similarly, spatially adaptive sparse grids are a common stochastic collocation method in uncertainty quantification that has been applied to a large variety of real-world problems. Kernel-based methods are successfully applied in a variety of data-based tasks, in particular in function approximation, but they are still novel in uncertainty quantification in geosciences. The use of kernel models in this context in principle poses no particular issues, as the mean and variance predictions can simply be obtained by evaluating the surrogate model and computing the corresponding empirical mean and variance.
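A sketch of this workflow with a hand-rolled Gaussian-kernel interpolant follows (all names and the test model are hypothetical; a library interpolant would work equally well):

```python
import numpy as np

def fit_kernel_surrogate(X, y, eps=10.0):
    # kernel interpolant s(x) = sum_j alpha_j K(x, x_j); a tiny ridge term
    # is added for numerical conditioning of the kernel matrix
    K = np.exp(-eps * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)

def surrogate_moments(X, alpha, samples, eps=10.0):
    # empirical moments of the surrogate under the input distribution,
    # obtained by evaluating s on Monte Carlo samples of the inputs
    K = np.exp(-eps * ((samples[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    vals = K @ alpha
    return vals.mean(), vals.var(ddof=1)
```

Only the (expensive) training outputs `y` require model runs; the Monte Carlo sampling for the moments is performed entirely on the cheap surrogate.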
By construction, intrusive stochastic Galerkin methods change the structure of the problem. Consequently, the quantification of uncertainties goes along with the underlying deterministic problem to be solved. In the early 1990s, intrusive stochastic Galerkin methods were successfully used for the quantification of uncertainties in elliptic problems, e.g. by Ghanem and Spanos Ghanem_Spanos_1991_SFEM_book . In the last decades these methods have been extended further to cope with several applications of different complexity. In particular, nonlinear hyperbolic problems require an additional stochastic discretization, such as the multi-wavelet approach or the HSG method, involving a decomposition of the stochastic space in order to avoid or reduce the Gibbs phenomenon that generally occurs.
5.2 Comput. costs to evaluate & reconstruct surrogate
The polynomial chaos expansion and its recent data-driven aPC generalization can be seen as a cheap way to estimate uncertainty. Therefore, the computational costs involved are comparatively low.
Due to their nature, the same applies to sparse grid methods. All sparse grid methods used in this paper are publicly available in SG++ Pflueger10Spatially . This toolbox includes state-of-the-art sparse grid algorithms that allow for very efficient and parallel construction, evaluation and quadrature of sparse grid surrogates.
Greedy methods prove to be particularly effective in general surrogate modeling since they produce sparse, hence cheap-to-evaluate, models. This is also the case in the present setting. Moreover, data-independent greedy methods, such as the greedy algorithm used here, have the further advantage of not requiring a priori knowledge of the full model evaluations. Indeed, one of the measures for the success of a method is the number of model runs required to construct the surrogate.
The results of the HSG method with low resolution can be computed with reasonable computational effort, but the complexity of the problem and also the computational costs increase rapidly with increasing resolution. However, the method facilitates a change of the probability distribution, provided that it is still defined on the same interval, almost without any computational cost during the post-processing. Furthermore, the partially decoupled structure of the HSG-discretized problem allows for efficient parallelization on MPI and Open Multi-Processing clusters. Additional improvements of the computational efficiency can be achieved by stochastic adaptivity Barth201611 ; Buerger2012 and load balancing based on the stochastic elements. It is also promising to exploit the vector structure of the discretization by using GPU-based architectures.
5.3 Accuracy for low number of model runs/resolution
Low-order aPC representations such as first and second order seem to be efficient in terms of computational costs and corresponding accuracy (see Fig. 7). However, the aPC approach covers the parameter space globally and as a consequence does not have enough flexibility to represent special features in the parameter space such as strong discontinuities, shocks, etc.
As shown in Fig. 7, sparse grids are efficient with respect to accuracy and the corresponding computational costs. For a very limited computational budget with fewer than model runs, the convergence of the sparse grid with the modified basis is limited by the grid structure from which the interpolation points are taken.
The results of the greedy method show that a certain minimal amount of data is required for meaningful predictions (at least model runs). This is explained by two factors: first, the model does not incorporate any knowledge of the input space distribution. Second, the greedy algorithm selects the input points in a target-function-independent way, so it is not tailored to the approximation of this specific model.
The HSG method provides suitable accuracy of mean and standard deviation already for low resolutions, cf. Fig. 7. This applies in particular to the bisection level , i.e. the number of stochastic elements. The choice of the polynomial order has no significant bearing on the accuracy for the considered problem.
5.4 Accuracy for high number of model runs/resolution
As pointed out in Section 4.1, an increase of the expansion order in aPC does not necessarily lead to an improvement. This fact is very well illustrated in Fig. 7. Apparently, the 0th-order estimation of the standard deviation, which by definition yields a value of and demands only one run of the original model, is more accurate than the 5th-order expansion. Alternatively, this artifact can be mitigated via least-squares projection onto the full tensor grid, as presented in Fig. 7. The aPC based on least-squares collocation helps to overcome the problem of representing discontinuities and ensures the reduction of the error. However, this comes at the expense of the computational costs involved.
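A minimal sketch of the least-squares projection idea: fitting the coefficients on more collocation points than unknowns instead of interpolating exactly. This is a generic polynomial least-squares fit for illustration only, not the actual aPC basis (whose polynomials are constructed from the data moments):

```python
import numpy as np

def lsq_polynomial_coeffs(theta, u, degree):
    """Fit polynomial coefficients by least squares on an oversampled grid.

    Using more collocation points than coefficients (len(theta) > degree + 1)
    damps the oscillations that exact high-order interpolation can introduce.
    """
    V = np.vander(theta, degree + 1, increasing=True)  # design matrix
    coeffs, *_ = np.linalg.lstsq(V, u, rcond=None)
    return coeffs
```

For noise-free polynomial data the fit recovers the exact coefficients; for oscillatory or discontinuous responses it returns the best mean-square polynomial approximation instead of an oscillating interpolant.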
The efficiency of the sparse grid method with the modified basis increases significantly when adaptive refinement comes into play, as both the level of the grid and the accuracy of the surrogate increase in the regions of high probability. The error keeps converging as expected with an increasing number of model runs. Sparse grids with boundary points converge as well, albeit with higher computational costs (see Fig. 7). Consequently, they should not be considered for data-driven uncertainty quantification problems in higher dimensions or with small computational budgets.
The results and comparisons of Fig. 7 demonstrate good behavior of the greedy algorithm in the present task and, in particular, suitable numerical convergence as the size of the dataset increases. This behavior is particularly evident for larger datasets (i.e. when at least model runs are utilized), as the surrogate model improves its accuracy, which turns out to be an advantage over the other methods.
The HSG method provides fair accuracy for high resolutions of mean and standard deviation for increasing and . An increase of the bisection level usually leads to higher accuracy than an increase of the highest polynomial order , cf. Fig. 7. Because of the considered setup, numerical quadrature needs to be performed in each time step. Depending on the desired accuracy, the discretization of the setup includes up to several hundred thousand time steps. This may result in numerical difficulties and also in accuracy limitations.
5.5 Applicability for large number of parameters
In general, the different aPC approaches used in this work are not very suitable for high-dimensional problems due to the curse of dimensionality. Only an extension to sparse polynomial representations may be feasible for applied tasks ahlfeld2016samba .
In contrast, spatially adaptive sparse grids have three main advantages compared to the other approaches. First, as pointed out in Section 4.2, they are suited for higher-dimensional problems. Second, they allow the approximation to be changed adaptively wherever local features appear in the parameter space. Large hierarchical coefficients indicate large local changes and hence serve as a basis for refinement criteria. It is even possible to cope with functions that include kinks or jumps. Third, they are very flexible: one can choose the polynomial degree of each basis function separately in order to exploit local smoothness of the model function.
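The refinement indicator based on hierarchical coefficients can be illustrated with a one-dimensional toy sketch (our own code, assuming hat basis functions and zero boundary values, as in a modified-basis setting without boundary points):

```python
import numpy as np

def hierarchical_surpluses(f, levels):
    """1D hierarchical surpluses on a dyadic grid over [0, 1].

    The surplus at a node is the model value minus the prediction of all
    coarser levels; large surpluses flag regions worth refining.
    """
    surpluses = {}  # (level, index) -> surplus

    def interp(x):
        # evaluate the current hierarchical interpolant at x
        s = 0.0
        for (l, i), w in surpluses.items():
            h = 2.0 ** -l
            s += w * max(0.0, 1.0 - abs(x - i * h) / h)  # hat basis function
        return s

    for l in range(1, levels + 1):
        for i in range(1, 2 ** l, 2):          # odd indices: new nodes on level l
            x = i * 2.0 ** -l
            surpluses[(l, i)] = f(x) - interp(x)
    return surpluses
```

For a smooth model the surpluses decay rapidly with the level (quadratically in the mesh width for the hat basis), so an adaptive scheme can safely stop refining there and spend its budget on nodes where the surpluses stay large, e.g. near kinks or jumps.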
The greedy method can potentially work on problems with many more input dimensions, although a slower convergence rate should be expected due to the curse of dimensionality. Nevertheless, the complexity of the model construction and evaluation remains essentially the same, up to the computation of Euclidean distances between points in a larger input space.
As mentioned, all intrusive techniques are based on a transformation of the randomized problem into a deterministic system of equations and, at least in the general case, they require the application of a quadrature rule in each time step. Compared to non-intrusive methods, this methodology therefore changes the structure of the problem. Consequently, intrusive methods, including HSG, suffer from the curse of dimensionality.
Method  Usage in geoscience UQ  Comput. costs to reconstruct & evaluate surrogate  Accuracy for low number of model runs/resolution  Accuracy for high number of model runs/resolution  High-dimensional applicability
aPC  Very common  Suitable  Suitable  Not Suitable  Not Suitable 
aPC  Common  Suitable  Suitable  Acceptable  Not Suitable 
aSG  Common  Suitable  Acceptable  Acceptable  Acceptable 
VKOGA  Uncommon  Suitable  Acceptable  Suitable  Acceptable 
HSG  Very common  Acceptable  Suitable  Acceptable  Not Suitable 
6 Summary and Conclusions
In this benchmark study we have compared four promising techniques for data-driven uncertainty quantification of nonlinear two-phase flow in porous media. The flow problem is motivated by an injection scenario of supercritical CO2 into a porous saline aquifer, where the consideration of uncertainty in the boundary conditions, uncertainty in material parameters as well as conceptual model uncertainty has a significant bearing on the overall flow and storage behavior of the system. Therefore, we have considered uncertainty in the injection rate related to boundary value uncertainty, uncertain porosity related to the limited data availability at field-scale storage sites and uncertainty in the relative permeability degree associated with the nonlinearity of the conceptual model. To account for the arising uncertainties, we considered the non-intrusive methods arbitrary polynomial chaos, spatially adaptive sparse grids and kernel greedy interpolation as well as the intrusive hybrid stochastic Galerkin method. They were compared by means of the absolute error of the moments mean and standard deviation of the saturation after 100 days based on a statistical reference solution.
The numerical results show that all methods provide a good representation of the considered moments of the quantity of interest, despite the inherent complexity stemming from the distribution and the impact of the uncertain parameters. Small changes of these parameters may strongly influence the nonlinearity of the flow problem and may cause large variability of the simulation time of the hyperbolic solver, resulting in significant changes of the saturation and the shape of the plume. This usually leads to Gibbs phenomena and oscillations of response surfaces in UQ methods. The arbitrary polynomial chaos method and the hybrid stochastic Galerkin method overcome these difficulties already for a low number of model runs/resolution, whereas adaptive sparse grids and the kernel greedy method are characterized by improved accuracy at a higher number of model runs/resolution. The applicability to high dimensionalities also needs to be taken into account. Our discussion indicates that the arbitrary polynomial chaos method and the hybrid stochastic Galerkin method are generally less efficient and typically suffer more from the curse of dimensionality than adaptive sparse grids and the kernel greedy interpolation technique. The particular method of choice therefore depends on the specific problem and the specific goal to be achieved. Taking this into consideration, we have classified the methods regarding five relevant properties, presented in Table 2. The classification reflects the main features we deem to be useful for further uncertainty quantification for CO2 storage and beyond.

The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/2) at the University of Stuttgart.
References
 (1) Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover (1965)
 (2) Ahlfeld, R., Belkouchi, B., Montomoli, F.: SAMBA: sparse approximation of moment-based arbitrary polynomial chaos. J. Comput. Phys. 320, 1–16 (2016)
 (3) Alkhateeb, O., Ida, N.: Data-Driven Multi-Element Arbitrary Polynomial Chaos for Uncertainty Quantification in Sensors. IEEE Trans. Magn. (2017)
 (4) Alpert, B.K.: A class of bases in for the sparse representation of integral operators. SIAM J. Math. Anal. 24(1), 246–262 (1993)
 (5) Barth, A., Bürger, R., Kröker, I., Rohde, C.: Computational uncertainty quantification for a clarifier-thickener model with several random perturbations: A hybrid stochastic Galerkin approach. Comput. Chem. Eng. 89, 11–26 (2016)
 (6) Birkholzer, J.T., Zhou, Q., Tsang, C.F.: Large-scale impact of CO2 storage in deep saline aquifers: A sensitivity study on pressure response in stratified systems. Int. J. Greenh. Gas Control 3, 181–194 (2009)
 (7) Blatman, G., Sudret, B.: Sparse polynomial chaos expansions and adaptive stochastic finite elements using a regression approach. C. R. Mécanique 336(6), 518–523 (2008)
 (8) Bungartz, H.J.: Finite Elements of Higher Order on Sparse Grids. Habilitation, Technische Universität München (1998)
 (9) Bungartz, H.J., Dirnstorfer, S.: Multivariate Quadrature on Adaptive Sparse Grids. Computing 71(1), 89–114 (2003)
 (10) Bürger, R., Kröker, I., Rohde, C.: A hybrid stochastic Galerkin method for uncertainty quantification applied to a conservation law modelling a clarifier-thickener unit. ZAMM Z. Angew. Math. Mech. 94(10), 793–817 (2014)
 (11) Celia, M., Bachu, S., Nordbotten, J., Bandilla, K.: Status of CO2 storage in deep saline aquifers with emphasis on modeling approaches and practical simulations. Water Resour. Res. 51(9), 6846–6892 (2015)
 (12) Class, H., Ebigbo, A., Helmig, R., Dahle, H., Nordbotten, J.N., Celia, M.A., Audigane, P., Darcis, M., Ennis-King, J., Fan, Y., Flemisch, B., Gasda, S., Jin, M., Krug, S., Labregere, D., Naderi, A., Pawar, R.J., Sbai, A., Sunil, G.T., Trenty, L., Wei, L.: A benchmark study on problems related to CO2 storage in geologic formations. Comput. Geosci. 13, 451–467 (2009)
 (13) Court, B., Bandilla, K.W., Celia, M.A., Janzen, A., Dobossy, M., Nordbotten, J.M.: Applicability of vertical-equilibrium and sharp-interface assumptions in CO2 sequestration modeling. Int. J. Greenh. Gas Control 10, 134–147 (2012)
 (14) De Marchi, S., Schaback, R., Wendland, H.: Near-optimal data-independent point locations for radial basis function interpolation. Adv. Comput. Math. 23(3), 317–330 (2005)
 (15) Ebigbo, A., Class, H., Helmig, R.: CO2 leakage through an abandoned well: problem-oriented benchmarks. Comput. Geosci. 11(2), 103–115 (2007)
 (16) Elsheikh, A.H., Hoteit, I., Wheeler, M.F.: Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates. Comput. Methods Appl. Mech. Eng. 269, 515–537 (2014)
 (17) Foo, J., Karniadakis, G.: Multi-element probabilistic collocation method in high dimensions. J. Comput. Phys. 229(5), 1536–1557 (2010)
 (18) Franzelin, F., Diehl, P., Pflüger, D.: Non-intrusive Uncertainty Quantification with Sparse Grids for Multivariate Peridynamic Simulations. In: M. Griebel, M.A. Schweitzer (eds.) Meshfree Methods for Partial Differential Equations VII, Lecture Notes in Computational Science and Engineering, vol. 100, pp. 115–143. Springer International Publishing (2015)
 (19) Franzelin, F., Pflüger, D.: From Data to Uncertainty: An Efficient Integrated Data-Driven Sparse Grid Approach to Propagate Uncertainty, pp. 29–49. Springer International Publishing, Cham (2016)
 (20) Ghanem, R.G., Spanos, P.D.: Stochastic finite elements: A spectral approach. SpringerVerlag, New York (1991)
 (21) Hansson, A., Bryngelsson, M.: Expert opinions on carbon dioxide capture and storage: A framing of uncertainties and possibilities. Energy Policy 37, 2273–2282 (2009)
 (22) Helton, J., Davis, F.: Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliab. Eng. Syst. Saf. 81(1), 23–69 (2003)
 (23) IPCC: Special report on carbon dioxide capture and storage, Technical report, Intergovernmental Panel on Climate Change (IPCC), prepared by Working Group III. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA (2005)
 (24) Isukapalli, S.S., Roy, A., Georgopoulos, P.G.: Stochastic Response Surface Methods (SRSMs) for uncertainty propagation: Application to environmental and biological systems. Risk Anal. 18(3), 351–363 (1998)
 (25) Jakeman, J.D., Archibald, R., Xiu, D.: Characterization of discontinuities in highdimensional stochastic problems on adaptive sparse grids. J. Comput. Phys. 230(10), 3977–3997 (2011)
 (26) Keese, A., Matthies, H.G.: Sparse quadrature as an alternative to Monte Carlo for stochastic finite element techniques. Proc. Appl. Math. Mech. 3, 493–494 (2003)
 (27) Kopp, A., Class, H., Helmig, R.: Investigations on CO2 Storage Capacity in Saline Aquifers – Part 1: Dimensional Analysis of Flow Processes and Reservoir Characteristics. Int. J. Greenh. Gas Control 3, 263–276 (2009)
 (28) Köppel, M., Franzelin, F., Kröker, I., Oladyshkin, S., Santin, G., Wittwar, D., Barth, A., Haasdonk, B., Nowak, W., Pflüger, D., Rohde, C.: Datasets and executables of data-driven uncertainty quantification benchmark in carbon dioxide storage. DOI 10.5281/zenodo.933827
 (29) Köppel, M., Kröker, I., Rohde, C.: Intrusive uncertainty quantification for hyperbolicelliptic systems governing twophase flow in heterogeneous porous media. Comput. Geosci. 21(4), 807–832 (2017)
 (30) Kröker, I., Nowak, W., Rohde, C.: A stochastically and spatially adaptive parallel scheme for uncertain and nonlinear twophase flow problems. Comput. Geosci. pp. 1–16 (2015)
 (31) Kurganov, A., Petrova, G.: Centralupwind schemes on triangular grids for hyperbolic systems of conservation laws. Numer. Meth. Part. D. E. 21(3), 536–552 (2005)
 (32) LeVeque, R.: Numerical Methods for Conservation Laws. Lectures in Mathematics ETH Zürich. Springer (1992)
 (33) Li, H., Zhang, D.: Probabilistic collocation method for flow in porous media: Comparisons with other stochastic methods. Water Resour. Res. 43, 44–48 (2007)
 (34) Lin, G., Tartakovsky, A.: An efficient, high-order probabilistic collocation method on sparse grids for three-dimensional flow and solute transport in randomly heterogeneous porous media. Adv. Water Res. 32(5), 712–722 (2009)
 (35) Ma, X., Zabaras, N.: An adaptive hierarchical sparse grid collocation algorithm for the solution of stochastic differential equations. J. Comput. Phys. 228(8), 3084–3113 (2009)
 (36) Matthies, H.G., Keese, A.: Galerkin methods for linear and nonlinear elliptic stochastic partial differential equations. Comp. Meth. Appl. Mech. Engrg. 194, 1295–1331 (2005)
 (37) Moritz, H.: LeastSquares Collocation. Rev. Geophys. 16(3), 421–430 (1978)
 (38) Müller, S., Schaback, R.: A Newton basis for kernel spaces. J. Approx. Theory 161(2), 645–655 (2009)
 (39) Namhata, A., Oladyshkin, S., Dilmore, R.M., Zhang, L., Nakles, D.V.: Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site. Sci. Rep. 6, 39536 (2016)
 (40) Nordbotten, J., Celia, M., Bachu, S.: Injection and storage of CO2 in deep saline aquifers: analytical solution for CO2 plume evolution during injection. Transport Porous Med. 58(3), 339–360 (2005)
 (41) Nordbotten, J.M., Dahle, H.K.: Impact of the capillary fringe in vertically integrated models for CO2 storage. Water Resour. Res. 47(2) (2011). W02537
 (42) Oladyshkin, S., Class, H., Helmig, R., Nowak, W.: A concept for data-driven uncertainty quantification and its application to carbon dioxide storage in geological formations. Adv. Water Res. 34, 1508–1518 (2011)
 (43) Oladyshkin, S., Class, H., Helmig, R., Nowak, W.: An integrative approach to robust design and probabilistic risk assessment for CO2 storage in geological formations. Comput. Geosci. 15(3), 565–577 (2011)
 (44) Oladyshkin, S., Class, H., Nowak, W.: Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Comput. Geosci. 17(4), 671–687 (2013)
 (45) Oladyshkin, S., Nowak, W.: Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliab. Eng. Syst. Saf. 106, 179–190 (2012)
 (46) Oladyshkin, S., Nowak, W.: Incomplete statistical information limits the utility of high-order polynomial chaos expansions. Reliab. Eng. Syst. Saf. 169, 137–148 (2018)
 (47) Oladyshkin, S., Schroeder, P., Class, H., Nowak, W.: Chaos expansion based Bootstrap filter to calibrate CO2 injection models. Energy Procedia 40, 398–407 (2013)
 (48) Peherstorfer, B.: Model Order Reduction of Parametrized Systems with Sparse Grid Learning Techniques. Ph.D. thesis, Technical University of Munich (2013)
 (49) Pettersson, P., Tchelepi, H.A.: Stochastic Galerkin framework with locally reduced bases for nonlinear twophase transport in heterogeneous formations. Comput. Method. Appl. Mech. Eng. 310, 367 – 387 (2016)
 (50) Pfander, D., Heinecke, A., Pflüger, D.: A New Subspace-Based Algorithm for Efficient Spatially Adaptive Sparse Grid Regression, Classification and Multi-Evaluation. In: J. Garcke, D. Pflüger (eds.) Sparse Grids and Applications – Stuttgart 2014, pp. 221–246. Springer International Publishing (2016)
 (51) Pflüger, D.: Spatially Adaptive Sparse Grids for High-Dimensional Problems. Verlag Dr. Hut (2010)
 (52) Pflüger, D.: Spatially Adaptive Refinement. In: J. Garcke, M. Griebel (eds.) Sparse Grids and Applications, Lecture Notes in Computational Science and Engineering, pp. 243–262. Springer, Berlin Heidelberg (2012)
 (53) Poëtte, G., Després, B., Lucor, D.: Uncertainty quantification for systems of conservation laws. J. Comput. Phys. 228(7), 2443–2467 (2009)
 (54) Santin, G., Haasdonk, B.: Convergence rate of the dataindependent Pgreedy algorithm in kernelbased approximation. Dolomites Research Notes on Approximation 10, 68–78 (2017)
 (55) Santin, G., Haasdonk, B.: Greedy Kernel Approximation for Sparse Surrogate Modelling. Tech. rep., University of Stuttgart (2017)
 (56) Schaback, R., Wendland, H.: Adaptive greedy techniques for approximate solution of large RBF systems. Numer. Algorithms 24(3), 239–254 (2000)
 (57) Scheuerer, M., Schaback, R., Schlather, M.: Interpolation of spatial data – A stochastic or a deterministic problem? Eur. J. Appl. Math. 24(4), 601–629 (2013)
 (58) Schölkopf, B., Smola, A.: Learning with Kernels. The MIT Press (2002)
 (59) Sinsbeck, M., Nowak, W.: An optimal sampling rule for nonintrusive polynomial chaos expansions of expensive models. Int. J. Uncertain. Quantif. 5(3) (2015)
 (60) Stieltjes, T.J.: Quelques Recherches sur la Théorie des Quadratures dites Méchaniques. Oeuvres I pp. 377–396 (1884)
 (61) Tryoen, J., Maître, O.L., Ern, A.: Adaptive anisotropic spectral stochastic methods for uncertain scalar conservation laws. SIAM J. Sci. Comput. 34(5), A2459–A2481 (2012)
 (62) Valentin, J., Pflüger, D.: Hierarchical GradientBased Optimization with BSplines on Sparse Grids. In: J. Garcke, D. Pflüger (eds.) Sparse Grids and Applications – Stuttgart 2014, Lecture Notes in Computational Science and Engineering, vol. 109, pp. 315–336. Springer (2016)
 (63) Villadsen, J., Michelsen, M.L.: Solution of differential equation models by polynomial approximation. PrenticeHall (1978)
 (64) Wan, X., Karniadakis, G.E.: An adaptive multi-element generalized polynomial chaos method for stochastic differential equations. J. Comput. Phys. 209(2), 617–642 (2005)
 (65) Wendland, H.: Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Adv. Comput. Math. 4(1), 389–396 (1995)
 (66) Wendland, H.: Scattered Data Approximation, Cambridge Monogr. Appl. Comput. Math., vol. 17. Cambridge University Press, Cambridge (2005)
 (67) Wiener, N.: The homogeneous chaos. Am. J. Math 60, 897–936 (1938)
 (68) Wirtz, D., Haasdonk, B.: A Vectorial Kernel Orthogonal Greedy Algorithm. Dolomites Res. Notes Approx. 6, 83–100 (2013). Proceedings of DWCAA12
 (69) Wirtz, D., Karajan, N., Haasdonk, B.: Surrogate Modelling of multiscale models using kernel methods. Int. J. Numer. Methods Eng. 101(1), 1–28 (2015)
 (70) Witteveen, J.A.S., Sarkar, S., Bijl, H.: Modeling physical uncertainties in dynamic stall induced fluid structure interaction of turbine blades using arbitrary polynomial chaos. Comput. Struct. 85, 866–878 (2007)
 (71) Zenger, C.: Sparse Grids. Notes on Numerical Fluid Mechanics 31, 241–251 (1991)
 (72) Zhang, Y., Liu, Y., Pau, G., Oladyshkin, S., Finsterle, S.: Evaluation of multiple reducedorder models to enhance confidence in global sensitivity analyses. Int. J. Greenh. Gas Control 49, 217–226 (2016)