<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[Scipedia: Pedro Diez’s Journal papers]]></title>
	<link>https://www.scipedia.com/sj/view/139936</link>
	<atom:link href="https://www.scipedia.com/sj/view/139936" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<div id="documents_content"><item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Diez_Huerta_1999a</guid>
	<pubDate>Thu, 24 Oct 2019 10:58:24 +0200</pubDate>
	<link>https://www.scipedia.com/public/Diez_Huerta_1999a</link>
	<title><![CDATA[A unified approach to remeshing strategies for finite element h-adaptivity]]></title>
	<description><![CDATA[<p><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">In&nbsp;</span><em style="color: rgb(46, 46, 46); font-size: 18px; font-weight: 400;">h</em><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">-adaptivity, a remeshing strategy is needed to compute the distribution of required element size using the estimated error distribution. Several authors have introduced different remeshing strategies, yielding very different results. In this work these methods are included in a unified framework, emphasizing the role of the underlying hypotheses. Moreover, an objective tool to evaluate the accuracy of the resulting finite element solution is presented. On this basis, a new remeshing strategy is introduced to optimize the accuracy of the adapted solutions. The different remeshing strategies are compared using well-known numerical examples.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Egozcue_et_al_1997a</guid>
	<pubDate>Wed, 23 Oct 2019 15:12:26 +0200</pubDate>
	<link>https://www.scipedia.com/public/Egozcue_et_al_1997a</link>
	<title><![CDATA[Vulnerabilidad sísmica y toma de decisiones]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">The vulnerability of civil structures and systems under seismic actions has been extensively studied in the last decade. However, the goals of those studies are very diverse, and this has caused some confusion about the meaning and definition of vulnerability. Naturally, different scopes of vulnerability lead to different methodologies in its study, which may be incompatible.</span><br style="color: rgb(116, 116, 116); font-size: 18px;"><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">The present aim is to analyse the definition of seismic vulnerability in the framework of Bayesian decision making. Three qualitatively different examples of decision making related to seismic vulnerability guide the discussion. They have increasing complexity: decisions on normative seismic design; decisions on insurance against seismic risk; and, finally, the decisions that should be taken to manage alarm, mitigation and rescue systems for seismic disasters.</span><br style="color: rgb(116, 116, 116); font-size: 18px;"><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">The three analysed examples correspond to three different decision-making schemes, respectively named a priori, a posteriori and preposterior. In each case, the definition of seismic vulnerability depends on the utility (cost-benefit) concept considered, on the random states involved and on the scale at which the phenomena are to be considered. Most of the concepts associated with seismic vulnerability are reviewed through these three examples.</span><br style="color: rgb(116, 116, 116); font-size: 18px;"><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">One conclusion is that seismic vulnerability studies should take the associated decision-making problem into account. At the same time, methodological and estimation aspects of vulnerability are highlighted in order to improve both guidelines for seismic design and disaster mitigation.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Garikapati_et_al_2019b</guid>
	<pubDate>Mon, 11 Nov 2019 18:14:49 +0100</pubDate>
	<link>https://www.scipedia.com/public/Garikapati_et_al_2019b</link>
	<title><![CDATA[A Proper Generalized Decomposition (PGD) approach to crack propagation in brittle materials: with application to random field material properties]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">Understanding the failure of brittle heterogeneous materials is essential in many applications. Heterogeneities in material properties are frequently modeled through random fields, which typically induces the need to solve finite element problems for a large number of realizations. In this context, we make use of reduced order modeling to solve these problems at an affordable computational cost. This paper proposes a reduced order modeling framework to predict crack propagation in brittle materials with random heterogeneities. The framework is based on a combination of the Proper Generalized Decomposition (PGD) method with Griffith&rsquo;s global energy criterion. The PGD framework provides an explicit parametric solution for the physical response of the system. We illustrate that a non-intrusive sampling-based technique can be applied as a postprocessing operation on the explicit solution provided by PGD. We first validate the framework using a global energy approach on a deterministic two-dimensional linear elastic fracture mechanics benchmark. Subsequently, we apply the reduced order modeling approach to a stochastic fracture propagation problem.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Garikapati_et_al_2019a</guid>
	<pubDate>Mon, 11 Nov 2019 17:42:14 +0100</pubDate>
	<link>https://www.scipedia.com/public/Garikapati_et_al_2019a</link>
	<title><![CDATA[Sampling-based stochastic analysis of the PKN model for hydraulic fracturing]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 17px; font-style: normal; font-weight: 400;">Hydraulic fracturing processes are surrounded by uncertainty, as available data is typically scant. In this work, we present a sampling-based stochastic analysis of the hydraulic fracturing process by considering various system parameters to be random. Our analysis is based on the Perkins-Kern-Nordgren (PKN) model for hydraulic fracturing. This baseline model enables computation of high fidelity solutions, which avoids pollution of our stochastic results by inaccuracies in the deterministic solution procedure. In order to obtain the desired degree of accuracy of the computed solution, we supplement the employed time-dependent moving-mesh finite element method with two new enhancements: (i) global conservation of volume is enforced through a Lagrange multiplier; (ii) the weakly singular behavior of the solution at the fracture tip is resolved by supplementing the solution space with a tip enrichment function. This tip enrichment function enables the computation of the tip speed directly from its associated solution coefficient. A novel incremental-iterative solution procedure based on a backward-Euler time-integrator with sub-iterations is employed to solve the PKN model. Direct Monte-Carlo sampling is performed based on random variable and random field input parameters. The presented stochastic results quantify the dependence of the fracture evolution process&mdash;in particular the fracture length and fracture opening&mdash;on variations in the elastic properties and leak-off coefficient of the formation, and the height of the fracture.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Garcia-Blanco_et_al_2018a</guid>
	<pubDate>Mon, 11 Nov 2019 17:14:17 +0100</pubDate>
	<link>https://www.scipedia.com/public/Garcia-Blanco_et_al_2018a</link>
	<title><![CDATA[Algebraic and parametric solvers for the power flow problem: towards real-time and accuracy-guaranteed simulation of electric systems]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 17px; font-style: normal; font-weight: 400;">The power flow model is used to analyze electric distribution and transmission systems. In this work we present a summary of solvers for the power flow equations, in both their algebraic and parametric versions. The application of the Alternating Search Direction method to the power flow problem is also detailed. This results in a family of iterative solvers that, combined with the Proper Generalized Decomposition technique, allows the parametric version of the equations to be solved. Once the solution is computed using this strategy, analyzing the network state or solving optimization problems, including generation in real time, becomes a straightforward procedure since the parametric solution is available. Complementing this approach, an error strategy is implemented at each step of the iterative solver. Thus, error indicators are used as stopping criteria controlling the accuracy of the approximation during the construction process. The application of these methods to the IEEE 57-bus network model is taken as a numerical illustration.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Diez_et_al_2018a</guid>
	<pubDate>Mon, 11 Nov 2019 17:03:55 +0100</pubDate>
	<link>https://www.scipedia.com/public/Diez_et_al_2018a</link>
	<title><![CDATA[Algebraic PGD for tensor separation and compression: An algorithmic approach]]></title>
	<description><![CDATA[<p><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;"><span>Proper Generalized Decomposition (PGD) is devised as a computational method to solve high-dimensional&nbsp;boundary value problems&nbsp;(where many dimensions are associated with the space of parameters defining the problem). The PGD philosophy consists in providing a separated representation of the multidimensional solution using a&nbsp;</span>greedy approach&nbsp;combined with an alternated directions scheme to obtain the successive rank-one terms. This paper presents an algorithmic approach to high-dimensional tensor</span><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">&nbsp;separation based on solving the&nbsp;Least Squares approximation&nbsp;of a multidimensional tensor in a separable format using PGD. This strategy is usually embedded in a standard PGD code in order to compress the solution (reduce the number of terms and optimize the available storage capacity), but it also stands as an alternative and highly competitive method for tensor separation.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Dorribo_et_al_2018a</guid>
	<pubDate>Mon, 11 Nov 2019 16:54:06 +0100</pubDate>
	<link>https://www.scipedia.com/public/Dorribo_et_al_2018a</link>
	<title><![CDATA[Numerical estimation of the bearing capacity of resistance spot welds in martensitic boron steels using a J-integral fracture criterion]]></title>
	<description><![CDATA[<p><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">Predicting the bearing capacity of resistance spot welds (RSW) during vehicle crash tests has become a crucial task for the automotive industry since the recent introduction of advanced high strength steels (AHSS) such as martensitic boron steels (e.g. 22MnB5). The spot weld joints of these steels exhibit relatively low bearing strengths compared to those of more ductile high strength steels. Currently, the bearing capacity of spot weld joints is characterized through extensive experimental campaigns. In this article, a model for quantification of the bearing capacity of RSW using a finite-element&nbsp;</span><em style="color: rgb(46, 46, 46); font-size: 18px; font-weight: 400;">J</em><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">-integral fracture criterion is presented. The model takes into account geometric and mechanical features of the spot weld, namely the weld diameter and the distribution of mechanical properties resulting from the welding process. An experimental loading test campaign is carried out for calibration and validation purposes, considering multiple sheet thickness combinations, loading angles and weld sizes. Experimental observations of the failed spot welds and preliminary simulations show that failure is caused mostly by stress concentration around the sharp weld notch. Consequently, the&nbsp;</span><em style="color: rgb(46, 46, 46); font-size: 18px; font-weight: 400;">J</em><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">-integral obtained from detailed finite element simulations is used to assess the stress/strain concentration along the first crack advance direction predicted by the acoustic tensor. 
The computed&nbsp;</span><em style="color: rgb(46, 46, 46); font-size: 18px; font-weight: 400;">J</em><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">-integral values are compared to the material toughness to obtain the joint&rsquo;s maximum force. The resulting simulated and experimental bearing capacities show a good agreement for all tested configurations.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Zou_et_al_2018a</guid>
	<pubDate>Wed, 06 Nov 2019 16:48:53 +0100</pubDate>
	<link>https://www.scipedia.com/public/Zou_et_al_2018a</link>
	<title><![CDATA[A non-intrusive proper generalized decomposition scheme with application in biomechanics]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">Proper generalized decomposition (PGD) is often used for multi-query and fast-response simulations. It is a powerful tool alleviating the curse of dimensionality affecting multi-parametric partial differential equations. Most implementations of PGD are intrusive extensions based on in-house developed finite element (FE) solvers. In this work, we propose a non-intrusive PGD scheme using off-the-shelf FE codes (such as certified commercial software) as an external solver. The scheme is implemented and monitored by in-house flow-control codes. A typical implementation is provided with downloadable codes. Moreover, a novel parametric separation strategy for the PGD resolution is presented. The parametric space is split into two- or three-dimensional subspaces, allowing the PGD technique to solve problems with constrained parametric spaces and to achieve a higher convergence ratio. Numerical examples are provided. In particular, a practical example in biomechanics is included, with potential application to patient-specific simulation.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Sibileau_et_al_2018a</guid>
	<pubDate>Wed, 06 Nov 2019 16:42:09 +0100</pubDate>
	<link>https://www.scipedia.com/public/Sibileau_et_al_2018a</link>
	<title><![CDATA[Explicit parametric solutions of lattice structures with proper generalized decomposition (PGD)]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 17px; font-style: normal; font-weight: 400;">Architectured materials (or metamaterials) are constituted by a unit-cell with a complex structural design repeated periodically, forming a bulk material with emergent mechanical properties. One may obtain specific macro-scale (or bulk) properties in the resulting architectured material by properly designing the unit-cell. Typically, this is stated as an optimal design problem in which the parameters describing the shape and mechanical properties of the unit-cell are selected in order to produce the desired bulk characteristics. This is especially pertinent due to the ease of manufacturing these complex structures with 3D printers. The proper generalized decomposition provides explicit parametric solutions of parametric PDEs. Here, the same ideas are used to obtain parametric solutions of the algebraic equations arising from lattice structural models. Once the explicit parametric solution is available, the optimal design problem is a simple post-process. The same strategy is applied in the numerical illustrations, first to a unit-cell (then homogenized with periodicity conditions), and in a second phase to the complete structure of a lattice material specimen.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Oliveira_et_al_2018c</guid>
	<pubDate>Wed, 06 Nov 2019 16:33:09 +0100</pubDate>
	<link>https://www.scipedia.com/public/Oliveira_et_al_2018c</link>
	<title><![CDATA[Numerical Modelling of Multi-Phase Multi-Component Reactive Transport in the Earth’s interior]]></title>
	<description><![CDATA[<p><span style="color: rgb(42, 42, 42); font-size: 15px; font-style: normal; font-weight: 400; background-color: rgb(239, 242, 247);">We present a conceptual and numerical approach to model processes in the Earth&rsquo;s interior that involve multiple phases that simultaneously interact thermally, mechanically and chemically. The approach is truly multiphase in the sense that each dynamic phase is explicitly modelled with an individual set of mass, momentum, energy and chemical mass balance equations coupled via interfacial interaction terms. It is also truly multicomponent in the sense that the compositions of the system and its constituent phases are expressed by a full set of fundamental chemical components (e.g. SiO</span><sub style="font-style: normal; font-weight: 400; font-size: inherit; vertical-align: sub; color: rgb(42, 42, 42); background-color: rgb(239, 242, 247);">2</sub><span style="color: rgb(42, 42, 42); font-size: 15px; font-style: normal; font-weight: 400; background-color: rgb(239, 242, 247);">, Al</span><sub style="font-style: normal; font-weight: 400; font-size: inherit; vertical-align: sub; color: rgb(42, 42, 42); background-color: rgb(239, 242, 247);">2</sub><span style="color: rgb(42, 42, 42); font-size: 15px; font-style: normal; font-weight: 400; background-color: rgb(239, 242, 247);">O</span><sub style="font-style: normal; font-weight: 400; font-size: inherit; vertical-align: sub; color: rgb(42, 42, 42); background-color: rgb(239, 242, 247);">3</sub><span style="color: rgb(42, 42, 42); font-size: 15px; font-style: normal; font-weight: 400; background-color: rgb(239, 242, 247);">, MgO, etc.) rather than proxies. These chemical components evolve, react with and partition into different phases according to an internally consistent thermodynamic model. 
We combine concepts from Ensemble Averaging and Classical Irreversible Thermodynamics to obtain sets of macroscopic balance equations that describe the evolution of systems governed by multiphase multicomponent reactive transport (MPMCRT). Equilibrium mineral assemblages, their compositions and physical properties, and closure relations for the balance equations are obtained via a &lsquo;dynamic&rsquo; Gibbs free-energy minimization procedure (i.e. minimizations are performed on-the-fly as needed by the simulation). Surface tension and surface energy contributions to the dynamics and energetics of the system are taken into account. We show how complex rheologies (i.e. visco-elasto-plastic) and/or different interfacial models can be incorporated into our MPMCRT ensemble-averaged formulation. The resulting model provides a reliable platform to study the dynamics and nonlinear feedbacks of MPMCRT systems of different nature and scales, as well as to make realistic comparisons with both geophysical and geochemical data sets. Several numerical examples are presented to illustrate the benefits and limitations of the model.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Diez_et_al_2017a</guid>
	<pubDate>Tue, 05 Nov 2019 16:40:25 +0100</pubDate>
	<link>https://www.scipedia.com/public/Diez_et_al_2017a</link>
	<title><![CDATA[Generalized parametric solutions in Stokes flow]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">Design optimization and uncertainty quantification, among other applications of industrial interest, require fast or multiple queries of some parametric model. The Proper Generalized Decomposition (PGD) provides a separable solution, a computational vademecum explicitly dependent on the parameters, efficiently computed with a greedy algorithm combined with an alternated directions scheme and compactly stored. This strategy has been successfully employed in many problems in computational mechanics. The application to problems with saddle point structure raises some difficulties requiring further attention. This article proposes a PGD formulation of the Stokes problem. Various possibilities of the separated forms of the PGD solutions are discussed and analyzed, selecting the more viable option. The efficacy of the proposed methodology is demonstrated in numerical examples for both Stokes and Brinkman models.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Hospital-Bravo_et_al_2017a</guid>
	<pubDate>Tue, 05 Nov 2019 16:33:50 +0100</pubDate>
	<link>https://www.scipedia.com/public/Hospital-Bravo_et_al_2017a</link>
	<title><![CDATA[A semi‐analytical scheme for highly oscillatory integrals over tetrahedra]]></title>
	<description><![CDATA[<p><span style="color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">This paper details a semi‐analytical procedure to efficiently integrate the product of a smooth function and a complex exponential over tetrahedral elements. These highly oscillatory integrals appear at the core of different numerical techniques. Here, the partition of unity method enriched with plane waves is used as motivation. The high computational cost or the lack of accuracy in computing these integrals is a bottleneck for their application to engineering problems of industrial interest. In this integration rule, the non‐oscillatory function is expanded into a set of Lagrange polynomials. In addition, Lagrange polynomials are expressed as a linear combination of the appropriate set of monomials, whose product with the complex exponentials is analytically integrated, leading to 16 specific cases that are developed in detail. Finally, we present several numerical examples to assess the accuracy and the computational efficiency of the proposed method, compared with standard Gauss&ndash;Legendre quadratures.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Garcia-Blanco_et_al_2017a</guid>
	<pubDate>Tue, 05 Nov 2019 16:22:32 +0100</pubDate>
	<link>https://www.scipedia.com/public/Garcia-Blanco_et_al_2017a</link>
	<title><![CDATA[Monitoring a PGD solver for parametric power flow problems with goal-oriented error assessment]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">The parametric analysis of electric grids requires carrying out a large number of Power Flow computations. The different parameters describe loading conditions and grid properties. In this framework, the Proper Generalized Decomposition (PGD) provides a numerical solution explicitly accounting for the parametric dependence. Once the PGD solution is available, exploring the multidimensional parametric space is computationally inexpensive. The aim of this paper is to provide tools to monitor the error associated with this significant computational gain and to guarantee the quality of the PGD solution. In this case, the PGD algorithm consists of three nested loops that correspond to (1) the iterations of the algebraic solver, (2) the number of terms in the separable greedy expansion and (3) the alternated directions for each term. In the proposed approach, the three loops are controlled by stopping criteria based on residual goal-oriented error estimates. This allows one to use only the computational resources necessary to achieve the accuracy prescribed by the end-user. The paper discusses how to compute the goal-oriented error estimates. This requires linearizing the error equation and the Quantity of Interest to derive an efficient error representation based on an adjoint problem. The efficiency of the proposed approach is demonstrated on benchmark problems.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Serafin_et_al_2017a</guid>
	<pubDate>Tue, 05 Nov 2019 16:15:41 +0100</pubDate>
	<link>https://www.scipedia.com/public/Serafin_et_al_2017a</link>
	<title><![CDATA[Enhanced goal-oriented error assessment and computational strategies in adaptive reduced basis solver for stochastic problems]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">This work focuses on providing accurate low-cost approximations of stochastic finite element simulations in the framework of linear elasticity. In [E. Florentin, P. Diez, Adaptive reduced basis strategy based on goal oriented error assessment for stochastic problems, Comput. Methods Appl. Mech. Engrg. 225-228 (2012) 116-127], an adaptive strategy was introduced as an improved Monte-Carlo method for multi-dimensional large stochastic problems. We provide here a complete analysis of the method, including a new enhanced goal-oriented error estimator and estimates of the CPU cost gain. Technical insights into these two topics are presented in detail, and numerical examples show the interest of these new developments.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Signorini_et_al_2017a</guid>
	<pubDate>Tue, 05 Nov 2019 15:28:47 +0100</pubDate>
	<link>https://www.scipedia.com/public/Signorini_et_al_2017a</link>
	<title><![CDATA[Proper generalized decomposition solution of the parameterized Helmholtz problem: application to inverse geophysical problems]]></title>
	<description><![CDATA[<p style="margin-top: 5px; margin-bottom: 16px; color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">The identification of the geological structure from seismic data is formulated as an inverse problem. The properties and the shape of the rock formations in the subsoil are described by material and geometric parameters, which are taken as input data for a predictive model. Here, the model is based on the Helmholtz equation, describing the acoustic response of the system for a given wave length. Thus, the inverse problem consists in identifying the values of these parameters such that the output of the model best agrees with the observations. This optimization algorithm requires multiple queries to the model with different values of the parameters. Reduced order models are especially well suited to significantly reduce the computational overhead of the multiple evaluations of the model.</p><p style="margin-top: 5px; margin-bottom: 16px; color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">In particular, the proper generalized decomposition produces a solution explicitly stating the parametric dependence, where the parameters play the same role as the physical coordinates. A proper generalized decomposition solver is devised to inexpensively explore the parametric space along the iterative process. This exploration of the parametric space is in fact seen as a post-process of the generalized solution. The approach adopted demonstrates its viability when tested in two illustrative examples.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Garcia-Blanco_et_al_2016a</guid>
	<pubDate>Tue, 05 Nov 2019 15:14:48 +0100</pubDate>
	<link>https://www.scipedia.com/public/Garcia-Blanco_et_al_2016a</link>
	<title><![CDATA[A reduced order modeling approach for optimal allocation of Distributed Generation in power distribution systems]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">This paper presents an &ldquo;offline-online&rdquo; strategy for optimal allocation and sizing of Distributed Generation. In traditional optimization approaches, each function evaluation requires the solution of a power flow problem, which makes global optimality a computationally challenging goal. In the proposed strategy the power flow solver is invoked only once and a parametric solution is constructed with a monolithic solver. Despite the fact that the parametrized power flow equations result in a high-dimensional problem, the proposed algorithm is specifically designed to circumvent the curse of dimensionality. This is achieved through the application of Model Reduction, in particular the Proper Generalized Decomposition combined with a nonlinear solver. Numerical examples are carried out to show the validity of the proposed method.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Chinesta_et_al_2016a</guid>
	<pubDate>Tue, 05 Nov 2019 14:59:54 +0100</pubDate>
	<link>https://www.scipedia.com/public/Chinesta_et_al_2016a</link>
	<title><![CDATA[Unified formulation of a family of iterative solvers for power system analysis]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">This paper illustrates the construction of a new class of iterative solvers for power flow calculations based on the method of Alternating Search Directions. This method is fitted to the particular algebraic structure of the power flow problem, resulting from the combination of a globally linear set of equations and nonlinear local relations imposed by power conversion devices such as loads and generators. The choice of the search directions is shown to be crucial for improving the overall robustness of the solver. A noteworthy advantage is that constant search directions yield stationary methods that, in contrast with Newton or Quasi-Newton methods, do not require the evaluation of the Jacobian matrix. Such directions can be selected to enforce convergence to the high-voltage operative solution. The method is explained through an intuitive example illustrating how the proposed generalized formulation is able to include other nonlinear solvers that are classically used for power flow analysis, thus offering a unified view on the topic. Numerical experiments are performed on publicly available benchmarks for large distribution and transmission systems.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Hospital-Bravo_et_al_2016b</guid>
	<pubDate>Tue, 05 Nov 2019 14:44:33 +0100</pubDate>
	<link>https://www.scipedia.com/public/Hospital-Bravo_et_al_2016b</link>
	<title><![CDATA[Numerical modeling of undersea acoustics using a partition of unity method with plane waves enrichment]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 17px; font-style: normal; font-weight: 400;">A new 2D numerical model to predict the underwater acoustic propagation is obtained by exploring the potential of the Partition of Unity Method (PUM) enriched with plane waves. The aim of the work is to obtain sound pressure level distributions when multiple operational noise sources are present, in order to assess the acoustic impact over the marine fauna. The model takes advantage of the suitability of the PUM for solving the Helmholtz equation, especially for the practical case of large domains and medium frequencies. The seawater acoustic absorption and the acoustic reflectance of the sea surface and sea bottom are explicitly considered, and perfectly matched layers (PML) are placed at the lateral artificial boundaries to avoid spurious reflections. The model includes semi-analytical integration rules which are adapted to highly oscillatory integrands with the aim of reducing the computational cost of the integration step. In addition, we develop a novel strategy to mitigate the ill-conditioning of the elemental and global system matrices. Specifically, we compute a low-rank approximation of the local space of solutions, which in turn reduces the number of degrees of freedom, the CPU time and the memory footprint. Numerical examples are presented to illustrate the capabilities of the model and to assess its accuracy.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Steffens_et_al_2016a</guid>
	<pubDate>Tue, 05 Nov 2019 14:36:36 +0100</pubDate>
	<link>https://www.scipedia.com/public/Steffens_et_al_2016a</link>
	<title><![CDATA[Adaptividade e estimativas de erro orientadas por metas aplicadas a um benchmark test de propagação de onda]]></title>
	<description><![CDATA[<div style="color: rgb(17, 17, 17); font-size: 14.56px; font-style: normal; font-weight: 400;">The objective of this article is to study the efficiency and robustness of adaptive techniques and goal-oriented error estimates for a benchmark test. The techniques used here are based on a simple post-process of the finite element approximations. The goal-oriented error estimates are obtained by analysing the direct problem and an auxiliary problem, which is related to the specific quantity of interest. The proposed procedure is valid for linear and non-linear quantities. In addition, different representations of the error are discussed and the influence of the dispersion error is analysed. The numerical results show that the error estimates provide good approximations to the true error and that the proposed adaptive refinement technique leads to a faster reduction of the error.</div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
</div>
</channel>
</rss>