<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[Scipedia: Documents published in 2020]]></title>
	<link>https://www.scipedia.com/sitemaps/year/2020?offset=2400</link>
	<atom:link href="https://www.scipedia.com/sitemaps/year/2020?offset=2400" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Mauro_et_al_2016a</guid>
	<pubDate>Wed, 22 Apr 2020 10:12:10 +0200</pubDate>
	<link>https://www.scipedia.com/public/Mauro_et_al_2016a</link>
	<title><![CDATA[Physical and numerical modeling of labyrinth weirs with polyhedral bottom]]></title>
	<description><![CDATA[<p><span style="color: rgb(34, 34, 34); font-size: 13px; font-style: normal; font-weight: 400;">In order to comply with the new safety regulations, a significant number of Spanish dam spillways must be upgraded. In this scenario, and with the aim of increasing the discharge capacity with a reduced investment, innovative designs become interesting solutions. One of these innovative designs is the labyrinth weir. The POLILAB project is being carried out with the objective of optimizing the design of labyrinth weirs; the physical and numerical tests presented in this article were developed within this framework. The most relevant results are related to the discharge capacity, the flow pattern and the structural reinforcement achieved by the implementation of a polyhedral bottom.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Esqueda_et_al_2020a</guid>
	<pubDate>Wed, 22 Apr 2020 01:48:07 +0200</pubDate>
	<link>https://www.scipedia.com/public/Esqueda_et_al_2020a</link>
	<title><![CDATA[Solving anisotropic Poisson problems via discrete exterior calculus]]></title>
	<description><![CDATA[<p>We present a local formulation for 2D Discrete Exterior Calculus (DEC) similar to that of the Finite Element Method (FEM), which allows a natural treatment of material heterogeneity (assigning material properties element by element). It also allows us to deduce, in a principled manner, anisotropic fluxes and the DEC discretization of the pullback of 1-forms by the anisotropy tensor, i.e., we deduce the discrete action of the anisotropy tensor on primal 1-forms. Due to the local formulation, the computational cost of DEC is similar to that of the Finite Element Method with Linear interpolating functions (FEML). The numerical DEC solutions to the anisotropic Poisson equation show numerical convergence, are very close to those of FEML on fine meshes and are slightly better than those of FEML on coarse meshes.</p>]]></description>
	<dc:creator>Rafael Herrera</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Vicente_et_al_2017a</guid>
	<pubDate>Tue, 21 Apr 2020 17:59:31 +0200</pubDate>
	<link>https://www.scipedia.com/public/Vicente_et_al_2017a</link>
	<title><![CDATA[An Interactive Tool for Automatic Predimensioning and Numerical Modeling of Arch Dams]]></title>
	<description><![CDATA[<p><span style="font-size: 16px; font-style: normal; font-weight: 400; text-align: justify;">The construction of double-curvature arch dams is an attractive solution from an economic viewpoint due to the reduced volume of concrete necessary for their construction as compared to conventional gravity dams. Due to their complex geometry, many criteria have arisen for their design. However, the most widespread methods are based on recommendations of traditional technical documents that do not take into account the possibilities of computer-aided design. In this paper, an innovative software tool to design FEM models of double-curvature arch dams is presented. The tool provides several capabilities: simplified geometry creation (interesting for academic purposes), preliminary geometrical design, highly detailed model construction, and stochastic calculation (introducing uncertainty associated with material properties and other parameters). This paper focuses especially on geometrical issues, describing the functionalities of the tool and the fundamentals of the design procedure with regard to the following aspects: topography, reference cylinder, excavation depth, crown cantilever thickness and curvature, horizontal arch curvature, excavation and concrete mass volume, and additional elements such as joints or spillways. Examples of application to two Spanish dams are presented and the results obtained are analyzed.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lezcano-Valverde_et_al_2017a</guid>
	<pubDate>Tue, 21 Apr 2020 17:40:26 +0200</pubDate>
	<link>https://www.scipedia.com/public/Lezcano-Valverde_et_al_2017a</link>
	<title><![CDATA[Development and validation of a multivariate predictive model for rheumatoid arthritis mortality using a machine learning approach]]></title>
	<description><![CDATA[<div id="Abs1-section"><div id="Abs1-content" style="margin-bottom: 40px;"><p style="margin-bottom: 28px;">We developed and independently validated a rheumatoid arthritis (RA) mortality prediction model using the machine learning method Random Survival Forests (RSF). Two independent cohorts from Madrid (Spain) were used: the Hospital Cl&iacute;nico San Carlos RA Cohort (HCSC-RAC; training; 1,461 patients), and the Hospital Universitario de La Princesa Early Arthritis Register Longitudinal study (PEARL; validation; 280 patients). Demographic and clinical-related variables collected during the first two years after disease diagnosis were used. 148 and 21 patients from HCSC-RAC and PEARL died during a median follow-up time of 4.3 and 5.0&nbsp;years, respectively. Age at diagnosis, median erythrocyte sedimentation rate, and number of hospital admissions showed the highest predictive capacity. Prediction errors in the training and validation cohorts were 0.187 and 0.233, respectively. A survival tree identified five mortality risk groups using the predicted ensemble mortality. After 1 and 7 years of follow-up, time-dependent specificity and sensitivity in the validation cohort were 0.79&ndash;0.80 and 0.43&ndash;0.48, respectively, using the cut-off value dividing the two lower risk categories. Calibration curves showed overestimation of the mortality risk in the validation cohort. In conclusion, we were able to develop a clinical prediction model for RA mortality using RSF, providing evidence for further work on external validation.</p></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Vicente_et_al_2019d</guid>
	<pubDate>Tue, 21 Apr 2020 17:12:39 +0200</pubDate>
	<link>https://www.scipedia.com/public/Vicente_et_al_2019d</link>
	<title><![CDATA[Analysis of temporal and spatial variability of water balance components in a Mediterranean river basin: the Spanish part of the Duero basin.]]></title>
	<description><![CDATA[<p>Water balances have been identified as an effective tool for assessing the amount of water and its availability in a region. In particular, the System of Environmental-Economic Accounting for Water (SEEA-W) has emerged as one of the most complete methodologies to undertake this task. Despite the great effort made to offer a common methodology for elaborating the SEEA-W tables, there are multiple aspects that prevent its standardization in a generalized way. This leads both to multiple approaches to the realization of water balances and to the introduction of high uncertainty in the accounts obtained. The variability of the data series, both temporal and spatial, is one of the factors that can introduce large uncertainty into the water balances if it is not correctly evaluated. This aspect has not yet been specifically studied, and most of the case studies in the literature only provide the water balance values for a short period (usually a natural or hydrological year) and for a specific basin. When data corresponding to several years are available, average values are typically used for the calculations. In the present study, the SEEA-W methodology was applied to the Spanish part of the Duero river basin. Each component of the water asset accounts was calculated using simulated data from three models: SIMPA (rainfall-runoff model), ASTER (snow-related processes model) and SIMGES (water allocation and management model). These models were developed and calibrated by different Spanish entities. Then, the uncertainty associated with the temporal and spatial irregularity of the data series was also estimated. The Duero basin is located in a semi-arid Mediterranean region, where hydrological processes are affected by high intra- and inter-annual variability. To analyse this issue, both monthly and annual resolutions were used. The former allowed a significant intra-annual variability to be identified. The latter was analysed using a period of 26 years, from 1980 to 2006. The results show a high variability for certain hydrological components, which can lead to a great degree of uncertainty. In addition, each hydrological year was classified as dry, average or wet, and water balances were calculated under the same classification. This approach would improve the estimation of water resources, including the prediction of future climate change scenarios. Regarding the spatial variability, the water resources of the basin, like those of many of the large Mediterranean rivers, are not evenly distributed and showed large differences in hydrological processes. These differences are caused by climatic and geomorphological factors (the area comprises an extensive central plateau with a peripheral mountain arch). In addition, the analysis considers the different human activities coexisting within the basin and the heterogeneity of their spatial distribution. Splitting the water balances into more disaggregated territorial units could help to detect the areas with both positive and negative balances and improve water resources management in the whole basin.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Hariri-Ardebili_Salazar_2019a</guid>
	<pubDate>Tue, 21 Apr 2020 17:04:35 +0200</pubDate>
	<link>https://www.scipedia.com/public/Hariri-Ardebili_Salazar_2019a</link>
	<title><![CDATA[Engaging soft computing in material and modeling uncertainty quantification of dam engineering problems]]></title>
	<description><![CDATA[<div id="Abs1-section"><div id="Abs1-content" style="margin-bottom: 40px;"><p style="margin-bottom: 1.5em;">Due to the complex nature of nearly all infrastructures (and more specifically concrete dams), uncertainty quantification is an inseparable part of risk assessment. Uncertainties may arise in different forms depending on their nature, such as epistemic and aleatory, or spatial and temporal. The objective of this paper is to focus on the material and modeling uncertainties, and to couple them with soft computing techniques aiming to reduce the computational burden of conventional Monte Carlo-based finite element simulations. Several scenarios are considered in which the concrete and foundation material properties, the water level, and the dam geometry are assumed as random variables. Five soft computing techniques (i.e., random forests, boosted regression trees, multivariate adaptive regression splines, artificial neural networks, and support vector machines) are employed to predict various quantities of interest based on different training sizes. It is argued that the artificial neural network is the most accurate algorithm in the majority of cases, with enough accuracy to be useful in reliability analysis as a complement to numerical models. Training sets of 200 samples are enough for reaching useful accuracy in most cases. For the simple prediction tasks, the results were predicted with less than 1% error. It is observed that increasing the number of input parameters increases the prediction error. The partial dependence plots identified the most sensitive variables in dam design, which were consistent with the physics of the problem. Finally, several practical recommendations are provided for future applications.</p></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Salazar_Crookston_2019a</guid>
	<pubDate>Tue, 21 Apr 2020 16:53:54 +0200</pubDate>
	<link>https://www.scipedia.com/public/Salazar_Crookston_2019a</link>
	<title><![CDATA[A performance comparison of machine learning algorithms for arced labyrinth spillways]]></title>
	<description><![CDATA[<p><span style="color: rgb(34, 34, 34); font-size: 13px; font-style: normal; font-weight: 400;">Labyrinth weirs provide an economic option for flow control structures in a variety of applications, including as spillways at dams. The cycles of labyrinth weirs are typically placed in a linear configuration. However, numerous projects place labyrinth cycles along an arc to take advantage of reservoir conditions and dam alignment, and to reduce construction costs, for example by narrowing the spillway chute. Practitioners must optimize more than 10 geometric variables when developing a head&ndash;discharge relationship. This is typically done using the following tools: empirical relationships, numerical modeling, and physical modeling. This study applied a new tool, machine learning, to the analysis of geometrically complex arced labyrinth weirs. In this work, both neural networks (NN) and random forests (RF) were employed to estimate the discharge coefficient for this specific type of weir, with the results of physical modeling experiments used for training. Machine learning results are critiqued in terms of accuracy, robustness, interpolation, applicability, and new insights into the hydraulic performance of arced labyrinth weirs. Results demonstrate that NN and RF algorithms can be used as a unique expression for curve fitting, although neural networks outperformed random forests when interpolating among the tested geometries.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Vilalta_et_al_2017a</guid>
	<pubDate>Tue, 21 Apr 2020 15:49:40 +0200</pubDate>
	<link>https://www.scipedia.com/public/Vilalta_et_al_2017a</link>
	<title><![CDATA[Statistical analysis for rupture risk prediction of abdominal aortic aneurysms (AAA) based on its morphometry]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">The morphometry of abdominal aortic aneurysms (AAA) has been recognized as one of the main factors that may predispose them to rupture. The variation of AAA morphometry over time induces modifications in hemodynamic behavior which, in turn, alter the spatial and temporal distribution of hemodynamic stresses on the aneurysmal wall, establishing a bidirectional process that can influence the rupture phenomenon. In order to evaluate potential correlations between the main geometric parameters characterizing the AAA and hemodynamic stresses, 13 unruptured patient-specific AAA models were created. For the AAA geometric characterization, twelve indices based on the lumen centerline were defined and determined. The computation of the temporal and spatial distributions of hemodynamic stresses was conducted through computational fluid dynamics. Statistical techniques were used to assess the relationships between the hemodynamic parameters and the different geometric indices of the AAA. Regression analyses were conducted to obtain linear predictor models for hemodynamic stresses using the different indices defined in this paper as predictor variables. The statistical analysis confirmed that the length L, the asymmetry and the saccular index significantly influence the hemodynamic stresses. The results obtained show the potential of statistical techniques in predicting the rupture risk of patient-specific AAA.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Vaquero_et_al_2016b</guid>
	<pubDate>Tue, 21 Apr 2020 15:40:46 +0200</pubDate>
	<link>https://www.scipedia.com/public/Vaquero_et_al_2016b</link>
	<title><![CDATA[Assessment of the neck angles of abdominal aortic aneurysms: a study in 507 patients]]></title>
	<description><![CDATA[<p>Angulation in different planes of the neck of abdominal aortic aneurysms may be a limiting factor for the implantation of different prostheses in the treatment of this pathology. It is therefore necessary to assess the angles in different planes in routine studies, especially by angio-CT, prior to the planning of the procedures. The study evaluated the measurements of 507 patients with abdominal aortic aneurysms who were treated with endovascular procedures and for whom complete information on neck angulation was available. In a prospective and descriptive study, the situation of this aortic sector was evaluated, providing information about trends in the morphological presentation of the aorta in patients with this pathology.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Vaquero_et_al_2016a</guid>
	<pubDate>Tue, 21 Apr 2020 15:31:33 +0200</pubDate>
	<link>https://www.scipedia.com/public/Vaquero_et_al_2016a</link>
	<title><![CDATA[Study of the angulation of the visceral abdominal arteries at their emergence from the abdominal aorta]]></title>
	<description><![CDATA[<p>Recently, fenestrated and branched stent grafts have been developed in order to treat aneurysmal pathology endovascularly at the level of the aorta where the visceral arteries emerge. For the implantation of stent grafts, precise planimetry of the origin and orientation of the visceral branches is necessary, firstly for the correct manufacture of the endoprosthesis and secondly for its precise placement. Knowledge of the emergence of the vessels from the aortic wall and their orientation is obtained by determining the angle of emergence. The data obtained for planning the endovascular treatment of 37 patients were analysed in order to describe this sector.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Soudah_et_al_2016b</guid>
	<pubDate>Tue, 21 Apr 2020 15:20:48 +0200</pubDate>
	<link>https://www.scipedia.com/public/Soudah_et_al_2016b</link>
	<title><![CDATA[Estimation of wall shear stress using 4D flow cardiovascular MRI and computational fluid dynamics]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In the last few years, wall shear stress (WSS) has arisen as a new diagnostic indicator in patients with arterial disease. There is substantial evidence that the WSS plays a significant role, together with hemodynamic indicators, in the initiation and progression of vascular diseases. Estimation of WSS values, therefore, may be of clinical significance, and the methods employed for its measurement are crucial for the clinical community. Recently, four-dimensional (4D) flow cardiovascular magnetic resonance (CMR) has been widely used in a number of applications for the visualization and quantification of blood flow, and although the sensitivity of blood flow measurement has increased, it is not yet able to provide an accurate three-dimensional (3D) WSS distribution. The aim of this work is to evaluate the aortic blood flow features and the associated WSS by the combination of 4D flow cardiovascular magnetic resonance (4D CMR) and computational fluid dynamics techniques. In particular, in this work, we used the 4D CMR to obtain the spatial domain and the boundary conditions needed to estimate the WSS within the entire thoracic aorta using computational fluid dynamics. Similar WSS distributions were found for the cases simulated. A sensitivity analysis was done to check the accuracy of the method. 4D CMR is becoming a reliable tool for estimating the WSS within the entire thoracic aorta using computational fluid dynamics, and the combination of both techniques may provide the ideal tool to help tackle these and other problems related to wall shear estimation.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Soudah_et_al_2015b</guid>
	<pubDate>Tue, 21 Apr 2020 15:05:31 +0200</pubDate>
	<link>https://www.scipedia.com/public/Soudah_et_al_2015b</link>
	<title><![CDATA[Mechanical stress in abdominal aortic aneurysms using artificial neural networks]]></title>
	<description><![CDATA[<dl><dd>
	<div>The combination of numerical modeling and artificial intelligence (AI) in bioengineering processes is a promising pathway for the further development of the bioengineering sciences. The objective of this work is to use artificial neural networks (ANN) to reduce the long computational times needed in the analysis of shear stress in abdominal aortic aneurysms (AAA) by the finite element method (FEM). For that purpose, two different neural networks are created. The first neural network (Mesh Neural Network, MNN) creates the aneurysm geometry in terms of four geometrical factors (asymmetry factor, aneurysm diameter, aneurysm thickness, aneurysm length). The second neural network (Tension Neural Network, TNN) combines the results of the first neural network with the arterial pressure (a new factor) to obtain the maximum stress distribution (output variable) in the aneurysm wall. The use of FEM for the analysis and design of bioengineering processes often entails high computational costs, but if this technique is combined with artificial intelligence, such as neural networks, the simulation time is significantly reduced. The shear stress obtained by the artificial neural models developed in this work achieved 95% accuracy with respect to the wall stress obtained by FEM, while the computational time is significantly reduced compared to FEM.</div>
	</dd>
</dl>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Gavidia_et_al_2014a</guid>
	<pubDate>Tue, 21 Apr 2020 14:34:37 +0200</pubDate>
	<link>https://www.scipedia.com/public/Gavidia_et_al_2014a</link>
	<title><![CDATA[Modeling human tissues : an efficient integrated methodology]]></title>
	<description><![CDATA[<dl><dd>
	<div>Geometric models of human body organs are obtained from imaging techniques like computed tomography (CT) and magnetic resonance imaging (MRI) that allow an accurate visualization of the inner body, thus providing relevant information about their structure and pathologies. Next, these models are used to generate surface and volumetric meshes, which can be used further for visualization, measurement, biomechanical simulation, rapid prototyping and prosthesis design. However, going from geometric models to numerical models is not an easy task, as it is necessary to apply image-processing techniques to handle the complexity of human tissues and to obtain simplified geometric models, thus reducing the complexity of the subsequent numerical analysis. In this work, an integrated and efficient methodology to obtain models of soft tissues like the gray and white matter of the brain and hard tissues like the jaw and spine bones is proposed. The methodology is based on image-processing algorithms chosen according to certain characteristics of the tissue: type, intensity profiles and boundary quality. First, low-quality images are improved by using enhancement algorithms to reduce image noise and to increase the contrast of structures. Then, hybrid segmentation for tissue identification is applied through a multi-stage approach. Finally, the obtained models are resampled and exported in formats readable by computer-aided design (CAD) tools. In CAD environments, these data are used to generate discrete models using the finite element method (FEM) or other numerical methods like the boundary element method (BEM). Results have shown that the proposed methodology is useful and versatile for obtaining accurate geometric models that can be used in several clinical cases to obtain relevant quantitative and qualitative information.</div>
	</dd>
</dl>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Soudah_et_al_2013a</guid>
	<pubDate>Tue, 21 Apr 2020 14:25:58 +0200</pubDate>
	<link>https://www.scipedia.com/public/Soudah_et_al_2013a</link>
	<title><![CDATA[CFD modelling of abdominal aortic aneurysm on hemodynamic loads using a realistic geometry with CT]]></title>
	<description><![CDATA[<dl><dd>
	<div>The objective of this study is to find a correlation between the abdominal aortic aneurysm (AAA) geometric parameters, wall shear stress (WSS), abdominal flow patterns, intraluminal thrombus (ILT), and AAA arterial wall rupture using computational fluid dynamics (CFD). Real 3D AAA models were created by three-dimensional (3D) reconstruction of in vivo acquired computed tomography (CT) images from 5 patients. Based on the 3D AAA models, high-quality volume meshes were created using an optimal tetrahedral aspect ratio for the whole domain. In order to quantify the WSS and the recirculation inside the AAA, a 3D CFD simulation using finite element analysis was used. The CFD computation was performed assuming that the arterial wall is rigid and that the blood is a homogeneous Newtonian fluid with a density of 1050 kg/m3 and a constant viscosity. Parallelization procedures were used in order to increase the performance of the CFD calculations. A relation between AAA geometric parameters (asymmetry index (&beta;), saccular index, deformation diameter ratio, and tortuosity index (e)) and hemodynamic loads was observed, and it could be used as a potential predictor of AAA arterial wall rupture and potential ILT formation.</div>
	</dd>
</dl>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Vilalta_et_al_2012a</guid>
	<pubDate>Tue, 21 Apr 2020 14:11:52 +0200</pubDate>
	<link>https://www.scipedia.com/public/Vilalta_et_al_2012a</link>
	<title><![CDATA[Hemodynamic features associated with abdominal aortic aneurysm (AAA) geometry]]></title>
	<description><![CDATA[<dl><dd>
	<div>Recent findings have shown that the maximum diameter of an abdominal aortic aneurysm (AAA) and its growth rate are not entirely reliable indicators of rupture potential. The AAA geometrical shape and size may be related to the rupture risk, which is a clinical manifestation of the balance between the forces generated by blood flow within the AAA and its strength. This study aims at assessing the hemodynamic features associated with the geometry of AAAs.</div>
	</dd>
</dl>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Marti_Ryzhakov_2020a</guid>
	<pubDate>Tue, 21 Apr 2020 13:39:50 +0200</pubDate>
	<link>https://www.scipedia.com/public/Marti_Ryzhakov_2020a</link>
	<title><![CDATA[An explicit/implicit Runge–Kutta-based PFEM model for the simulation of thermally coupled incompressible flows]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">A semi-explicit Lagrangian scheme for the simulation of thermally coupled incompressible flow problems is presented. The model relies on combining an explicit multi-step solver for the momentum equation with an implicit heat equation solver. The computational cost of the model is reduced via the application of an efficient strategy adopted for the solution of the momentum/continuity system by the authors in their previous work. The applicability of the method to solving thermo-mechanical problems is studied via various numerical examples.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ryzhakov_et_al_2019a</guid>
	<pubDate>Tue, 21 Apr 2020 13:32:42 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ryzhakov_et_al_2019a</link>
	<title><![CDATA[Computational modeling of the fluid flow and the flexible intimal flap in type B aortic dissection via a monolithic arbitrary Lagrangian/Eulerian fluid-structure interaction model]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In the present work, we perform numerical simulations of the fluid flow in type B aortic dissection (AD), accounting for the flexibility of the intimal flap. The interaction of the flow with the intimal flap is modeled using a monolithic arbitrary Lagrangian/Eulerian fluid-structure interaction model. The model relies on choosing velocity as the kinematic variable in both domains (fluid and solid), which facilitates the coupling. The fluid flow velocity and pressure evolution at different locations are studied and compared against the experimental evidence and the formerly published numerical simulation results. Several tear configurations are analyzed. Details of the fluid flow in the vicinity of the tears are highlighted. The influence of the tear size upon the fluid flow and the flap deformation is discussed.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ryzhakov_Marti_2019a</guid>
	<pubDate>Tue, 21 Apr 2020 13:22:35 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ryzhakov_Marti_2019a</link>
	<title><![CDATA[An explicit–implicit finite element model for the numerical solution of incompressible Navier–Stokes equations on moving grids]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In this paper an efficient mesh-moving Finite Element model for the simulation of incompressible flow problems is proposed. The model is based on a combination of an explicit multi-step scheme (Runge&ndash;Kutta) with an implicit treatment of the pressure. The pressure is decoupled from the velocity and is solved for only once per time step, minimizing the computational cost of the implicit step. A novel solution algorithm alleviating the time step restrictions faced by the majority of former Lagrangian approaches is presented. The method is examined with respect to its space and time accuracy as well as its computational cost. Two numerical examples are solved: one involving a problem on a domain with fixed boundaries and the other one dealing with a free surface flow. It is shown that the method can be easily parallelized.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ryzhakov_Marti_2018a</guid>
	<pubDate>Tue, 21 Apr 2020 13:17:14 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ryzhakov_Marti_2018a</link>
	<title><![CDATA[A semi-explicit multi-step method for solving incompressible Navier&ndash;Stokes equations]]></title>
	<description><![CDATA[<p>The fractional step method is a technique that results in a computationally efficient implementation of Navier&ndash;Stokes solvers. In finite element-based models, it is often applied in conjunction with implicit time integration schemes. In the framework of finite difference and finite volume methods, on the other hand, the fractional step method has been successfully applied to obtain predictor-corrector semi-explicit methods. In the present work, we derive a scheme based on using the fractional step technique in conjunction with explicit multi-step time integration within the framework of Galerkin-type stabilized finite element methods. We show that under certain assumptions, a Runge&ndash;Kutta scheme equipped with the fractional step leads to an efficient semi-explicit method, where the pressure Poisson equation is solved only once per time step. Thus, the computational cost of the implicit step of the scheme is minimized. The numerical example solved validates the resulting scheme and provides insights into its accuracy and computational efficiency.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ryzhakov_Jarauta_2015a</guid>
	<pubDate>Tue, 21 Apr 2020 11:59:08 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ryzhakov_Jarauta_2015a</link>
	<title><![CDATA[An embedded approach for immiscible multi-fluid problems]]></title>
	<description><![CDATA[<p>An embedded formulation for the simulation of immiscible multi-fluid problems is proposed. The method is particularly designed for handling gas-liquid systems. Gas and liquid are modeled using the Eulerian and the Lagrangian formulation, respectively. The Lagrangian domain (liquid) moves on top of the fixed Eulerian mesh. The location of the material interface is exactly defined by the position of the boundary mesh of the Lagrangian domain. The individual fluid problems are solved in a partitioned fashion and are coupled using a Dirichlet-Neumann algorithm. Representation of the pressure discontinuity across the interface does not require any additional techniques, as it is an intrinsic feature of the method. The proposed formulation is validated, and its potential applications are shown.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ortega_2004a</guid>
	<pubDate>Mon, 20 Apr 2020 10:21:23 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ortega_2004a</link>
	<title><![CDATA[Método de puntos finitos. Un análisis sobre el efecto de los parámetros que definen las nubes en aproximaciones de segundo y cuarto orden]]></title>
	<description><![CDATA[<p>This work investigates the influence of the different parameters that define the local approximation in the Finite Point Method and their relation to its quality. The analysis is carried out on the solution of a three-dimensional Poisson problem used as a benchmark case. Based on this case, a method is proposed to adjust the parameters of the local approximation through the individual definition of the weighting function in each point cloud. This makes the definition of the parameters more flexible and achieves the best approximation for the given problem. The analysis covers second- and fourth-order approximation functions and both structured and unstructured point distributions.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ortega_Sacco_2003a</guid>
	<pubDate>Mon, 20 Apr 2020 09:44:07 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ortega_Sacco_2003a</link>
	<title><![CDATA[Solución de las ecuaciones de flujo compresible mediante el método de puntos finitos]]></title>
	<description><![CDATA[<p>This work consists of the numerical solution of the two-dimensional, compressible, inviscid flow equations by means of the Finite Point Method (FPM). The FPM belongs to the family of &ldquo;meshless&rdquo; methods, whose main characteristic is that no mesh or grid is required to perform the numerical discretization. In FPM, the unknown function and its derivatives are obtained exclusively from the coordinates of a set of points belonging to the analysis domain. This, together with a point collocation procedure for deriving the discrete system of equations, makes FPM a truly meshless method. The temporal discretization of the equations is carried out with a two-step, second-order explicit scheme of Lax-Wendroff type. Second- and fourth-order Jameson artificial diffusion is introduced into the equations. A Flux-Corrected Transport scheme is also implemented in order to regulate the amount of diffusion added to the scheme and achieve higher accuracy and quality of the numerical solution. The performance of the developed algorithm is illustrated by solving several numerical examples.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Llacay_Peffer_2019b</guid>
	<pubDate>Fri, 17 Apr 2020 15:19:17 +0200</pubDate>
	<link>https://www.scipedia.com/public/Llacay_Peffer_2019b</link>
	<title><![CDATA[Impact of short-sales in stock market efficiency]]></title>
	<description><![CDATA[<p>In the last two decades, the hedge fund sector has experienced spectacular growth, to the point that it is currently estimated to move more than 50% of the daily volume of stock markets. In contrast to other financial institutions, hedge funds are subject to less restrictive regulations which, in particular, allow them to sell short. As they exploit asset mispricings, their activity is thought to contribute to market efficiency. In this paper we study the impact that short sales have on the informational efficiency of a financial market. This can be used not only to assess the effect that hedge fund actions have on financial markets, but also the consequences of regulatory measures such as short-selling restrictions or bans. Building on an agent-based market, the simulation results indicate that short sales are beneficial to market efficiency, although the market does not become completely efficient even when the whole population can sell short.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Llacay_Peffer_2019a</guid>
	<pubDate>Fri, 17 Apr 2020 15:15:14 +0200</pubDate>
	<link>https://www.scipedia.com/public/Llacay_Peffer_2019a</link>
	<title><![CDATA[Impact of Basel III Countercyclical Measures on Financial Stability: An Agent-Based Model]]></title>
	<description><![CDATA[<p>The financial system is inherently procyclical, as it amplifies the course of economic cycles, and one of the factors suggested to exacerbate this procyclicality is the Basel regulation on capital requirements. After the recent credit crisis, international regulators have turned their eyes to countercyclical regulation as a solution to avoid similar episodes in the future. Countercyclical regulation aims at preventing excessive risk taking during booms to reduce the impact of losses suffered during recessions, for example by increasing capital requirements during good times to improve the resilience of financial institutions at the downturn. The Basel Committee has already moved towards the adoption of countercyclical measures on a global scale: the Basel III Accord, published in December 2010, considerably revises the capital requirement rules to reduce their procyclicality. These new countercyclical measures will not be completely implemented until 2019, so their impact cannot be evaluated yet, and it is a crucial question whether they will be effective in reducing procyclicality and the emergence of crisis episodes such as the one experienced in 2007-08. For this reason, we present in this article an agent-based model aimed at analysing the effect of two countercyclical mechanisms introduced in Basel III: the countercyclical buffer and the stressed VaR. In particular, we focus on the impact of these mechanisms on the procyclicality induced by market risk requirements and, more specifically, by value-at-risk models, as this is an issue of crucial importance that has received scant attention in the modeling literature. The simulation results suggest that the adoption of both of these countercyclical measures improves market stability and reduces the emergence of crisis episodes.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Llacay_Peffer_2018a</guid>
	<pubDate>Fri, 17 Apr 2020 14:45:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Llacay_Peffer_2018a</link>
	<title><![CDATA[Using realistic trading strategies in an agent-based stock market model]]></title>
	<description><![CDATA[<p>The use of agent-based models (ABMs) to simulate social systems and, in particular, financial markets has increased in recent years. ABMs of financial markets are usually validated by checking the ability of the model to reproduce a set of empirical stylised facts. However, other common-sense evidence is available which is often not taken into account, resulting in models which are valid but not sensible. In this paper we present an ABM of a stock market which incorporates this type of common-sense evidence and implements realistic trading strategies based on the practitioners&rsquo; literature. We then validate the model using a comprehensive approach consisting of four steps: assessment of face validity, sensitivity analysis, calibration and validation of model outputs.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Llacay_2017a</guid>
	<pubDate>Fri, 17 Apr 2020 14:36:39 +0200</pubDate>
	<link>https://www.scipedia.com/public/Llacay_2017a</link>
	<title><![CDATA[Impact of value-at-risk models on market stability]]></title>
	<description><![CDATA[<p>Financial institutions around the world use value-at-risk (VaR) models to manage their market risk and calculate their capital requirements under the Basel Accords. VaR models, as any other risk management system, are meant to keep financial institutions out of trouble by, among other things, guiding investment decisions within established risk limits so that the viability of a business is not put unduly at risk in a sharp market downturn. However, some researchers have warned that the widespread use of VaR models creates negative externalities in financial markets, as it can feed market instability and result in what has been called endogenous risk, that is, risk caused and amplified by the system itself, rather than being the result of an exogenous shock. This paper aims at analyzing the potential of VaR systems to amplify market disturbances with an agent-based model of fundamentalist and technical traders who manage their risk with a simple VaR model and must reduce their positions when the risk of their portfolio goes above a given threshold. We analyse the impact of the widespread use of VaR systems on different financial instability indicators and confirm that VaR models may induce a particular price dynamics that raises market volatility. These dynamics, which we have called &lsquo;VaR cycles&rsquo;, take place when a sufficient number of traders reach their VaR limit and are forced to simultaneously reduce their portfolio; the reductions cause a sudden price movement, raise volatility and force even more traders to liquidate part of their positions. The model shows that the market is more prone to suffer VaR cycles when investors use a short-term horizon to calculate asset volatility or a not-too-extreme value for their risk threshold.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Peffer_Llacay_2007a</guid>
	<pubDate>Fri, 17 Apr 2020 14:24:50 +0200</pubDate>
	<link>https://www.scipedia.com/public/Peffer_Llacay_2007a</link>
	<title><![CDATA[Higher-order simulations: Strategic investment under model-induced price patterns]]></title>
	<description><![CDATA[<p>The trading and investment decision processes in financial markets are becoming ever more dependent on the use of valuation and risk models. In the case of risk management, for instance, modelling practice has become quite homogeneous, and the question arises as to the effect this has on the price formation process. Furthermore, sophisticated investors who have private information about the use and characteristics of these models might be able to make superior gains in such an environment. The aim of this article is to test this hypothesis in a stylised market, where a strategic investor trades on information about the valuation and risk management models used by other market participants. Simulation results show that under certain market conditions, such a &lsquo;higher-order&rsquo; strategy generates higher profits than standard fundamental and momentum strategies that do not draw on information about model use.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Llacay_Peffer_2005a</guid>
	<pubDate>Fri, 17 Apr 2020 14:12:07 +0200</pubDate>
	<link>https://www.scipedia.com/public/Llacay_Peffer_2005a</link>
	<title><![CDATA[Simulación basada en agentes del efecto inestabilizador de las técnicas VaR]]></title>
	<description><![CDATA[<p>In recent years there have been a large number of financial crises with severe effects on the economies of the affected countries. To avoid or minimize these negative effects, it is necessary to understand which factors can trigger a financial crisis. However, the literature only offers qualitative explanations or highly stylized analytical models of limited practical use. In this article we propose an agent-based model that allows us to study, through simulation, the aggregate effects that emerge from the interaction of the investors in a financial market. Our goal is to use this model to analyze how the widespread use of VaR-based risk management models influences the dynamics of a market. The simulation results support the thesis that the homogeneous use of these models is one of the factors that can induce episodes of financial instability.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Bazilevs_et_al_2017b</guid>
	<pubDate>Fri, 17 Apr 2020 13:10:59 +0200</pubDate>
	<link>https://www.scipedia.com/public/Bazilevs_et_al_2017b</link>
	<title><![CDATA[A new formulation for air-blast fluid–structure interaction using an immersed approach: part II—coupling of IGA and meshfree discretizations]]></title>
	<description><![CDATA[<p>In this two-part paper we begin the development of a new class of methods for modeling fluid&ndash;structure interaction (FSI) phenomena for air blast. We aim to develop accurate, robust, and practical computational methodology, which is capable of modeling the dynamics of air blast coupled with the structure response, where the latter involves large, inelastic deformations and disintegration into fragments. An immersed approach is adopted, which leads to an a-priori monolithic FSI formulation with intrinsic contact detection between solid objects, and without formal restrictions on the solid motions. In Part I of this paper, the core air-blast FSI methodology suitable for a variety of discretizations is presented and tested using standard finite elements. Part II of this paper focuses on a particular instantiation of the proposed framework, which couples isogeometric analysis (IGA) based on non-uniform rational B-splines and a reproducing-kernel particle method (RKPM), which is a meshfree technique. The combination of IGA and RKPM is felt to be particularly attractive for the problem class of interest due to the higher-order accuracy and smoothness of both discretizations, and relative simplicity of RKPM in handling fragmentation scenarios. A collection of mostly 2D numerical examples is presented in each of the parts to illustrate the good performance of the proposed air-blast FSI framework.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Davari_et_al_2019a</guid>
	<pubDate>Thu, 16 Apr 2020 13:25:32 +0200</pubDate>
	<link>https://www.scipedia.com/public/Davari_et_al_2019a</link>
	<title><![CDATA[A cut finite element method for the solution of the full-potential equation with an embedded wake]]></title>
	<description><![CDATA[<p>Potential flow solvers represent an appealing alternative for the simulation of non-viscous subsonic flows. In order to deliver accurate results, such techniques require explicitly prescribing the so-called Kutta condition, as well as adding a special treatment on the &ldquo;wake&rdquo; of the body. The wake is traditionally modelled by introducing a gap in the CFD mesh, which requires an often laborious meshing effort. The novelty of the proposed work is to embed the wake within the CFD domain. The approach has obvious advantages in the context of aeroelastic optimization, where the position of the wake may change due to evolutionary steps of the geometry. This work presents a simple, yet effective, method for the imposition of the embedded wake boundary condition. The presented method preserves the possibility of employing iterative techniques in the solution of the linear problems which stem from the discretization. Validation and verification of the solver are performed for a NACA 0012 airfoil.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Baumgartner_et_al_2018a</guid>
	<pubDate>Thu, 16 Apr 2020 13:19:17 +0200</pubDate>
	<link>https://www.scipedia.com/public/Baumgartner_et_al_2018a</link>
	<title><![CDATA[A robust algorithm for implicit description of immersed geometries within a background mesh]]></title>
	<description><![CDATA[<p>The paper presents a robust algorithm which makes it possible to implicitly describe and track immersed geometries within a background mesh. The background mesh is assumed to be unstructured and discretized by tetrahedra. The contained geometry is assumed to be given as a triangulated surface. Within the background mesh, the immersed geometry is described implicitly using a discontinuous distance function based on a level-set approach. This distance function can represent both &ldquo;double-sided&rdquo; geometries, like membrane or shell structures, and &ldquo;single-sided&rdquo; objects for which an enclosed volume is univocally defined. For the second case, the discontinuous distance function is complemented by a continuous signed distance function, where ray casting is applied to identify the closed volume regions. Furthermore, adaptive mesh refinement is employed to provide the necessary resolution of the background mesh. The proposed algorithm can handle arbitrarily complicated geometries, possibly containing modeling errors (i.e., gaps, overlaps or a non-unique orientation of surface normals). Another important advantage of the algorithm is the embarrassingly parallel nature of its operations, which allows for a straightforward parallelization using MPI. All developments were implemented within the open-source framework &ldquo;KratosMultiphysics&rdquo; and are available under the BSD license. The capabilities of the implementation are demonstrated with various application examples involving practice-oriented geometries. The results show that the algorithm is able to describe highly complicated geometries within a background mesh, while the approximation quality may be directly controlled by mesh refinement.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Davari_et_al_2017a</guid>
	<pubDate>Thu, 16 Apr 2020 13:09:24 +0200</pubDate>
	<link>https://www.scipedia.com/public/Davari_et_al_2017a</link>
	<title><![CDATA[Three embedded techniques for finite element heat flow problem with embedded discontinuities]]></title>
	<description><![CDATA[<p>The present paper explores the solution of a heat conduction problem considering discontinuities embedded within the mesh and aligned at arbitrary angles with respect to the mesh edges. Three alternative approaches are proposed as solutions to the problem. The difference between these approaches and alternatives such as the eXtended Finite Element Method (X-FEM) is that the current proposal attempts to preserve the global matrix graph in order to improve performance. The first two alternatives comprise an enrichment of the Finite Element (FE) space obtained through the addition of new local degrees of freedom that allow capturing discontinuities within the element. The new degrees of freedom are statically condensed prior to assembly, so that the graph of the final system is not changed. The third approach is based on the use of modified FE shape functions that substitute the standard ones on the cut elements. The imposition of both Neumann and Dirichlet boundary conditions is considered at the embedded interface. The results of all the proposed methods are then compared with a reference solution obtained using the standard FE method on a mesh containing the actual discontinuity.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Mora_et_al_2006a</guid>
	<pubDate>Thu, 16 Apr 2020 11:43:57 +0200</pubDate>
	<link>https://www.scipedia.com/public/Mora_et_al_2006a</link>
	<title><![CDATA[Open tools for electromagnetic simulation programs]]></title>
	<description><![CDATA[<h3>Purpose</h3><p>The aim of the paper is to propose three computer tools to create electromagnetic simulation programs: GiD, Kratos and EMANT.</p><h3>Design/methodology/approach</h3><p>The paper presents a review of numerical methods for solving electromagnetic problems and a presentation of the main features of GiD, Kratos and EMANT.</p><h3>Findings</h3><p>The paper provides information about three computer tools to create electromagnetic simulation packages: GiD (geometrical modeling, data input, visualisation of results), Kratos (C++ library) and EMANT (finite element software for solving Maxwell equations).</p><h3>Research limitations/implications</h3><p>The proposed platforms are in development and future work should be done to validate the codes for specific problems and to provide extensive manual and tutorial information.</p><h3>Practical implications</h3><p>The tools could be easily learnt by different user profiles: from end-users interested in simulation programs to developers of simulation packages.</p><h3>Originality/value</h3><p>This paper offers an integrated vision of open and easily customisable tools for the demands of different user profiles.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lantada_et_al_2020a</guid>
	<pubDate>Thu, 16 Apr 2020 10:16:33 +0200</pubDate>
	<link>https://www.scipedia.com/public/Lantada_et_al_2020a</link>
	<title><![CDATA[Disaster risk reduction: a decision-making support tool based on the morphological analysis]]></title>
	<description><![CDATA[<p>Risk management for natural hazards is a multidimensional and complex problem, since it requires the knowledge and experience of several disciplines. The effectiveness of risk management can be analyzed, prompting action through the identification of the weaknesses of the urban area. This article proposes a methodology based on morphological analysis to support decision-making on disaster risk management, taking as a starting point the results of a holistic evaluation of seismic risk. The results of the holistic evaluation of risk are obtained by aggravating the physical risk according to contextual conditions, such as socio-economic fragility and lack of resilience. Consequently, risk mitigation can be performed through the reduction of the potential damage and consequences involved, and through the improvement of social conditions. The proposed methodology allows prioritizing risk reduction strategies according to i) the performance level of the component indicators involved in the Disaster Risk Management index, DRMi; ii) the physical risk factors dependent on the potential damage; and iii) the aggravating factors involved in the aggravating coefficient. Moreover, it involves 35 strategies to reduce the physical risk and the aggravating social conditions of the urban area. The proposed methodology has been applied to the city of M&eacute;rida (Venezuela), located within an area of high seismic activity. The performance level of the indicators involved in the DRMi was evaluated through a survey of local experts. As a result, eleven strategies have been identified to reduce the potential damage and to improve the social conditions of this city.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Hernandez_et_al_2018a</guid>
	<pubDate>Thu, 16 Apr 2020 09:49:17 +0200</pubDate>
	<link>https://www.scipedia.com/public/Hernandez_et_al_2018a</link>
	<title><![CDATA[Methodologies and tools of risk management: Hurricane Risk index (HRi)]]></title>
	<description><![CDATA[<p>Mexico is recognized worldwide for the extension of its coastlines and their tourist exploitation. Quintana Roo is a Mexican state with a shoreline of approximately 800 km, known as the Mexican Caribbean. The hurricanes that form in the Atlantic Ocean are the main natural hazard to which this region is exposed. In this article, hurricane risk is evaluated for coastal cities through the definition of a system of indicators. Based on this indicator system, the Hurricane Risk Index (HRi) is calculated. The system allows the construction of vulnerability indices for different dimensions: physical, environmental, social, economic, cultural and institutional. The obtained results can contribute to the definition of public prevention policies and actions to reduce the levels of vulnerability and increase the resilience of these communities. The indicator model is applied to two coastal cities of the Mexican Caribbean: Mahahual, obtaining an HRi of 82.13%, and Chetumal, obtaining an HRi of 69.31%, corresponding to the impact of Hurricane Dean in 2007. The proposed indicator system can be replicated for different hazards.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cardona_et_al_2018d</guid>
	<pubDate>Wed, 15 Apr 2020 17:51:13 +0200</pubDate>
	<link>https://www.scipedia.com/public/Cardona_et_al_2018d</link>
	<title><![CDATA[Latin American and Caribbean earthquakes in the GEM’s Earthquake Consequences Database (GEMECD)]]></title>
	<description><![CDATA[<p>Among the activities developed under the framework of the Global Earthquake Model, the development of a global consequences database was included. It was defined with the objective of serving as a public repository of the damage and losses occurring to different types of elements in a selected list of earthquakes with epicentres at varying locations around the globe, and also of being used as a benchmark for the development of vulnerability models that capture specific characteristics of the building typologies in each country. The online earthquake consequences database has information on 71 events, 16 of which occurred in the Latin America and Caribbean region. A complete and comprehensive review and data gathering process was carried out for these selected earthquakes, accounting for the different aspects and dimensions considered of interest besides the physical damage, such as casualties, socio-economic implications, damage and disruption to critical facilities and infrastructure, together with the occurrence of secondary events triggered by the ground shaking, such as landslides and tsunamis. Where possible, the damage and casualties were geo-located using a standardized approach and included in the database. The contributions of the Latin America and Caribbean region to the database were at the same time a challenge and an opportunity to collect, review, compile and standardize, up to a certain point, damage data from previous earthquakes, in addition to being a step forward in the field of open data.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Jaramillo_et_al_2016a</guid>
	<pubDate>Wed, 15 Apr 2020 17:05:56 +0200</pubDate>
	<link>https://www.scipedia.com/public/Jaramillo_et_al_2016a</link>
	<title><![CDATA[Evaluation of social context integrated into the study of seismic risk for urban areas]]></title>
	<description><![CDATA[<p>Usually, seismic risk evaluation involves only the estimation of the expected physical damage, casualties or economic losses. This article presents a holistic approach to seismic risk assessment that also evaluates social fragility and lack of resilience. The complementary evaluation of social-context aspects, such as the distribution of the population, the absence of economic and social development, deficiencies in institutional management, and the lack of capacity for response and recovery, yields a seismic risk evaluation better suited to support decision-making processes for risk reduction. The proposed methodology allows a standardized assessment of social fragility and lack of resilience by means of an aggravating coefficient that summarizes the characteristics of the social context using fuzzy sets and the Analytic Hierarchy Process (AHP). The selection of 20 social indicators is based on the indicators used by the urban observatories of the United Nations and other social researchers. These indicators are classified into six categories according to the social item they describe. Applying a determination-level analysis, thirteen prevailing social indicators are selected. The proposed methodology has been applied to the cities of Merida (Venezuela) and Barcelona (Spain).</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Marulanda_et_al_2012a</guid>
	<pubDate>Wed, 15 Apr 2020 16:21:49 +0200</pubDate>
	<link>https://www.scipedia.com/public/Marulanda_et_al_2012a</link>
	<title><![CDATA[Probabilistic assessment of seismic risk of Barcelona, Spain, using the CAPRA platform]]></title>
	<description><![CDATA[<p>The Comprehensive Approach for Probabilistic Risk Assessment (CAPRA) is a robust methodology for modeling risk which allows identifying the most important aspects of catastrophes. CAPRA evaluates the losses of the exposed elements using probabilistic metrics, such as the loss exceedance curve, the expected annual loss and the probable maximum loss, which are useful for multi-hazard risk analysis. The outcomes obtained with such a technical-scientific methodology are oriented to facilitate decision-making: they allow designing risk transfer instruments, evaluating cost&ndash;benefit ratios, and developing risk mitigation strategies and loss scenarios for emergency response. The CAPRA platform is described in this paper using the city of Barcelona, Spain, as a testbed. Nevertheless, the results included for this urban area are not only of scientific interest but also of practical interest, because they are useful to the Municipality in making risk reduction decisions.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Draft_Samper_669549469</guid>
	<pubDate>Wed, 15 Apr 2020 15:51:21 +0200</pubDate>
	<link>https://www.scipedia.com/public/Draft_Samper_669549469</link>
	<title><![CDATA[Comportamiento s&iacute;smico de los edificios de Lorca]]></title>
	<description><![CDATA[<p>After the Lorca earthquake of May 11, 2011, the Institut Geol&ograve;gic de Catalunya (IGC), the Technical University of Catalonia (UPC) and the Spanish Association for Earthquake Engineering (AEIS), together with the French (AFPS) and Portuguese (SPES) earthquake engineering associations, organized a technical visit in order to gather observations that would allow the methods used in seismic risk assessments of urban areas to be calibrated, verified and validated. This article summarizes the main observations regarding the vulnerability of residential buildings, as well as of buildings of special importance such as hospitals and schools.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Parada_et_al_2020a</guid>
	<pubDate>Wed, 15 Apr 2020 13:38:59 +0200</pubDate>
	<link>https://www.scipedia.com/public/Parada_et_al_2020a</link>
	<title><![CDATA[A fractional step method for computational aeroacoustics using weak imposition of Dirichlet boundary conditions]]></title>
	<description><![CDATA[<p>In this work we consider the approximation of the isentropic Navier&ndash;Stokes equations. The model we present is capable of taking into account acoustic and flow scales at once. After space and time discretizations have been chosen, it is very convenient from the computational point of view to design fractional step schemes in time so as to permit a segregated calculation of the problem unknowns. While these segregation schemes are well established for incompressible flows, much less is known in the case of isentropic flows. We discuss this issue in this article and, furthermore, we study the way to weakly impose Dirichlet boundary conditions via Nitsche&rsquo;s method. In order to avoid spurious reflections of the acoustic waves, Nitsche&rsquo;s method is combined with a non-reflecting boundary condition. Employing a purely algebraic approach to discuss the problem, some of the boundary contributions are treated explicitly and we explain how these are included in the different steps of the final algorithm. Numerical evidence shows that this explicit treatment does not have a significant impact on the convergence rate of the resulting time integration scheme. The equations of the formulation are solved using a subgrid scale technique based on a term-by-term stabilization.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Aguirre_et_al_2019a</guid>
	<pubDate>Wed, 15 Apr 2020 13:34:07 +0200</pubDate>
	<link>https://www.scipedia.com/public/Aguirre_et_al_2019a</link>
	<title><![CDATA[Pseudoplastic fluid flows for different Prandtl numbers: steady and time-dependent solutions]]></title>
	<description><![CDATA[<p>In this work, a variational multiscale (VMS) finite element formulation is used to numerically approximate natural convection, for Newtonian and power-law fluids, in a square cavity differentially heated from the sidewalls. The problem is characterized by going through a Hopf bifurcation once high enough Rayleigh numbers are reached, which initiates the transition between steady and time-dependent behavior; however, results found in the literature are only for the Prandtl number of air. The presented VMS formulation is validated against existing results and is used to study highly convective cases, to determine the flow conditions at which the flow becomes time dependent, and to establish new benchmark solutions for non-Newtonian fluid flows for different Pr and power-law indexes n. Solutions were found in the range 0.6</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Castillo_et_al_2019a</guid>
	<pubDate>Wed, 15 Apr 2020 13:21:00 +0200</pubDate>
	<link>https://www.scipedia.com/public/Castillo_et_al_2019a</link>
	<title><![CDATA[An oil sloshing study: adaptive fixed-mesh ALE analysis and comparison with experiments]]></title>
	<description><![CDATA[<p>We report in this work a numerical analysis of the sloshing of a square tank partially filled with a domestic vegetable oil. The tank is subjected to controlled motions with a shake table. The free-surface evolution is captured using ultrasonic sensors and an image-capturing method; only confirmed data within the error range are reported. The effects of filling depth and of the imposed amplitude and frequency on the sloshing wave pattern are specifically evaluated. The experiments also reveal the nonlinear wave behavior. The numerical model is based on a stabilized finite element method of the variational multi-scale type. The free surface is captured using a level set technique developed to be used with adaptive meshes in an Arbitrary Lagrangian&ndash;Eulerian framework. The numerical results are compared with the experiments for different sloshing conditions near the first sloshing mode. The simulations satisfactorily match the experiments, providing a reliable tool for the analysis of this kind of problem.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Tavares_et_al_2019a</guid>
	<pubDate>Wed, 15 Apr 2020 13:07:15 +0200</pubDate>
	<link>https://www.scipedia.com/public/Tavares_et_al_2019a</link>
	<title><![CDATA[A dynamic spring element model for the prediction of longitudinal failure of polymer composites]]></title>
	<description><![CDATA[<p>A spring element model that takes into account the dynamic effects associated with fibre failure in composite materials is presented. The model is implemented in a parallel environment for better performance in predicting the complex mechanisms associated with longitudinal tensile failure. The model is used to identify the changes in the stress fields around a broken fibre, representing fibre failure as a dynamic phenomenon. In light of these changes in the stress fields, cluster formation and failure development are analysed and the results are compared with the static spring element model. It is observed that the stress redistribution around a broken fibre is strongly dependent on the dynamic effects and varies with the material under study, especially if the matrix is considered linear elastic. The changes in local stress redistribution are seen to affect the material&#39;s tensile behaviour and cluster formation, these changes being larger for a material with an elastic matrix.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Baiges_et_al_2019b</guid>
	<pubDate>Wed, 15 Apr 2020 11:48:38 +0200</pubDate>
	<link>https://www.scipedia.com/public/Baiges_et_al_2019b</link>
	<title><![CDATA[Large-scale stochastic topology optimization using adaptive mesh refinement and coarsening through a two-level parallelization scheme]]></title>
	<description><![CDATA[<p>Topology optimization under uncertainty of large-scale continuum structures is a computational challenge due to the combination of large finite element models and uncertainty propagation methods. The former aims to address the ever-increasing complexity of more and more realistic models, whereas the latter is required to estimate the statistical metrics of the formulation. In this work, the computational burden of the problem is addressed using a sparse grid stochastic collocation method, to calculate the statistical metrics of the topology optimization under uncertainty formulation, and a parallel adaptive mesh refinement method, to efficiently solve each of the stochastic collocation nodes. A two-level parallel processing scheme (TOUU-PS2) is proposed to profit from parallel computation on distributed memory systems: the stochastic nodes are distributed through the distributed memory system, and the efficient computation of each stochastic node is performed by partitioning the problem using a domain decomposition strategy and solving each subdomain using an adaptive mesh refinement method. A dynamic load-balancing strategy is used to balance the workload between subdomains, thus increasing the parallel performance by reducing processor idle time. The topology optimization problem is addressed using the topological derivative concept in combination with a level-set method. The performance and scalability of the proposed methodology are evaluated using several numerical benchmarks and real-world applications, showing good performance and scalability up to thousands of processors.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Baiges_Bayona_2017a</guid>
	<pubDate>Tue, 14 Apr 2020 14:33:20 +0200</pubDate>
	<link>https://www.scipedia.com/public/Baiges_Bayona_2017a</link>
	<title><![CDATA[Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes]]></title>
	<description><![CDATA[<p>In this paper we present a novel algorithm for adaptive mesh refinement of computational physics meshes in a distributed-memory parallel setting. The proposed method is developed for nodally based parallel domain partitions, where the nodes of the mesh belong to a single processor whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper are its capability of handling multiple types of elements in two and three dimensions (triangular, quadrilateral, tetrahedral, and hexahedral), the small amount of memory required per processor, and its parallel scalability up to thousands of processors. The presented algorithm is also capable of dealing with non-balanced hierarchical refinement, where multirefinement-level jumps are possible between neighbor elements. An algorithm for load rebalancing is also presented, which allows us to move the hierarchical data structure between processors so that load unbalancing is kept below an acceptable level at all times during the simulation. A particular feature of the proposed algorithm is that arbitrary renumbering algorithms can be used in the load-rebalancing step, including both graph partitioning and space-filling renumbering algorithms. The presented algorithm is packed in the Fortran 2003 object-oriented library RefficientLib, whose interface, which allows it to be called from any computational physics code, is summarized. Finally, numerical experiments illustrating the performance and scalability of the algorithm are presented.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Kollmannsberger_et_al_2015a</guid>
	<pubDate>Tue, 14 Apr 2020 13:39:04 +0200</pubDate>
	<link>https://www.scipedia.com/public/Kollmannsberger_et_al_2015a</link>
	<title><![CDATA[Parameter-free, weak imposition of Dirichlet boundary conditions and coupling of trimmed and non-conforming patches]]></title>
	<description><![CDATA[<p>We present a parameter-free domain sewing approach for low-order as well as high-order finite elements. Its final form contains only primal unknowns; that is, the approach does not introduce additional unknowns at the interface. Additionally, it does not involve problem-dependent parameters that require estimation. The presented approach is symmetry preserving; that is, the resulting discrete form of an elliptic equation will remain symmetric and positive definite. It preserves the order of the underlying discretization, and we demonstrate high-order accuracy for problems with non-matching discretizations with respect to both the mesh size h and the polynomial degree p. We also demonstrate how the method may be used to model material interfaces, which may be curved and for which the interface does not coincide with the underlying mesh. This novel approach is presented in the context of the p-version and B-spline version of the finite cell method, an embedded domain method of high order, and compared with more classical methods such as the penalty method or Nitsche&#39;s method.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Neiva_et_al_2019a</guid>
	<pubDate>Tue, 14 Apr 2020 11:30:43 +0200</pubDate>
	<link>https://www.scipedia.com/public/Neiva_et_al_2019a</link>
	<title><![CDATA[A scalable parallel finite element framework for growing geometries: application to metal additive manufacturing]]></title>
	<description><![CDATA[<p>This work introduces an innovative parallel, fully-distributed finite element framework for growing geometries and its application to metal additive manufacturing. It is well-known that virtual part design and qualification in additive manufacturing requires highly-accurate multiscale and multiphysics analyses. Only high performance computing tools are able to handle such complexity in time frames compatible with time-to-market. However, efficiency, without loss of accuracy, has rarely held the centre stage in the numerical community. Here, in contrast, the framework is designed to adequately&nbsp;exploit the resources of high-end distributed-memory machines. It is grounded on three building blocks: (1) Hierarchical adaptive mesh refinement with octree-based meshes; (2) a parallel strategy to model the growth of the geometry; (3) state-of-the-art parallel iterative linear solvers. Computational experiments consider the heat transfer analysis at the part scale of the printing process by powder-bed technologies. After verification against a 3D benchmark, a strong-scaling analysis assesses performance and identifies major sources of parallel overhead. A third numerical example examines the efficiency and robustness of (2) in a curved 3D shape. Unprecedented parallelism and scalability were achieved in this work. Hence, this framework contributes to take on higher complexity and/or accuracy, not only of part-scale simulations of metal or polymer additive manufacturing, but also in welding, sedimentation, atherosclerosis, or any other physical problem where the physical domain of interest grows in time.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2019e</guid>
	<pubDate>Tue, 14 Apr 2020 11:25:32 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2019e</link>
	<title><![CDATA[Scalable solvers for complex electromagnetics problems]]></title>
	<description><![CDATA[<p>In this work, we present scalable balancing domain decomposition by constraints methods for linear systems arising from arbitrary-order edge finite element discretizations of multi-material and heterogeneous 3D problems. In order to enforce continuity across subdomains, we use a partition of the interface objects (edges and faces) into sub-objects determined by the variation of the physical coefficients of the problem. For multi-material problems, a constant-coefficient condition is enough to define this sub-partition of the objects. For arbitrarily heterogeneous problems, a relaxed version of the method is defined, in which we only require that the maximal contrast of the physical coefficient in each object be smaller than a predefined threshold. Besides, the addition of perturbation terms to the preconditioner is empirically shown to be effective in dealing with the case where the two coefficients of the model problem jump simultaneously across the interface. The new method, in contrast to existing approaches for problems in curl-conforming spaces, does not require spectral information whilst providing robustness with regard to coefficient jumps and heterogeneous materials. A detailed set of numerical experiments, which includes the application of the preconditioner to realistic 3D cases, shows excellent weak scalability properties of the implementation of the proposed algorithms.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Bonilla_Badia_2019a</guid>
	<pubDate>Tue, 14 Apr 2020 11:09:03 +0200</pubDate>
	<link>https://www.scipedia.com/public/Bonilla_Badia_2019a</link>
	<title><![CDATA[Maximum-principle preserving space–time isogeometric analysis]]></title>
	<description><![CDATA[<p>In this work we propose a nonlinear stabilization technique for convection&ndash;diffusion&ndash;reaction and pure transport problems discretized with space&ndash;time isogeometric analysis. The stabilization is based on a graph-theoretic artificial diffusion operator and a novel shock detector for isogeometric analysis. Stabilization in the time and space directions is performed similarly, which allows us to use high-order discretizations in time without any CFL-like condition. The method is proven to yield solutions that satisfy the discrete maximum principle (DMP) unconditionally for arbitrary order. In addition, the stabilization is linearity preserving in a space&ndash;time sense. Moreover, the scheme is proven to be Lipschitz continuous, ensuring that the nonlinear problem is well-posed. Solving large problems using a space&ndash;time discretization can become highly costly; therefore, we also propose a partitioned space&ndash;time scheme that allows us to select the length of every time slab and solve sequentially for every subdomain. As a result, the computational cost is reduced while the stability and convergence properties of the scheme remain unaltered. In addition, we propose a twice differentiable version of the stabilization scheme, which enjoys the same stability properties while the nonlinear convergence is significantly improved. Finally, the proposed schemes are assessed with numerical experiments; in particular, we consider steady and transient pure convection and convection&ndash;diffusion problems in one and two dimensions.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Olm_et_al_2019b</guid>
	<pubDate>Tue, 14 Apr 2020 10:57:56 +0200</pubDate>
	<link>https://www.scipedia.com/public/Olm_et_al_2019b</link>
	<title><![CDATA[On a general implementation of h- and p-adaptive curl-conforming finite elements]]></title>
	<description><![CDATA[<p>Edge (or N&eacute;d&eacute;lec) finite elements are theoretically sound and widely used by the computational electromagnetics community. However, their implementation, especially for high-order methods, is not trivial, since it involves many technicalities that are not properly described in the literature. To fill this gap, we provide a comprehensive description of a general implementation of edge elements of the first kind within the scientific software project FEMPAR. We cover in detail how to implement arbitrary-order (i.e., p-adaptive) elements on hexahedral and tetrahedral meshes. First, we set the three classical ingredients of the finite element definition by Ciarlet, both in the reference and the physical space: cell topologies, polynomial spaces and moments. With these ingredients, shape functions are automatically implemented by defining a judiciously chosen polynomial pre-basis that spans the local finite element space, combined with a change of basis to automatically obtain a canonical basis with respect to the moments at hand. Next, we discuss global finite element spaces, putting emphasis on the construction of global shape functions through oriented meshes, appropriate geometrical mappings, and equivalence classes of moments, in order to preserve the inter-element continuity of the tangential components of the magnetic field. Finally, we extend the proposed methodology to generate global curl-conforming spaces on non-conforming hierarchically refined (i.e., h-adaptive) meshes with arbitrary-order finite elements. Numerical results include experimental convergence rates that test the proposed implementation.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2019d</guid>
	<pubDate>Tue, 14 Apr 2020 10:16:59 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2019d</link>
	<title><![CDATA[Physics-based balancing domain decomposition by constraints for multi-material problems]]></title>
	<description><![CDATA[<p>In this work, we present a new variant of the balancing domain decomposition by constraints preconditioner that is robust for multi-material problems. We start with a well-balanced subdomain partition and, based on an aggregation of elements according to their physical coefficients, we end up with a finer physics-based (PB) subdomain partition. Next, we define corners, edges, and faces for this PB partition, and select some of them to enforce subdomain continuity (primal faces/edges/corners). When the physical coefficient in each PB subdomain is constant and the set of selected primal faces/edges/corners satisfies a mild condition on the existence of acceptable paths, we can show both theoretically and numerically that the condition number does not depend on the contrast of the coefficient across subdomains. An extensive set of numerical experiments in 2D and 3D for the Poisson and linear elasticity problems is provided to support our findings. In particular, we show robustness and weak scalability of the new preconditioner variant up to 8232 cores when applied to 3D multi-material problems with contrasts of the physical coefficient up to 10<sup>8</sup> and more than half a billion degrees of freedom. For the scalability analysis, we have exploited a highly scalable advanced inter-level overlapped implementation of the preconditioner that deals very efficiently with the coarse problem computation. The proposed preconditioner is compared against a state-of-the-art implementation of an adaptive BDDC method in PETSc for thermal and mechanical multi-material problems.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Olm_et_al_2019a</guid>
	<pubDate>Tue, 14 Apr 2020 10:09:06 +0200</pubDate>
	<link>https://www.scipedia.com/public/Olm_et_al_2019a</link>
	<title><![CDATA[Simulation of high temperature superconductors and experimental validation]]></title>
	<description><![CDATA[<p>In this work, we present a parallel, fully-distributed finite element numerical framework to simulate the low-frequency electromagnetic behaviour of superconducting devices, which efficiently exploits high performance computing platforms. We select the so-called H-formulation, which uses the magnetic field as a state variable. N&eacute;d&eacute;lec elements (of arbitrary order) are required for an accurate approximation of the H-formulation when modelling electromagnetic fields along interfaces between regions with high-contrast medium properties. An h-adaptive mesh refinement technique customized for N&eacute;d&eacute;lec elements leads to a structured fine mesh in areas of interest, whereas a smart coarsening is obtained in other regions. The composition of a tailored, robust, parallel nonlinear solver completes the exposition of the developed tools to tackle the problem. First, a comparison against experimental data is performed to show the ability of the finite element approximation to model the physical phenomena. Then, a selected state-of-the-art 3D benchmark is reproduced, focusing on the parallel performance of the algorithms.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2018c</guid>
	<pubDate>Tue, 14 Apr 2020 09:40:11 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2018c</link>
	<title><![CDATA[Mixed aggregated finite element methods for the unfitted discretization of the Stokes problem]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In this work, we consider unfitted finite element methods for the numerical approximation of the Stokes problem. It is well-known that these kinds of methods lead to arbitrarily ill-conditioned systems and poorly approximated fluxes on unfitted interfaces/boundaries. In order to solve these issues, we consider the recently proposed aggregated finite element method, originally motivated for coercive problems. However, the well-posedness of the Stokes problem is far more subtle and relies on a discrete inf-sup condition. We consider mixed finite element methods that satisfy the discrete version of the inf-sup condition for body-fitted meshes and analyze how the discrete inf-sup is affected when considering the unfitted case. We propose different aggregated mixed finite element spaces combined with simple stabilization terms, which can include pressure jumps and/or cell residuals, to fix the potential deficiencies of the aggregated inf-sup. We carry out a complete numerical analysis, which includes stability, optimal a priori error estimates, and condition number bounds that are not affected by the small cut cell problem. For the sake of conciseness, we have restricted the analysis to hexahedral meshes and discontinuous pressure spaces. A thorough numerical experimentation bears out the numerical analysis. The aggregated mixed finite element method is ultimately applied to two problems with nontrivial geometries.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Zuluaga_et_al_2020a</guid>
	<pubDate>Sun, 12 Apr 2020 16:18:29 +0200</pubDate>
	<link>https://www.scipedia.com/public/Zuluaga_et_al_2020a</link>
	<title><![CDATA[PHARMACOLOGICAL LEARNING PROCESS IN MEDICAL STUDENTS: EDUCATIONAL PERSPECTIVES AND DILEMMAS]]></title>
	<description><![CDATA[<div style="font-weight: 400; font-style: normal; font-size: 12.8px; text-align: justify;"><span style="font-size: 10.24px;">Pharmacology, the science that studies the interactions between drugs and living matter, is taught in medical schools during the basic-science years and in the transition to clinical training. In medical practice, it teaches how a chemical substance (active principle) provides benefits and helps to improve certain pathologies, as well as which substances should not be used. The teaching of this subject is of great importance in introducing students to clinical practice and in their subsequent professional development. Although the importance of this subject in the medical field is recognized, learning gaps and a lack of theoretical-practical correlation among students are evident: when knowledge must be applied to problem solving, very few students have an adequate picture of the most appropriate management in a specific scenario. It has been observed for several years that the educational process in pharmacology and therapeutics is insufficient, as evidenced by the conceptual gaps found in medical students and graduates. This global review of the teaching-learning process focuses on the most widely used pedagogical elements and on those most appropriate for achieving solid, deep learning of pharmacological terminology and concepts and their applicability, with the goal of academic and professional competence.</span></div>]]></description>
	<dc:creator>Juan Manuel Pérez-Agudelo</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Equihua-Anguiano_et_al_2020a</guid>
	<pubDate>Wed, 08 Apr 2020 19:32:03 +0200</pubDate>
	<link>https://www.scipedia.com/public/Equihua-Anguiano_et_al_2020a</link>
	<title><![CDATA[Diseño de túneles usando Elementos Finitos 2D y 3D y la influencia de los parámetros mecánicos y geométricos para el diseño]]></title>
	<description><![CDATA[<p>In practice, tunnel design uses analytical methods and numerical models; the latter are needed because tunnels, given their complexity and civil use, require solutions that provide a high level of safety. This paper presents a study of the influence of the geometric parameters and the mechanical soil properties on tunnel displacements when the finite element method (FEM) is used in two (2D) and three (3D) dimensions. The results were validated by comparing the displacements with those of an elastic analytical method. The constitutive model used was Mohr-Coulomb, and the soil parameters correspond to a soil in Mexico. The results show convergence towards a similar behavior for the elastic case and a difference when a perfectly elasto-plastic behavior is considered in design. As fundamental conclusions, it is observed that the 2D-FEM method can speed up computation time, and that it is feasible to quickly review a 3D tunnel using a 2D program. <b>Keywords:</b> finite elements; two dimensions; three dimensions; tunnels.</p>]]></description>
	<dc:creator>Luisa Equihua-Anguiano</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Verdugo_2018a</guid>
	<pubDate>Tue, 07 Apr 2020 17:46:36 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Verdugo_2018a</link>
	<title><![CDATA[Robust and scalable domain decomposition solvers for unfitted finite element methods]]></title>
	<description><![CDATA[<p>Unfitted finite element methods, e.g., extended finite element techniques or the so-called finite cell method, have a great potential for large-scale simulations, since they avoid the generation of body-fitted meshes and the use of graph partitioning techniques, two main bottlenecks for problems with non-trivial geometries. However, the linear systems that arise from these discretizations can be much more ill-conditioned, due to the so-called small cut cell problem. The state-of-the-art approach is to rely on sparse direct methods, which have quadratic complexity and are thus not well suited for large-scale simulations. To address this situation, in this work we investigate the use of domain decomposition preconditioners (balancing domain decomposition by constraints) for unfitted methods. We observe that a straightforward application of these preconditioners to the unfitted case exhibits very poor behavior. As a result, we propose a customization of the classical BDDC methods based on the stiffness weighting operator and an improved definition of the coarse degrees of freedom in the definition of the preconditioner. These changes lead to a robust and algorithmically scalable solver able to deal with unfitted grids. A complete set of complex 3D numerical experiments shows the good performance of the proposed preconditioners.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Olm_2018a</guid>
	<pubDate>Tue, 07 Apr 2020 17:40:59 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Olm_2018a</link>
	<title><![CDATA[Nonlinear parallel-in-time Schur complement solvers for ordinary differential equations]]></title>
	<description><![CDATA[<p>In this work, we propose a parallel-in-time solver for linear and nonlinear ordinary differential equations. The approach is based on an efficient multilevel solver of the Schur complement related to a multilevel time partition. For linear problems, the scheme leads to a fast direct method. Next, two different strategies for solving nonlinear ODEs are proposed. First, we consider a Newton method over the global nonlinear ODE, using the multilevel Schur complement solver at every nonlinear iteration. Second, we state the global nonlinear problem in terms of the nonlinear Schur complement (at an arbitrary level), and perform nonlinear iterations over it. Numerical experiments show that the proposed schemes are weakly scalable, i.e., we can efficiently exploit increasing computational resources to solve the same problem for more time steps.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2018b</guid>
	<pubDate>Tue, 07 Apr 2020 17:22:09 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2018b</link>
	<title><![CDATA[The aggregated unfitted finite element method for elliptic problems]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large-scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2017a</guid>
	<pubDate>Tue, 07 Apr 2020 17:11:22 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2017a</link>
	<title><![CDATA[Differentiable monotonicity-preserving schemes for discontinuous Galerkin methods on arbitrary meshes]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">This work is devoted to the design of interior penalty discontinuous Galerkin (dG) schemes that preserve maximum principles at the discrete level for the steady transport and convection&ndash;diffusion problems and the respective transient problems with implicit time integration. Monotonic schemes that combine explicit time stepping with dG space discretization are very common, but the design of such schemes for implicit time stepping is rare, and it had only been attained so far for 1D problems. The proposed scheme is based on a piecewise linear dG discretization supplemented with an artificial diffusion that linearly depends on a shock detector that identifies the troublesome areas. In order to define the new shock detector, we have introduced the concept of discrete local extrema. The diffusion operator is a graph-Laplacian, instead of the more common finite element discretization of the Laplacian operator, which is essential to keep monotonicity on general meshes and in multi-dimension. The resulting nonlinear stabilization is non-smooth and nonlinear solvers can fail to converge. As a result, we propose a smoothed (twice differentiable) version of the nonlinear stabilization, which allows us to use Newton with line search nonlinear solvers and dramatically improve nonlinear convergence. A theoretical numerical analysis of the proposed schemes shows that they satisfy the desired monotonicity properties. Further, the resulting operator is Lipschitz continuous and there exists at least one solution of the discrete problem, even in the non-smooth version. We provide a set of numerical results to support our findings.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Gutierrez-Santacreu_2017a</guid>
	<pubDate>Tue, 07 Apr 2020 17:04:16 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Gutierrez-Santacreu_2017a</link>
	<title><![CDATA[Convergence to suitable weak solutions for a finite element approximation of the Navier–Stokes equations with numerical subgrid scale modeling]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In this work we prove that weak solutions constructed by a variational multiscale method are suitable in the sense of Scheffer. In order to prove this result, we consider a subgrid model that enforces orthogonality between subgrid and finite element components. Further, the subgrid component must be tracked in time. Since this type of scheme introduces pressure stabilization, we have proved the result for equal-order velocity and pressure finite element spaces that do not satisfy a discrete inf-sup condition.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Olm_2017a</guid>
	<pubDate>Tue, 07 Apr 2020 16:56:08 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Olm_2017a</link>
	<title><![CDATA[Space-time balancing domain decomposition]]></title>
	<description><![CDATA[<p>In this work, we propose two-level space-time domain decomposition preconditioners for parabolic problems discretized using finite elements. They are motivated as an extension to space-time of balancing domain decomposition by constraints preconditioners. The key ingredients to be defined are the subassembled space and operator, the coarse degrees of freedom (DOFs) on which we want to enforce continuity among subdomains at the preconditioner level, and the transfer operator from the subassembled to the original finite element space. With regard to the subassembled operator, a perturbation of the time derivative is needed to end up with a well-posed preconditioner. The set of coarse DOFs includes the time average (over the space-time subdomain) of classical space constraints plus new constraints between consecutive subdomains in time. Numerical experiments show that the proposed schemes are weakly scalable in time, i.e., we can efficiently exploit increasing computational resources to solve more time steps in the same total elapsed time. Further, the scheme is also weakly space-time scalable, since it leads to asymptotically constant iterations when solving larger problems both in space and time. Excellent wall-clock-time weak scalability is achieved for space-time parallel solvers on some thousands of cores.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Bonilla_2017a</guid>
	<pubDate>Tue, 07 Apr 2020 16:47:32 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Bonilla_2017a</link>
	<title><![CDATA[Monotonicity-preserving finite element schemes based on differentiable nonlinear stabilization]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In this work, we propose a nonlinear stabilization technique for scalar conservation laws with implicit time stepping. The method relies on an artificial diffusion method, based on a graph-Laplacian operator. It is nonlinear, since it depends on a shock detector. Further, the resulting method is linearity preserving. The same shock detector is used to gradually lump the mass matrix. The resulting method is LED, positivity preserving, and also satisfies a global DMP. Lipschitz continuity has also been proved. However, the resulting scheme is highly nonlinear, leading to very poor nonlinear convergence rates. We propose a smooth version of the scheme, which leads to twice differentiable nonlinear stabilization schemes. It allows one to straightforwardly use Newton&rsquo;s method and obtain quadratic convergence. In the numerical experiments, steady and transient linear transport, and transient Burgers&rsquo; equation have been considered in 2D. Using the Newton method with the smooth version of the scheme, we can reduce the number of iterations by a factor of 10 to 20 relative to Anderson acceleration applied to the original non-smooth scheme. In any case, these properties hold only for the converged solution, not for the iterates. In this sense, we have also proposed the concept of projected nonlinear solvers, where a projection step is performed at the end of every nonlinear iteration onto an FE space of admissible solutions. The space of admissible solutions is the one that satisfies the desired monotonic properties (maximum principle or positivity).</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Colomes_Badia_2017a</guid>
	<pubDate>Tue, 07 Apr 2020 16:36:27 +0200</pubDate>
	<link>https://www.scipedia.com/public/Colomes_Badia_2017a</link>
	<title><![CDATA[Segregated Runge–Kutta time integration of convection-stabilized mixed finite element schemes for wall-unresolved LES of incompressible flows]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In this work, we develop a high-performance numerical framework for the large eddy simulation (LES) of incompressible flows. The spatial discretization of the nonlinear system is carried out using mixed finite element (FE) schemes supplemented with symmetric projection stabilization of the convective term and a penalty term for the divergence constraint. These additional terms introduced at the discrete level have been proved to act as implicit LES models. In order to perform meaningful wall-unresolved simulations, we consider a weak imposition of the boundary conditions using a Nitsche-type scheme, where the tangential component penalty term is designed to act as a wall law. Next, segregated Runge&ndash;Kutta (SRK) schemes (recently proposed by the authors for laminar flow problems) are applied to the LES simulation of turbulent flows. By the introduction of a penalty term on the trace of the acceleration, these methods exhibit excellent stability properties for both implicit and explicit treatment of the convective terms. SRK schemes are excellent for large-scale simulations, since they reduce the computational cost of the linear system solves by splitting velocity and pressure computations at the time integration level, leading to two uncoupled systems. The pressure system is a Darcy-type problem that can easily be preconditioned using a traditional block-preconditioning scheme that only requires a Poisson solver. In the end, only coercive systems have to be solved, which can be effectively preconditioned by multilevel domain decomposition schemes that are both optimal and scalable. The framework is applied to the Taylor&ndash;Green and turbulent channel flow benchmarks in order to prove the accuracy of the convection-stabilized mixed FEs as LES models and of the SRK time integrators. The scalability of the preconditioning techniques (in space only) has also been proven for one step of the SRK scheme for the Taylor&ndash;Green flow using uniform meshes. Moreover, a turbulent flow around a NACA profile is solved to show the applicability of the proposed algorithms to a realistic problem.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Nguyen_2016a</guid>
	<pubDate>Tue, 07 Apr 2020 15:52:23 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Nguyen_2016a</link>
	<title><![CDATA[Balancing domain decomposition by constraints and perturbation]]></title>
	<description><![CDATA[<p>In this paper, we formulate and analyze a perturbed formulation of the balancing domain decomposition by constraints (BDDC) method. We prove that the perturbed BDDC has the same polylogarithmic bound for the condition number as the standard formulation. Two types of properly scaled zero-order perturbations are considered: one uses a mass matrix, and the other uses a Robin-type boundary condition, i.e., a mass matrix on the interface. With perturbation, the well-posedness of the local Neumann problems and the global coarse problem is automatically guaranteed, and coarse degrees of freedom can be defined only for convergence purposes, not for well-posedness. This allows a much simpler implementation, as no complicated corner selection algorithm is needed. Minimal coarse spaces using only face or edge constraints can also be considered. They are very useful in extreme-scale calculations, where the coarse problem is usually the bottleneck that can jeopardize scalability. The perturbation also adds extra robustness, as the perturbed formulation works even when the constraints fail to eliminate a small number of subdomain rigid body modes from the standard BDDC space. This is extremely important when solving problems on unstructured meshes partitioned by automatic graph partitioners, since arbitrary disconnected subdomains are possible. Numerical results are provided to support the theoretical findings.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Camargo_et_al_2020a</guid>
	<pubDate>Mon, 06 Apr 2020 20:41:03 +0200</pubDate>
	<link>https://www.scipedia.com/public/Camargo_et_al_2020a</link>
	<title><![CDATA[Modeling amputee gait: analogy of the triple inverted pendulum]]></title>
	<description><![CDATA[<p>A mathematical model of transtibial amputee gait based on the inverted pendulum model is presented; the model shows how gait is affected when the prosthesis components vary. The lower limb with transtibial amputation is modelled as a triple pendulum with 6 segments and 7 degrees of freedom, and Lagrangian theory was used to determine the equations of motion of the system. With the system&#39;s equations, simulation software was designed to visualize a 3D animation of the patient in normal gait and with a prosthesis, as well as to perform kinematic analysis of the lower-limb joints and the forces applied to the surfaces. Spatio-temporal gait data are acquired, and the patient&#39;s weight and height, together with the lengths, diameters and materials of the prosthesis components, can be varied, making it possible to carry out the analysis for different subjects.</p>]]></description>
	<dc:creator>Lely A. Luengas</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Hierro_et_al_2016a</guid>
	<pubDate>Wed, 01 Apr 2020 17:55:21 +0200</pubDate>
	<link>https://www.scipedia.com/public/Hierro_et_al_2016a</link>
	<title><![CDATA[Shock capturing techniques for hp-adaptive finite elements]]></title>
	<description><![CDATA[<p>The aim of this work is to propose an hp-adaptive algorithm for discontinuous Galerkin methods that is capable of detecting discontinuities and sharp layers and of avoiding spurious oscillations of the solution around them. In order to control the spurious oscillations, artificial viscosity is used, with the particularity that it is only applied around the layers where the solution changes abruptly. To do so, a novel troubled-cell detector has been developed in order to mark the elements around those layers and to impose linear order in them. The detector takes advantage of the evolution of the value of the gradient through the adaptive process.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Colomes_Badia_2015a</guid>
	<pubDate>Wed, 01 Apr 2020 17:40:30 +0200</pubDate>
	<link>https://www.scipedia.com/public/Colomes_Badia_2015a</link>
	<title><![CDATA[Segregated Runge-Kutta methods for the incompressible Navier-Stokes equations]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">In this work, we propose Runge-Kutta time integration schemes for the incompressible Navier-Stokes equations with two salient properties. First, velocity and pressure computations are segregated at the time integration level, without the need to perform additional fractional step techniques that spoil high orders of accuracy. Second, the proposed methods keep the same order of accuracy for both velocities and pressures. The segregated Runge-Kutta methods are motivated as an implicit-explicit Runge-Kutta time integration of the projected Navier-Stokes system onto the discrete divergence-free space, and its re-statement in a velocity-pressure setting using a discrete pressure Poisson equation. We have analysed the preservation of the discrete divergence constraint for segregated Runge-Kutta methods and their relation (in their fully explicit version) with existing half-explicit methods. We have performed a detailed numerical experimentation for a wide set of schemes (from first to third order), including implicit and IMEX integration of viscous and convective terms, for incompressible laminar and turbulent flows. Further, segregated Runge-Kutta schemes with adaptive time stepping are proposed.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Smolentsev_et_al_2015a</guid>
	<pubDate>Wed, 01 Apr 2020 17:25:23 +0200</pubDate>
	<link>https://www.scipedia.com/public/Smolentsev_et_al_2015a</link>
	<title><![CDATA[An approach to verification and validation of MHD codes for fusion applications]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">We propose a new activity on verification and validation (V&amp;V) of MHD codes presently employed by the fusion community as a predictive capability tool for liquid metal cooling applications, such as liquid metal blankets. The important steps in the development of MHD codes starting from the 1970s are outlined first, and then basic MHD codes, which are currently in use by designers of liquid breeder blankets, are reviewed. A benchmark database of five problems has been proposed to cover a wide range of MHD flows, from laminar fully developed to turbulent, which are of interest for fusion applications: (A) 2D fully developed laminar steady MHD flow, (B) 3D laminar, steady developing MHD flow in a non-uniform magnetic field, (C) quasi-two-dimensional MHD turbulent flow, (D) 3D turbulent MHD flow, and (E) MHD flow with heat transfer (buoyant convection). Finally, we introduce important details of the proposed activities, such as basic V&amp;V rules and schedule. The main goal of the present paper is to help establish an efficient V&amp;V framework and to initiate benchmarking among interested parties. Comparisons of the results computed by the codes against analytical solutions and trusted experimental and numerical data, as well as code-to-code comparisons, will be presented and analyzed in a companion paper or papers.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Hierro_2015a</guid>
	<pubDate>Wed, 01 Apr 2020 17:02:47 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Hierro_2015a</link>
	<title><![CDATA[On discrete maximum principles for discontinuous Galerkin methods]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">The aim of this work is to propose a monotonicity-preserving method for discontinuous Galerkin (dG) approximations of convection&ndash;diffusion problems. To do so, a novel definition of discrete maximum principle (DMP) is proposed using the discrete variational setting of the problem, and we show that the fulfilment of this DMP implies that the minimum/maximum (depending on the sign of the forcing term) is on the boundary for multidimensional problems. Then, an artificial viscosity (AV) technique is designed for convection-dominant problems that satisfies the above-mentioned DMP. The noncomplete stabilized interior penalty dG method is proved to fulfil the DMP property for the one-dimensional linear case when adding such AV with certain parameters. The benchmarks for the constant values that satisfy the DMP are calculated and tested in the numerical experiments section. Finally, the method is applied to different test problems in one and two dimensions to show its performance.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2014d</guid>
	<pubDate>Wed, 01 Apr 2020 16:41:34 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2014d</link>
	<title><![CDATA[Block recursive LU preconditioners for the thermally coupled incompressible inductionless MHD problem]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">The thermally coupled incompressible inductionless magnetohydrodynamics (MHD) problem models the flow of an electrically conducting fluid under the influence of an external electromagnetic field with thermal coupling. This system of partial differential equations is strongly coupled and highly nonlinear for real cases of interest. Therefore, fully implicit time integration schemes are very desirable in order to capture the different physical scales of the problem at hand. However, solving the multiphysics linear systems of equations resulting from such algorithms is a very challenging task which requires efficient and scalable preconditioners. In this work, a new family of recursive block LU preconditioners is designed and tested for solving the thermally coupled inductionless MHD equations. These preconditioners are obtained after splitting the fully coupled matrix into one-physics problems for every variable (velocity, pressure, current density, electric potential and temperature) that can be optimally solved, e.g., using preconditioned domain decomposition algorithms. The main idea is to arrange the original matrix into an (arbitrary) 2 x 2 block matrix, and consider an LU preconditioner obtained by approximating the corresponding Schur complement. For every one of the diagonal blocks in the LU preconditioner, if it involves more than one type of unknown, we proceed the same way in a recursive fashion. This approach is stated in an abstract way, and can be straightforwardly applied to other multiphysics problems. Further, we precisely explain a flexible and general software design for the code implementation of this type of preconditioner.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Gutierrez-Santacreu_2014a</guid>
	<pubDate>Wed, 01 Apr 2020 16:23:09 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Gutierrez-Santacreu_2014a</link>
	<title><![CDATA[Convergence towards weak solutions of the Navier-Stokes equations for a finite element approximation with numerical subgrid-scale modelling]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Residual-based stabilized finite element (FE) techniques for the Navier-Stokes equations lead to numerical discretizations that provide convection stabilization as well as pressure stability without the need to satisfy an inf-sup condition. They can be motivated by using a variational multiscale (VMS) framework, based on the decomposition of the fluid velocity into a resolvable FE component plus a modelled subgrid-scale component. The subgrid closure acts as a large eddy simulation turbulence model, leading to accurate under-resolved simulations. However, even though VMS formulations are increasingly used in the applied FE community, their numerical analysis has been restricted to a priori estimates and convergence to smooth solutions only, via a priori error estimates. In this work, we prove that some versions of these methods (based on dynamic and orthogonal closures) also converge to weak (turbulent) solutions of the Navier-Stokes equations. These results are obtained by using compactness results in Bochner-Lebesgue spaces.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Hierro_2014a</guid>
	<pubDate>Wed, 01 Apr 2020 16:09:40 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Hierro_2014a</link>
	<title><![CDATA[On monotonicity-preserving stabilized finite element approximations of transport problems]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">The aim of this work is to design monotonicity-preserving stabilized finite element techniques for transport problems as a blend of linear and nonlinear (shock-capturing) stabilization. As linear stabilization, we consider and analyze a novel symmetric projection stabilization technique based on a local Scott-Zhang projector. Next, we design a weighting of the aforementioned linear stabilization such that, when combined with a finite element discretization enjoying a discrete maximum principle (usually attained via nonlinear stabilization), it does not spoil these monotonicity properties. Then, we propose novel nonlinear stabilization schemes in the form of an artificial viscosity method where the amount of viscosity is proportional to gradient jumps at either finite element boundaries or nodes. For the nodal scheme, we prove a discrete maximum principle for time-dependent multidimensional transport problems. Numerical experiments support the numerical analysis, and we show that the resulting methods provide excellent results. In particular, we observe that the proposed nonlinear stabilization techniques do an excellent job eliminating oscillations around shocks.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Baiges_2013a</guid>
	<pubDate>Tue, 31 Mar 2020 13:17:55 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Baiges_2013a</link>
	<title><![CDATA[Adaptive finite element simulation of incompressible flows by hybrid continuous-discontinuous Galerkin formulations]]></title>
	<description><![CDATA[<p>In this work we design hybrid continuous-discontinuous finite element spaces that permit discontinuities on nonmatching element interfaces of nonconforming meshes. Then we develop an equal-order stabilized finite element formulation for incompressible flows over these hybrid spaces, which combines the element interior stabilization of SUPG-type continuous Galerkin formulations and the jump stabilization of discontinuous Galerkin formulations. Optimal stability and convergence results are obtained. For the adaptive setting, we use a standard error estimator and marking strategy. Numerical experiments show the optimal accuracy of the hybrid algorithm for both uniformly and adaptively refined nonconforming meshes. The outcome of this work is a finite element formulation that can naturally be used on nonconforming meshes, as discontinuous Galerkin formulations, while keeping the much lower CPU cost of continuous Galerkin formulations.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2013d</guid>
	<pubDate>Tue, 31 Mar 2020 13:02:49 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2013d</link>
	<title><![CDATA[Unconditionally stable operator splitting algorithms for the incompressible magnetohydrodynamics (MHD) system discretized by a stabilized finite element formulation based on projections]]></title>
	<description><![CDATA[<p><span style="color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">In this article, we propose different splitting procedures for the transient incompressible magnetohydrodynamics (MHD) system that are unconditionally stable. We consider two levels of splitting: at the first level, we segregate the fluid pressure and magnetic pseudo-pressure from the computation of the vector fields; at the second level, the fluid velocity and induction fields are also decoupled. This way, we transform a fully coupled indefinite multi-physics system into a set of smaller definite ones, clearly reducing the CPU cost. With regard to the finite element approximation, we stick to an unconditionally convergent stabilized finite element formulation because it introduces convection stabilization, allows us to circumvent inf-sup conditions (clearly simplifying implementation issues), and is able to capture non-smooth solutions of the magnetic subproblem. However, residual-based finite element formulations are not suitable for segregation, because they lose the skew-symmetry of the off-diagonal blocks. Therefore, in this work we propose a novel term-by-term stabilization of the MHD system based on projections that is still unconditionally convergent.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_2012a</guid>
	<pubDate>Tue, 31 Mar 2020 12:48:15 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_2012a</link>
	<title><![CDATA[On stabilized finite element methods based on the Scott-Zhang projector: circumventing the inf-sup condition for the Stokes problem]]></title>
	<description><![CDATA[<dl><dd>
	<div>In this work we propose a stabilized finite element method that permits us to circumvent discrete inf-sup conditions, e.g. allowing equal-order interpolation. The type of method we propose belongs to the family of symmetric stabilization techniques, which are based on the introduction of additional terms that penalize the difference between some quantities, i.e. the pressure gradient in the Stokes problem, and their finite element projections. The key feature of the formulation we propose is the definition of the projection to be used, a non-standard Scott-Zhang projector that is well-defined for L1(Ω) functions. The resulting method has some appealing features: the projector is local, and nested meshes or enriched spaces are not required.</div>
	</dd>
</dl>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2011a</guid>
	<pubDate>Tue, 31 Mar 2020 12:07:13 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2011a</link>
	<title><![CDATA[An Overview on Numerical Analyses of Nematic Liquid Crystal Flows]]></title>
	<description><![CDATA[<p style="margin-bottom: 1.5em; color: rgb(51, 51, 51); font-size: 18px; font-style: normal; font-weight: 400; background-color: rgb(252, 252, 252);">The purpose of this work is to provide an overview of the most recent numerical developments in the field of nematic liquid crystals. The Ericksen-Leslie equations govern the motion of a nematic liquid crystal. This system, in its simplest form, consists of the Navier-Stokes equations coupled with an extra anisotropic stress tensor, which represents the effect of the nematic liquid crystal on the fluid, and a convective harmonic map equation. The sphere constraint must be enforced almost everywhere in order to obtain an energy estimate. Since an almost everywhere satisfaction of this restriction is not appropriate at a numerical level, two alternative approaches have been introduced: a penalty method and a saddle-point method. These approaches are suitable for their numerical approximation by finite elements, since a discrete version of the restriction is enough to prove the desired energy estimate.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Samper_2020a</guid>
	<pubDate>Tue, 31 Mar 2020 11:23:14 +0200</pubDate>
	<link>https://www.scipedia.com/public/Samper_2020a</link>
	<title><![CDATA[Finite element approximation of nematic liquid crystal flows using a saddle-point structure]]></title>
	<description><![CDATA[]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2009b</guid>
	<pubDate>Tue, 31 Mar 2020 11:01:34 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2009b</link>
	<title><![CDATA[Coupling Biot and Navier-Stokes equations for modelling fluid-poroelastic media interaction]]></title>
	<description><![CDATA[<div id="abstracts" style="font-size: 18px; color: rgb(46, 46, 46); font-style: normal; font-weight: 400;"><div id="aep-abstract-id15" lang="en" style="margin-bottom: 8px;"><div id="aep-abstract-sec-id16"><p style="margin-bottom: 16px;">The interaction between a fluid and a poroelastic structure is a complex problem that couples the Navier&ndash;Stokes equations with the Biot system. The finite element approximation of this problem is involved due to the fact that both subproblems are indefinite. In this work, we first design residual-based stabilization techniques for the Biot system, motivated by the variational multiscale approach. Then, we state the monolithic Navier&ndash;Stokes/Biot system with the appropriate transmission conditions at the interface. For the solution of the coupled system, we adopt both monolithic solvers and heterogeneous domain decomposition strategies. Different domain decomposition methods are considered and their convergence is analyzed for a simplified problem. We compare the efficiency of all the methods on a test problem that exhibits a large added-mass effect, as it happens in hemodynamics applications.</p></div></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2009a</guid>
	<pubDate>Tue, 31 Mar 2020 10:47:43 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2009a</link>
	<title><![CDATA[Robin-Robin preconditioned Krylov methods for fluid-structure interaction problems]]></title>
	<description><![CDATA[<div id="abstracts" style="font-size: 18px; color: rgb(46, 46, 46); font-style: normal; font-weight: 400;"><div id="aep-abstract-id15" lang="en" style="margin-bottom: 8px;"><div id="aep-abstract-sec-id16"><p style="margin-bottom: 16px;">In this work, we propose a Robin&ndash;Robin preconditioner combined with Krylov iterations for the solution of the interface system arising in fluid&ndash;structure interaction (FSI) problems. It can be seen as a partitioned FSI procedure and in this respect it generalizes the ideas introduced in [S. Badia, F. Nobile, C. Vergara, J. Comput. Phys. 227 (2008) 7027&ndash;7051]. We analyze the convergence of GMRES iterations with the Robin&ndash;Robin preconditioner on a model problem and compare its efficiency with some existing algorithms. The method is shown to be very efficient for many challenging fluid&ndash;structure interaction problems, such as those characterized by a large added-mass effect or by enclosed fluids. In particular, the possibility to solve balloon-type problems without any special treatment makes this algorithm very appealing compared to the computationally intensive existing approaches.</p></div></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2008d</guid>
	<pubDate>Tue, 31 Mar 2020 10:16:56 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2008d</link>
	<title><![CDATA[Modular vs non-modular preconditioners for fluid-structure systems with large added-mass effect]]></title>
	<description><![CDATA[<div id="abstracts" style="font-size: 18px; color: rgb(46, 46, 46); font-style: normal; font-weight: 400;"><div id="aep-abstract-id17" lang="en" style="margin-bottom: 8px;"><div id="aep-abstract-sec-id18"><p style="margin-bottom: 16px;">In this article we address the numerical simulation of fluid&ndash;structure interaction (FSI) problems featuring a large added-mass effect. We analyze different preconditioners for the coupled system matrix obtained after space&ndash;time discretization and linearization of the FSI problem. The classical Dirichlet&ndash;Neumann preconditioner has the advantage of &ldquo;modularity&rdquo; because it allows one to reuse existing fluid and structure codes with minimum effort (simple interface communication). Unfortunately, its performance is very poor in the case of large added-mass effects. Alternatively, we consider two non-modular approaches. The first one consists in preconditioning the coupled system with a suitable diagonal scaling combined with an ILUT preconditioner. The system is then solved by a Krylov method. The drawback of this procedure is that the combination of fluid and structure codes to solve the coupled system is not straightforward. The second non-modular approach we consider is a splitting technique based on an inexact block-LU factorization of the linear FSI system. The resulting algorithm computes the fluid velocity separately from the coupled pressure&ndash;structure system at each iteration, reducing the computational cost. Independently of the preconditioner, the efficiency of semi-implicit algorithms (i.e., those that treat geometric and fluid nonlinearities in an explicit way) is highlighted and their performance compared to that of implicit algorithms. All the methods are tested on three-dimensional blood-vessel systems. The algorithm combining the non-modular ILUT preconditioner with Krylov methods proved to be the fastest.</p></div></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2008c</guid>
	<pubDate>Tue, 31 Mar 2020 09:47:51 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2008c</link>
	<title><![CDATA[Fluid-structure partitioned procedures based on Robin transmission conditions]]></title>
	<description><![CDATA[<p><span style="color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;">In this article we design new partitioned procedures for fluid&ndash;structure interaction problems, based on Robin-type transmission conditions. The choice of the coefficient in the Robin conditions is justified via simplified models. The strategy is effective whenever an incompressible fluid interacts with a relatively thin membrane, as in hemodynamics applications. We analyze theoretically the new iterative procedures on a model problem, which represents a simplified blood-vessel system. In particular, the Robin&ndash;Neumann scheme exhibits enhanced convergence properties with respect to the existing partitioned procedures. The theoretical results are checked using numerical experimentation.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2008b</guid>
	<pubDate>Mon, 30 Mar 2020 17:39:34 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2008b</link>
	<title><![CDATA[On atomistic-to-continuum coupling by blending]]></title>
	<description><![CDATA[<div><div><p>A mathematical framework for the coupling of atomistic and continuum models by blending them over a subdomain subject to a constraint is developed. Using the framework, four classes of atomistic-to-continuum (AtC) blending methods are established, their consistency is studied, and their relative merits are discussed. In addition, the framework helps clarify the origin of ghost forces and formalizes the notion of a patch test. Numerical experiments with the AtC methods are used to illustrate the theoretical results.</p></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2008a</guid>
	<pubDate>Mon, 30 Mar 2020 17:20:53 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2008a</link>
	<title><![CDATA[Splitting methods based on algebraic factorization for fluid-structure interaction]]></title>
	<description><![CDATA[<p>We discuss in this paper the numerical approximation of fluid-structure interaction (FSI) problems dealing with strong added-mass effect. We propose new semi-implicit algorithms based on inexact block-$LU$ factorization of the linear system obtained after the space-time discretization and linearization of the FSI problem. As a result, the fluid velocity is computed separately from the coupled pressure-structure velocity system at each iteration, reducing the computational cost. We investigate explicit-implicit decomposition through algebraic splitting techniques originally designed for the FSI problem. This approach leads to two different families of methods which extend to FSI the algebraic pressure correction method and the Yosida method, two schemes that were previously adopted for pure fluid problems. Furthermore, we have considered the inexact factorization of the fluid-structure system as a preconditioner. The numerical properties of these methods have been tested on a model problem representing a blood-vessel system.&nbsp;</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Fish_et_al_2007a</guid>
	<pubDate>Mon, 30 Mar 2020 16:36:42 +0200</pubDate>
	<link>https://www.scipedia.com/public/Fish_et_al_2007a</link>
	<title><![CDATA[Concurrent AtC coupling based on a blend of the continuum stress and the atomistic force]]></title>
	<description><![CDATA[<p>&nbsp;</p><ul id="issue-navigation" style="margin-top: 0px; margin-right: 0px; margin-bottom: 16px !important; margin-left: 0px; padding: 0px; background-color: rgb(245, 245, 245) !important; font-size: 16px; color: rgb(46, 46, 46); font-style: normal; font-weight: 400; text-align: start;"></ul><p>&nbsp;</p><div id="abstracts" style="margin: 0px; padding: 0px; font-size: 18px; color: rgb(46, 46, 46); font-style: normal; font-weight: 400; text-align: start;"><div id="aep-abstract-id19" lang="en" style="margin: 0px 0px 8px; padding: 0px;"><div id="aep-abstract-sec-id20" style="margin: 0px; padding: 0px;"><p style="margin: 0px 0px 16px; padding: 0px;">A concurrent atomistic to continuum (AtC) coupling method is presented in this paper. The problem domain is decomposed into an atomistic sub-domain where fine scale features need to be resolved, a continuum sub-domain which can adequately describe the macroscale deformation and an overlap interphase sub-domain that has a blended description of the two. The problem is formulated in terms of equilibrium equations with a blending between the continuum stress and the atomistic force in the interphase. Coupling between the continuum and the atomistics is established by imposing constraints between the continuum solution and the atomistic solution over the interphase sub-domain in a weak sense. Specifically, in the examples considered here, the atomistic domain is modeled by the aluminum embedded atom method (EAM) inter-atomic potential developed by Ercolessi and Adams [F. Ercolessi, J.B. Adams, Interatomic potentials from first-principles calculations: the force-matching method, Europhys. Lett. 26 (1994) 583] and the continuum domain is a linear elastic model consistent with the EAM potential. The formulation is subjected to patch tests to demonstrate its ability to represent the constant strain modes and the rigid body modes. 
Numerical examples are illustrated with comparisons to reference atomistic solution.</p>
<p>&nbsp;</p>
</div></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_et_al_2007a</guid>
	<pubDate>Mon, 30 Mar 2020 14:58:21 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_et_al_2007a</link>
	<title><![CDATA[A force-based blending model for atomistic-to-continuum coupling]]></title>
	<description><![CDATA[<p style="font-size: 12px; text-align: justify; color: rgb(76, 69, 75); font-style: normal; font-weight: 400;">A method for coupling atomistic and continuum models across a subdomain, or bridge region, is presented. Coupling is effected through a force-based blending model. The method properly accounts for the atomistic and continuum contributions to the force balance at points in the bridge region. Simple patch tests and computational experiments are used to study the method and its properties in one dimension. A discussion of implementation issues in higher dimensions is provided.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Marti-Vide_et_al_2014a</guid>
	<pubDate>Mon, 30 Mar 2020 13:43:23 +0200</pubDate>
	<link>https://www.scipedia.com/public/Marti-Vide_et_al_2014a</link>
	<title><![CDATA[Collapse of the Pilcomayo River]]></title>
	<description><![CDATA[<div id="ab0005" style="margin-bottom: 8px; color: rgb(46, 46, 46); font-size: 18px; font-style: normal; font-weight: 400;"><div id="aep-abstract-sec-id21"><p id="sp0005" style="margin-bottom: 16px;">The Pilcomayo River flows south-eastwards from the Bolivian Andes across the Chaco Plains, setting the border between Argentina and Paraguay. It flows for some 1000&nbsp;km to, in principle, finally join the Paraguay River. It spills over the plains during the rainy season, from January to March. The sediment load of the Pilcomayo is one of the largest in the world: 140&nbsp;million&nbsp;tons per year, which is mostly wash load from the upland Andes. The mean concentration of suspended sediment is 15&nbsp;g/l; the maximum recorded concentration is as high as 60&nbsp;g/l. The river has built a large fan covering a surface of 210,000&nbsp;km<sup>2</sup>, with many abandoned channels. Today, it is a river prone to avulsion, raising border disputes between the two lowland countries, Argentina and Paraguay. Moreover, the very special feature of the Pilcomayo River is that it does not actually flow into the Paraguay River: very far upstream of its mouth in the Paraguay, the channel blocks itself with sediment and wood debris, forcing water and sediment to spread across the plains, and the point of blockage has moved hundreds of kilometers upstream throughout the 20th century. Many environmental issues arise because of this&nbsp;<em>collapse</em>&nbsp;(channel discontinuity), not the least of which is the migration of fish. The future of the river concerns Bolivia and the two lowland countries.</p></div></div>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Oliveira_et_al_2009a</guid>
	<pubDate>Mon, 30 Mar 2020 11:00:51 +0200</pubDate>
	<link>https://www.scipedia.com/public/Oliveira_et_al_2009a</link>
	<title><![CDATA[Nonlinear Regular Wave Generation in Numerical and Physical Flumes]]></title>
	<description><![CDATA[<p>The generation of nonlinear waves in a numerical wave flume using first-order wavemaker theory is discussed, comparing numerical results with free-surface data from large-scale physical tests (CIEM wave flume) and with Stokes wave theories. A general formulation for the analysis of fluid-structure interaction problems is employed to simulate the numerical wave flume using the Particle Finite Element Method (PFEM). This method uses a Lagrangian description to model the motion of particles in both the fluid and the structure domains. With this work we conclude that the PFEM formulation simulates the generation of naturally occurring nonlinear waves of different types, for varied wave conditions and at different scales. As in physical flumes, when first-order wavemaker theory is used in numerical flumes, unwanted nonlinearities can appear for some wave conditions.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rodriguez-Escales_Sanchez-Vila_2020a</guid>
	<pubDate>Fri, 27 Mar 2020 16:04:39 +0100</pubDate>
	<link>https://www.scipedia.com/public/Rodriguez-Escales_Sanchez-Vila_2020a</link>
	<title><![CDATA[Modeling the fate of UV filters in subsurface: co-metabolic degradation and the role of biomass in sorption processes]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Ultraviolet filters (UVFs) are emerging organic compounds found in most water systems. They are constituents of personal care products, as well as industrial ones. The concentration of UVFs in water bodies in space and time is mostly determined by degradation and sorption, both processes being determinant of their bioavailability and toxicity to ecosystems and humans. UVFs are a wide group of compounds, with different sorption behavior expected depending on the individual chemical properties (pKa, Koc, Kow). The goal of this work is framed in the context of improving our understanding of the sorption processes of UVFs occurring in the aquifer; that is, to evaluate the role of biomass growth, solid organic matter (SOM) and redox conditions in the characterization of sorption of a set of UVFs. We constructed a conceptual and a numerical model to evaluate the fate of selected UV filters, focused on both sorption and degradation. The models were validated with data published by Liu et al. (2013), consisting of a suite of batch experiments evaluating the fate of a cocktail of UVFs under different redox conditions. The compounds evaluated included ionic UV filters (Benzophenone-3; 2-(3-t-butyl-2-hydroxy-5-methylphenyl)5-chloro-benzotriazole; 2-(2&#39;-hydroxy-5&#39;-octylphenyl)-benzotriazole) and neutral ones (octyl 4-methoxycinnamate; and octocrylene).</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ferrer_et_al_2020a</guid>
	<pubDate>Fri, 27 Mar 2020 15:59:10 +0100</pubDate>
	<link>https://www.scipedia.com/public/Ferrer_et_al_2020a</link>
	<title><![CDATA[What are the main factors influencing the presence of faecal bacteria pollution in groundwater systems in developing countries?]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Groundwater is the major source of drinking water in most rural areas in developing countries. This resource is threatened by the potential presence of faecal bacteria coming from a variety of sources and pollution paths, the former including septic tanks, landfills, and crop irrigation with untreated, or insufficiently treated, sewage effluent. Accurately assessing the microbiological safety of water resources is essential to reduce diseases caused by waterborne faecal exposure. The objective of this study is to discern the most significant sanitary, hydrogeological, geochemical, and physical variables influencing the presence of faecal bacterial pollution in groundwater by means of statistical multivariate analyses. The concentration of Escherichia coli was measured in a number of waterpoints of different types in a rural area located on the coast of Kenya, in both a dry and a wet season. The results from the analyses reaffirm that the design and maintenance of the wells, the distance to latrines, and the geological structure of the waterpoints are the most significant variables affecting the presence of E. coli. Most notably, the presence of faecal bacteria in the study area correlates negatively with the concentration of the Na+ ion (an indirect indicator of fast recharge at the study site), and also negatively with the length of the water column inside the well.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Sole-Mari_et_al_2019a</guid>
	<pubDate>Fri, 27 Mar 2020 15:52:51 +0100</pubDate>
	<link>https://www.scipedia.com/public/Sole-Mari_et_al_2019a</link>
	<title><![CDATA[Particle density estimation with grid-projected and boundary-corrected adaptive kernels]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">The reconstruction of smooth density fields from scattered data points is a procedure that has multiple applications in a variety of disciplines, including Lagrangian (particle-based) models of solute transport in fluids. In random walk particle tracking (RWPT) simulations, particle density is directly linked to solute concentrations, which is normally the main variable of interest, not just for visualization and post-processing of the results, but also for the computation of non-linear processes, such as chemical reactions. Previous works have shown the advantages of kernel density estimation (KDE) over other methods such as binning, in terms of its ability to accurately estimate the &ldquo;true&rdquo; particle density from a limited amount of information. Here, we develop a grid-projected KDE methodology to determine particle densities by applying kernel smoothing on a pilot binning; this may be seen as a &ldquo;hybrid&rdquo; approach between binning and KDE. The kernel bandwidth is optimized locally. Through simple implementation examples, we elucidate several appealing aspects of the proposed approach, including its computational efficiency and the possibility to account for typical boundary conditions, which would otherwise be cumbersome in conventional KDE.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Perujo_et_al_2019a</guid>
	<pubDate>Fri, 27 Mar 2020 15:40:11 +0100</pubDate>
	<link>https://www.scipedia.com/public/Perujo_et_al_2019a</link>
	<title><![CDATA[A bilayer coarse-fine infiltration system minimizes bioclogging: the relevance of depth-dynamics]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Bioclogging is a main concern in infiltration systems as it may significantly shorten the service life of these low-technology water treatment methods. In porous media, biofilms grow to clog the pore network partially or totally. Dynamics of biofilm accumulation (e.g., by attachment, detachment, advective transport in depth) and their impact on both surface and deep bioclogging are not yet fully understood. To address this concern, a 104-day-long outdoor infiltration experiment in sand tanks was performed, using secondary treated wastewater and two grain size distributions (GSDs): a monolayer system filled with fine sand, and a bilayer one composed of a layer of coarse sand placed on top of a layer of fine sand. Biofilm dynamics as a function of GSD and depth were studied through cross-correlations and multivariate statistical analyses using different parameters from biofilm biomass and activity indices, plus hydraulic parameters measured at different depths. Bioclogging (both surface and deep) was found to be more significant in the monolayer fine system than in the bilayer coarse-fine one, possibly due to an early low-cohesive biofilm formation in the former, driven by lower porosity and lower fluxes; under such conditions biomass is readily detached from the top layer, transported and accumulated in depth, so that new biomass can colonize the surface. In the bilayer system, on the other hand, fluxes are highest, and the biofilm is still in a growing phase, with low biofilm detachment from the top sand layer and high microbial activity in depth, resulting in low bioclogging. Overall, the bilayer coarse-fine system allows infiltrating a higher volume of water per unit of surface area than the monolayer fine one, minimizing surface and deep bioclogging, and thus increasing the longevity and efficiency of infiltration systems.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Barba_et_al_2019b</guid>
	<pubDate>Fri, 27 Mar 2020 15:33:53 +0100</pubDate>
	<link>https://www.scipedia.com/public/Barba_et_al_2019b</link>
	<title><![CDATA[Are dominant microbial sub-surface communities affected by water quality and soil characteristics?]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Subsurface microorganisms must deal with quite extreme environmental conditions. The lack of light, oxygen, and, potentially, nutrients constitutes the main environmental stress faced by subsurface microbial communities. Likewise, environmental disruptions providing an unbalanced positive input of nutrients force microorganisms to adapt to varying conditions, which is visible in the changes in microbial community diversity. In order to test microbial community adaptation to environmental changes, we performed a study in a surface Managed Aquifer Recharge facility, consisting of a settlement basin (two-day residence time) and an infiltration pond. Data on groundwater hydrochemistry, soil texture, and microbial characterization were compiled from surface water, groundwater, and soil samples at two distinct recharge operation conditions. Multivariate statistics by means of Principal Component Analysis (PCA) was the technique used to map the relevant dimensionality-reduced combinations of input variables that properly describe the system behavior. The methodology selected allows including variables of different natures that display very different value ranges. Strong differences in the microbial assemblage under recharge conditions were found, coupled to hydrochemistry and grain-size distribution variables. Also, some microbial groups displayed correlations with either carbon or nitrogen cycles, especially showing abundant populations of denitrifying bacteria in groundwater. A significant correlation was found between Methylotenera mobilis and the concentrations of NO3 and SO4, and also between Vogesella indigofera and the presence of DOC in the infiltrating water. In addition, microbial communities present at the bottom of the pond correlated with representative descriptors of soil grain size distribution.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Locatelli_et_al_2019a</guid>
	<pubDate>Fri, 27 Mar 2020 15:27:59 +0100</pubDate>
	<link>https://www.scipedia.com/public/Locatelli_et_al_2019a</link>
	<title><![CDATA[A simple contaminant fate and transport modelling tool for management and risk assessment of groundwater pollution from contaminated sites]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Contaminated sites pose a significant threat to groundwater resources. The resources that can be allocated by water regulators for site investigation and cleanup are limited compared to the large number of contaminated sites. Numerical transport models of individual sites require large amounts of data and are labor intensive to set up, and thus they are likely to be too expensive to be useful in the management of thousands of contaminated sites. Therefore, simple tools based on analytical solutions of contaminant transport models are widely used to assess (at an early stage) whether a site might pose a threat to groundwater. We present a tool consisting of five different models, representing common geological settings, contaminant pathways, and transport processes. The tool employs a simplified approach for preliminary, conservative, fast and inexpensive estimation of the contamination levels of aquifers. This is useful for risk assessment applications or to select and prioritize the sites that should be targeted for further investigation. The tool is based on steady-state semi-analytical models simulating different contaminant transport scenarios from the source to downstream groundwater, and includes both unsaturated and saturated transport processes. The models combine existing analytical solutions from the literature for vertical (from the source to the top of the aquifer) and horizontal (within the aquifer) transport. The effect of net recharge causing a downward migration and an increase of vertical dispersion and dilution of the plume is also considered. Finally, we illustrate the application of the tool for a preliminary assessment of two contaminated sites in Denmark and compare the model results with field data. The comparison shows that a first preliminary assessment with conservative, and often non-site-specific, parameter selection is qualitatively consistent with broad trends in observations and provides a conservative estimate of contamination.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Barba_et_al_2019a</guid>
	<pubDate>Fri, 27 Mar 2020 15:16:57 +0100</pubDate>
	<link>https://www.scipedia.com/public/Barba_et_al_2019a</link>
	<title><![CDATA[Microbial community changes induced by Managed Aquifer Recharge activities: linking hydrogeological and biological processes]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Managed Aquifer Recharge (MAR) is a technique used worldwide to increase the availability of water resources. We study how MAR modifies microbial ecosystems and its implications for enhancing biodegradation processes to eventually improve groundwater quality. We compare soil and groundwater samples taken from a MAR facility located in NE Spain during recharge (with the facility operating continuously for several months) and after 4 months of no recharge. The study demonstrates a strong correlation between soil and water microbial prints with respect to sampling location along the mapped infiltration path. In particular, managed recharge practices disrupt groundwater ecosystems by modifying diversity indices and the composition of microbial communities, indicating that infiltration favors the growth of certain populations. Analysis of the genetic profiles showed the presence of nine different bacterial phyla in the facility, revealing high biological diversity at the highest taxonomic range. In fact, the microbial population patterns under recharge conditions agree with the intermediate disturbance hypothesis (IDH). Moreover, DNA sequence analysis of excised denaturing gradient gel electrophoresis (DGGE) band patterns revealed the existence of indicator species linked to MAR, most notably Dehalogenimonas sp., Nitrospira sp. and Vogesella sp. Our multidisciplinary (hydrological, geochemical and microbial) study at a real facility, involving soil and groundwater samples, indicates that MAR is a naturally based, passive and efficient technique with broad implications for the biodegradation of pollutants dissolved in water.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Carles-Brangari_et_al_2018a</guid>
	<pubDate>Fri, 27 Mar 2020 14:59:04 +0100</pubDate>
	<link>https://www.scipedia.com/public/Carles-Brangari_et_al_2018a</link>
	<title><![CDATA[Ecological and soil hydraulic implications of microbial responses to stress: a modeling analysis]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">A better understanding of microbial dynamics in porous media may lead to improvements in the design and management of a number of technological applications, ranging from the degradation of contaminants to the optimization of agricultural systems. To this aim, there is a recognized need for predicting the proliferation of soil microbial biomass (often organized in biofilms) under different environments and stresses. We present a general multi-compartment model to account for physiological responses that have been extensively reported in the literature. The model is used as an explorative tool to elucidate the ecological and soil hydraulic consequences of microbial responses, including the production of extracellular polymeric substances (EPS), the induction of cells into dormancy, and the allocation and reuse of resources between biofilm compartments. The mechanistic model is equipped with indicators allowing the microorganisms to monitor environmental and biological factors and react according to the current stress pressures. The feedbacks of biofilm accumulation on the soil water retention are also described. Model runs simulating different degrees of substrate and water shortage show that adaptive responses to the intensity and type of stress provide a clear benefit to microbial colonies. Results also demonstrate that the model may effectively predict qualitative patterns in microbial dynamics supported by empirical evidence, thereby improving our understanding of the effects of pore-scale physiological mechanisms on the soil macroscale phenomena.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rodriguez-Escales_et_al_2018a</guid>
	<pubDate>Fri, 27 Mar 2020 14:49:46 +0100</pubDate>
	<link>https://www.scipedia.com/public/Rodriguez-Escales_et_al_2018a</link>
	<title><![CDATA[A risk assessment methodology to evaluate the risk failure of managed aquifer recharge in the Mediterranean Basin]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Managed aquifer recharge (MAR) can be affected by many risks. Those risks are related to different technical and non-technical aspects of recharge, such as water availability, water quality, legislation, and social issues. Many other works have acknowledged risks of this nature theoretically; however, their quantification and definition have not been developed. In this study, risk definition and quantification were performed by means of &quot;fault trees&quot; and probabilistic risk assessment (PRA). We defined a fault tree with 65 basic events applicable to the operation phase. We then applied this methodology to six managed aquifer recharge sites located in the Mediterranean Basin (Portugal, Spain, Italy, Malta, and Israel). The probabilities of the basic events were defined by expert criteria, based on the knowledge of the managers of the different facilities. From that, we conclude that at all sites the non-technical aspects were perceived by the experts to be as important as, or even more important than, the technical aspects. Regarding the risk results, we observe that the total risk at three of the six sites was equal to or above 0.90. That would mean that these MAR facilities have a risk of failure equal to or higher than 90 % over a period of 2&ndash;6 years. The other three sites presented lower risks (75, 29, and 18 % for Malta, Menashe, and Serchio, respectively).</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Perujo_et_al_2018a</guid>
	<pubDate>Fri, 27 Mar 2020 14:38:21 +0100</pubDate>
	<link>https://www.scipedia.com/public/Perujo_et_al_2018a</link>
	<title><![CDATA[Bilayer infiltration system combines benefits from both coarse and fine sands promoting nutrient accumulation in sediments and increasing removal rates]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Infiltration systems are treatment technologies based on water percolation through porous media where biogeochemical processes take place. Grain size distribution (GSD) acts as a driver of these processes and their rates and influences nutrient accumulation in sediments. Coarse sands inhibit anaerobic reactions such as denitrification and could constrain nutrient accumulation in sediments due to smaller specific surface area. Alternatively, fine sands provide higher nutrient accumulation but need a larger area available to treat the same volume of water; furthermore, they are more susceptible to bioclogging. Combining both sand sizes in a bilayer system would allow infiltrating a greater volume of water and the occurrence of aerobic/anaerobic processes. We studied the performance of a bilayer coarse-fine system compared to a monolayer fine one - in triplicate - in an outdoor infiltration experiment to close the C-N-P cycles simultaneously in terms of mass balances. Our results confirm that the bilayer coarse-fine GSD promotes nutrient removal by physical adsorption and biological assimilation in sediments, and further enhances biogeochemical process rates (2-fold higher than in the monolayer system). Overall, the bilayer coarse-fine system allows treating a larger volume of water per unit of surface area while achieving removal efficiencies similar to those of the fine system. This document is the unedited Author&rsquo;s version of a Submitted Work that was subsequently accepted for publication in Environmental Science &amp; Technology.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rubol_et_al_2018a</guid>
	<pubDate>Fri, 27 Mar 2020 14:25:39 +0100</pubDate>
	<link>https://www.scipedia.com/public/Rubol_et_al_2018a</link>
	<title><![CDATA[Linking biofilm spatial structure to real-time microscopic oxygen decay imaging]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Two non-destructive techniques, confocal laser scanning microscopy (CLSM) and planar optode (VisiSens imaging), were combined to relate the fine-scale spatial structure of biofilm components to real-time images of oxygen decay in aquatic biofilms. Both techniques were applied to biofilms grown for seven days at contrasting light and temperature (10/20&deg;C) conditions. The geo-statistical analyses of CLSM images indicated that biofilm structures consisted of small (~10<sup>0</sup> &micro;m) and middle-sized (~10<sup>1</sup> &micro;m) irregular aggregates. Cyanobacteria and EPS (extracellular polymeric substances) showed larger aggregate sizes in dark-grown biofilms while, for algae, aggregates were larger in light-20&deg;C conditions. Light-20&deg;C biofilms were the densest, while 10&deg;C biofilms showed a sparser structure and lower respiration rates. There was a positive relationship between the number of pixels occupied and the oxygen decay rate. The combination of optodes and CLSM, taking advantage of geo-statistics, is a promising way to relate biofilm architecture and metabolism at the micrometric scale.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Sole-Mari_et_al_2017a</guid>
	<pubDate>Fri, 27 Mar 2020 13:43:54 +0100</pubDate>
	<link>https://www.scipedia.com/public/Sole-Mari_et_al_2017a</link>
	<title><![CDATA[A KDE-based random walk method for modeling reactive transport with complex kinetics in porous media]]></title>
	<description><![CDATA[<p><span style="color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">In recent years, a large body of the literature has been devoted to the study of reactive transport of solutes in porous media based on pure Lagrangian formulations. Such approaches have also been extended to accommodate second‐order bimolecular reactions, in which the reaction rate is proportional to the concentrations of the reactants. However, in some cases, chemical reactions involving two reactants follow more complicated rate laws. Some examples are (1) reaction rate laws written in terms of powers of concentrations, (2) redox reactions incorporating a limiting term (e.g., Michaelis‐Menten), or (3) any reaction where the activity coefficients vary with the concentration of the reactants, just to name a few. We provide a methodology to account for complex kinetic bimolecular reactions in a fully Lagrangian framework where each particle represents a fraction of the total mass of a specific solute. The method, built as an extension to the second‐order case, is based on the concept of optimal Kernel Density Estimator, which allows the concentrations to be written in terms of particle locations, hence transferring the concept of reaction rate to that of particle location distribution. By doing so, we can update the probability of particles reacting without the need to fully reconstruct the concentration maps. The performance and convergence of the method are tested for several illustrative examples that simulate the Advection‐Dispersion‐Reaction Equation in a 1‐D homogeneous column. Finally, a 2‐D application example is presented evaluating the need to fully describe non‐bilinear chemical kinetics in a randomly heterogeneous porous medium.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ginn_et_al_2017a</guid>
	<pubDate>Fri, 27 Mar 2020 13:32:12 +0100</pubDate>
	<link>https://www.scipedia.com/public/Ginn_et_al_2017a</link>
	<title><![CDATA[Revisiting the analytical solution approach to mixing-limited equilibrium multicomponent reactive transport using mixing ratios: identification of basis, fixing an error, and dealing with multiple minerals]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 13px; font-style: normal; font-weight: 400; background-color: rgb(240, 244, 255);">Multicomponent reactive transport involves the solution of a system of nonlinear coupled partial differential equations. A number of methods have been developed to simplify the problem. In the case where all reactions are in instantaneous equilibrium and the mineral assemblage is constant in both space and time, de Simoni et al. (2007) provide an analytical solution that separates transport of aqueous components and minerals using scalar dissipation of &quot;mixing ratios&quot; between a number of boundary/initial solutions. In this approach, aqueous speciation is solved in conventional terms of primary and secondary species, and the mineral dissolution/precipitation rate is given in terms of the scalar dissipation and a chemical transformation term, both involving the secondary species associated with the mineral reaction. However, the identification of the secondary species is nonunique, and so it is not clear how to use the approach in general, a problem that is keenly manifest in the case of multiple minerals which may share aqueous ions. We address this problem by developing an approach to identify the secondary species required in the presence of one or multiple minerals. We also remedy a significant error in the de Simoni et al. (2007) approach. The result is a fixed and extended de Simoni et al. (2007) approach that allows construction of analytical solutions to multicomponent equilibrium reactive transport problems in which the mineral assemblage does not change in space or time and where the transport is described by closed-form solutions of the mixing ratios.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>

</channel>
</rss>