<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[Scipedia: Documents published in 2016]]></title>
	<link>https://www.scipedia.com/sitemaps/year/2016?offset=1700</link>
	<atom:link href="https://www.scipedia.com/sitemaps/year/2016?offset=1700" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Yervilla_et_al._2015a</guid>
	<pubDate>Thu, 21 Jul 2016 15:00:13 +0200</pubDate>
	<link>https://www.scipedia.com/public/Yervilla_et_al._2015a</link>
	<title><![CDATA[Removing non-visible objects in scenes of clusters of particles]]></title>
	<description><![CDATA[<p>The aim of this paper is to partially solve the problem of visualizing clusters of particles resulting from numerical methods in engineering. Two methods were developed for removing particles that are not visible from the camera viewpoint. They differ in the type of information available: particle systems with contour information (a surface mesh) and particle systems without contour information (meshless methods). In both cases the results are very good: a large number of particles that do not affect the final image are removed, permitting interaction with the results of numerical methods.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Torres-Bejarano_et_al._2015a</guid>
	<pubDate>Thu, 21 Jul 2016 15:00:08 +0200</pubDate>
	<link>https://www.scipedia.com/public/Torres-Bejarano_et_al._2015a</link>
	<title><![CDATA[The hydrodynamic modelling for the water management of el Guájaro Reservoir, Colombia]]></title>
	<description><![CDATA[<p>The Gu&aacute;jaro Reservoir in northern Colombia is a hydrosystem supplied by an artificial channel (Canal del Dique) through a system of floodgates. During the last decades it has suffered from excessive use, so it is necessary to regulate the hydraulic structures that serve this water body, as they play an important role in managing the levels that in turn affect the water supply. The present work is carried out as a sustainability management alternative for the reservoir. A two-dimensional hydrodynamic model (EFDC Explorer) is implemented and calibrated using time series of free-surface levels and by comparing the measured velocities with those estimated by the model for two different climatic periods, to support the sustainable operation of the Canal del Dique-Gu&aacute;jaro Reservoir hydrosystem. The comparisons showed good agreement between measured and simulated data, based on the quantitative results of the Nash-Sutcliffe efficiency method. The results are considered quite satisfactory and allow the estimation of conditions for restoration and use, as well as of the incoming and outgoing water through the channel-reservoir hydrosystem.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Moreno_Cervera_2015a</guid>
	<pubDate>Thu, 21 Jul 2016 15:00:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Moreno_Cervera_2015a</link>
	<title><![CDATA[Stabilized finite elements for Bingham and Herschel-Bulkley confined flows Part II: Numerical simulations]]></title>
	<description><![CDATA[<p>The objective of this work is to model Bingham and Herschel-Bulkley viscoplastic fluids computationally using stabilized mixed velocity/pressure finite elements. Numerical solutions for these viscoplastic flows are presented and assessed. The regularized viscoplastic model due to Papanastasiou is used. In the discrete model, the Orthogonal Subgrid Scale (OSS) method is used.</p><p>In this Part II, numerical solutions for two problems of Bingham and Herschel-Bulkley confined flows are presented. The solutions obtained validate the methodology proposed in Part I of this work.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cao_et_al._2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:18 +0200</pubDate>
	<link>https://www.scipedia.com/public/Cao_et_al._2015a</link>
	<title><![CDATA[Geoengineering: Basic science and ongoing research efforts in China]]></title>
	<description><![CDATA[<p>Geoengineering (also called climate engineering), which refers to large-scale intervention in the Earth's climate system to counteract greenhouse gas-induced warming, has been one of the most rapidly growing areas of climate research as a potential option for tackling global warming. Here, we provide an overview of the scientific background and research progress of proposed geoengineering schemes. Geoengineering can be broadly divided into two categories: solar geoengineering (also called solar radiation management, or SRM), which aims to reflect more sunlight back to space, and carbon dioxide removal (CDR), which aims to reduce the CO2 content of the atmosphere. First, we review the different geoengineering methods proposed under the solar radiation management and carbon dioxide removal schemes. Then, we discuss the fundamental science underlying the climate response to these schemes, focusing on two basic issues: 1) the climate response to a reduction in solar irradiance and 2) the climate response to a reduction in atmospheric CO2. Next, we introduce an ongoing geoengineering research project in China that is supported by the National Key Basic Research Program. This project, the first coordinated geoengineering research program in China, will systematically investigate the physical mechanisms, climate impacts, and risks and governance of a few targeted geoengineering schemes. It is expected that this research program will help us gain a deeper understanding of the physical science underlying geoengineering schemes and of the impacts of geoengineering on global climate, in particular on the Asian monsoon region.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/He_2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:16 +0200</pubDate>
	<link>https://www.scipedia.com/public/He_2015a</link>
	<title><![CDATA[China's INDC and non-fossil energy development]]></title>
	<description><![CDATA[<p>Global climate change is driving reform of the energy system, and achieving a high proportion of renewable energy has become the energy strategy of major countries. As proposed in its Intended Nationally Determined Contribution (INDC), China intends to raise the proportion of non-fossil energy in primary energy consumption to about 20% by 2030. This ambitious goal means that the non-fossil energy supply in 2030 will be 7&ndash;8 times that of 2005, an annual growth rate of more than 8% over 25 years. The capacities of wind power, solar power, hydropower, and nuclear power would reach 400&nbsp;GW, 350&nbsp;GW, 450&nbsp;GW, and 150&nbsp;GW respectively, so that China&#39;s non-fossil power capacity would exceed the total power capacity of the U.S. In addition, the scale of natural gas use increases. Consequently, by 2030 the proportion of coal falls from the current 70% to below 50%, and the CO2 intensity of energy consumption decreases by 20% compared with the 2005 level, which plays an important role in significantly reducing the CO2 intensity of GDP. Since China has committed to peaking its CO2 emissions around 2030, the newly added energy demand at that time will be met by non-fossil energy, and the consumption of fossil fuel will stop growing. By 2030, non-fossil energy will account for 20% of consumption, and the large scale and sound momentum of the new and renewable energy industry will support the growth of total energy demand. This plays a key role in CO2 emissions peaking and beginning to decline, and lays the foundation for establishing a new energy system dominated by new and renewable energy in the second half of the 21st century, and for finally achieving zero CO2 emissions.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Liu_Chen_2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:16 +0200</pubDate>
	<link>https://www.scipedia.com/public/Liu_Chen_2015a</link>
	<title><![CDATA[Impacts, risks, and governance of climate engineering]]></title>
	<description><![CDATA[<p>Climate engineering is a potential alternative method to curb global warming, and this discipline has garnered considerable attention from the international scientific community, including Chinese scientists. This manuscript provides an overview of several aspects of climate engineering, including its definition, its potential impacts and risks, and its governance status. The overall conclusion is that China is not yet ready to implement climate engineering. However, it is important for China to continue conducting research on climate engineering, particularly with respect to its feasible application within China, its potential social, economic, and environmental impacts, and possible international governance structures and governing principles, with regard to both experimentation and implementation.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lu_et_al.2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:14 +0200</pubDate>
	<link>https://www.scipedia.com/public/Lu_et_al.2015a</link>
	<title><![CDATA[Industrial transformation and green production to reduce environmental emissions: Taking cement industry as a case]]></title>
	<description><![CDATA[<p>Industrial transformation and green production (ITGP) is a new 10-year international research initiative proposed by the Chinese National Committee for Future Earth. It is also an important theme for adapting and responding to global environmental change. Aiming at a thorough examination of the implementation of ITGP in China, this paper presents its objectives, its three major areas, and their progress so far. It also identifies the key elements of its management and proposes new perspectives on managing green transformation. For instance, we introduce a case study on the cement industry that shows the positive effects of policies reducing backward production capacity on PCDD/Fs emissions. Finally, to develop different transformation scenarios for a green future, we propose four strategies: 1) policy integration for promoting green industry, 2) system innovation and a multidisciplinary approach, 3) collaborative governance with all potential stakeholders, and 4) managing uncertainty, risks, and long time horizons.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ma_et_al._2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:10 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ma_et_al._2015a</link>
	<title><![CDATA[CH4 emissions and reduction potential in wastewater treatment in China]]></title>
	<description><![CDATA[<p>The treatment of domestic and industrial wastewater is one of the major sources of CH4 in the Chinese waste sector. On the basis of statistical data and country-specific emission factors, and using the IPCC methodology, the characteristics of CH4 emissions from wastewater treatment in China were analyzed. The driving factors of CH4 emissions were studied, and the emission trend and reduction potential were projected from the current situation. Results show that in 2010, CH4 emissions from the treatment of domestic and industrial wastewater were 0.6110&nbsp;Mt and 1.6237&nbsp;Mt, respectively. Eight major industries account for more than 92% of emissions, and CH4 emissions gradually increased from 2005 to 2010. Under the controlled management scenario, we predict that in 2020, CH4 emissions from the treatment of domestic and industrial wastewater will be 1.0136&nbsp;Mt and 2.3393&nbsp;Mt, respectively, and the reduction potential will be 0.0763&nbsp;Mt and 0.2599&nbsp;Mt, respectively. From 2010 to 2020, CH4 emissions from the treatment of domestic and industrial wastewater will increase by 66% and 44%, respectively.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Sanwal_2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:09 +0200</pubDate>
	<link>https://www.scipedia.com/public/Sanwal_2015a</link>
	<title><![CDATA[The climate negotiations &amp; sustainable development: Rationale behind the debate and the need for a better understanding]]></title>
	<description><![CDATA[<p>The unresolved issue in the negotiation of the new climate regime is whether the multilateral consensus at Paris will centre on international cooperation to deal with the causes, the need for a global transformation to a low-carbon economy and society, or will focus on emissions reductions, which are the symptoms of the problem.</p><p>The commentary looks at two divergent trends, which are broadening the definition of climate change to include security and shifting the focus of solutions from production to consumption patterns; both are expected to shape the debate. The security establishment has expressed views on the climate negotiations in its deliberations in the North Atlantic Treaty Organization on 12 October 2015. A recent report of the International Energy Agency, dated 8 October 2015, stresses energy efficiency as a key solution to deal with the causes of the problem. Twenty years after the climate treaty was negotiated in 1992, the climate debate has moved away from a sole focus on the atmospheric sciences to the social sciences, and now also considers strategic concerns and consumption patterns, or politics and lifestyles.</p><p>It remains to be seen whether the new regime will adopt a broader sustainable development perspective or continue to view international cooperation and national action narrowly in terms of environmental risk.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Xiao_et_al._2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:07 +0200</pubDate>
	<link>https://www.scipedia.com/public/Xiao_et_al._2015a</link>
	<title><![CDATA[A preliminary study of cryosphere service function and value evaluation]]></title>
	<description><![CDATA[<p>Cryosphere science research and development (R&amp;D) has been strongly committed to public service, integrating the natural sciences with socioeconomic impacts. Owing to the current shift from purely natural cryosphere scientific research to linking cryosphere science with socioeconomic and cultural science, cross-disciplinary research in this field is emerging and is advocated for future cryosphere science. Based on the cryosphere service function (CSF), this study establishes the CSF and its value evaluation system. Cryosphere service valuation can raise decision-makers&#39; and the public&#39;s awareness of environmental protection. Implementing sustainable CSF utilization strategies and macroeconomic policymaking for global environmental protection will have profound practical significance, and will help avoid environmental degradation in the pursuit of short-term economic profits and rapid economic development.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Yang_et_al._2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:05 +0200</pubDate>
	<link>https://www.scipedia.com/public/Yang_et_al._2015a</link>
	<title><![CDATA[Vulnerability of mountain glaciers in China to climate change]]></title>
	<description><![CDATA[<p>Mountain glaciers in China are an important water source for both China and adjoining countries, and therefore adaptation to glacier change is crucial for the populations that depend on them. This study aims to improve our understanding of glacial vulnerability to climate change in order to establish adaptation strategies. A glacial numerical model is developed using spatial principal component analysis (SPCA) supported by remote sensing (RS) and geographical information system (GIS) technologies. The model contains nine factors describing topography, climate, and glacier characteristics: slope, aspect, hillshade, elevation a.s.l., air temperature, precipitation, percentage change in glacial area, glacial type, and glacial area. The vulnerability of glaciers to climate change is evaluated on a regional scale for the period 1961&ndash;2007, and for the 2030s and 2050s based on projections of air temperature and precipitation changes under the IPCC RCP6.0 scenario and of glacier change in the 21st century. Glacial vulnerability is graded into five levels (potential, light, medial, heavy, and very heavy) using natural breaks classification (NBC). The spatial distribution of glacial vulnerability and its temporal changes in the 21st century under the RCP6.0 scenario are analyzed, and the factors influencing vulnerability are discussed. Results show that mountain glaciers in China are very vulnerable to climate change: 41.2% of glacial areas fall into the heavy and very heavy vulnerability levels for the period 1961&ndash;2007. This is mainly explained by topographical exposure and the high sensitivity of glaciers to climate change. Glacial vulnerability is projected to decline in the 2030s and 2050s, but it remains high in some regions. In addition to topographical factors, variation in precipitation in the 2030s and 2050s is found to be crucial.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Zhang_et_al._2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Zhang_et_al._2015a</link>
	<title><![CDATA[How similar are annual and summer temperature variability in central Sweden?]]></title>
	<description><![CDATA[<p>Tree-ring based temperature reconstructions have successfully inferred past summer temperature variability on inter-annual to millennial scales. A clear relationship between annual and summer temperatures would provide insights into the variability of past annual mean temperature from reconstructed summer temperature. However, how similar summer and annual temperatures are remains largely unknown. This study investigates the relationship between annual and summer temperatures at different timescales in central Sweden during the last millennium. Temperature variability in central Sweden is representative of large parts of Scandinavia, which has been a key region for dendroclimatological research. The observed annual and summer temperatures during 1901&ndash;2005 were first decomposed into different frequency bands using the ensemble empirical mode decomposition (EEMD) method, and the scale-dependent relationship was then quantified using Pearson correlation coefficients. The relationship between the observed annual and summer temperatures determined from the instrumental data was subsequently used to evaluate seven climate models. The model with the best performance was used to infer the relationship for the last millennium. The results show that the relationship between the observed annual and summer temperatures becomes stronger as the timescale increases, except at the 4&ndash;16 year timescales, at which no relationship is found. The summer temperature variability at short timescales (2&ndash;4 years) shows much higher variance than the annual variability, while the annual temperature variability at long timescales (&gt;32 years) has much higher variance than the summer one. During the last millennium, the simulated summer temperature also shows higher variance at the short timescales (2&ndash;4 years) and lower variance at the long timescales (&gt;1024 years) than the annual temperature. The relationship between the two temperatures is generally close at the long timescales and weak at the short timescales. Overall, summer temperature variability cannot adequately reflect annual mean temperature variability for the study region during either the 20th century or the last millennium. Furthermore, all the climate models examined overestimate the annual mean temperature variance at the 2&ndash;4 year timescales, which indicates that this overestimate could be one of the reasons why volcanic-eruption-induced cooling is larger in climate models than in proxy data.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Yong-Jung_2015a</guid>
	<pubDate>Tue, 19 Jul 2016 17:28:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Yong-Jung_2015a</link>
	<title><![CDATA[Climate technology promotion in the Republic of Korea]]></title>
	<description><![CDATA[<p>The implementation of climate technologies and their commercialization ultimately depend on the success of their research and development (R&amp;D) projects. In the Republic of Korea (ROK), twenty-seven climate technologies were selected to boost the greening of existing industries and to develop new green industries, as part of a sustainable climate technology development strategy. Rechargeable battery technology, carbon capture and storage (CCS) technology, smart grids, and sewage treatment are all research areas expected to have tangible outcomes in the forthcoming years. As such, they were included in a comprehensive R&amp;D plan for climate technology advancement, which places an emphasis on climate technology development and commercialization strategy. In this study, the R&amp;D plan of the ROK is reviewed by examining in detail its six core climate technology programs: solar cells, fuel cells, bioenergy, rechargeable battery technology, information technology (IT) applications for the power sector, and CCS technology. The climate policy of the ROK aims to find new economic growth engines and to develop new business opportunities while actively participating in international efforts to combat climate change.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Abd_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:22 +0200</pubDate>
	<link>https://www.scipedia.com/public/Abd_et_al._2016a</link>
	<title><![CDATA[Effect of intravenous infusion of iodinated contrast media on the coronary blood flow in dogs]]></title>
	<description><![CDATA[<p>Coronary computed tomography angiography (CCTA) is obtained using peripheral intravenous injection of iodinated contrast agents (ICA). There are continuing attempts to derive coronary physiological information, such as coronary blood flow (CBF) and/or fractional flow reserve, from CCTA images. However, no data are available regarding the effect of peripheral intravenous injection of ICA on CBF.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Casale_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:20 +0200</pubDate>
	<link>https://www.scipedia.com/public/Casale_et_al._2016a</link>
	<title><![CDATA[Idiopathic sensorineural hearing loss is associated with endothelial dysfunction]]></title>
	<description><![CDATA[<p>Sensorineural hearing loss (SNHL) is the most common type of permanent hearing loss; it occurs when there is damage to the inner ear (cochlea) or to the nerve pathways from the inner ear to the brain. Most of the time, SNHL cannot be medically or surgically corrected. The aim of this study is to find a relationship between idiopathic SNHL and endothelial dysfunction.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Choy_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:17 +0200</pubDate>
	<link>https://www.scipedia.com/public/Choy_et_al._2016a</link>
	<title><![CDATA[Cardiac disease and arrhythmogenesis: Mechanistic insights from mouse models]]></title>
	<description><![CDATA[<p>The mouse is the second mammalian species, after the human, for which a substantial amount of genomic information has been analyzed. With advances in transgenic technology, mutagenesis is now much easier to carry out in mice. Consequently, an increasing number of transgenic mouse systems have been generated for the study of cardiac arrhythmias in ion channelopathies and cardiomyopathies. Mouse hearts are also amenable to physical manipulation such as coronary artery ligation and transverse aortic constriction to induce heart failure, radiofrequency ablation of the AV node to model complete AV block, and even implantation of a miniature pacemaker to induce cardiac dyssynchrony. Last but not least, pharmacological models, despite being simplistic, have enabled us to understand the physiological mechanisms of arrhythmias and to evaluate the anti-arrhythmic properties of experimental agents, such as gap junction modulators, that may exert therapeutic effects in other cardiac diseases. In this article, we examine these in turn, demonstrating that primary inherited arrhythmic syndromes are now recognized to be more complex than an abnormality in a particular ion channel, involving alterations in gene expression and structural remodelling. Conversely, in cardiomyopathies and heart failure, mutations in ion channels and proteins have been identified as underlying causes, and electrophysiological remodelling is a recognized pathological feature. Transgenic techniques causing mutagenesis in mice are extremely powerful for dissecting the relative contributions of different genes to disease phenotypes. Mouse models can serve as useful systems in which to explore how protein defects contribute to arrhythmias and to direct future therapy.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Corsten-Janssen_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:16 +0200</pubDate>
	<link>https://www.scipedia.com/public/Corsten-Janssen_et_al._2016a</link>
	<title><![CDATA[Congenital arch vessel anomalies in CHARGE syndrome: A frequent feature with risk for co-morbidity]]></title>
	<description><![CDATA[<p>CHARGE syndrome is a complex multiple congenital malformation disorder with variable expression that is caused by mutations in the CHD7 gene. Variable heart defects occur in 74% of patients with a CHD7 mutation, with an overrepresentation of atrioventricular septal defects and conotruncal defects &mdash; including arch vessel anomalies.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/de-Souza_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:15 +0200</pubDate>
	<link>https://www.scipedia.com/public/de-Souza_et_al._2016a</link>
	<title><![CDATA[Dental care before cardiac valve surgery: Is it important to prevent infective endocarditis?]]></title>
	<description><![CDATA[<p>Infective endocarditis (IE) is a serious disease that affects the surface of the endocardium. The spread of microorganisms from the oral cavity has been associated with the occurrence of IE.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Gehle_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:13 +0200</pubDate>
	<link>https://www.scipedia.com/public/Gehle_et_al._2016a</link>
	<title><![CDATA[NT-proBNP and diastolic left ventricular function in patients with Marfan syndrome]]></title>
	<description><![CDATA[<p>Subclinical diastolic dysfuntion in patients with preclinical heart failure with preserved ejection fraction (HFpEF) has been demonstrated in patients with Marfan syndrome (MFS). We investigated the relationship between diastolic dysfunction and NT-proBNP levels in patients with MFS.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ozkaramanli-Gur_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:12 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ozkaramanli-Gur_et_al._2016a</link>
	<title><![CDATA[The fate of small side branches following drug eluting stent implantation]]></title>
	<description><![CDATA[<p>Although drug eluting stents (DES) have documented convenience in bifurcation lesions, possible unfavorable effects on small side branch ostium (SBO) remain a question. We aimed to explore the effects of DES on small jailed SBs (1.5&ndash;2.25&nbsp;mm) which originated from the lesion on the main vessel and were not treated with either stenting or balloon dilatation.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Hamilton-Craig_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:09 +0200</pubDate>
	<link>https://www.scipedia.com/public/Hamilton-Craig_et_al._2016a</link>
	<title><![CDATA[Accuracy of quantitative echocardiographic measures of right ventricular function as compared to cardiovascular magnetic resonance]]></title>
	<description><![CDATA[<p>Many echocardiographic parameters have been proposed to evaluate right ventricular (RV) systolic function. We comprehensively assessed a wide range of quantitative echocardiographic parameters in a single cohort compared with same-day cardiovascular magnetic resonance (CMR).</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Parasuraman_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:06 +0200</pubDate>
	<link>https://www.scipedia.com/public/Parasuraman_et_al._2016a</link>
	<title><![CDATA[Assessment of pulmonary artery pressure by echocardiography—A comprehensive review]]></title>
	<description><![CDATA[<p>Pulmonary hypertension is a pathological haemodynamic condition defined as an increase in mean pulmonary arterial pressure to &ge;&nbsp;25&nbsp;mmHg at rest, assessed by right heart catheterisation. Pulmonary hypertension may be a complication of cardiac or pulmonary disease, or a primary disorder of the small pulmonary arteries. Elevated pulmonary artery pressure (PAP) is associated with increased mortality, irrespective of the aetiology. Invasive right heart catheterisation is the gold standard for diagnosis, but it has its own inherent risks. In the past 30&nbsp;years, immense technological improvements in echocardiography have increased its sensitivity for quantifying PAP, and it is now recognised as a safe and readily available alternative to right heart catheterisation. In the future, scores combining various echo techniques may approach the gold standard in sensitivity and accuracy, thereby reducing the need for repeated invasive assessments in these patients.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Schulz_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:04 +0200</pubDate>
	<link>https://www.scipedia.com/public/Schulz_et_al._2016a</link>
	<title><![CDATA[Transcatheter aortic valve implantation with the new-generation Evolut R™: Comparison with CoreValve® in a single center cohort]]></title>
	<description><![CDATA[<p>The Medtronic Evolut R (EVR) is a novel transcatheter heart valve designed to allow precise implantation at the intended position and to minimize prosthesis dysfunction as well as procedural complications. Our aim was to compare short-term functional and clinical outcomes of the new EVR with the established Medtronic CoreValve (CV) system.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Verma_et_al._2016a</guid>
	<pubDate>Tue, 19 Jul 2016 16:37:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Verma_et_al._2016a</link>
	<title><![CDATA[Aorto-ostial atherosclerotic coronary artery disease—Risk factor profiles, demographic &amp; angiographic features]]></title>
	<description><![CDATA[<p>The risk factors, along with the demographic and angiographic features, associated with aorto-ostial atherosclerotic coronary artery disease usually differ from those of non-aorto-ostial atherosclerotic coronary artery disease.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016ab</guid>
	<pubDate>Thu, 30 Jun 2016 13:31:54 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016ab</link>
	<title><![CDATA[Coupling X-ray physics and engineering mechanics for enhanced analysis of computer tomographic images]]></title>
	<description><![CDATA[<p>Since its invention in the 1960s, Computed Tomography (CT) has become one of the most powerful and versatile non-destructive imaging tools, with applications ranging from biomedicine to concrete technology. For about two decades, it has also been common to use CT images as the basis for Finite Element modeling of the scanned objects. In this context, the main focus has classically been on the accurate representation of geometrical details, while, particularly for solids made up of natural non-homogeneous materials, the question of material property assignment has remained an open challenge over the years.</p><p>Since 2008, our group, in cooperation with colleagues from Germany, Italy, Russia, Poland, Belgium, and Iceland, has been deeply involved in overcoming this challenge by studying more closely the X-ray physics underlying Computed Tomography: we developed increasingly mature methods to retrieve, from the grey-value-defined voxel characteristics given in CT images, the underlying physical property, the X-ray attenuation coefficient. The latter contains information on the chemical composition of the material making up the considered voxel, and combining this information with known chemical characteristics of the material class making up the scanned object gives access to important microstructural information inside the voxel, such as microporosity or the contents of known chemical substances. These quantities then enter, as input values, experimentally validated micromechanical formulations representing the material inside the voxel, so as to reliably determine the voxel&rsquo;s mechanical properties. Corresponding CT-to-mechanics conversion schemes will be presented in appropriate detail, with applications ranging from various ceramics&nbsp;and polymer-ceramic composites used in tissue engineering, to organs made up of the natural material bone.</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016ac</guid>
	<pubDate>Thu, 30 Jun 2016 13:30:03 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016ac</link>
	<title><![CDATA[Modelling coupled chemo-hygro-thermo-mechanical phenomena in porous building materials]]></title>
	<description><![CDATA[<p>A general approach&nbsp;to modelling various degradation processes in porous building materials, due to the combined action of variable chemical, hygro-thermal and mechanical loads, is presented. Mechanics of multiphase porous media and damage mechanics are applied for this purpose. The kinetics of physicochemical processes, such as salt crystallization/dissolution, calcium leaching, the Alkali-Silica Reaction (ASR), and water freezing/thawing, is described with evolution equations based on the thermodynamics of chemical reactions. The mass, energy and momentum balances, the evolution equations describing chemical reactions, as well as the constitutive and physical relations are briefly summarized. The mutual couplings between the chemical, hygral, thermal and mechanical processes are presented and discussed, both from the viewpoint of the physicochemical mechanisms and of mathematical modelling. Numerical methods used for the solution of the model governing equations are presented: the finite element method is applied for space discretization and the finite difference method for integration in the time domain.</p><p>Four examples of the model application for analysing transient chemo-hygro-thermo-mechanical processes in porous building materials are presented and discussed. The first example concerns salt crystallization during drying of a wall made of concrete or ceramic brick, causing degradation of the surface layer due to the development of crystallization pressure. The second deals with calcium leaching from a concrete wall due to the chemical attack of pure water, under gradients of temperature and pressure. The third describes cracking of a concrete element, caused by the development of the expanding products of ASR. The fourth example concerns freezing and thawing of a wet concrete wall under variable temperature and relative humidity.</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016ad</guid>
	<pubDate>Thu, 30 Jun 2016 13:27:00 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016ad</link>
	<title><![CDATA[Unified Finite Element Formulation to Improve Understanding of Materials Science]]></title>
	<description><![CDATA[<p>Unified formulations for solving fluid-structure interaction and multi-fluid problems are gaining popularity in many engineering applications, in particular for material forming processes. Indeed, they simplify issues related to mesh generation and boundary conditions and increase the flexibility to deal with multiscale problems.</p><p>We propose in this work a monolithic formulation where the complete problem is written in a fully Eulerian framework and the phases (fluid, solid,&hellip;) are separated by a level-set function. The obtained system is solved using stabilized finite element methods. We combine this approximation with time-dependent anisotropic mesh adaptation to ensure accurate capturing of the discontinuities at the interfaces.</p><p>Different uses of the level-set function will be presented, ranging from grain-growth models for the evolution of microstructure, or void closure induced by forming operations, to the heat treatment of immersed metallic alloys inside three-dimensional industrial furnaces. The advantages, the encountered numerical issues, and the ongoing investigations related to these formulations will be discussed.&nbsp;</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016ae</guid>
	<pubDate>Thu, 30 Jun 2016 13:24:58 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016ae</link>
	<title><![CDATA[On Scalable Multiphysics Solvers]]></title>
	<description><![CDATA[<p>The largest runs up to now are usually performed for simple symmetric positive definite systems. This is a reasonable approach when measuring the overall scalability of an algorithm/implementation. However, in order to have an impact in science and industry, we must extend scalability to the most challenging applications, since these are the ones that really require extreme-scale simulation tools, e.g., multiscale, multiphysics, nonlinear, and transient problems. In this talk, we will discuss some of our experiences in the development of FEMPAR, an in-house finite element multiphysics and massively parallel simulator.</p><p>On one hand, we will talk about how to deal, in a parallel element-based environment, with multiphysics simulations that involve interface coupling, e.g., fluid-structure interaction. Our approach is based on the partition of topological meshes, and on ghost element information, in order to define locally the degrees of freedom and the unknowns that must be communicated among processors.</p><p>On the other hand, we will discuss how we deal with the resulting multiphysics (non)linear systems. We have two different approaches to the problem: block preconditioning and monolithic solvers. Block preconditioning techniques are interesting in the sense that they allow us to decouple complex multiphysics problems into simpler (possibly single-physics) simulations. However, in order for block preconditioners to be effective, we must define effective approximations of Schur complement systems, which can be a complicated (and very heuristic) task. We will show how we have implemented complex (recursive) block preconditioning strategies in FEMPAR using abstract definitions of operators, and how this framework has been applied to different multiphysics solvers.</p><p>We will also discuss how we can reach sustained scalability up to large core counts (about 400,000 cores in a BG/Q). Our in-house numerical linear algebra solvers are based on multilevel domain decomposition techniques and their very efficient practical implementations based on overlapped and asynchronous techniques. We will consider two different approaches: the first is a combination of block preconditioning and multilevel domain decomposition, whereas the second is a truly monolithic domain decomposition approach.</p><p>Many multiphysics simulations are also multiscale, and the use of adaptively refined meshes can reduce the computational cost of simulations by orders of magnitude with respect to uniformly refined meshes. The possibility of reaching extremely scalable adaptive multiphysics solvers would open the door to unprecedented simulations of challenging problems that are out of reach nowadays. In this sense, we will show how we are dealing with scalable adaptive solvers in FEMPAR, via a combination of the p4est library for parallel mesh refinement and dynamic load balancing in our element-based framework. Further, we will show how we modify our solvers to deal with nonconforming meshes across interfaces, and the effect of cheap space-filling curve partitions on solver robustness.&nbsp;</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016af</guid>
	<pubDate>Thu, 30 Jun 2016 13:19:54 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016af</link>
	<title><![CDATA[Computational Challenges in Multiscale Poromechanics]]></title>
	<description><![CDATA[<p>We consider the problem of coupled fluid flow and solid deformation in the unsaturated range and address the computational challenges of accommodating the multiscale and multiphysical nature of this problem. To this end, we present a general mathematical framework for unsaturated poromechanics in the finite deformation range and identify energy-conjugate variables relevant for constitutive modeling [1,2]. The framework relies on a classic mixed finite element formulation with solid displacements and fluid pressures as independent degrees of freedom. Theoretical and computational issues addressed include material heterogeneity at the mesoscale level, formulation of the problem in the finite deformation range, development of solution algorithms based on iterative linear solvers, shear band triggering, double-porosity modelling and simulations, and stabilized mixed finite element formulations. We also present a generalized continuum model to propagate a persistent shear band beyond the peak response and well into the softening regime.</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016ag</guid>
	<pubDate>Thu, 30 Jun 2016 13:18:47 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016ag</link>
	<title><![CDATA[Modeling and Simulation of Tsunami Using Virtual Reality Technology]]></title>
	<description><![CDATA[<p>Tsunamis kill many people and cause serious damage to economic activities, as exemplified by the tsunami caused by the Great East Japan Earthquake in 2011. It is very important to develop useful modelling and simulation methods for tsunami waves in order to support planning and design for community development and disaster prevention. Visualization is also important for understanding the power of tsunamis and improving awareness of disaster prevention. Recently, visualization using virtual reality (VR) technology has become more popular for three-dimensional numerical simulations.</p><p>In this presentation, modelling, simulation and visualization methods for tsunami waves are presented. Stabilized finite element methods are employed for 2D and 3D tsunami simulations based on the shallow water equations, the Boussinesq equations and the Navier-Stokes equations. In order to realize an efficient tsunami simulation, a method combining the 2D and 3D models is presented. We also propose a visualization system linked to an evacuation simulation using virtual reality technology&nbsp;to convey the power of tsunamis and the importance of evacuation. The presented modelling, simulation and visualization methods are shown to be useful tools for realizing high-quality computing in large-scale tsunami simulation.&nbsp;</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016ah</guid>
	<pubDate>Thu, 30 Jun 2016 13:16:38 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016ah</link>
	<title><![CDATA[Topology Optimization for Coupled Thermo-Fluidic Problems]]></title>
	<description><![CDATA[<p>Topology optimization is a numerical method for determining optimal material distributions. Originally developed for stiffness optimization of elastic structures, the method has since been extended to many other physics and multiphysics problems. Application areas rich in challenges are fluid and thermofluidic problems. Apart from the issues associated with efficiently solving coupled fluid problems numerically, various issues with regard to material interpolation models and boundary modelling and control provide additional challenges.</p><p>The talk will review recent activities on topology optimization of thermofluidic problems within the TopOpt group. On the parameterization side, we discuss the pros and cons of element-based (fictitious domain) versus boundary-tracking topology optimization formulations,&nbsp;as well as comparisons between Finite Element and Lattice Boltzmann formulations. On the application side, we discuss recent applications in the systematic design of active and passive&nbsp;(natural convection) cooling devices and heat exchangers, as well as simplified models for the fire protection of structures.&nbsp;</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016aa</guid>
	<pubDate>Thu, 30 Jun 2016 13:12:50 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016aa</link>
	<title><![CDATA[Finite Swelling and Fracture of Biological Tissues, Hydrogels and Shale: A Micro and Macro Analysis]]></title>
	<description><![CDATA[<p>Swelling is a common phenomenon in biological media, geomaterials and synthetic materials. It is often associated with ionised molecules that attract counterions, which in turn attract water through osmosis. The ionised molecules are clay platelets in shale and proteoglycans in biological tissues. Since antiquity, the diagnosis of disease has been made partly through observation of the swelling of tissues. Swelling is often linked to fracture. This lecture will highlight the present understanding of the phenomenon and numerical simulations performed on the finite swelling of ionised porous media, as well as the interface conditions along fluid boundaries. The inclusion of fracture propagation is achieved through a new XFEM technique particularly suitable for hydraulic fracturing problems. Results from experiments and computational mechanics will be presented and compared.&nbsp;</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016ai</guid>
	<pubDate>Thu, 30 Jun 2016 13:09:10 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016ai</link>
	<title><![CDATA[Particle Methods in Coupled Problems]]></title>
	<description><![CDATA[<p>One of the main drawbacks of all time integration algorithms using an Eulerian formulation in coupled problems is the restricted time step that must be used to obtain acceptable results.</p><p>For fluid-structure interaction (FSI) with or without free surfaces, or for fluids with moving internal interfaces (multi-fluids), it is well known that explicit integration is stable only for time steps satisfying two critical conditions: on the Courant-Friedrichs-Lewy (CFL) number and on the Fourier number. The first concerns the convective terms and the second the diffusive ones. Both numbers must be less than one for a stable algorithm. For convection-dominated problems the condition CFL&lt;1 becomes crucial and limits the use of explicit methods, or prevents them from being efficient. On the other hand, implicit integration using Eulerian formulations is restricted in the time-step size due to the lack of convergence of the non-linear terms. Both time integrations, explicit and implicit, are in most cases limited to CFL numbers not much larger than one.</p><p>In this lecture we will present a Particle Method to solve coupled problems such as FSI or multi-fluid problems that uses, in the whole domain (solid and fluid), a Lagrangian formulation with explicit or implicit time integration, without the CFL&lt;1 restriction. This allows large time steps, independent of the spatial discretization, with equal or better precision than an Eulerian integration.</p><p>The proposal will be tested numerically for FSI and multi-fluid flow problems using the second-generation Particle Finite Element Method (PFEM-2). The results show that this Particle Method is considerably more efficient, both in accuracy and in computing time, than more standard Eulerian formulations.</p>]]></description>
	<dc:creator>Coupled Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016z</guid>
	<pubDate>Wed, 29 Jun 2016 15:32:41 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016z</link>
	<title><![CDATA[Going down to the microscale: new perspectives]]></title>
	<description><![CDATA[<p>Roughly speaking, granular media exhibit three basic scales: the specimen scale, the contact scale, and an intermediate scale made up of a set of adjoining particles. In this lecture, we will discuss this latter scale, in a two-dimensional context. More specifically, the granular assembly can be regarded as a two-phase medium. Column-like grain patterns (force chains) develop within the medium, contributing actively to its mechanical strength. These columns are surrounded by grain loops, made up of 3, 4, 5, or 6 grains (larger grain loops are much less frequent). Depending on the number of constituting grains, the mechanical properties of grain loops are very different. In particular, 6-grain loops are prone to deform, contributing locally to a change in the void ratio. On the contrary, 3-grain loops deform only a little, but resist deviatoric stress quite well. Depending on the initial porosity of the assembly and on the loading path considered, the nature of the grain loops surrounding force chains is versatile, with a continuous transition mainly from 3-grain loops to 6-grain loops (or vice versa). This is a new route for investigating, from a microstructural point of view, why a granular assembly may be destabilized, leading to a localized or diffuse failure pattern. In addition, these ingredients are shown to give rise to a microstructural interpretation of the so-called critical state, according to the failure mode taking place.</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016y</guid>
	<pubDate>Wed, 29 Jun 2016 13:29:50 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016y</link>
	<title><![CDATA[Effect of Particle Shape in Simulation of Particle Flows by Distinct Element Method]]></title>
	<description><![CDATA[<p>Numerical simulations of granular flows based on the Distinct Element Method (DEM) commonly use spherical particles for ease of contact mechanics calculations and for having fast contact-searching algorithms. However, for irregularly shaped particles, rotation is not only affected by friction but also by mechanical interlocking (Cleary, 2010). Only tangential forces lead to the rotation of spherical particles, whereas for irregularly shaped particles, rotation can be a result of both normal and tangential contact forces (Favier et al., 2001). Mechanical interlocking of irregularly shaped particles can be simulated in DEM by (i) limiting the rolling friction of spherical particles (Morgan, 2004), (ii) using overlapping spheres (Favier et al., 2001), or (iii) using polyhedra (Potapov and Campbell, 1997). The first two methods have been critically evaluated for the flow of corn seeds and spray-dried powders. A comparison is made of the estimated solid fraction and the tangential and radial velocity distributions of the particles from DEM and those measured experimentally. The shapes of the corn seeds and spray-dried powders have been captured using X-ray micro-tomography, and the ASG2013 software has been used to generate the coordinates of the overlapping spheres. It is shown that the approximation of particle shape is only critical for dense shearing flows. The use of polyhedra enables particle fracture to be simulated more realistically, but necessitates the implementation of fracture mechanics to be predictive. In this paper the results of our evaluations are reported.</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016x</guid>
	<pubDate>Wed, 29 Jun 2016 13:12:18 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016x</link>
	<title><![CDATA[A new reproducing kernel formulation with embedded kernel stability for modeling extreme events]]></title>
	<description><![CDATA[<p>The Reproducing Kernel Particle Method (RKPM) has been applied to many large-deformation problems. RKPM relies on polynomial reproducing conditions to yield the desired accuracy and convergence properties, but requires appropriate kernel support coverage of neighboring particles to ensure kernel stability. This kernel stability condition is difficult to achieve for problems with large particle motion, such as the fragment-impact processes that commonly occur in extreme events. A new reproducing kernel formulation with &ldquo;quasi-linear&rdquo; reproducing conditions is introduced. In this approach, the first-order polynomial reproducing conditions are approximately enforced to yield a nonsingular moment matrix. With proper error control of the first-order completeness, a nearly second-order convergence rate in the L2 norm can be achieved while maintaining kernel stability. The effectiveness of this new quasi-linear RKPM formulation is demonstrated by modelling fragment-impact and penetration extreme events.</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016w</guid>
	<pubDate>Wed, 29 Jun 2016 13:10:08 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016w</link>
	<title><![CDATA[Coarse-Grained Atomistics Via Meshless and Mesh-Based Quasicontinuum Techniques]]></title>
	<description><![CDATA[<p>Spatial coarse-graining techniques are powerful methods to overcome the computational limits of molecular dynamics. In order to extend atomistic simulations of crystalline materials to the micron scale and beyond, the quasicontinuum (QC) approximation reduces large crystalline atomistic ensembles to a significantly smaller number of representative atoms, with suitable interpolation schemes to infer the motion of all particles. In contrast to most other concurrent multiscale techniques, this allows for the simulation of large systems solely based on interatomic potentials and thus without the need for (oftentimes phenomenological) continuum constitutive models. This promises superior accuracy for predictive simulations at the meso- and macroscales.</p><p>Here, we will discuss one such coarse-graining scheme, viz. a fully nonlocal energy-based QC technique&nbsp;which excels through minimal approximation errors and vanishing force artefacts (a common problem in concurrent scale-coupling methods). Our model is equipped with automatic adaptation techniques to effectively tie atomistic resolution to regions of interest while efficiently coarse-graining the remaining solid. We review both mesh-based and meshless formulations. The former adopts methods from finite elements (using an affine interpolation on a Delaunay triangulation), whereas the latter is based on local maximum-entropy interpolation schemes. In both cases, the result is a computational toolbox for coarse-grained atomistic simulations, whose computational challenges are quite similar to those of molecular dynamics. Finite-temperature extensions as well as coarse-graining in time can be incorporated in the presented framework.</p><p>We will review the underlying theory and give an overview of the state of the art, followed by a suite of numerical examples demonstrating the benefits and limitations of the nonlocal energy-based QC method. Examples range from nanoindentation and material failure to defect interactions and nanoscale mechanical size effects.&nbsp;</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016v</guid>
	<pubDate>Wed, 29 Jun 2016 13:07:38 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016v</link>
	<title><![CDATA[Granular materials by design: new approaches for mapping desired properties of the aggregate to properties of individual particles]]></title>
	<description><![CDATA[<p>When we think of materials &ldquo;by design&rdquo;, we are envisioning a process that gets us from a design target, namely certain desired overall materials properties, to requirements for the constituent components. This is challenging because it requires us to invert the typical modeling approach in physics and material science, which starts from microscale components in order to predict macroscale behavior. How can one tackle this inverse problem for granular materials that are inherently disordered and far from equilibrium, and for which the target is not a thermodynamically favored &lsquo;ground state&rsquo;? I will discuss how concepts from artificial evolution make it possible to find with high efficiency particle-scale parameters best adapted to given target properties. In particular, I will show how one can find particle shapes that are optimized for specific desired outcomes, such as low aggregate porosity or high stiffness under compression. This approach uses large numbers of parallel molecular dynamics simulations together with optimization techniques based on artificial evolution. Optimized shapes are then validated by physical measurements that test large aggregates of 3D-printed versions of the particles. This approach has general applicability and opens up new opportunities for granular materials design as well as discovery.</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016u</guid>
	<pubDate>Wed, 29 Jun 2016 13:05:04 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016u</link>
	<title><![CDATA[The Particle Finite Element Method-Second Generation: an overview]]></title>
	<description><![CDATA[<p>The main idea of the Particle Finite Element Method, in both versions (with a moving mesh or with a fixed mesh), is to have a set of particles that move in a Lagrangian frame, convecting all the physical and mathematical variables (for instance, the density, the viscosity or the conductivity, but also the velocity, the pressure and/or the temperature). These physical and mathematical values are projected at the end of each time step onto a moving mesh or onto a fixed mesh. The second possibility has been named the PFEM-Second Generation, or simply PFEM-2.</p><p>One of the main drawbacks of time integration using Eulerian formulations is the restricted time-step size that must be used due to the lack of accuracy of the convective terms. Both time integrations, explicit and implicit, are in most cases limited to small CFL numbers. When the problem to be solved includes free surfaces or moving internal interfaces, as in multi-fluid or fluid-structure interaction problems, this time-step limitation is even more severe.</p><p>The objective of this presentation is to give an overview of recent examples solved using PFEM-2 and to demonstrate why this method, based on particles that move in a Lagrangian frame and project the results onto a fixed mesh, is faster than a classical Eulerian Finite Element Method. The authors claim that, nowadays, the best way to improve the efficiency of the majority of CFD problems is the use of a particle-based method like PFEM-2.</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016t</guid>
	<pubDate>Wed, 29 Jun 2016 13:03:10 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016t</link>
	<title><![CDATA[Particles in turbulent flow]]></title>
	<description><![CDATA[<p>In many situations ranging from geophysics to chemical engineering, turbulent drag moves particle clouds. I will present and compare various numerical approaches. On the one hand, the mean velocity profile above ground is systematically constructed by subtracting momentum loss; on the other hand, the intrinsic spatio-temporal fluctuations are imposed from empirical distributions on point-like fluid particles. Various applications are explored. One is saltation, i.e. the aeolian transport of sand, where we discover that the onset of particle flux exhibits a first-order transition with hysteresis. The inclusion of mid-air grain collisions is found to increase the flux considerably, due to the formation of a floating &ldquo;soft bed&rdquo; that screens energy-rich grains (saltons) from hitting the ground. Solving the fluid motion with the Lattice Boltzmann Method, the effect of particle-particle collisions on preferential concentration is also investigated. Another application is powder mixing in a channel due to turbulent fluctuations. Following A.M. Reynolds (2003), a stochastic differential equation is solved for the motion of fluid particles that are attached to real particles. The dependence of the observed diffusive behaviour on the Reynolds and Stokes numbers is monitored. Finally, spatial correlations in the velocity field are also imposed via a Heisenberg-type Hamiltonian.&nbsp;</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016s</guid>
	<pubDate>Wed, 29 Jun 2016 13:00:23 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016s</link>
	<title><![CDATA[The amazing simulation powers of particles and what we can/not do with them]]></title>
	<description><![CDATA[<p>Particles are used to simulate phenomena spanning twenty orders of magnitude, from the folding of proteins to the formation of our universe. I distinguish between particle methods for the discretisation of continuum conservation laws and particle models of complex systems. In this talk, I will emphasize the need for controlling the accuracy of continuum particle methods and demonstrate how particle remeshing allows for a seamless integration of grids and particles. I will also discuss the need for data-driven uncertainty quantification of particle models. I will provide examples of the capabilities and challenges of particle methods through flow simulations on massively parallel computing architectures, ranging from fish swimming and cavitation to cell sorting in microfluidic channels.</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016r</guid>
	<pubDate>Wed, 29 Jun 2016 12:57:55 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016r</link>
	<title><![CDATA[Real-time micro-modelling of a million pedestrians]]></title>
	<description><![CDATA[<p>A first-principles model for the simulation of pedestrian flows and crowd dynamics, capable of computing the movement of millions of pedestrians in real time, has been developed. The pedestrians are treated as &#39;intelligent&#39; particles subjected to a series of forces, such as: will forces (the desire to reach a place at a certain time), pedestrian collision avoidance forces, obstacle/wall avoidance forces, pedestrian contact forces, and obstacle/wall contact forces. In order to allow for general geometries, a so-called background triangulation is used to carry all geographic information. At any given time the location of any given pedestrian is updated on this mesh. The code has been ported to shared and distributed memory parallel machines. The results obtained show that the stated aim of computing the movement of millions of pedestrians in real time has been achieved. This is an important milestone, as it enables faster-than-real-time simulations of large crowds (stadiums, airports, train and bus stations, concerts) as well as evacuation simulations for whole cities. This may enable the use of validated, micro-model-based pedestrian simulation for design, operation and training within the context of large crowds.</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016q</guid>
	<pubDate>Wed, 29 Jun 2016 12:40:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016q</link>
	<title><![CDATA[Contributions and limitations of the Non-Smooth Contact Dynamics method for the simulation of dense granular systems]]></title>
	<description><![CDATA[<p>The numerical simulation of complex dynamical systems is an important way of studying phenomena that are difficult to investigate experimentally. We could then speak of numerical granular media as a specific scientific field, much as one spoke of numerical fluids twenty years ago. Numerical investigation progresses so quickly with respect to experiments that the comparison between simulations and experiments is often rather coarse. Moreover, the numerical tools may be used beyond their limits of validity. We propose to analyse the contributions, but also the limits, of the Non-Smooth Contact Dynamics (NSCD) method developed by J.J. Moreau, applied to granular systems, starting from experience and from the numerous remarks given by Moreau himself in his papers.</p><p>The NSCD method has been developed for dealing with large collections of packed bodies and hence for simulating the behaviour of granular materials. The Nonlinear Gauss-Seidel (NLGS) algorithm is the generic solver applied to the NSCD formulation. This combination allows the simulation of the behaviour of a collection of (especially rigid) bodies involving different and mixed regimes: static, slow dynamics (solid), and fast dynamics (fluid). Some examples illustrate the ability of Moreau&rsquo;s approach to deal with a wide range of granular problems.</p><p>To illustrate the limits of the NSCD approach, we focus our attention on dense granular systems that are strongly confined. In order to respect the &ldquo;elegant rusticity&rdquo; of Moreau&rsquo;s approach, we restrict the analysis to a collection of rigid bodies without considering global or local deformations of the grains. Some simple examples highlight the issue of inconsistencies, i.e. configurations for which no solution exists, as well as indeterminacies, i.e. configurations that lead to non-uniqueness of solutions. We thus recover the Painlev&eacute; paradox underlined at the beginning of the twentieth century. The non-existence of solutions is the most important challenge we have to face. We can first identify the situations leading to this non-existence, among them granular systems subjected to moving walls. If such a case cannot be avoided, another response consists in changing the Coulomb friction law.</p><p>The NSCD approach is well adapted to the inelastic shocks that predominate in granular media. However, J.J. Moreau introduced the concept of formal velocity to account for elastic restitution. This concept is richer than a restitution coefficient (of Newton or Poisson type) involving a binary shock; it permits dealing with multicontact situations without introducing either deformable grains or elastic-plastic contact laws. However, it does not allow shock propagation to be reproduced as it occurs, for instance, in the famous Newton&rsquo;s cradle. Is it then possible to propose an algorithmic solution in the NSCD framework?</p>]]></description>
	<dc:creator>Particles Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016p</guid>
	<pubDate>Wed, 29 Jun 2016 11:58:11 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016p</link>
	<title><![CDATA[Computational Crystal Plasticity for the Design of Materials and Processes]]></title>
	<description><![CDATA[<p>The macroscale mechanical behaviour of crystalline materials, such as polycrystalline metals and single crystal semiconductors, is dictated by the anisotropic behaviour of individual crystals/grains and their interactions with neighboring crystals or other materials. Furthermore, the elastic-plastic response of individual crystals is associated with the underlying atomic lattice structure and the phenomena of dislocation glide on the slip systems and dislocation multiplication and interactions. As a result, microstructural characteristics such as grain size, shape, and orientation have a significant effect on the macroscale mechanical properties and performance. Moreover, these microstructural features are strongly affected by the thermal-mechanical process used to create a part. Because of this, tremendous effort has been made to develop crystal plasticity models that explicitly model the crystal (grain) scale behaviour to predict the local macroscale response.</p><p>In this talk, a framework for computational modelling of discretized single or polycrystal grain structures subjected to thermal-mechanical loading conditions is presented. The model is general for finite deformations, with the crystal plasticity model based on dislocation motion and interactions. A parallel finite element implementation is briefly described. Then, applications including predicting microstructure evolution during large deformation processing, fatigue crack initiation, and defect formation during single crystal AlN crystal growth will be presented.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016o</guid>
	<pubDate>Wed, 29 Jun 2016 11:51:16 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016o</link>
	<title><![CDATA[Linking Mesoscale Plasticity to Atomistics]]></title>
	<description><![CDATA[<p>Dislocation interactions and failure mechanisms at mesoscopic length scales of metallic materials are usually out of reach of atomistic simulations, thus requiring effective continuum models to describe their collective behavior and the resulting constitutive response. Coarse-graining the crystalline atomic ensemble, e.g. by means of the quasicontinuum (QC) approximation combined with techniques to accelerate atomistic simulations, provides an avenue to locally retain atomistic accuracy while being applicable to larger scales. One such method, the fully-nonlocal energy-based QC technique, allows us to simulate the response of crystalline solids solely based on interatomic potentials but at significantly larger length scales than conventional molecular dynamics (MD). Here, we will apply this approach to study defect mechanisms in representative copper and aluminum single- and polycrystals. Among others, we will demonstrate the importance of coarse-grained atomistic simulations to avoid modeling artifacts inherited from nanoscale MD simulations.</p><p>Void nucleation, growth and coalescence are important mechanisms responsible for spall and ductile failure. By simulating individual nano-voids and collections of voids under hydrostatic and multiaxial loading, we investigate (i) the nucleation of defects and the associated failure mechanisms at sufficiently large loads, and (ii) the importance of coarse-grained atomistic techniques to avoid modeling artifacts and size effects in small representative volume elements treated by conventional atomistic methods.</p><p>Grain boundaries (GBs) play a central role in polycrystal plasticity through their interactions with lattice defects as well as through GB relaxation mechanisms. We will use the aforementioned coarse-grained atomistic technique to study the behavior of GBs in three-dimensional crystals, with a particular focus on GB strength and the interaction with dislocations. As in the case of void expansion, the QC simulations enable us to consider sample sizes outside the realm of conventional atomistic techniques.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016n</guid>
	<pubDate>Fri, 10 Jun 2016 12:04:28 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016n</link>
	<title><![CDATA[The numerical solution of large scale dynamic soil-structure interaction problems]]></title>
	<description><![CDATA[<p>In civil engineering, and more particularly in structural mechanics, computational tools are used to understand and predict the behaviour of complete structures (bridges, buildings, &hellip;) or their individual components (cables, floors, &hellip;) in several limit states. A major complexity lies in the fact that many civil engineering structures, if not all, are in direct contact with the surrounding soil domain. The dynamic interaction between the structure and its environment often plays a crucial role and should be accounted for in numerical models. An efficient solution of dynamic soil&ndash;structure interaction (SSI) problems is indispensable, for example, for the assessment of damage to structures (buildings, nuclear power plants, bridges, tunnels) caused by earthquakes, the evaluation of annoyance in the built environment due to vibrations originating from road and railway traffic, or the design of offshore structures (wind turbines, oil and gas platforms) subjected to wind and wave loadings. These problems are of large societal and economic importance but are challenging from a computational point of view. Despite the advance of high performance computers, the numerical solution of large scale dynamic SSI problems remains very challenging and in many cases beyond current computer capabilities.</p><p>This talk gives an overview of computational techniques that have been developed within the framework of the first author&rsquo;s doctoral research for solving large dynamic SSI problems. A domain decomposition approach is employed, where finite elements for the structure(s) are coupled to boundary elements for the soil, accounting for the soil&rsquo;s stratification. A fast boundary element method is developed, resulting in a significant reduction of the required memory and CPU time with respect to traditional formulations. This allows for an increase of the problem size by at least one order of magnitude. Furthermore, innovative algorithms for an efficient coupling of finite and boundary elements are presented, considering three&ndash;dimensional as well as two&ndash;and&ndash;a&ndash;half&ndash;dimensional formulations. The computational performance of the proposed procedures is assessed and their suitability is illustrated through numerical examples.</p><p>The novel techniques are subsequently employed for the solution of challenging problems related to the prediction of railway induced ground vibrations. In particular, the efficiency of a stiff wave barrier for impeding the propagation of Rayleigh waves from the railway track to the surrounding buildings is studied in detail, providing fundamental insight into the underlying physical mechanism. The numerical results are validated by means of a full-scale experimental test, confirming the efficacy of the proposed type of barrier.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016m</guid>
	<pubDate>Fri, 10 Jun 2016 11:59:04 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016m</link>
	<title><![CDATA[Computation on dislocation-based crystal plasticity at micron-nano scales]]></title>
	<description><![CDATA[<p>Plastic flow in crystals at micron to nano scales involves many new and interesting issues. Results have been obtained from uniaxial compression experiments conducted on FCC single-crystal micro-pillars, e.g. size effects and strain bursts. In these experiments, the surfaces are transmissible and loading gradients are absent, so strain gradient theory cannot fully explain these new mechanical behaviours. This in turn has led to several hypotheses based on intuitive insights, classical theory and dislocation plasticity for studying the size effect at the submicron scale. In the model proposed, mobile dislocations may escape from the free surface, leading to a dislocation-starved state whereby an increase in the applied stress is necessary to nucleate or activate new dislocation sources. In-situ TEM observations show how dislocation motion affects the material properties. However, the atypical plastic behaviour at submicron scales cannot be effectively investigated by either traditional crystal plasticity theory or large-scale molecular dynamics simulation.</p><p>Accordingly, a discrete-continuous crystal plasticity model (DCM) is developed, coupling discrete dislocation dynamics (DDD) with the finite element method (FEM). Three kinds of plastic deformation mechanisms for single-crystal pillars at the submicron scale are investigated. (1) Single-arm dislocation source (SAS) controlled plastic flow. It is found that strain hardening is virtually absent due to the continuous operation of stable SAS and weak dislocation interactions. When the dislocation density finally reaches a stable value, the ratio between the stable SAS length and the pillar diameter remains constant. A theoretical model is developed to predict the DDD simulation results and experimental data. (2) Confined plasticity in coated micropillars. Based on the simulation results and the stochastic distribution of SAS, a theoretical model is established to predict the upper and lower bounds of the stress-strain curve in coated micropillars. (3) Dislocation starvation under low-amplitude cyclic loading. It is argued that dislocation junctions can be gradually destroyed during cyclic deformation, even when the cyclic peak stress is much lower than that required to break them under monotonic deformation. The cumulative irreversible slip is found to be the key factor leading to junction destruction and promoting dislocation starvation under low-amplitude cyclic loadings. Based on this mechanism, a proposed theoretical model successfully reproduces the dislocation annihilation behaviour observed experimentally for small pillars and the dislocation accumulation behaviour for large pillars. The predicted critical conditions of dislocation starvation agree well with the experimental data.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016l</guid>
	<pubDate>Fri, 10 Jun 2016 11:55:58 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016l</link>
	<title><![CDATA[Computational modeling of ductile fracture processes]]></title>
	<description><![CDATA[<p>Two fundamental questions in the mechanics and physics of fracture are: (i) What is the relation between observable features of a material&rsquo;s microstructure and its resistance to crack growth? (ii) What is the relation between observable features of a material&rsquo;s microstructure and the roughness of the fracture surface? An obvious corollary question is: What is the relation, if any, between a material&rsquo;s crack growth resistance and the roughness of the corresponding fracture surface? 3D finite element calculations of mode I ductile crack growth aimed at addressing these questions will be discussed. In the calculations, ductile fracture of structural metals by void nucleation, growth and coalescence is modeled using an elastic-viscoplastic constitutive relation for a progressively cavitating plastic solid. A material length scale is introduced via a discretely modeled microstructural feature, such as the spacing of inclusions that nucleate voids or the mean grain size. A particular focus will be on the use of such analyses to suggest the design of material microstructures for improved fracture resistance.&nbsp;</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016k</guid>
	<pubDate>Fri, 10 Jun 2016 11:52:12 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016k</link>
	<title><![CDATA[Recent advances in non-intrusive coupling strategies]]></title>
	<description><![CDATA[<p>In the last decade, many innovative modeling and solution techniques have been introduced in the field of computational mechanics. These techniques, such as enriched finite elements or multiscale modeling, enable complex simulations that are out of reach of conventional finite element analysis (FEA) tools in terms of computational or human cost. Although these techniques have proved their performance through extensive testing on academic applications, they are scarcely applied to actual industrial problems because they cannot be conveniently implemented into commercial FEA software packages. A scientific and practical challenge is therefore to allow realistic simulation of complex industrial problems, including all their physical and technological complexity. The prerequisite of the proposed non-intrusive framework is to keep the global numerical model, as well as the solver used for its treatment, unchanged. Two or several models are therefore used concurrently: the untouched global model and local ones which are iteratively substituted where needed. The exchanges between the models are such that the data should be &quot;natural&quot; ones for the global model, such as prescribed forces. Possible applications are numerous, even though the approach has to be adapted depending on the context.</p><p>In this presentation we intend to focus on some recent works and the associated possibilities and difficulties regarding:</p><ul><li>the extension of the method to explicit and implicit-explicit coupling in dynamics</li>
	<li>the coupling between plate and 3D models for bolted and multi-bolted plates</li>
	<li>the treatment of complex non-linear visco-plastic structures&nbsp;</li>
</ul><p>This work is partially funded by the French National Research Agency as part of project ICARE (ANR-12-MONU-0002-04).&nbsp;</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016j</guid>
	<pubDate>Fri, 10 Jun 2016 11:49:24 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016j</link>
	<title><![CDATA[Data-driven computational mechanics]]></title>
	<description><![CDATA[<p>We develop a new computing paradigm, which we refer to as data-driven computing, according to which calculations are carried out directly from experimental material data and pertinent constraints and conservation laws, such as compatibility and equilibrium, thus bypassing the empirical material modeling step of conventional computing altogether. Data-driven solvers seek to assign to each material point the state from a prespecified data set that is closest to satisfying the conservation laws. Equivalently, data-driven solvers aim to find the state satisfying the conservation laws that is closest to the data set. The resulting data-driven problem thus consists of the minimization of a distance function to the data set in phase space subject to constraints introduced by the conservation laws. We motivate the data-driven paradigm and investigate the performance of data-driven solvers by means of two examples of application, namely, the static equilibrium of nonlinear three-dimensional trusses and linear elasticity. In these tests, the data-driven solvers exhibit good convergence properties both with respect to the number of data points and with regard to local data assignment. The variational structure of the data-driven problem also renders it amenable to analysis. We show that, as the data set approximates increasingly closely a classical material law in phase space, the data-driven solutions converge to the classical solution. We also illustrate the robustness of data-driven solvers with respect to spatial discretization. In particular, we show that the data-driven solutions of finite-element discretizations of linear elasticity converge jointly with respect to mesh size and approximation by the data set.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016i</guid>
	<pubDate>Fri, 10 Jun 2016 11:46:23 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016i</link>
	<title><![CDATA[Numerical material and plate tests: Advantages and challenges]]></title>
	<description><![CDATA[<p>The methods of two-scale analysis based on numerical material testing (NMT) and numerical plate testing (NPT) have indisputable superiority over FE2-type micro-macro coupling schemes, though there are some issues to be resolved or examined. In particular, the decoupling of micro- and macroscopic analyses makes the homogenization-based two-scale analysis methods computationally low-cost and thus practical in view of industrial applications, but at the same time requires us to prepare reliable macroscopic constitutive models. To identify promising research directions for two-scale analyses, we introduce the three selected topics described below to discuss the advantages and challenges of NMT and NPT.</p><p>A major advantage in the first topic is that macroscopic inelastic constitutive models for a variety of composite materials can easily be determined with reference to the material models assumed for periodic microstructures (unit cells), if the small strain assumption is valid. However, NMTs with finite deformation of resins often cause some trouble. That is, even though an isotropic multiplicative finite visco-plastic model is originally developed and introduced for NMTs, the formulation of the corresponding anisotropic model for macroscopic analyses is not always possible.</p><p>The second topic arises from the method of NPT for composite plates, which enables us to evaluate the relationship between macroscopic resultant stresses and generalized strains. The originally formulated microscopic problem features in-plane periodic boundary conditions, which properly reproduce all of the plate&rsquo;s deformation modes. If we confine ourselves to linearly elastic material behavior, even the topology optimization of the microscopic plate&rsquo;s cross-sections is successfully conducted to maximize the performance at the macro-scale. Nonetheless, we may not find a macroscopic plate model that can accommodate the NPT results for the nonlinear material behavior assumed for the in-plane unit cell.</p><p>The third subject of study is related to the method of isogeometric analysis (IGA) for NMT and NPT. Since the treatment of combinations of different materials in IGA models is not trivial, especially along with periodicity constraints, the first priority is to clearly specify the points at issue in the numerical modeling, or equivalently mesh generation, for IG homogenization analysis (IGHA). The most important issue is how to generate patches for the NURBS representation of the geometry of a rectangular parallelepiped unit cell so as to realize appropriate deformations in consideration of the convex-hull property of IGA and the in-plane periodicity. A promising technique to cope with this issue is proposed and numerically demonstrated.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016h</guid>
	<pubDate>Fri, 10 Jun 2016 11:42:45 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016h</link>
	<title><![CDATA[Multiscale analysis applied to material modeling]]></title>
	<description><![CDATA[<p>The presentation aims to cover areas related to the modelling of material behaviour using different numerical schemes. Special emphasis is placed on homogenization procedures and multiscale approaches that include inelastic microstructural deformations and the development of interface cracks. In detail, the inelastic response of polycrystals is investigated, including induced anisotropy and nonlinear hardening. The necessary numerical procedures will be discussed and examples from different areas are introduced.</p><p>Included in this presentation is the design of macroscopic constitutive equations with only a few parameters that are obtained from homogenization of polycrystal assemblies. The results are validated at the micro and macro scales by means of experiments. These include results from microstructural observations as well as from classical pullout tests. Typical and important industrial applications range from ceramic to ductile materials.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016g</guid>
	<pubDate>Wed, 08 Jun 2016 14:59:57 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016g</link>
	<title><![CDATA[Efficient model order reduction in computational thermo-mechanics: Application to material forming processes]]></title>
	<description><![CDATA[<p>Despite the impressive progress attained by simulation capabilities and techniques, some challenging problems remain intractable today. These problems, which are common to many branches of science and engineering, are of different natures. Among them, we can cite those related to high-dimensional models, for which mesh-based approaches fail due to the exponential increase in the number of degrees of freedom. Other challenging scenarios concern problems requiring many direct solutions (optimization, inverse identification, uncertainty quantification, &hellip;) or those needing very fast solutions (real-time simulation, simulation-based control, &hellip;).</p><p>We are developing a novel technique, called Proper Generalized Decomposition (PGD), based on the assumption of a separated form of the unknown fields, which has demonstrated its capabilities in dealing with high-dimensional problems, overcoming the strong limitations of classical approaches. But the main opportunity offered by this technique is that it allows a completely new approach for addressing standard problems, not necessarily high-dimensional ones. Many challenging problems can be efficiently cast into a multidimensional framework, opening new possibilities for solving old and new problems with strategies not envisioned until now. For instance, the parameters of a model can be set as additional extra-coordinates of the model. In the PGD framework, the resulting model is solved once for life, in order to obtain a general solution that includes the solutions for every possible value of the parameters, that is, a sort of &ldquo;Computational Vademecum&rdquo;. Under this rationale, optimization of complex problems, uncertainty quantification, simulation-based control and real-time simulation are now at hand, even in highly complex scenarios, by combining an off-line stage in which the general PGD solution, the &ldquo;vademecum&rdquo;, is computed, and an on-line phase in which, even on deployed handheld platforms such as smartphones or tablets, a real-time response is obtained as a result of our queries.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016f</guid>
	<pubDate>Tue, 07 Jun 2016 17:41:31 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016f</link>
	<title><![CDATA[Characterization of magneto-electric composites: product properties and multiscaling]]></title>
	<description><![CDATA[<p>Coupling between electric and magnetic fields enables smart new devices and may find application in sensor technology and data storage. Materials showing magneto-electric (ME) coupling combine two or more ferroic characteristics and are known as multiferroics. Since single-phase materials show an interaction between polarization and magnetization only at very low temperatures, and at best an ME coefficient that is too small at room temperature, composite materials become important. These ME composites consist of magnetically and electrically active phases and generate the ME coupling as a strain-induced product property. It has to be emphasized that for each of the two phases the ME coupling modulus is zero; the overall ME modulus is generated by the interaction between the phases. Here we distinguish between the direct and the converse ME effect. The direct effect characterizes magnetically induced polarization, where an applied magnetic field yields a deformation of the magneto-active phase which is transferred to the electro-active phase. As a result, a strain-induced polarization in the electric phase is observed. The converse effect, on the other hand, characterizes electrically activated magnetization. Several experiments on composite multiferroics have shown remarkable ME coefficients that are orders of magnitude higher than those of single-phase materials. Due to the significant influence of the microstructure on the ME effect, we derived a two-scale finite element (FE2) homogenization framework, which allows for the consideration of microscopic morphologies. A further major influence on the overall ME properties is the polarization state of the ferroelectric phase. With this in mind, a material model is implemented that considers the switching behavior of the spontaneous polarization and enables a more exact comparison with experimental measurements.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016e</guid>
	<pubDate>Tue, 07 Jun 2016 17:02:28 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016e</link>
	<title><![CDATA[Computational design of engineering materials: an integrated approach for multiscale topological structure optimization]]></title>
	<description><![CDATA[<p>Multiscale topological material design, aiming at obtaining the optimal distribution of the material at several scales in structural materials, is still a challenge. In this case, the cost function to be minimized is placed at the macro-scale (compliance function), but the design variables (material distribution) lie at both the macro-scale and the micro-scale. The large number of design variables involved and the multi-scale character of the analysis, resulting in a multiplicative cost of the optimization process, often make such approaches prohibitive, even in 2D cases.</p><p>In this work, an integrated approach for the multi-scale topological design of structural linear materials is proposed. The approach features the following properties:</p><ul><li>The &ldquo;topological derivative&rdquo; is considered the basic mathematical tool for determining the sensitivity of the cost function to material removal. In conjunction with a level-set-based &ldquo;algorithm&rdquo;, it provides a robust and well-founded setting for material distribution optimization.</li>
	<li>The computational cost associated with the multiscale optimization problem is dramatically reduced by resorting to the concept of online/offline decomposition of the computations. A &ldquo;Computational Vademecum&rdquo; containing the micro-scale solution of the topological optimization problem in an RVE for a large number of discrete macroscopic stress states is used for solving that problem by simple consultation.</li>
	<li>Coupling of the optimization problem at both scales is solved by a simple iterative &ldquo;fixed-point&rdquo; scheme, which is found to be robust and convergent.</li>
	<li>The proposed technique is enriched by the concept of &ldquo;manufacturability&rdquo;, i.e., obtaining sub-optimal solutions of the original problems displaying homogeneous material over finite-size domains at the macrostructure: the &ldquo;structural components&rdquo;.</li>
</ul><p>The approach is tested by application to some engineering examples, involving minimum compliance design of material and structure topologies, which show the capabilities of the proposed framework.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016d</guid>
	<pubDate>Tue, 07 Jun 2016 16:58:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016d</link>
	<title><![CDATA[Computational inelasticity at different scales - FE technology and beyond]]></title>
	<description><![CDATA[<p>The necessity to provide physically reasonable and mathematically sound descriptions of mechanical behaviour at different scales is undisputed. Nevertheless, for engineering design, quick estimations of important quantities such as stresses and strains are needed. Even this is not enough: at a larger scale, information about the overall behaviour of complex systems has to be supplied. For this reason, we need to develop computational methods which on the one hand enable a detailed material description and on the other hand allow bridging to coarser scales without losing too much information. In the present contribution, methods such as the phase field method are combined with FE technology, and FE technology is combined with model reduction, in order to reach this goal.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016c</guid>
	<pubDate>Tue, 07 Jun 2016 16:49:19 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016c</link>
	<title><![CDATA[Isogeometric Phase-field Modeling of Brittle and Ductile Fracture]]></title>
	<description><![CDATA[<p>The phase-field approach to predicting crack initiation and propagation relies on a damage accumulation function to describe the phase, or state, of fracturing material. The material is in some phase between the two extremes of completely undamaged and completely cracked. A continuous transition between these two extremes allows cracks to be modeled without explicit tracking of discontinuities in the geometry or displacement fields. A significant feature of these models is that the behavior of the crack is completely determined by a coupled system of partial differential equations: no additional calculations are needed to determine crack nucleation, bifurcation, and merging.<br />
In this presentation, we will review our current work on applying second-order and fourth-order phase-field models to quasi-static and dynamic fracture of brittle and ductile materials, within the framework of isogeometric analysis. We will present results for several two- and three-dimensional problems to demonstrate the ability of the phase-field models to capture complex crack propagation patterns.&nbsp;</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016b</guid>
	<pubDate>Tue, 07 Jun 2016 13:07:29 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016b</link>
	<title><![CDATA[Plasticity for crushable granular materials via DEM]]></title>
	<description><![CDATA[<p>The mechanical behavior of granular materials is characterized by strong non-linearity and irreversibility. These properties have been described by a variety of constitutive models, a large proportion of which are developed within an elasto-plastic framework. On top of the usual grain rearrangement mechanism, the presence of crushable grains adds one extra source of irreversibility to granular materials, a source that is frequently associated with instabilities. In this context, it is very instructive to obtain incremental responses of crushable granular materials, but the experimental difficulties are formidable. This contribution describes a procedure to obtain incremental responses of this type of material using the discrete element method.</p><p>The DEM model is calibrated to represent Fontainebleau sand. The resulting granular assembly is incrementally tested starting from an initial oedometric (no lateral deformation) condition. The incremental behavior of the numerical models is studied by performing axisymmetric stress probes of equal magnitude but varying direction. Recent advances to enhance the efficiency of the numerical procedure are adopted. The cascading nature of crushing events complicates stress probe control, but damping is effectively used to overcome this problem.</p><p>The contribution of grain crushing to the incremental irreversible strain is identified and separately measured. Three components of the incremental strains are distinguished: elastic, plastic-unbreakable and plastic-crushing. Particular focus is placed on the effects of crushing on the direction of plastic flow.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Contents_2016a</guid>
	<pubDate>Mon, 06 Jun 2016 16:56:21 +0200</pubDate>
	<link>https://www.scipedia.com/public/Contents_2016a</link>
	<title><![CDATA[Linking Process, Structure, and Property in Additive Manufacturing Applications through advanced materials modeling.]]></title>
	<description><![CDATA[<p>Additive manufacturing (AM) processes have the ability to build complex geometries from a wide variety of materials. A popular approach for metal-based AM processes involves the deposition of material particles on a substrate followed by fusion of those particles together using a high intensity heat source, e.g. a laser or an electron beam, in order to fabricate a solid part. These methods are of high priority in engineering research, especially in applications for the energy, health, and defense sectors. The primary reasons behind the rapid growth in interest for AM include: (1) the ability to create complex geometries which are otherwise cost-prohibitive or difficult to manufacture, (2) increased freedom of material composition design through the adjustment of the ratios of the composing powders, (3) a reduction in wasted materials, and (4) the fast, low-volume production of prototype and functional parts without the additional tooling and die requirements of conventional manufacturing methods. However, the highly localized and intense nature of these processes elicits many experimental and computational challenges. These challenges motivate a strong need for computational investigation, as does the need to more accurately characterize the response of parts built using AM. The present work will discuss these challenges and methods for creating multiscale material models that account for the complex phenomena observed in the AM production environment. The linkage between process, structure, and property of AM components, e.g., anisotropic plastic behavior combined with anisotropic microstructural descriptors afforded through enhanced data compression techniques, will also be discussed.</p>]]></description>
	<dc:creator>Complas Contents</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cotela_2016</guid>
	<pubDate>Wed, 25 May 2016 18:21:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Cotela_2016</link>
	<title><![CDATA[Applications of turbulence modeling in civil engineering]]></title>
	<description><![CDATA[<p>This work explores the use of stabilized finite element formulations for the incompressible Navier-Stokes equations to simulate turbulent flow problems. Turbulence is a challenging problem due to its complex and dynamic nature, and its simulation is further complicated by the fact that it involves fluid motions at vastly different length and time scales, requiring fine meshes and long simulation times. A solution to this issue is turbulence modeling, in which only the large-scale part of the solution is retained and the effect of smaller turbulent motions is represented by a model, which is generally dissipative in nature.</p><p>In the context of finite element simulations for fluids, a second problem is the appearance of numerical instabilities. These can be avoided by the use of stabilized formulations, in which the problem is modified to ensure that it has a stable solution. Since stabilization methods typically introduce numerical dissipation, the relation between <i>numerical</i> and <i>physical</i> dissipation plays a crucial role in the accuracy of turbulent flow simulations.
We investigate this issue by studying the behavior of stabilized finite element formulations based on the Variational Multiscale framework and on Finite Calculus, analyzing the results they provide for well-known turbulent problems, with the final goal of obtaining a method that both ensures numerical stability and introduces physically correct turbulent dissipation.</p><p>Given that, even with the use of turbulence models, turbulent flow problems require significant computational resources, we also focus on programming and parallel implementation aspects of finite element codes, and in particular on ensuring that our solver can perform efficiently on distributed memory architectures and high-performance computing clusters.</p><p>Finally, we have developed an adaptive mesh refinement technique to improve the quality of unstructured tetrahedral meshes, again with the goal of enabling the simulation of large turbulent flow problems. This technique combines an error estimator based on Variational Multiscale principles with a simple refinement procedure designed to work in a distributed memory context, and we have applied it to the simulation of both turbulent and non-Newtonian flow problems.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ribo_2000</guid>
	<pubDate>Tue, 24 May 2016 19:15:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ribo_2000</link>
	<title><![CDATA[Development of an integrated system for geometry modelling, mesh generation and data management for the analysis by the finite element method]]></title>
	<description><![CDATA[<p>This thesis describes the development and implementation of a software system for handling all the information required for an analysis by the Finite Element Method or by other numerical methods (finite differences, finite volumes, boundary methods, point methods, etc.). Some parts of the thesis deal mainly with the design and organization of a system of this kind; others describe the new algorithms that had to be developed to meet the proposed objectives. The different disciplines covered throughout the thesis can be grouped as follows:</p><ul><li>System organization: defining a uniform treatment of all the data of a generic analysis, together with criteria for the internal ordering of the data.</li><li>Geometric modelling: a series of algorithms developed to handle and modify the geometry of the model.</li><li>Mesh generation: different techniques and algorithms for generating the mesh.</li><li>Adaptability of the system to different analyses: how the adaptation of the system to any analysis code has been designed and is carried out.</li></ul><p>The implementation of the set of criteria and algorithms described throughout this thesis has enabled the creation of a system that supports the process of analysing models by numerical methods, at both academic and industrial level.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Pouplana_2015</guid>
	<pubDate>Tue, 24 May 2016 19:07:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Pouplana_2015</link>
	<title><![CDATA[An isotropic damage model for geomaterials in the KRATOS framework]]></title>
	<description><![CDATA[<p>Progressive fracture in quasi-brittle materials such as concrete, rocks and soils is often treated as strain softening in continuum damage mechanics. Such constitutive relations favour spurious strain localization and ill-posedness of boundary value problems, and call for some kind of regularization. In the present work, two different approaches are presented: a partially regularized local damage model that adjusts the softening part of a stress-strain law depending on the size of the element, and a fully regularized non-local damage model that introduces the characteristic length as an additional material parameter controlling the size of the fracture process zone.</p><p>In addition, the strain softening of such models usually results in highly complex structural responses, including those of the snap-back type; thus, in this work we discuss the non-linearity associated with damage modelling, and a global arc-length method for tracing the equilibrium path is presented.</p><p>Furthermore, in the context of non-local damage models it is crucial to work with fine spatial discretizations in the damage progress zone, so that elements are smaller than the characteristic length. In this regard, a mesh-adaptive technique has been implemented with the purpose of enhancing the efficiency of the numerical analysis.</p><p>Finally, two classical examples, the three-point bending test and the single-edge notched beam test, are performed in order to analyse the mesh objectivity of the implemented integral-type non-local damage model, and to assess the strengths and limitations of the mesh-adaptive procedure.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Vargas_et_al_2015</guid>
	<pubDate>Tue, 24 May 2016 18:31:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Vargas_et_al_2015</link>
	<title><![CDATA[A differential evolution based algorithm for constrained multiobjective structural optimization problems]]></title>
	<description><![CDATA[<p>Structural optimization problems aim at increasing the performance of the structure while decreasing its costs, while still guaranteeing the applicable safety requirements. As these aspects are conflicting, the formulation of the structural optimization problem as multiobjective is natural but uncommon, and has the advantage of presenting a diverse set of solutions to the decision maker(s). The literature shows that Evolutionary Algorithms (EAs) are effective at obtaining solutions to multiobjective optimization problems, and that the Differential Evolution (DE) based ones are efficient when solving mono-objective structural optimization problems, especially those with a real encoding of the design variables. On the other hand, one can note that DE has not yet been applied to the multiobjective version of these problems. This article presents a performance analysis of a DE-based algorithm on five multiobjective structural optimization problems. The obtained results are compared to those found in the literature, and the comparisons indicate the potential of the proposed algorithm.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Urrecha_Romero_2015</guid>
	<pubDate>Tue, 24 May 2016 18:28:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Urrecha_Romero_2015</link>
	<title><![CDATA[A stabilized meshless method for the solution of the Lagrangian equations of Newtonian fluids]]></title>
	<description><![CDATA[<p>In this article we present numerical methods for the approximation of incompressible flows. We have addressed three problems: the stationary Stokes problem, the transient Stokes problem, and the general motion of Newtonian fluids. In the three cases a discretization is employed that does not require a mesh of the domain but uses maximum entropy approximation functions. To guarantee the robustness of the solution a stabilization technique is employed. The most general problem, that of the motion of Newtonian fluids, is formulated in Lagrangian form. The results presented verify that stabilized meshless methods can be a competitive alternative to other approaches currently in use.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Salgado_Galvez_et_al_2015</guid>
	<pubDate>Tue, 24 May 2016 18:26:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/Salgado_Galvez_et_al_2015</link>
	<title><![CDATA[Probabilistic seismic risk assessment of Lorca through scenario simulations]]></title>
	<description><![CDATA[<p>A comprehensive and probabilistic seismic risk assessment has been performed for the buildings of Lorca, the most affected city in the region of Murcia, Spain, after the May 2011 earthquake. Seismic hazard is represented through a set of stochastic scenarios that allows accounting for small, moderate and extreme events in the future losses; the dynamic soil response has also been considered through spectral transfer functions. A database at building-by-building resolution has been used, allowing the disaggregation of risk results into several categories as well as the generation of risk maps to visualize the geographical distribution of the future losses. For each of the identified building classes a vulnerability function has been assigned to determine the expected losses for different acceleration levels. Risk results have been obtained in terms of the loss exceedance curve, from which other probabilistic risk metrics such as the average annual loss and the probable maximum loss can be derived. Risk results are useful for decision-makers in the fields of emergency planning, retrofitting schemes for existing buildings, and financial protection through traditional insurance and reinsurance schemes or alternative risk transfer instruments.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rivera_et_al_2015</guid>
	<pubDate>Tue, 24 May 2016 18:25:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Rivera_et_al_2015</link>
	<title><![CDATA[Hemodynamic study by numerical simulation to complete the diagnosis in carotid stenosis tributary of endarterectomy]]></title>
	<description><![CDATA[<p>Introduction</p><p>In patients with moderate symptomatic or severe asymptomatic carotid stenosis, prior to an endarterectomy it is recommended to carry out a detailed study that includes the complete set of morphological and hemodynamic parameters.</p><p>In this work, the effect that the location and shape of the stenosis have on the distribution of wall shear stresses, and their impact on clinical diagnosis in these cases, is analyzed.</p><p><br />
Materials and methods</p><p>First, the model of the area to be studied is generated and the numerical simulation is carried out. Then the wall shear stress, the oscillation index and the wall shear stress exposure time are established, and the impact of the results on the risk of embolization or the growth of arterial plaque due to intimal hyperplasia is analysed. The methodology is applied to three idealized carotids with different localization and geometrical slope of the stenosis.</p><p><br />
Results</p><p>For the idealized carotids, it is found that stenoses close to the bifurcation present a higher embolization risk than more distant ones, and that the risk increases with the slope of the stenosis. For the clinical case, the results show a high risk of embolization by rupture located near the atheromatous plaque. It is also found that the risk of the arterial plaque continuing to grow is low.</p><p><br />
Conclusions</p><p>The results show that the location of a moderate stenosis relative to the carotid bifurcation and its geometry are factors that help to complete the diagnosis of the lesion.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Moreno_Cervera_2015</guid>
	<pubDate>Tue, 24 May 2016 18:14:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Moreno_Cervera_2015</link>
	<title><![CDATA[Stabilized finite elements for Bingham and Herschel-Bulkley confined flows. Part I : Formulation]]></title>
	<description><![CDATA[<p>This work presents a methodology for the solution of the Navier-Stokes equations for Bingham and Herschel-Bulkley viscoplastic fluids using stabilized mixed velocity/pressure finite elements. The theoretical formulation is developed and implemented in a computer code.</p><p>Viscoplastic fluids are characterized by a minimum shear stress called the yield stress. Above this yield stress, the fluid is able to flow; below it, the fluid behaves as a quasi-rigid body, with zero strain rate.</p><p>First, the Navier-Stokes equations for an incompressible fluid are presented, together with a detailed review of viscoplastic rheological models. The regularized viscoplastic models due to Papanastasiou are described, and double-viscosity regularized models are proposed as an alternative to the models commonly used.</p><p>The discrete model is then developed, and the Algebraic SubGrid Scale (ASGS) stabilization method, the Orthogonal Subgrid Scale (OSS) method and the split orthogonal subscales method are introduced.</p><p>The methodology proposed in this work provides a computational tool to study confined viscoplastic flows, which are common in industry.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Millan_Begambre_2015</guid>
	<pubDate>Tue, 24 May 2016 18:11:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Millan_Begambre_2015</link>
	<title><![CDATA[Solving topology optimization problems using the Modified Simulated Annealing Algorithm]]></title>
	<description><![CDATA[<p>This work proposes the use of the Modified Simulated Annealing Algorithm (MSAA), a stochastic optimization technique, to replace the optimality criterion used in the topology optimization method proposed by Andreassen. To evaluate and validate the MSAA performance, we studied three plane elasticity problems reported in the literature. Each problem was analyzed with three different finite element mesh types in order to compare the results obtained in terms of topology, strain energy value and average runtime. It was established that the procedure involving the MSAA yields lower computational times in problems with more refined meshes. Finally, the material distribution and the energy values obtained were similar to those reported in the work of Andreassen, giving validity to the work presented here.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Llano_Serna_Farias_2015</guid>
	<pubDate>Tue, 24 May 2016 18:08:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Llano_Serna_Farias_2015</link>
	<title><![CDATA[Numerical, theoretical and experimental validation of the material point method to solve geotechnical engineering problems]]></title>
	<description><![CDATA[<p>This paper investigates the possibility of using the material point method (MPM) to solve small-strain quasi-static problems and dynamic problems involving large distortions. Traditional methods such as the finite element method (FEM) face difficulties when large strains are involved; therefore, tools such as the MPM have become more important in recent years. As a new tool, the MPM needs to prove its functionality for geotechnical engineering problems. First, the MPM mathematical formulation is briefly described. Next, numerical simulations of a shallow foundation, an unconfined compression test and a slope problem are performed in an open source MPM code. The results are compared with FEM simulations, analytical solutions and real laboratory tests. The study shows qualitative and quantitative agreement between them; a better performance of the MPM for stresses than for strains is detected. The set of simulations validates the MPM for solving geotechnical engineering problems involving both small and large strains. However, the traditional FEM showed a better performance for quasi-static cases.</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/García-Espinosa_2016a</guid>
	<pubDate>Mon, 23 May 2016 17:02:02 +0200</pubDate>
	<link>https://www.scipedia.com/public/García-Espinosa_2016a</link>
	<title><![CDATA[A FEM fluid-structure interaction algorithm for analysis of the seal dynamics of a Surface-Effect Ship]]></title>
	<description><![CDATA[<p>This paper presents the recent work of the authors in the development of a time-domain FEM model for the evaluation of the seal dynamics of a surface effect ship. The fluid solver developed for this purpose uses a potential flow approach along with a stream-line integration of the free surface. The paper focuses on the free surface-structure algorithm that has been developed to allow the simulation of the complex and highly dynamic behavior of the seals at the interface between the air cushion and the water.</p><p>The developed fluid-structure interaction solver is based, on the one hand, on an implicit iteration algorithm, communicating pressure forces and displacements of the seals at memory level and, on the other hand, on an innovative wetting and drying scheme able to predict the water action on the seals. The stability of the iterative scheme is improved by means of relaxation, and the convergence is accelerated using Aitken&rsquo;s method.</p><p>Several validations against experimental results have been carried out to demonstrate the capabilities of the developed algorithm.</p>]]></description>
	<dc:creator>Julio García-Espinosa</dc:creator>
</item>

</channel>
</rss>