<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[Scipedia: Documents published in 2006]]></title>
	<link>https://www.scipedia.com/sitemaps/year/2006?offset=200</link>
	<atom:link href="https://www.scipedia.com/sitemaps/year/2006?offset=200" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lawphongpanich_et_al_2006a</guid>
	<pubDate>Mon, 25 Jan 2021 11:36:32 +0100</pubDate>
	<link>https://www.scipedia.com/public/Lawphongpanich_et_al_2006a</link>
	<title><![CDATA[Mathematical and Computational Models for Congestion Charging]]></title>
	<description><![CDATA[
<p>Although transportation economists have advocated the tolling of urban streets as a mechanism for controlling congestion and managing travel demand for over 50 years, it is only recently that this idea has become practical. When compared to the alternative of building more roads, congestion pricing - in particular via electronic tolling - is attractive and has been adopted in countries around the world. Recent implementations in London, Singapore, and various cities in Norway, as well as a number of projects in the United States, have been judged successful. This book presents rigorous treatments of issues related to congestion pricing. The chapters describe recent advances in areas such as mathematical and computational models for predicting traffic congestion, determining when, where, and how much to levy tolls, and analyzing the impact of tolls on transportation systems. The analyses and methodologies developed in this book provide:</p>
<ul>
<li>Mechanisms that aid in determining and comparing congestion pricing schemes</li>
<li>Methodologies for evaluating the efficiency of existing and proposed congestion pricing schemes</li>
<li>A means to predict the impact of pricing on urban transportation systems</li>
<li>Information essential to the financial and political success of congestion pricing programs</li>
</ul>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Trujillo_Tovar_2007b</guid>
	<pubDate>Mon, 25 Jan 2021 11:05:40 +0100</pubDate>
	<link>https://www.scipedia.com/public/Trujillo_Tovar_2007b</link>
	<title><![CDATA[The European Port Industry: An Analysis of its Economic Efficiency]]></title>
	<description><![CDATA[
<p>Because of their critical strategic role, ports have traditionally been subject to some form of government control, even if the legal form and the intensity of this control have varied across countries. The member countries of the European Union have been no different from the rest of the world in this respect. A significant difference, however, is the recurrent effort to integrate, in a coordinated way, the port sector into a trans-European transport network (TEN-T) through the adoption of a common legal framework. In this context, if the objective of the reforms is to ensure that port networks, integrated in combined transport networks, become competitors of the road network, the concept of port efficiency becomes central. This paper provides an overview of the evolution of European port legislation and shows how comparative economic measures can be used to highlight the scope for port efficiency improvements, essential to allow short sea shipping to compete with road transport in Europe. To our knowledge, this paper is also the first effort to estimate the technical efficiency of European Port Authorities. The average port efficiency in 2002 was estimated to be around 60%, denoting that ports could have handled 40% more traffic with the same resources. Maritime Economics & Logistics (2007) 9, 148–171. doi:10.1057/palgrave.mel.9100177</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/King_et_al_2007a</guid>
	<pubDate>Mon, 25 Jan 2021 11:03:44 +0100</pubDate>
	<link>https://www.scipedia.com/public/King_et_al_2007a</link>
	<title><![CDATA[The Political Calculus of Congestion Pricing]]></title>
	<description><![CDATA[
<p>The political feasibility of using prices to mitigate congestion depends on who receives the toll revenue. We argue that congestion pricing on freeways will have the greatest chance of political success if the revenue is distributed to cities, and particularly to cities through which the freeways pass. In contrast to a number of previous proposals, we argue that cities are stronger claimants for the revenue than either individual drivers or regional authorities. We draw on theory from behavioral economics and political science to explain our proposal, and illustrate it with data from several metropolitan areas. In Los Angeles, where potential congestion toll revenues are estimated to be almost $5 billion a year, distributing toll revenues to cities with freeways could be politically effective and highly progressive.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Holyoak_Taylor_2006a</guid>
	<pubDate>Mon, 25 Jan 2021 10:54:58 +0100</pubDate>
	<link>https://www.scipedia.com/public/Holyoak_Taylor_2006a</link>
	<title><![CDATA[Modelling trip timing behaviour and the influence of peak spreading]]></title>
	<description><![CDATA[
<p>As the supply of transport infrastructure struggles to keep pace with ever-increasing travel demand, peak-period traffic congestion is a problem faced by many urban areas around the world. As Australia's largest capital city, Sydney is no exception: with a population of over 4 million, it generates approximately 15.5 million trips each weekday, much of this travel occurring during the morning and afternoon peak periods. It is for this reason that planners often focus on peak time periods for network provision and operational management. This can lead to an inefficient allocation of resources, which could be unsustainable for future transport network operations. Peak spreading may be seen as having two broad dimensions. The first may be described as 'passive' peak spreading: a natural increase in the duration of a peak period as travel demand tests the capacity of a facility, so that peak levels of travel activity persist for a longer period. The second dimension is 'active' peak spreading, in which individual travellers deliberately change their travel behaviour to avoid peak periods, or transport policies are enacted to encourage people to travel outside them. The concept of peak spreading thus introduces strategies and management techniques for managing peak traffic demand, as it allows peak-period traffic flow profiles in congested areas to spread. It is therefore important to represent the effects of such strategies in a modelling environment for evaluation. After a critical analysis of current international practice for representing trip timing behaviour in travel demand models, this paper provides a summary of observed trip timing behaviour in Australian capital cities. It also discusses the requirements for a travel demand model capable of representing peak spreading strategies, and offers suggestions for future research directions.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Amin_et_al_2006c</guid>
	<pubDate>Mon, 25 Jan 2021 10:41:56 +0100</pubDate>
	<link>https://www.scipedia.com/public/Amin_et_al_2006c</link>
	<title><![CDATA[Making Outbound Route Selection Robust to Egress Point Failure]]></title>
	<description><![CDATA[
<p>Offline inter-domain outbound Traffic Engineering (TE) can be formulated as an optimization problem whose objective is to determine primary egress points for traffic exiting a domain. However, when egress point failures happen, congestion may occur if secondary egress points are not carefully determined. In this paper, we formulate a bi-level outbound TE problem in order to make outbound route selection robust to egress point failures. We propose a tabu search heuristic to solve the problem and compare the performance to three alternative approaches. Simulation results demonstrate that the tabu search heuristic achieves the best performance in terms of our optimization objectives and also keeps traffic disruption to a minimum.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Leduc_et_al_2006a</guid>
	<pubDate>Mon, 25 Jan 2021 10:33:12 +0100</pubDate>
	<link>https://www.scipedia.com/public/Leduc_et_al_2006a</link>
	<title><![CDATA[How well do traffic engineering objective functions meet TE requirements?]]></title>
	<description><![CDATA[
<p>We compare and evaluate how well known and novel network-wide objective functions for Traffic Engineering (TE) algorithms fulfil TE requirements. To compare the objective functions, we model the TE problem as a linear program and solve it to optimality, thus finding, for each objective function, the best possible target of any heuristic TE algorithm. We show that not all the objective functions are equivalent, and that some are far better than others. Considering the preferences a network operator may have, we show which objective functions are adequate and which are not.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Nylund_et_al_2007a</guid>
	<pubDate>Mon, 25 Jan 2021 10:26:25 +0100</pubDate>
	<link>https://www.scipedia.com/public/Nylund_et_al_2007a</link>
	<title><![CDATA[Fuel consumption and exhaust emissions of urban buses: Performance of the new diesel technology]]></title>
	<description><![CDATA[
<p>The research was carried out by the Finnish Public Transport Association. Altogether seven vehicles were measured: two two-axle Euro 3-class vehicles as references, three new two-axle Euro 4-class vehicles, and two new three-axle vehicles. The measurements were carried out on a chassis dynamometer using three cycles describing actual driving. In addition to fuel consumption, exhaust emissions were also recorded for these vehicles. The differences in fuel consumption and operating expenses were, after all, smaller than first anticipated. In the case of the Euro 3-class reference vehicles, the difference between the two vehicles was as high as 7-10%. For the new two-axle vehicles the difference in fuel consumption when simulating urban driving is only 3-4%. Given the different technical solutions, greater differences had been anticipated. In suburban driving, however, the difference is at most 11%. In the class of two-axle vehicles, the lowest fuel consumption was measured for an SCR vehicle, whereas for the two three-axle vehicles, EGR technology resulted in the lowest fuel consumption. The measurements do not give an unambiguous answer as to whether EGR or SCR technology is preferable with regard to fuel consumption. The comparison is complicated by two factors: on the one hand, the order of superiority depends on the driving cycle; on the other, the actual exhaust emissions do not match expectations. The two EGR vehicles (same make) produced higher NOx emissions than the manufacturer's Euro 3 engine. The most fuel-efficient SCR engine is not truly Euro 4-class with respect to NOx emissions. Only two of the new vehicles, both with SCR technology, produce NOx emissions genuinely matching their class. Both fuel consumption and exhaust emissions were considered in the study. If exhaust emissions were completely disregarded, fleet decisions might be directed towards fuel-efficient vehicles that, after all, do not reach the level of emission performance that could reasonably be expected.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Kuipers_et_al_2006a</guid>
	<pubDate>Thu, 21 Jan 2021 15:17:22 +0100</pubDate>
	<link>https://www.scipedia.com/public/Kuipers_et_al_2006a</link>
	<title><![CDATA[Dynamic Routing in QoS-Aware Traffic Engineered Networks]]></title>
	<description><![CDATA[
<p>The problem of finding multi-constrained paths has been addressed by several QoS routing algorithms. While they generally satisfy the application requirements, they often do not consider the perspective of service providers. Service providers aim at maximizing the throughput and the number of accepted requests. These goals have been addressed by traffic engineering algorithms that consider bandwidth as the sole application requirement. We propose a proper length function for an existing QoS routing algorithm (SAMCRA) that attempts to optimize network utilization while still offering QoS guarantees. This paper presents a comparison between several proposed algorithms via simulation studies. The simulations show that SAMCRA with a proper length function performs similarly to, or even better than, the best of the other algorithms, and has a fast running time.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Crichigno_et_al_2006a</guid>
	<pubDate>Thu, 21 Jan 2021 14:41:34 +0100</pubDate>
	<link>https://www.scipedia.com/public/Crichigno_et_al_2006a</link>
	<title><![CDATA[Multitree-Multiobjective Multicast Routing for Traffic Engineering]]></title>
	<description><![CDATA[
<p>This paper presents a new traffic engineering multitree-multiobjective multicast routing algorithm (M-MMA) that solves for the first time the GMM model for Dynamic Multicast Groups. Multitree traffic engineering uses several trees to transmit a multicast demand from a source to a set of destinations in order to balance traffic load, improving network resource utilization. Experimental results obtained by simulations using eight real network topologies show that this new approach yields trade-off solutions while simultaneously considering five objective functions. As expected, when M-MMA is compared to an equivalent single-tree alternative, it accommodates more traffic demand in a highly saturated network.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Nitsch_et_al_2007a</guid>
	<pubDate>Thu, 21 Jan 2021 13:18:20 +0100</pubDate>
	<link>https://www.scipedia.com/public/Nitsch_et_al_2007a</link>
	<title><![CDATA[7th conference on ultra-wideband, short-pulse electromagnetics]]></title>
	<description><![CDATA[
<p>Ultra-wideband (UWB), short-pulse (SP) electromagnetics are now being used for an increasingly wide variety of applications, including collision avoidance radar, concealed object detection, and communications. Notable progress in UWB and SP technologies has been achieved by investigations of their theoretical bases and improvements in solid-state manufacturing, computers, and digitizers. UWB radar systems are also being used for mine clearing, oil pipeline inspections, archeology, geology, and electronic effects testing. Ultra-wideband Short-Pulse Electromagnetics 7 presents selected papers of deep technical content and high scientific quality from the UWB-SP7 Conference, including wide-ranging contributions on electromagnetic theory, scattering, UWB antennas, UWB systems, ground penetrating radar (GPR), UWB communications, pulsed-power generation, time-domain computational electromagnetics, UWB compatibility, target detection and discrimination, propagation through dispersive media, and wavelet and multi-resolution techniques. This book serves as an essential reference for scientists and engineers working in these application areas.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Eslinger_et_al_2006a</guid>
	<pubDate>Mon, 26 Oct 2020 15:32:31 +0100</pubDate>
	<link>https://www.scipedia.com/public/Eslinger_et_al_2006a</link>
	<title><![CDATA[A Demonstration of the System Assessment Capability (SAC) Rev. 1 Software for the Hanford Remediation Assessment Project]]></title>
	<description><![CDATA[
<p>The System Assessment Capability (SAC) is a suite of interrelated computer codes that provides the capability to conduct large-scale environmental assessments on the Hanford Site. Developed by Pacific Northwest National Laboratory for the Department of Energy, SAC models the fate and transport of radioactive and chemical contaminants, starting with the inventory of those contaminants in waste sites, simulating transport through the environment, and continuing on through impacts to the environment and humans. Separate modules in the SAC address inventory, release from waste forms, water flow and mass transport in the vadose zone, water flow and mass transport in the groundwater, water flow and mass transport in the Columbia River, air transport, and human and ecological impacts. The SAC supports deterministic analyses as well as stochastic analyses using a Monte Carlo approach, enabling SAC users to examine the effect of uncertainties in a number of key parameters. The initial assessment performed with the SAC software identified a number of areas where both the software and the analysis approach could be improved. Since that time the following six major software upgrades have been made: (1) An air pathway model was added to support all-pathway analyses. (2) Models for releases from glass waste forms, buried graphite reactor cores, and buried naval reactor compartments were added. (3) An air-water dual-phase model was added to more accurately track the movement of volatile contaminants in the vadose zone. (4) The ability to run analyses was extended from 1,000 years to 10,000 years or longer after site closure. (5) The vadose zone flow and transport model was upgraded to support two-dimensional or three-dimensional analyses. (6) The ecological model and human risk models were upgraded so the concentrations of contaminants in food products consumed by humans are produced by the ecological model. 
This report documents the functions in the SAC software and provides a number of example applications for Hanford problems. References to theory documents and user guides are provided, as well as links to a number of published data sets that support running analyses of interest to Hanford cleanup efforts.</p>

<p>Document type: Report</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Dittmer_2006a</guid>
	<pubDate>Mon, 26 Oct 2020 12:37:35 +0100</pubDate>
	<link>https://www.scipedia.com/public/Dittmer_2006a</link>
	<title><![CDATA[Remaining Sites Verification Package for the 100-B-24 Spillway Waste Site Reclassification Form 2006-051]]></title>
	<description><![CDATA[
<p>The 100-B-24 Spillway is a spillway that was designed to serve as an emergency discharge point for the 116-B-7 outfall in the event that the 100-B-15 river effluent pipelines were blocked, damaged, or undergoing maintenance. The site meets the remedial action objectives specified in the Remaining Sites ROD. The results of confirmatory sampling show that residual contaminant concentrations do not preclude any future uses and allow for unrestricted use of shallow zone soils. The results also demonstrate that residual contaminant concentrations are protective of groundwater and the Columbia River.</p>

<p>Document type: Report</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Zhang_Keller_2006a</guid>
	<pubDate>Mon, 26 Oct 2020 12:27:36 +0100</pubDate>
	<link>https://www.scipedia.com/public/Zhang_Keller_2006a</link>
	<title><![CDATA[T Tank Farm Interim Cover Test Design Plan]]></title>
	<description><![CDATA[
<p>The Hanford Site has 149 underground single-shell tanks that store hazardous radioactive waste. Many of these tanks and their associated infrastructure (e.g., pipelines, diversion boxes) have leaked. Some of the leaked waste has entered the groundwater. The largest known leak occurred from the T-106 Tank in 1973. Many of the contaminants from that leak still reside within the vadose zone beneath the T Tank Farm. CH2M Hill Hanford Group, Inc. seeks to minimize movement of this residual contaminant plume by placing an interim cover on the surface. Such a cover is expected to prevent infiltrating water from reaching the plume and moving it further. Pacific Northwest National Laboratory has prepared a design plan to monitor and determine the effectiveness of the interim cover. A three-dimensional numerical simulation of water movement beneath a cover was conducted to guide the design of the plan. Soil water content, water pressure, and temperature will be monitored using off-the-shelf equipment that can be installed by the hydraulic hammer technique. In fiscal year 2006, two instrument nests will be installed, one inside and one outside of the proposed cover. In fiscal year 2007, two additional instrument nests, both inside the proposed cover, will be installed. Each instrument nest contains a neutron access tube and a capacitance probe (to measure water content), and four heat-dissipation units (to measure pressure head and temperature). A datalogger and a meteorological station will be installed outside of the fence. Two drain gauges will be installed in locations inside and outside the cover for the purpose of measuring soil water flux.</p>

<p>Document type: Report</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Teitsma_Maupin_2006a</guid>
	<pubDate>Mon, 26 Oct 2020 12:26:05 +0100</pubDate>
	<link>https://www.scipedia.com/public/Teitsma_Maupin_2006a</link>
	<title><![CDATA[Reduced Mandated Inspection by Remote Field Eddy Current Inspection of Unpiggable Pipelines]]></title>
	<description><![CDATA[
<p>The Remote Field Eddy Current (RFEC) technique is ideal for inspecting unpiggable pipelines because all of its components can be made much smaller than the diameter of the pipe to be inspected. For this reason, RFEC was chosen as a technology for unpiggable pipeline inspections by DOE-NETL, with the support of OTD and PRCI, to be integrated with platforms selected by DOE-NETL. As part of the project, the RFEC laboratory facilities were upgraded and data collection was made nearly autonomous. The resulting improved data collection speeds allowed GTI to test more variables to improve the performance of the combined RFEC and platform technologies. Tests were conducted on 6-, 8-, and 12-inch seamless and seam-welded pipes. Testing on the 6-inch pipes included using seven exciter coils, each of different geometry, with an initial focus on preparing the technology for use on an autonomous robotic platform with limited battery capacity. Reductions in power consumption proved successful. Tests with metal components similar to the Explorer II modules were performed to check for interference with the electromagnetic fields. The results of these tests indicated RFEC would be able to produce quality inspections while on the robot. Mechanical constraints imposed by the platform, power requirements, control and communication protocols, and potential busses and connectors were addressed. Much work went into sensor module design, including the mechanics and electronic diagrams and schematics. GTI participated in two Technology Demonstrations for inspection technologies held at Battelle Laboratories. GTI showed excellent detection and sizing abilities for natural corrosion. Following the demonstration, module building commenced but was stopped when funding reductions did not permit continued development for the selected robotic platform. Conference calls were held between GTI and its sponsors to resolve the issue of how to proceed with reduced funding. 
The project was rescoped for 10-16-inch pipes with the intent of looking at lower-cost, easier-to-implement, tethered platform applications. OTD ended its sponsorship.</p>

<p>Document type: Report</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Benoit_et_al_2007a</guid>
	<pubDate>Mon, 26 Oct 2020 11:28:48 +0100</pubDate>
	<link>https://www.scipedia.com/public/Benoit_et_al_2007a</link>
	<title><![CDATA[Multi-criteria scheduling of pipeline workflows]]></title>
	<description><![CDATA[
<p>Mapping workflow applications onto parallel platforms is a challenging problem, even for simple application patterns such as pipeline graphs. Several antagonistic criteria should be optimized, such as throughput and latency (or a combination of the two). In this paper, we study the complexity of the bi-criteria mapping problem for pipeline graphs on communication-homogeneous platforms. In particular, we assess the complexity of the well-known chains-to-chains problem for different-speed processors, which turns out to be NP-hard. We provide several efficient polynomial bi-criteria heuristics, and their relative performance is evaluated through extensive simulations.</p>

<p>Document type: Report</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Sarica_Zhang_2006a</guid>
	<pubDate>Mon, 26 Oct 2020 11:27:26 +0100</pubDate>
	<link>https://www.scipedia.com/public/Sarica_Zhang_2006a</link>
	<title><![CDATA[Development of Next Generation Multiphase Pipe Flow Prediction Tools]]></title>
	<description><![CDATA[
<p>The development of oil and gas fields in deep waters (5000 ft and more) will become more common in the future. It is inevitable that production systems will operate under multiphase flow conditions (simultaneous flow of gas, oil and water, possibly along with sand, hydrates, and waxes). Multiphase flow prediction tools are essential for every phase of hydrocarbon recovery, from design to operation. Recovery from deep waters poses special challenges and requires accurate multiphase flow predictive tools for several applications, including the design and diagnostics of the production systems, separation of phases in horizontal wells, and multiphase separation (topside, seabed or bottom-hole). It is crucial for any multiphase separation technique, whether at topside, seabed or bottom-hole, to know inlet conditions such as flow rates, flow patterns, and volume fractions of gas, oil and water coming into the separation devices. Therefore, the development of a new generation of multiphase flow predictive tools is needed. The overall objective of the proposed study is to develop a unified model for gas-oil-water three-phase flow in wells, flow lines, and pipelines to predict flow characteristics such as flow patterns, phase distributions, and pressure gradient encountered during petroleum production at different flow conditions (pipe diameter and inclination, fluid properties and flow rates). In the current multiphase modeling approach, flow pattern prediction and flow behavior (pressure gradient and phase fraction) prediction are treated separately. Thus, different models based on different physics are employed, causing inaccuracies and discontinuities. Moreover, oil and water are treated as a pseudo single phase, ignoring the distinct characteristics of both oil and water, and often resulting in inaccurate designs that lead to operational problems. 
In this study, a new model is being developed through a theoretical and experimental study employing a revolutionary approach. The basic continuity and momentum equations are established for each phase and used for both flow pattern and flow behavior predictions. The required closure relationships are being developed and will be verified with experimental results. Gas-oil-water experimental studies are currently underway for horizontal pipes. Industry-driven consortia provide a cost-efficient vehicle for developing, transferring, and deploying new technologies into the private sector. The Tulsa University Fluid Flow Projects (TUFFP) is one of the earliest cooperative industry-university research consortia. TUFFP's mission is to conduct basic and applied multiphase flow research addressing the current and future needs of hydrocarbon production and transportation. TUFFP participants and The University of Tulsa are supporting this study through 55% cost sharing.</p>

<p>Document type: Report</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Thien_2006a</guid>
	<pubDate>Mon, 26 Oct 2020 10:30:17 +0100</pubDate>
	<link>https://www.scipedia.com/public/Thien_2006a</link>
	<title><![CDATA[Pipeline structural health monitoring using macro fiber composite active sensors]]></title>
	<description><![CDATA[
<p>The United States economy is heavily dependent upon a vast network of pipeline systems to transport and distribute the nation's energy resources. As this network of pipelines continues to age, monitoring and maintaining its structural integrity remains essential to the nation's energy interests. Numerous pipeline accidents over the past several years have resulted in hundreds of fatalities and billions of dollars in property damages. These accidents show that the current monitoring methods are not sufficient and leave a considerable margin for improvement. To avoid such catastrophes, more thorough methods are needed. As a solution, the research of this thesis proposes a structural health monitoring (SHM) system for pipeline networks. By implementing a SHM system with pipelines, their structural integrity can be continuously monitored, reducing the overall risks and costs associated with current methods. The proposed SHM system relies upon the deployment of macro-fiber composite (MFC) patches for the sensor array. Because MFC patches are flexible and resilient, they can be permanently mounted to the curved surface of a pipeline's main body. From this location, the MFC patches are used to monitor the structural integrity of the entire pipeline. Two damage detection techniques, guided wave and impedance methods, were implemented as part of the proposed SHM system. However, both techniques utilize the same MFC patches. This dual use of the MFC patches enables the proposed SHM system to require only a single sensor array. The presented Lamb wave methods demonstrated the ability to correctly identify and locate the presence of damage in the main body of the pipeline system, including simulated cracks and actual corrosion damage. The presented impedance methods demonstrated the ability to correctly identify and locate the presence of damage in the flanged joints of the pipeline system, including the loosening of bolts on the flanges. 
In addition to damage to the actual pipeline itself, the proposed methods were used to demonstrate the capability of detecting deposits inside of pipelines. Monitoring these deposits can prevent clogging and other hazardous situations. Finally, suggestions are made regarding future research issues which are needed to advance this research. Because the research of this thesis has only demonstrated the feasibility of the techniques for such a SHM system, these issues require attention before any commercial applications can be realized.</p>

<p>Document type: Report</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Pan_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 16:16:52 +0200</pubDate>
	<link>https://www.scipedia.com/public/Pan_et_al_2006a</link>
	<title><![CDATA[Mobile pipelines parallelizing left looking algorithms using navigational programming]]></title>
	<description><![CDATA[
<p>We consider the class of left-looking sequential matrix algorithms: consumer-driven algorithms that are characterized by lazy propagation of data. Left-looking algorithms are difficult to parallelize using the message-passing or distributed shared memory models because they only possess pipeline parallelism. We show that these algorithms can be directly parallelized using mobile pipelines provided by the Navigational Programming methodology. We present performance data demonstrating the effectiveness of our approach.</p>
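<p>As an illustration of the data flow described above (not taken from the paper itself), a left-looking Cholesky factorization in Python shows the consumer-driven pattern: column j lazily pulls in contributions from every earlier column just before it is factored, so columns form a pipeline rather than independent tasks.</p>

```python
def left_looking_cholesky(a):
    """Left-looking Cholesky factorization: returns lower-triangular L
    with L @ L.T == A.

    Column j *consumes* updates from every previously computed column
    k < j just before it is factored -- the lazy, consumer-driven data
    flow that characterizes left-looking algorithms.
    """
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for j in range(n):                       # columns finish left to right
        col = [a[i][j] for i in range(n)]    # start from the original column
        for k in range(j):                   # lazily apply all prior columns
            for i in range(j, n):
                col[i] -= l[i][k] * l[j][k]
        l[j][j] = col[j] ** 0.5
        for i in range(j + 1, n):
            l[i][j] = col[i] / l[j][j]
    return l
```

<p>The inner loop over k is the lazy propagation: column j cannot complete until all columns to its left are finished, which is why such algorithms expose only pipeline parallelism rather than data parallelism.</p>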

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rahim-Amoud_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 15:58:18 +0200</pubDate>
	<link>https://www.scipedia.com/public/Rahim-Amoud_et_al_2006a</link>
	<title><![CDATA[Improvement of mpls performance by implementation of a multi agent system]]></title>
	<description><![CDATA[
<p>Multi-Protocol Label Switching (MPLS) is a network layer packet forwarding technology that provides flexible circuit-switched traffic engineering solutions in packet-switched networks through explicit path routing. However, the current weakness of MPLS resides in its inability to provide application-level routing intelligence, a fundamental component especially for voice delivery. In this paper we propose to introduce a Multi-Agent System (MAS) within the MPLS network to improve its performance. Agents are introduced at the decision points of MPLS at the flow level and distribute traffic based on the quality of service required by each type of traffic. We also propose an intelligent framework for the network, as well as an architecture for our agent, in order to improve the efficiency of the Quality of Service (QoS) within MPLS.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rozzi_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 15:53:27 +0200</pubDate>
	<link>https://www.scipedia.com/public/Rozzi_et_al_2006a</link>
	<title><![CDATA[Design Sketching for Space and Time]]></title>
	<description><![CDATA[
<p>In this paper we present a case study of how design sketching can be used as a technique for exploring and creating a common understanding, among users, designers and software developers, of the representation design requirements for supporting spatial-temporal reasoning in Air Traffic Control (ATC). The safe and expeditious control of aircraft requires the ATC controller to think in terms of 3D air space and also to plan ahead in time. We refer to this mental process as spatial-temporal reasoning. ATC is a 4D (3D plus time) problem but is currently supported by 2D tools such as the Plan Position Indicator-type radar displays seen in ATC centres. This requires air traffic controllers to construct mental models of the air traffic situation to ensure safe vertical and horizontal separations between moving aircraft and also to expedite traffic flow. These objectives require prediction of traffic patterns and potential bottlenecks. To explain how we used design sketching, we report on the Task Analysis of an exemplar ATC task, the characterisation of this task in spatial-temporal terms, and how the Ecological Interface Design principle of visualisation of constraints was applied to guide the development of the 4D visual form of the representation design.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Trujillo_Tovar_2007a</guid>
	<pubDate>Wed, 14 Oct 2020 15:45:45 +0200</pubDate>
	<link>https://www.scipedia.com/public/Trujillo_Tovar_2007a</link>
	<title><![CDATA[The European port industry: an analysis of its economic efficiency]]></title>
	<description><![CDATA[
<p>Because of their critical strategic role, ports have traditionally been subject to some form of government control, even if the legal form and the intensity of this control have varied across countries. The member countries of the European Union have not been different from the rest of the world in this respect. A significant difference, however, is the recurrent effort to integrate, in a coordinated way, the port sector into a trans-European transport network (TEN-T) through the adoption of a common legal framework. In this context, if the objective of the reforms is to ensure that port networks, integrated in combined transport networks, become competitors of the road network, the concept of port efficiency becomes central. This paper provides an overview of the evolution of European port legislation and shows how comparative economic measures can be used to highlight the scope for port efficiency improvements, essential to allow short sea shipping to compete with road transport in Europe. To our knowledge, this paper is also the first effort to estimate the technical efficiency of European Port Authorities. The average port efficiency in 2002 was estimated to be around 60%, denoting that ports could have handled 40% more traffic with the same resources. Maritime Economics & Logistics (2007) 9, 148-171. doi:10.1057/palgrave.mel.9100177</p>

<p>Document type: Book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/van_Kessel_et_al_2007a</guid>
	<pubDate>Wed, 14 Oct 2020 15:33:38 +0200</pubDate>
	<link>https://www.scipedia.com/public/van_Kessel_et_al_2007a</link>
	<title><![CDATA[Further development and first application of a mud transport model for the Scheldt estuary: in the framework of LTV. Phase 2]]></title>
	<description><![CDATA[
<p>In 2006, a work plan was conceived for the development of a mud transport model for the Scheldt estuary in the framework of LTV (Long Term Vision) (Winterwerp and De Kok, 2006). The purpose of this model is to support managers of the Scheldt estuary in resolving a number of managerial issues. Also in 2006, the first two phases were initiated. The present report discusses the activities that have been carried out during 2007, i.e. further improvement of the hydrodynamic and mud transport model and first application of the mud model to the release of fine sediment dredged from Sloe harbour, Vlissingen. At a technical level, all model improvements scheduled for 2007 have been implemented. The most important developments are: a longer hydrodynamic simulation period (3 months), more accurate concentration boundary conditions, variable wave effects and biological effects. The hydrodynamic simulation demonstrates realistic values for water levels, salinities and residual currents. Upstream of Antwerpen, the propagation of the tidal wave is modelled less accurately. Regarding the mud transport simulations, the following is concluded:<br/>
1. A minor shift of two dumping locations near Antwerp much improves the modelling of the ETM.<br/>
2. The new concentration boundary conditions at sea result in more realistic SPM concentrations and longshore SPM fluxes at sea.<br/>
3. The difference between simulations with 5 and 10 horizontal layers is only minor.<br/>
4. Variable waves temporarily enhance the concentrations in the western part of the Western Scheldt during storms.<br/>
5. The biological impact on large-scale SPM concentrations in the Scheldt estuary appears to be minor.<br/>
6. The SPM levels appear to be rather sensitive to the volume of harbour siltation and dumping.<br/>
7. The model computes an unrealistically high residual sediment flux towards the North Sea (about 2 MT/y).<br/>
If sediment dumping is in equilibrium with harbour siltation, this net export results in too low equilibrium SPM levels. Application of the model to the dumping of sediment dredged from Sloe harbour shows that a shift of the release location in a western direction may be favourable because of a small reduction in local SPM levels and siltation rates.</p>

<p>Document type: Book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lindgreen_Sorenson_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 15:26:08 +0200</pubDate>
	<link>https://www.scipedia.com/public/Lindgreen_Sorenson_2006a</link>
	<title><![CDATA[Simulation of Energy Consumption and Emissions from Rail Traffic]]></title>
	<description><![CDATA[
<p>This report describes the methodology used in the ARTEMIS rail emissions model. The approach used is a matrix of operating conditions, speeds and accelerations, for which basic parameters are used to calculate the resistance to motion of trains. Four types of resistance are included: rolling, aerodynamic, gravitational and acceleration. A necessary element in the calculation is the driving pattern, that is, the distribution of speeds and accelerations for typical operation. In the report, data are analyzed to provide operating condition distributions on both a spatial and temporal basis. The calculation procedure is evaluated with respect to the resolution of operating conditions, and then evaluated by comparison with experimental data for a variety of passenger and goods trains. The results indicate that the energy consumption from the modeling approach is valid to better than 10% for known operating characteristics. Emissions are calculated from the energy consumption using average fuel-based emission factors and electricity production emission factors.</p>
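<p>A minimal sketch of the four-component resistance calculation described above, using the classical Davis form for the combined rolling and aerodynamic terms. The coefficients A, B and C here are hypothetical placeholders, not the ARTEMIS parameter values:</p>

```python
def resistance_to_motion(v, m, a, grade, A, B, C, g=9.81):
    """Total resistance to motion of a train [N].

    Rolling plus aerodynamic resistance follow the Davis form
    A + B*v + C*v**2; gravitational resistance is m*g*grade; the
    acceleration (inertial) term is m*a.
    v: speed [m/s], m: mass [kg], a: acceleration [m/s^2],
    grade: slope [m/m]. A, B, C are train-specific Davis
    coefficients (hypothetical here).
    """
    rolling_aero = A + B * v + C * v * v   # rolling + aerodynamic
    gravitational = m * g * grade          # climbing resistance
    acceleration = m * a                   # inertial term
    return rolling_aero + gravitational + acceleration
```

<p>Energy consumption over a driving pattern then follows by summing resistance times distance (equivalently, power = resistance × speed over time) across the cells of the speed/acceleration matrix.</p>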

<p>Document type: Book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Pinto_Baran_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 15:25:20 +0200</pubDate>
	<link>https://www.scipedia.com/public/Pinto_Baran_2006a</link>
	<title><![CDATA[Multiobjective Multicast Routing with Ant Colony Optimization]]></title>
	<description><![CDATA[
<p>This work presents a multiobjective algorithm for multicast traffic engineering. The proposed algorithm is a new version of the MultiObjective Ant Colony System (MOACS), based on Ant Colony Optimization (ACO). The proposed MOACS simultaneously optimizes the maximum link utilization, the cost of the multicast tree, the average delay and the maximum end-to-end delay. In this way, a set of optimal solutions, known as the Pareto set, is calculated in a single run of the algorithm, without a priori restrictions. Experimental results obtained with the proposed MOACS were compared to a recently published Multiobjective Multicast Algorithm (MMA), showing a promising performance advantage for multicast traffic engineering. 5th IFIP International Conference on Network Control & Engineering for QoS, Security and Mobility. Red de Universidades con Carreras en Informática (RedUNCI)</p>
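<p>The Pareto set mentioned above consists of the non-dominated solutions over the four objectives. A generic dominance filter (illustrative only, not the MOACS pheromone logic) can be sketched as:</p>

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b
    (all objectives are to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions: the Pareto set."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

<p>With objective vectors such as (link utilization, tree cost), a solution like (3, 3) is discarded because (2, 2) is at least as good in every objective and strictly better in one, while mutually incomparable solutions such as (1, 5) and (5, 1) both survive.</p>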

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Carrozzo_et_al_2006b</guid>
	<pubDate>Wed, 14 Oct 2020 15:15:16 +0200</pubDate>
	<link>https://www.scipedia.com/public/Carrozzo_et_al_2006b</link>
	<title><![CDATA[Evaluation of bandwidth dependent metrics for te links in a gmpls path computation system]]></title>
	<description><![CDATA[
<p>The GMPLS standardization is paving the way for the implementation of new configurable traffic engineering (TE) policies for transport networks. This paper aims at evaluating the effects of using bandwidth-dependent TE metrics in a centralized Path Computation System (PCS), suited for handling the routing requests in an operational transport network with a GMPLS control plane. The results of an intensive testing campaign show an evident improvement in the utilization of network resources when such TE metrics are enabled, whatever survivability requirement is imposed on the LSP (e.g. classical 1+1 protection, pre-planned or on-the-fly restoration, etc.). Moreover, a simple policy function is suggested as a good trade-off between the achievable performance and the computing load on the CPU.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rozenfeld_Tennenholtz_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 15:15:04 +0200</pubDate>
	<link>https://www.scipedia.com/public/Rozenfeld_Tennenholtz_2006a</link>
	<title><![CDATA[Strong and Correlated Strong Equilibria in Monotone Congestion Games]]></title>
	<description><![CDATA[
<p>The study of congestion games is central to the interplay between computer science and game theory. However, most work in this context does not deal with possible deviations by coalitions of players, a significant issue one may wish to consider. In order to deal with this issue we study the existence of strong and correlated strong equilibria in monotone congestion games. Our study of strong equilibrium deals with monotone-increasing congestion games, complementing the results obtained by Holzman and Law-Yone on monotone-decreasing congestion games. We then present a study of correlated-strong equilibrium for both decreasing and increasing monotone congestion games.</p>
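<p>For readers unfamiliar with the model: in a congestion game each player picks a set of resources, and the cost of a resource depends only on how many players use it. The following toy sketch (illustrative, not from the paper) checks the ordinary Nash property, i.e., robustness against unilateral deviations; a strong equilibrium additionally requires robustness against joint deviations by coalitions of players.</p>

```python
def cost(profile, player, cost_fn):
    """Cost to `player`: sum over its chosen resources of c_r(load on r),
    where the load on r is the number of players using r in `profile`."""
    loads = {}
    for strat in profile:
        for r in strat:
            loads[r] = loads.get(r, 0) + 1
    return sum(cost_fn[r](loads[r]) for r in profile[player])

def is_nash(profile, strategies, cost_fn):
    """True if no single player can lower its cost by deviating alone."""
    for p in range(len(profile)):
        for alt in strategies[p]:
            dev = list(profile)
            dev[p] = alt
            if cost(tuple(dev), p, cost_fn) < cost(profile, p, cost_fn):
                return False
    return True
```

<p>With two players, two resources and the monotone-increasing cost c(x) = x, spreading out over distinct resources is a Nash equilibrium, while both players piling onto one resource is not: either player would pay 1 instead of 2 by moving to the empty resource.</p>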

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Hardy_Bourgois_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 15:08:53 +0200</pubDate>
	<link>https://www.scipedia.com/public/Hardy_Bourgois_2006a</link>
	<title><![CDATA[Exploring the potential of oss in air traffic management]]></title>
	<description><![CDATA[
<p>This paper introduces a project that aims at defining an Open Source Software (OSS) policy in the field of Air Traffic Management (ATM). In order to develop such a policy, we chose to investigate first a set of predictive hypotheses. Our four initial hypotheses were presented, refined and discussed in bi-lateral meetings with experts in the ATM field and in several conferences and workshops with OSS experts. At a roundtable, jointly organized by CALIBRE and EUROCONTROL, we confronted early open source experiences and insights in the ATM domain with experiences and knowledge from a panel of OSS experts and practitioners from academia and industry. The revised initial hypotheses are presented using a fixed format that should facilitate further evolution of these hypotheses.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Retvari_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 15:03:37 +0200</pubDate>
	<link>https://www.scipedia.com/public/Retvari_et_al_2006a</link>
	<title><![CDATA[On improving the accuracy of ospf traffic engineering]]></title>
	<description><![CDATA[
<p>The conventional forwarding rule used by IP networks is to always choose the path with the shortest length (in terms of administrative link weights assigned to the links) to forward traffic. Lately, it has been proposed to use shortest-path-first routing to implement Traffic Engineering in IP networks, promising a big boost in the profitability of the legacy network infrastructure. The idea is to set the link weights so that the shortest paths, and hence the traffic, follow the paths designated by the operator. Unfortunately, traditional methods of calculating the link weights usually produce numerous superfluous shortest paths, often leading to congestion along the unconsidered paths. In this paper, we introduce and develop novel methods to increase the accuracy of this process and, by means of extensive simulations, we show that our proposed solution produces remarkably high-quality link weights.</p>
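<p>The superfluous-shortest-path problem can be made concrete with a Dijkstra variant that counts equal-cost shortest paths under a given weight setting (a generic sketch, not the authors' method): any destination with a count above one admits traffic splits along paths the operator may never have intended.</p>

```python
import heapq

def shortest_path_counts(graph, src):
    """Dijkstra that also counts the distinct shortest paths per node.

    graph: {node: [(neighbor, positive_weight), ...]}.  A count > 1 at
    a node means the weight setting leaves superfluous equal-cost
    shortest paths toward it.
    """
    dist = {src: 0}
    count = {src: 1}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], count[v] = nd, count[u]
                heapq.heappush(heap, (nd, v))
            elif nd == dist.get(v, float('inf')):
                count[v] += count[u]      # another equal-cost path
    return dist, count
```

<p>On a diamond topology with unit weights, for example, the destination is reached by two equal-cost paths, which is exactly the ambiguity a careful weight assignment tries to eliminate.</p>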

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Neves_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 14:47:14 +0200</pubDate>
	<link>https://www.scipedia.com/public/Neves_et_al_2006a</link>
	<title><![CDATA[Proportional service differentiation with mpls]]></title>
	<description><![CDATA[
<p>This paper describes two traffic engineering techniques for implementing proportional differentiated services based on Multiprotocol Label Switching constraint-based routing. Both use a dynamic bandwidth allocation scheme to modify the bandwidth reserved by each traffic class according to the current network load. The first scheme uses an adaptive algorithm that qualitatively determines the required average throughput per source for each class and moves bandwidth between classes for each path as necessary. The second scheme mathematically divides the existing bandwidth among the traffic classes for each path. The quality of service that users obtain with both techniques is assessed by simulation and compared with a fixed bandwidth allocation scheme. 5th IFIP International Conference on Network Control & Engineering for QoS, Security and Mobility. Red de Universidades con Carreras en Informática (RedUNCI)</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ana-Maria-Marhan_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 14:45:11 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ana-Maria-Marhan_et_al_2006a</link>
	<title><![CDATA[Designing distributed task performance in safety critical systems equipped with mobile devices]]></title>
	<description><![CDATA[
<p>This paper describes a method aiming to support the design of interactive-safety critical systems. The method proposes an original integration of approaches usually considered separately, such as task modelling and distributed cognition. The basic idea is that analysing task performance requires a clear understanding of the information needed to accomplish the task and how to derive such information from both internal cognitive representations and external representations provided by various types of artefacts. We also report on a first application of the method to a case study in the Air Traffic Control (ATC) domain.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Dahal_2007a</guid>
	<pubDate>Wed, 14 Oct 2020 14:44:33 +0200</pubDate>
	<link>https://www.scipedia.com/public/Dahal_2007a</link>
	<title><![CDATA[Intelligent traffic control decision support system]]></title>
	<description><![CDATA[
<p>When non-recurrent road traffic congestion happens, the operator of the traffic control centre has to select the most appropriate traffic control measure, or combination of measures, in a short time to manage the traffic network. This is a complex task which requires expert knowledge, much experience and fast reaction. There are a large number of factors related to a traffic state, as well as a large number of possible control measures, that need to be considered during the decision-making process. The identification of suitable control measures for a given non-recurrent traffic congestion can be tough even for experienced operators. Therefore, simulation models are used in many cases. However, simulating different traffic scenarios for a number of control measures in a complicated situation is very time-consuming. In this paper we propose an intelligent traffic control decision support system (ITC-DSS) to assist the human operator of the traffic control centre in managing the current traffic state online. The proposed system combines three soft-computing approaches, namely fuzzy logic, neural networks and genetic algorithms. These approaches form a fuzzy-neural network tool with a self-organization algorithm for initializing the membership functions, a genetic algorithm for identifying fuzzy rules, and the back-propagation neural network algorithm for fine-tuning the system parameters. The proposed system has been tested on a case study of a small section of the ring road around Riyadh city. The results obtained for the case study are promising and show that the proposed approach can provide effective support for online traffic control.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/?engezer_Karasan_2007a</guid>
	<pubDate>Wed, 14 Oct 2020 14:40:13 +0200</pubDate>
	<link>https://www.scipedia.com/public/?engezer_Karasan_2007a</link>
	<title><![CDATA[An efficient virtual topology design and traffic engineering scheme for ip wdm networks]]></title>
	<description><![CDATA[
<p>Date of Conference: 29-31 May 2007. Conference Name: 11th International IFIP TC6 Conference, ONDM 2007. We propose an online traffic engineering (TE) scheme for efficient routing of bandwidth-guaranteed connections on a Multiprotocol Label Switching (MPLS)/wavelength division multiplexing (WDM) network with a traffic pattern varying with the time of day. We first consider the problem of designing the WDM virtual topology utilizing the multi-hour statistical traffic pattern. After presenting an effective solution to this offline problem, we introduce a Dynamic tRaffic Engineering AlgorithM (DREAM) that makes use of bandwidth updates and rerouting of the label switched paths (LSPs). The performance of DREAM is compared with commonly used online TE schemes and is shown to be superior in terms of blocked traffic ratio.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Allalouf_Shavitt_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 13:59:33 +0200</pubDate>
	<link>https://www.scipedia.com/public/Allalouf_Shavitt_2006a</link>
	<title><![CDATA[Achieving bursty traffic guarantees by integrating traffic engineering and buffer management tools]]></title>
	<description><![CDATA[
<p>Traffic engineering tools are applied to design a set of paths in the network, e.g., using MPLS, in order to achieve global network utilization. Usually, paths are guaranteed long-term traffic rates, while the short-term rates of bursty traffic are not guaranteed. The resource allocation scheme suggested in this paper handles bursts based on maximal traffic volume allocation (termed TVAfB) instead of a single maximal or sustained rate allocation. This translates into better SLAs for the network customers, namely SLAs with higher traffic peaks that guarantee bursts are not dropped. Given a set of paths and a bandwidth allocation along them, the suggested algorithm finds a special collection of bottleneck links, which we term the first cut, as the optimal buffering location for bursts. In these locations, the buffers act as an additional resource to improve the network's short-term behavior, allowing traffic to take advantage of the under-used resources at the links that precede and follow the bottleneck links. The algorithm was implemented in MATLAB. The resulting provisioning parameters were simulated using NS-2 to demonstrate the effectiveness of the proposed scheme.</p>
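<p>The volume-based idea can be contrasted with a plain token bucket, the standard rate-plus-burst traffic characterization (a generic sketch, not the paper's TVAfB algorithm, which additionally chooses where in the network to buffer bursts): over any window of length t the bucket admits at most r·t + b units, i.e., it bounds a traffic volume rather than a single rate.</p>

```python
class TokenBucket:
    """Rate r plus bucket depth b: admits any burst whose volume over a
    window of length t stays within r*t + b (a volume, not just a rate,
    guarantee)."""

    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth
        self.tokens, self.t = depth, 0.0   # start with a full bucket

    def admit(self, now, size):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

<p>A bucket with rate 100 and depth 50 admits an initial 50-unit burst at full depth, rejects further traffic until tokens accrue, and admits again once enough time has passed.</p>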

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Verchere_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 13:57:43 +0200</pubDate>
	<link>https://www.scipedia.com/public/Verchere_et_al_2006a</link>
	<title><![CDATA[Multi layer recovery enabled with end to end signaling]]></title>
	<description><![CDATA[
<p>Within the GMPLS framework, the signaling protocol Resource reSerVation Protocol with Traffic Engineering extensions (RSVP-TE) is extended to support the requirements of an Automated Switched Optical Network architecture. This paper presents the extensions of the end-to-end connection services in an overlay network built on two control planes. The RSVP-TE protocol extensions are first described between an IP/MPLS router and an SDH/GMPLS core optical cross-connect, defining the GMPLS-UNI. The dimensioning of three scenarios demonstrating the benefits of the GMPLS-UNI is discussed.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Sousa_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 13:57:15 +0200</pubDate>
	<link>https://www.scipedia.com/public/Sousa_et_al_2006a</link>
	<title><![CDATA[Efficient ospf weight allocation for intra domain qos optimization]]></title>
	<description><![CDATA[
<p>This paper presents a traffic engineering framework able to optimize OSPF weight setting administrative procedures. Using the proposed framework, enhanced OSPF configurations are provided to network administrators in order to effectively improve the QoS performance of the corresponding network domain. The envisaged NP-hard optimization problem is addressed using Evolutionary Algorithms, which allocate OSPF weights guided by a bi-objective function. The results presented in this work show that the proposed optimization tool clearly outperforms common weight-setting heuristics.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cinkler_et_al_2006a</guid>
	<pubDate>Wed, 14 Oct 2020 13:52:59 +0200</pubDate>
	<link>https://www.scipedia.com/public/Cinkler_et_al_2006a</link>
	<title><![CDATA[Multi layer traffic engineering through adaptive λ path fragmentation and de fragmentation]]></title>
	<description><![CDATA[
<p>In multi-layer networks where more than one layer is dynamic, i.e., where connections are set up not only in the upper (e.g., IP) layer but in the underlying wavelength layer as well, performance is often suboptimal due to long wavelength paths that do not allow routing the traffic along the shortest path. The role of MLTE (Multi-Layer Traffic Engineering) is to cut these long wavelength paths into parts (fragments) that allow better routing at the upper layer (fragmentation), or to concatenate two or more fragments into longer paths (de-fragmentation) when the network load is low and fewer hops are therefore preferred.</p>

<p>In this paper we present a new model (GG: Grooming Graph) and an algorithm for this model that supports fragmentation and de-fragmentation of wavelength paths, making the network instantly adapt to changing traffic conditions. We introduce the notion of shadow capacities to model lightpath tailoring. We implicitly assume that the wavelength paths carry traffic, e.g., IP traffic, that can be interrupted for a few microseconds and that even allows minor packet reordering.</p>

<p>To show the superior performance of our approach in various network and traffic conditions, we have carried out an intensive simulation study.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lo?pez_et_al_2007a</guid>
	<pubDate>Wed, 14 Oct 2020 13:01:46 +0200</pubDate>
	<link>https://www.scipedia.com/public/Lo?pez_et_al_2007a</link>
	<title><![CDATA[A bayesian decision theory approach for the techno-economic analysis of an all-optical router]]></title>
	<description><![CDATA[
<p>Proceedings of the 11th International IFIP TC6 Conference, ONDM 2007, Athens, Greece, May 29-31, 2007. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-72731-6_46 Typically, core networks are provided with both optical and electronic physical layers. However, the interaction between the two layers is at present limited, since most of the traditional transport functionalities, such as traffic engineering, switching and restoration, are carried out in the IP/MPLS layer. In this light, the research community has paid little attention to the potential benefits of the interaction between layers (multilayer capabilities) in attempts to improve Quality of Service control. This work shows when to move incoming Label Switched Paths (LSPs) between layers based on a multilayer mechanism that trades off a QoS metric, such as end-to-end delay, against techno-economic aspects. Such a mechanism follows Bayesian decision theory and is tested with a set of representative case scenarios. The authors would like to thank the support from the European Union VI Framework Programme e-Photon/ONe+ Network of Excellence (FP6-IST-027497). This work has also been partially funded by the IST Project NOBEL II (FP6-IST-027305).</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Jones_et_al_2007a</guid>
	<pubDate>Wed, 14 Oct 2020 12:51:54 +0200</pubDate>
	<link>https://www.scipedia.com/public/Jones_et_al_2007a</link>
	<title><![CDATA[Informing the specification of a large-scale socio-technical system with models of human activity]]></title>
	<description><![CDATA[
<p>In this paper, we present our experience of using rich and detailed models of human activity in an existing socio-technical system in the domain of air traffic control to inform a use case-based specification of an enhanced future system, called DMAN. This work was carried out as part of a real project for Eurocontrol, the European Organisation for the Safety of Air Navigation. We describe, in outline, the kinds of models we used, and present some examples of the ways in which these models influenced the specification of use cases and requirements for the future system. We end with a discussion of lessons learnt.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ploeg_Veugelers_2007a</guid>
	<pubDate>Wed, 14 Oct 2020 12:51:43 +0200</pubDate>
	<link>https://www.scipedia.com/public/Ploeg_Veugelers_2007a</link>
	<title><![CDATA[Higher education reform and the renewed Lisbon strategy: role of member states and the European Commission]]></title>
	<description><![CDATA[
<p>Discussions on problems in higher education in Europe typically focus on rising enrolment rates, access, governance, underperformance in research and teaching, lack of internationalisation, and the lack of private and public funding. Our proposals for reform are based on more autonomy for universities, higher tuition fees, more private funding, the introduction of income-contingent loans, better governance, and more competition and internationalisation. From a subsidiarity perspective, the role of the EU in reforming the higher education sector in Europe is to provide mutual policy learning opportunities on higher education reforms across Member States and to support the building of higher education infrastructure in Member States (through the Structural and FP Funds). Beyond supporting Member States' policies, the EU should further develop the European dimension by furthering the goals of the Bologna reforms, cross-recognition of qualifications, and funding and promoting intra-EU mobility of students, researchers and teachers. The EU should take more initiatives to facilitate global mobility and cooperation. Finally, consistent with the subsidiarity principle, the EU can develop "flagship" initiatives.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Carrozzo_et_al_2006a</guid>
	<pubDate>Tue, 29 Sep 2020 10:01:37 +0200</pubDate>
	<link>https://www.scipedia.com/public/Carrozzo_et_al_2006a</link>
	<title><![CDATA[A centralized path computation system for GMPLS transport networks: design issues and performance studies]]></title>
	<description><![CDATA[
<p>The GMPLS standardization is paving the way for new configurable Traffic Engineering (TE) policies and new survivability schemes for transport networks. In this context, a centralized Path Computation System (PCS) has been implemented, suited for transport networks with a GMPLS control plane. After a brief description of the requirements for a PCS in a GMPLS network, some design issues for the proposed implementation are drawn, with particular emphasis on the centralized approach and on the strategies for achieving the connection survivability. Some results of an intensive testing campaign are shown for the validation of the design choices.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Jaaskelainen_et_al_2007a</guid>
	<pubDate>Tue, 29 Sep 2020 09:59:32 +0200</pubDate>
	<link>https://www.scipedia.com/public/Jaaskelainen_et_al_2007a</link>
	<title><![CDATA[Resource conflict detection in simulation of function unit pipelines]]></title>
	<description><![CDATA[
<p>Processor simulators are important parts of processor design toolsets, in which they are used to verify and evaluate the properties of the designed processors. When simulating architectures with independent function unit pipelines using simulation techniques that avoid the overhead of instruction bit-string interpretation, such as compiled simulation, the simulation of function unit pipelines can become one of the new bottlenecks for simulation speed. This paper evaluates several resource conflict detection models, commonly used in compiler instruction scheduling, in the context of function unit pipeline simulation. The evaluated models include the conventional reservation-table-based model, the dynamic collision matrix model, and a finite state automaton (FSA) based model. In addition, an improvement to the simulation initialization time by means of lazy initialization of states in the FSA-based approach is proposed. The resulting model is faster to initialize and provides simulation speed comparable to the actively initialized FSA.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Collado_et_al_2006a</guid>
	<pubDate>Tue, 29 Sep 2020 09:55:12 +0200</pubDate>
	<link>https://www.scipedia.com/public/Collado_et_al_2006a</link>
	<title><![CDATA[Adaptative road lanes detection and classification]]></title>
	<description><![CDATA[
<p>Proceedings of the 8th International Conference, ACIVS 2006, Antwerp, Belgium, September 18-21, 2006.</p>

<p>This paper presents a Road Detection and Classification algorithm for Driver Assistance Systems (DAS), which tracks several road lanes and identifies the type of lane boundaries. The algorithm uses an edge filter to extract the longitudinal road markings, to which a straight lane model is fitted. Next, the type of the right and left lane boundaries (continuous, broken or merge line) is identified using a Fourier analysis. Adjacent lanes are searched when broken or merge lines are detected. Although knowledge of the line type is essential for a robust DAS, it has seldom been considered in previous works. This knowledge helps to guide the search for other lanes, and it is the basis for identifying the type of road (one-way, two-way or freeway), as well as for telling the difference between allowed and forbidden maneuvers, such as crossing a continuous line.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Kim_et_al_2006a</guid>
	<pubDate>Tue, 29 Sep 2020 09:49:12 +0200</pubDate>
	<link>https://www.scipedia.com/public/Kim_et_al_2006a</link>
	<title><![CDATA[Novel congestion control scheme in next generation optical networks]]></title>
	<description><![CDATA[
<p>In this paper, we propose a novel congestion control scheme to improve burst loss performance by actively avoiding contentions. The scheme operates based on the highest of the loads (called the peak load) of all links over the path between each pair of ingress and egress nodes in an Optical Burst Switching (OBS) network.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Shirazipour_et_al_2006a</guid>
	<pubDate>Tue, 29 Sep 2020 09:45:46 +0200</pubDate>
	<link>https://www.scipedia.com/public/Shirazipour_et_al_2006a</link>
	<title><![CDATA[Inter-domain traffic engineering using MPLS]]></title>
	<description><![CDATA[
<p>In the Internet, traffic crosses between two and eight autonomous systems before reaching its destination. Consequently, end-to-end quality of service requires provisioning across more than one domain. This paper proposes a new scheme for introducing MPLS technology into an inter-domain environment. Results obtained using the OPNET simulation platform show that extending MPLS across AS boundaries can improve the QoS perceived by end users. This means that inter-domain traffic engineering is a promising solution for a QoS-aware Internet.</p>

<p>Document type: Part of book or chapter of book</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Pedro_et_al_2006a</guid>
	<pubDate>Wed, 23 Sep 2020 12:44:37 +0200</pubDate>
	<link>https://www.scipedia.com/public/Pedro_et_al_2006a</link>
	<title><![CDATA[An approach to off-line inter-domain QoS-aware resource optimization]]></title>
	<description><![CDATA[<p>Inter-domain traffic engineering is a key issue where QoS-aware resource optimization is concerned. Mapping inter-domain traffic flows onto existing service level agreements is, in general, a complex problem, for which some algorithms have recently been proposed in the literature. In this paper a modified version of a multi-objective genetic algorithm is proposed, in order to optimize the utilization of domain resources from several perspectives: bandwidth, monetary cost, and routing trustworthiness. Results show trade-off solutions and &ldquo;optimal&rdquo; solutions for each perspective. The proposal is a useful tool in inter-domain management because it can assist and simplify the decision process.</p><p>Document type: Part of book or chapter of book</p>]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Draft_Samper_859173716</guid>
	<pubDate>Tue, 19 Nov 2019 10:55:06 +0100</pubDate>
	<link>https://www.scipedia.com/public/Draft_Samper_859173716</link>
	<title><![CDATA[Advances in the particle finite element method for fluid-structure interaction problems]]></title>
	<description><![CDATA[<p>We present a general formulation for analysis of fluid-structure interaction problems using the particle finite element method (PFEM). The key feature of the PFEM is the use of a Lagrangian description to model the motion of nodes (particles) in both the fluid and the structure domains. Nodes are thus viewed as particles which can freely move and even separate from the main analysis domain representing, for instance, the effect of water drops. A mesh connects the nodes defining the discretized domain where the governing equations, expressed in an integral form, are solved as in the standard FEM. The necessary stabilization for dealing with the incompressibility of the fluid is introduced via the finite calculus (FIC) method. A fractional step scheme for the transient coupled fluid-structure solution is described. Examples of application of the PFEM to solve a number of fluid-structure interaction problems involving large motions of the free surface and splashing of waves are presented.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Aguero_et_al_2006a</guid>
	<pubDate>Tue, 19 Nov 2019 10:46:52 +0100</pubDate>
	<link>https://www.scipedia.com/public/Aguero_et_al_2006a</link>
	<title><![CDATA[The rotation-free BST shell element for linearized buckling analysis of steel structures]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 17.6px; font-style: normal; font-weight: 400;">In this paper the ability of the basic shell triangular (BST) element to perform linearized buckling analysis is evaluated. The results have been compared with analytical solutions and other finite elements in the literature, such as ANDES3, ANDES4, QSEL, FFQC and BCIZ. This type of analysis is applied to the design of steel structures to obtain the collapse load of panels using the effective width of the plate. In this approach, which is currently recommended by most of the design standards, a semi-empirical/analytical method is used to take into account the nonlinear geometric and material behavior, geometric imperfections and residual stresses. To obtain the critical loads two methods have mainly been applied: the finite strip method and the finite element method, which is more appropriate for dealing with any boundary condition and loading pattern. In practice, to obtain the local and distortional buckling, some assumptions related to the interaction between plates are made in order to obtain practical formulae that can be applied; these simplifications can lead to unsafe results, so the linearized buckling analysis must be carried out in a proper way, taking the interactions into account. It is concluded that the BST element exhibits excellent behavior in predicting the critical loads in compression and shear, and therefore this element should be utilized in future codes when this type of analysis turns out to be mandatory.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Recarey_Morfa_et_al_2006a</guid>
	<pubDate>Tue, 19 Nov 2019 10:22:13 +0100</pubDate>
	<link>https://www.scipedia.com/public/Recarey_Morfa_et_al_2006a</link>
	<title><![CDATA[Simulación de problemas de desgaste, en la interacción herramienta de corte terreno, empleando el método de los elementos discretos]]></title>
	<description><![CDATA[<p><span style="color: rgb(33, 37, 41); font-size: 16px; font-style: normal; font-weight: 400;">A numerical model is presented that employs spherical discrete elements, also called distinct elements. The model is applied to the simulation of wear problems. The motion of the spherical elements is described by the Newton-Euler equations. The formulation uses explicit time integration, which provides good computational efficiency. The spherical elements interact with one another through contact forces. The contact search scheme is highly efficient and is based on oct-tree structures. The contact constitutive model has a very special feature, in that it can simulate the cohesive forces that make it possible to model the fracture and decohesion of materials. The wear study is carried out with a thermo-coupled formulation that is applied at the experimental (reduced) scale as well as in situ at full scale.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Draft_Samper_747930736</guid>
	<pubDate>Fri, 25 Oct 2019 10:40:26 +0200</pubDate>
	<link>https://www.scipedia.com/public/Draft_Samper_747930736</link>
	<title><![CDATA[Subdomain-based flux-free a posteriori error estimators]]></title>
	<description><![CDATA[<p><span style="color: rgb(116, 116, 116); font-size: 18px; font-style: normal; font-weight: 400;">A new residual-type flux-free error estimator is presented. It estimates upper and lower bounds of the error in the energy norm. The proposed approach precludes the main drawbacks of standard residual-type estimators, circumvents the need for flux-equilibration and results in a simple implementation that uses standard resources available in finite element codes. This is especially interesting for 3D applications, where the implementation of this technique is as simple as in 2D. Recall that, on the contrary, the complexity of flux-equilibration techniques increases drastically in the 3D case. The bounds for the energy norm of the error are used to produce upper and lower bounds of linear functional outputs, representing quantities of engineering interest. The presented estimators demonstrate their efficiency in numerical tests, producing sharp estimates both for the energy and the quantities of interest.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Codina_2006d</guid>
	<pubDate>Tue, 03 Sep 2019 10:01:22 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Codina_2006d</link>
	<title><![CDATA[Analysis of a stabilized finite element approximation of the transient convection-diffusion equation using an ALE framework]]></title>
	<description><![CDATA[<p>In this paper we analyze a stabilized finite element method to approximate the convection‐diffusion equation on moving domains using an arbitrary Lagrangian Eulerian (ALE) framework. As the basic numerical strategy, we discretize the equation in time using first and second order backward differencing (BDF) schemes, whereas space is discretized using a stabilized finite element method (the orthogonal subgrid scale formulation) to deal with convection dominated flows. The semidiscrete problem (continuous in space) is first analyzed. In this situation it is easy to identify the error introduced by the ALE approach. After that, the fully discrete method is considered. We obtain optimal error estimates in both space and time in a mesh dependent norm. The analysis reveals that the ALE approach introduces an upper bound for the time step size for the results to hold. The results obtained for the fully discretized second order scheme (in time) are associated with a weaker norm than the one used for the first order method. Nevertheless, optimal convergence results have been proved. For fixed domains, we recover stability and convergence results with the strong norm for the second order scheme, stressing the aspects that make the analysis of this method much more involved.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Codina_Houzeaux_2006a</guid>
	<pubDate>Tue, 03 Sep 2019 09:55:16 +0200</pubDate>
	<link>https://www.scipedia.com/public/Codina_Houzeaux_2006a</link>
	<title><![CDATA[Numerical approximation of the heat transfer between domains separated by thin walls]]></title>
	<description><![CDATA[<p><span style="color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">In this paper, we analyse the numerical approximation of the heat transfer problem between two subdomains that we will consider filled with a fluid and separated by a thin solid wall. First of all, we state the problem in the whole domain with discontinuous physical properties. As an alternative and under certain assumptions on the separating walls, a classical Robin boundary condition between the fluid domains is obtained, thus eliminating the solid wall, and according to which the heat flux is proportional to the temperature difference between the two subdomains. Apart from discussing the relation between both approaches, we consider their numerical approximation, considering different alternatives for the first case, that is, the case in which temperatures are also computed in the solid wall.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Codina_Hernandez-Silva_2006a</guid>
	<pubDate>Tue, 03 Sep 2019 09:47:36 +0200</pubDate>
	<link>https://www.scipedia.com/public/Codina_Hernandez-Silva_2006a</link>
	<title><![CDATA[Stabilized finite element approximation of the stationary magneto-hydrodynamics equations]]></title>
	<description><![CDATA[<p><span style="color: rgb(51, 51, 51); font-size: 17px; font-style: normal; font-weight: 400;">In this work we present a stabilized finite element method for the stationary magneto-hydrodynamic equations based on a simple algebraic version of the subgrid scale variational concept. The linearization that yields a well posed linear problem is first identified, and for this linear problem the stabilization method is designed. The key point is the correct behavior of the stabilization parameters on which the formulation depends. It is shown that their expression can be obtained only on the basis of having a correct error estimate. For the stabilization parameters chosen, a stability estimate is proved in detail, as well as the convergence of the numerical solution to the continuous one. The method is then extended to nonlinear problems and its performance checked through numerical experiments.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Houzeaux_Codina_2006a</guid>
	<pubDate>Tue, 03 Sep 2019 09:38:19 +0200</pubDate>
	<link>https://www.scipedia.com/public/Houzeaux_Codina_2006a</link>
	<title><![CDATA[Finite element modeling of the lost foam casting process tackling back‐pressure effects]]></title>
	<description><![CDATA[<p>Purpose &ndash; To develop a numerical methodology to simulate the lost foam casting (LFC) process, including the gas back-pressure effects.</p><p>Design/methodology/approach &ndash; Back-pressure effects are due to the interactions of many physical processes. The strategy proposed herein tries to model all these processes within a simple formula. The main characteristic of the model consists of assuming that the back-pressure is a known function of the external parameters (coating, temperature, gravity, etc.) that affects directly the heat transfer coefficient from the metal to the foam. The general framework of the simulation is a finite element model based on an arbitrary Lagrangian Eulerian (ALE) approach and the use of a level set function to capture the metal front advance.</p><p>Findings &ndash; After experimental tunings, the model provides a way to include the back-pressure effects in a simple way.</p><p>Research limitations/implications &ndash; The method is not completely predictive in the sense that a priori tuning is necessary to calibrate the model.</p><p>Practical implications &ndash; Provides more realistic results than classical models.</p><p>Originality/value &ndash; The paper proposes a theoretical framework of a finite element method for the simulation of the LFC process. The method uses an ALE method on a fixed mesh and a level-set function to capture the metal front advance. It proposes an original formula for the heat transfer coefficient that enables one to include back-pressure effects.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Nithiarasu_et_al_2006a</guid>
	<pubDate>Tue, 03 Sep 2019 09:28:24 +0200</pubDate>
	<link>https://www.scipedia.com/public/Nithiarasu_et_al_2006a</link>
	<title><![CDATA[The Characteristic‐Based Split (CBS) scheme—a unified approach to fluid dynamics]]></title>
	<description><![CDATA[<p><span style="color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">This paper presents a comprehensive overview of the characteristic‐based methods and the Characteristic‐Based Split (CBS) scheme. The practical difficulties of employing the original characteristic schemes are discussed. The important features of the CBS scheme are brought out by studying several problems of compressible and incompressible flows. All special considerations necessary for solving these problems are thoroughly discussed. The CBS scheme is presented in such a way that any interested researcher should be able to develop a code using the information provided. Several inviscid and viscous flow examples are also provided to demonstrate the unified CBS approach. For sample two‐dimensional codes, input files and instructions, the readers are referred to &lsquo;</span><a href="http://www.nithiarasu.co.uk/" style="color: rgb(0, 82, 116); cursor: pointer; font-size: 16px; font-weight: 600; font-style: normal;">www.nithiarasu.co.uk</a><span style="color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">&rsquo;.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Codina_et_al_2006a</guid>
	<pubDate>Mon, 02 Sep 2019 17:21:18 +0200</pubDate>
	<link>https://www.scipedia.com/public/Codina_et_al_2006a</link>
	<title><![CDATA[Numerical comparison of CBS and SGS as stabilization techniques for the incompressible Navier–Stokes equations]]></title>
	<description><![CDATA[<p><span style="color: rgb(28, 29, 30); font-size: 16px; font-style: normal; font-weight: 400;">In this work, we present numerical comparisons of some stabilization methods for the incompressible Navier&ndash;Stokes equations. The first is the characteristic‐based split (CBS). It combines the characteristic Galerkin method to deal with convection‐dominated flows with a classical splitting technique, which in some cases allows us to use equal velocity&ndash;pressure interpolations. The other two approaches are particular cases of the subgrid scale (SGS) method. The first, obtained after an algebraic approximation of the subgrid scales, is very similar to the popular Galerkin/least‐squares (GLS) method, whereas in the second, the subscales are assumed to be orthogonal to the finite element space. It is shown that all these formulations display similar stabilization mechanisms, provided the stabilization parameter of the SGS methods is identified with the time step of the CBS approach. This paper provides the numerical experiments for the comparison of formulations made by Codina and Zienkiewicz in a previous article.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Codina_Badia_2006b</guid>
	<pubDate>Mon, 02 Sep 2019 17:05:59 +0200</pubDate>
	<link>https://www.scipedia.com/public/Codina_Badia_2006b</link>
	<title><![CDATA[On some pressure segregation methods of fractional-step type for the finite element approximation of incompressible flow problems]]></title>
	<description><![CDATA[<p><span style="font-size: 12.8px; font-style: normal; font-weight: 400;">In this paper we treat several aspects related to time integration methods for the incompressible Navier-Stokes equations that allow the calculation of the velocities and the pressure to be uncoupled. The first family of schemes consists of classical fractional step methods, for which we discuss several possibilities for the pressure extrapolation and for time integration of first and second order. The second family consists of schemes based on an explicit treatment of the pressure in the momentum equation followed by a Poisson equation for the pressure. It turns out that this &ldquo;staggered&rdquo; treatment of the velocity and the pressure is stable. Finally, we present predictor-corrector methods based on the above schemes that aim to converge to the solution of the monolithic time integration method. Apart from presenting these schemes and checking their numerical performance, we also present a complete set of stability results for the fractional step methods that are independent of the spatial stability of the velocity-pressure interpolation, that is, of the classical inf-sup condition.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Chiumenti_et_al_2006a</guid>
	<pubDate>Fri, 26 Jul 2019 11:55:15 +0200</pubDate>
	<link>https://www.scipedia.com/public/Chiumenti_et_al_2006a</link>
	<title><![CDATA[Thermo-Mechanical Contact in Casting Analysis]]></title>
	<description><![CDATA[]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Boroomand_Barekatein_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 14:25:08 +0200</pubDate>
	<link>https://www.scipedia.com/public/Boroomand_Barekatein_2006a</link>
	<title><![CDATA[Topology optimization of plates]]></title>
	<description><![CDATA[<p>In this report we propose a stabilization method for topology optimization of plates. The method can be classified in the category of continuation methods. The new continuation method is based on using continuous design variables (DV) defined on a set of meshes different from the one used for the finite element solution. The optimization procedure starts with a coarse DV-mesh compared to the finite element one. Once convergence is obtained in the optimization steps, a finer DV-mesh is nominated for further steps. With such a continuation method one can control the bounds of the gradients of the DV while simultaneously smoothing the values in a more logical fashion, compared to what conventional filters perform. The DV-mesh refinement can be continued until the final mesh becomes similar to the finite element mesh. Depending on the formulation and elements used for the plate problems, e.g. with the Kirchhoff or Mindlin-Reissner hypothesis, the refinement may be continued further so that the DV elements become smaller than the plate elements. Application of the method is shown over a wide range of plate problems. Linear and nonlinear plate behaviors formulated by the Kirchhoff or Mindlin-Reissner hypothesis, while using several forms of DV, are considered to show the performance of the proposed method. As one of the main DV, density is used in a power-law approach (or in an artificial material approach). Thickness is also used as a realistic design variable in order to show the performance of the method in a rather well-posed optimization problem. We have also included results from a homogenization approach. Comparison is made with conventional element/nodal-based approaches using filters. The results show excellent and robust performance of the proposed method. Due to the wide range of cases studied, some interesting side conclusions are also given in this report.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Pares_et_al_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 14:04:27 +0200</pubDate>
	<link>https://www.scipedia.com/public/Pares_et_al_2006a</link>
	<title><![CDATA[Bounds of functional outputs for parabolic problems. Part I: Exact bounds of the discontinuous Galerkin time discretization]]></title>
	<description><![CDATA[<p>Classical implicit residual type error estimators require using an underlying finer spatial mesh to compute bounds for some quantity of interest. Consequently, the bounds obtained are guaranteed only asymptotically, that is, with respect to the reference solution computed with the fine mesh. Exact bounds, that is, bounds guaranteed with respect to the exact solution, are needed to properly certify the accuracy of the results, especially if the meshes are coarse. This paper introduces a procedure to compute strict upper and lower bounds of the error in linear functional outputs of parabolic problems. In this first part, the bounds account for the error associated with the spatial discretization. The error coming from the time marching scheme is therefore assumed to be negligible compared with the spatial error. The time discretization is performed using the discontinuous Galerkin method, both for the primal and adjoint problems. In the error estimation procedure, equilibrated fluxes at interelement edges are calculated using hybridization techniques.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cervera_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 13:58:22 +0200</pubDate>
	<link>https://www.scipedia.com/public/Cervera_2006a</link>
	<title><![CDATA[An orthotropic mesh corrected crack model]]></title>
	<description><![CDATA[<p><span style="font-weight: 400; font-style: normal; font-size: 18px; color: rgb(46, 46, 46);">This paper recovers the original spirit of the continuous crack approaches, where displacement jumps across the crack are smeared over the affected elements and the behaviour is established through a softening stress&ndash;(total) strain law, using standard finite element displacement interpolations and&nbsp;</span><em style="font-weight: 400; font-size: 18px; color: rgb(46, 46, 46);">orthotropic local</em><span style="font-weight: 400; font-style: normal; font-size: 18px; color: rgb(46, 46, 46);">&nbsp;constitutive models. The paper focuses on the problem of shear locking observed in the discrete problem when orthotropic models are used. The solution for this drawback is found in the form of a&nbsp;</span><em style="font-weight: 400; font-size: 18px; color: rgb(46, 46, 46);">mesh corrected</em><span style="font-weight: 400; font-style: normal; font-size: 18px; color: rgb(46, 46, 46);">&nbsp;crack model where the structure of the inelastic strain tensor is linked to the geometry of the cracked element. The discrete model is formulated as a non-symmetric orthotropic local damage constitutive model, in which the softening modulus is regularized according to the material fracture energy and the element size. The resulting formulation is easily implemented in standard non-linear FE codes and suitable for engineering applications. Numerical examples show that the results obtained using this crack model do not suffer from dependence on the mesh directional alignment, comparing very favourably with those obtained using related standard isotropic or orthotropic damage models.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cervera_Chiumenti_2006c</guid>
	<pubDate>Wed, 19 Jun 2019 13:54:49 +0200</pubDate>
	<link>https://www.scipedia.com/public/Cervera_Chiumenti_2006c</link>
	<title><![CDATA[Mesh objective tensile cracking via a local continuum damage model and a crack tracking technique]]></title>
	<description><![CDATA[<p><span style="font-weight: 400; font-style: normal; font-size: 18px; color: rgb(46, 46, 46);">This paper describes a procedure for the solution of problems involving tensile cracking using the so-called smeared crack approach, that is, standard finite elements with continuous displacement fields and a standard local constitutive model with strain-softening. An isotropic Rankine damage model is considered. The softening modulus is adjusted according to the material fracture energy and the element size. The resulting continuum and discrete mechanical problems are analyzed, and the question of correctly predicting the direction of crack propagation is deemed the main difficulty to be overcome in the discrete problem. It is proposed to use a crack tracking technique to attain the desired stability and convergence properties of the corresponding formulation. Numerical examples show that the resulting procedure is well-posed, stable and remarkably robust; the results obtained do not seem to suffer from spurious mesh-size or mesh-bias dependence.</span></p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Onate_et_al_2006g</guid>
	<pubDate>Wed, 19 Jun 2019 13:44:50 +0200</pubDate>
	<link>https://www.scipedia.com/public/Onate_et_al_2006g</link>
	<title><![CDATA[Modeling bed erosion in free surface flows by the particle finite element method]]></title>
	<description><![CDATA[<p>We present a general formulation for modeling bed erosion in free surface flows using the particle finite element method (PFEM). The key feature of the PFEM is the use of an updated Lagrangian description to model the motion of nodes (particles) in domains containing fluid and solid subdomains. Nodes are viewed as material points (called particles) which can freely move and even separate from the fluid and solid subdomains representing, for instance, the effect of water drops or soil/rock particles. A mesh connects the nodes defining the discretized domain in the fluid and solid regions where the governing equations, expressed in an integral form, are solved as in the standard FEM. The necessary stabilization for dealing with the incompressibility of the fluid is introduced via the finite calculus (FIC) method. An incremental iterative scheme for the solution of the nonlinear transient coupled fluid-structure problem is described. The erosion mechanism is modeled by releasing the material adjacent to the bed surface according to the frictional work generated by the fluid shear stresses. The released bed material is subsequently transported by the fluid flow. Examples of application of the PFEM to solve a number of bed erosion problems involving large motions of the free surface and splashing of waves are presented.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Oller_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 13:40:46 +0200</pubDate>
	<link>https://www.scipedia.com/public/Oller_2006a</link>
	<title><![CDATA[Modelo constitutivo para el comportamiento de tejidos biológicos blandos]]></title>
	<description><![CDATA[<p>The aim of this work is to obtain a general constitutive formulation capable of representing the behaviour of soft biological tissues: natural and pathological growth and shrinkage, and remodelling by absorption and regeneration. Soft biological tissues are understood to be those that make up the skin and organs of the human body and that may be subjected to external and internal actions of mechanical and metabolic origin. These actions can produce large transient and permanent deformations.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Fragakis_2006b</guid>
	<pubDate>Wed, 19 Jun 2019 13:35:33 +0200</pubDate>
	<link>https://www.scipedia.com/public/Fragakis_2006b</link>
	<title><![CDATA[A study on the lumped preconditioner and memory requirements of FETI and related primal domain decomposition methods]]></title>
	<description><![CDATA[<p>In recent years, Domain Decomposition Methods (DDM) have emerged as advanced solvers in several areas of computational mechanics. In particular, during the last decade, in the area of solid and structural mechanics, they have reached a considerable level of advancement and have been shown to be more efficient than popular solvers, such as advanced sparse direct solvers. The present contribution follows the lines of a series of recent publications by the authors on DDM. In those papers, the authors developed a unified theory of primal and dual methods and presented a family of DDM that were shown to be more efficient than previous methods. The present paper extends this work by presenting a new family of related DDM, thus enriching the theory of the relations between primal and dual methods. It also explores memory requirement issues, suggesting a particularly memory-efficient formulation.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Fragakis_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 13:23:18 +0200</pubDate>
	<link>https://www.scipedia.com/public/Fragakis_2006a</link>
	<title><![CDATA[Force and displacement duality in domain decomposition methods for solid and structural mechanics]]></title>
	<description><![CDATA[<p>In recent years, Domain Decomposition Methods (DDM) have emerged as advanced solvers in several areas of computational mechanics. In particular, during the last decade, in the area of solid and structural mechanics, they have reached a considerable level of advancement and have been shown to be more efficient than popular solvers, such as advanced sparse direct solvers. The present paper explores the extent of application of the general concept of force-displacement duality in DDM. A general framework for the definition of DDM is set up, and it is shown that if the definition of a DDM meets certain requirements, it can lead to one primal and one dual formulation. A number of DDM are included in this setting and the particular implications for each of them are investigated.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Codina_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 13:14:20 +0200</pubDate>
	<link>https://www.scipedia.com/public/Codina_2006a</link>
	<title><![CDATA[Analysis of a stabilized finite element approximation of the Oseen equations using orthogonal subscales]]></title>
	<description><![CDATA[<p>In this paper we present a stabilized finite element formulation to solve the Oseen equations as a model problem involving both convection effects and the incompressibility restriction. The need for stabilization techniques to solve this problem arises because of the restriction in the possible choices for the velocity and pressure spaces dictated by the inf&ndash;sup condition, as well as the instabilities encountered when convection is dominant. Both can be overcome by replacing the standard Galerkin method with a stabilized formulation. The one presented here is based on the subgrid scale concept, in which unresolvable scales of the continuous solution are approximately accounted for. In particular, the approach developed herein is based on the assumption that the unresolved subscales are orthogonal to the finite element space. It is shown that this formulation is stable and optimally convergent for an adequate choice of the algorithmic parameters on which the method depends.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Valls_et_al_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 13:08:23 +0200</pubDate>
	<link>https://www.scipedia.com/public/Valls_et_al_2006a</link>
	<title><![CDATA[LES turbulence models. Relation with stabilized numerical methods]]></title>
	<description><![CDATA[<p>One of the aims of this text is to present some important results in LES modelling and to identify the main mathematical problems standing in the way of a complete theory. A relevant aspect of LES theory, which we consider in our work, is the close relationship between the mathematical properties of LES models and the numerical methods used for their implementation.</p><p>In recent years the idea has become increasingly common in the scientific community, especially in the numerical community, that turbulence models and stabilization techniques play very similar roles. The methodologies used to simulate turbulent flows, whether RANS or LES approaches, are based on the same premise: the inability to simulate a turbulent flow using a finite discretization in time and space. Turbulence models introduce additional information (impossible to capture with the approximation technique used in the simulation) in order to obtain physically coherent solutions. On the other hand, the numerical methods used for the integration of partial differential equations (PDE) need to be modified in order to be able to reproduce solutions that present very highly localized gradients. These modifications, known as stabilization techniques, make it possible to capture such sharp and localized changes of the solution. Accordingly, a natural question arises: is it possible to reinterpret stabilization methods as turbulence models? This question suggests a possible principle of duality between turbulence modelling and numerical stabilization. Beyond sharing certain properties, it is suggested that numerical stabilization can actually be understood as turbulence modelling. The converse would hold if turbulence models were necessary only because of discretization limitations rather than the need to reproduce the physical behaviour of the flow. Finally: can turbulence models be understood as a component of a general stabilization method?</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Codina_2006b</guid>
	<pubDate>Wed, 19 Jun 2019 13:04:19 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Codina_2006b</link>
	<title><![CDATA[On some fluid-structure iterative algorithms using pressure segregation methods. Applications to aeroelasticity]]></title>
	<description><![CDATA[<p>In this paper we suggest some algorithms for the fluid-structure interaction problem stated using a domain decomposition framework. These methods involve stabilized pressure segregation methods for the solution of the fluid problem and fixed point iterative algorithms for the fluid-structure coupling. These coupling algorithms are applied to the aeroelastic simulation of suspension bridges. We assess flexural and torsional frequencies for a given inflow velocity. Increasing this velocity we reach the value for which the flutter phenomenon appears.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Codina_2006c</guid>
	<pubDate>Wed, 19 Jun 2019 13:00:53 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Codina_2006c</link>
	<title><![CDATA[Velocity correction methods based on a discrete pressure Poisson equation: an algebraic approach]]></title>
	<description><![CDATA[<p>In this paper we introduce some pressure segregation methods obtained from a non-standard version of the discrete monolithic system, where the continuity equation has been replaced by a pressure Poisson equation obtained at the discrete level. In these methods the extrapolated unknown is the velocity rather than the pressure. Moreover, predictor-corrector schemes are suggested, again motivated by the new monolithic system. Key implementation aspects are discussed, and a complete stability analysis is performed. We end with a set of numerical examples in order to compare these methods with classical pressure correction schemes.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Badia_Codina_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 12:55:45 +0200</pubDate>
	<link>https://www.scipedia.com/public/Badia_Codina_2006a</link>
	<title><![CDATA[Analysis of a stabilized finite element approximation of the transient convection-diffusion equation using an ALE framework]]></title>
	<description><![CDATA[<p>In this paper we analyze a stabilized finite element method to approximate the convection-diffusion equation on moving domains using an ALE framework. As the basic numerical strategy, we discretize the equation in time using first and second order backward differencing (BDF) schemes, whereas space is discretized using a stabilized finite element method (the orthogonal subgrid scale formulation) to deal with convection dominated flows. The semi-discrete problem (continuous in space) is first analyzed. In this situation it is easy to identify the error introduced by the ALE approach. After that, the fully discrete method is considered. We obtain optimal error estimates in both space and time in a mesh dependent norm. The analysis reveals that the ALE approach introduces an upper bound on the time step size for the results to hold. The results obtained for the fully discretized second order scheme (in time) are associated with a weaker norm than the one used for the first order method. Nevertheless, optimal convergence results have been proved. For fixed domains, we recover stability and convergence results with the strong norm for the second order scheme, stressing the aspects that make the analysis of this method much more involved.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Codina_Badia_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 12:50:56 +0200</pubDate>
	<link>https://www.scipedia.com/public/Codina_Badia_2006a</link>
	<title><![CDATA[On some pressure segregation methods of fractional-step type for the finite element approximation of incompressible flow problems]]></title>
	<description><![CDATA[<p>In this paper we treat several aspects related to time integration methods for the incompressible Navier-Stokes equations that allow the calculation of the velocities and the pressure to be uncoupled. The first family of schemes consists of classical fractional step methods, for which we discuss several possibilities for the pressure extrapolation and for first and second order time integration. The second family consists of schemes based on an explicit treatment of the pressure in the momentum equation followed by a Poisson equation for the pressure. It turns out that this &ldquo;staggered&rdquo; treatment of the velocity and the pressure is stable. Finally, we present predictor-corrector methods based on the above schemes that aim to converge to the solution of the monolithic time integration method. Apart from presenting these schemes and checking their numerical performance, we also present a complete set of stability results for the fractional step methods that are independent of the spatial stability of the velocity-pressure interpolation, that is, of the classical inf-sup condition.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Onate_Felippa_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 12:45:09 +0200</pubDate>
	<link>https://www.scipedia.com/public/Onate_Felippa_2006a</link>
	<title><![CDATA[Variational formulation of the finite calculus equations in solid mechanics and diffusion-reaction problems]]></title>
	<description><![CDATA[<p>We present a variational formulation of the finite calculus (FIC) equations for problems in mechanics governed by differential equations with symmetric operators. Applications considered include solid mechanics, diffusion-transport and diffusion-reaction problems. The key to the variational formulation is the identification of the FIC governing equations with the classical differential equations of mechanics written in terms of modified non-local variables. A total potential energy (TPE) functional is found in terms of the modified variables. The FIC equations in the domain and on the boundary are recovered as the Euler-Lagrange equations and the natural boundary condition of the TPE functional, respectively. Symmetric finite element equations are obtained after discretization of the TPE functional, therefore preserving the symmetry of the governing infinitesimal equations. The variational FIC expression is reinterpreted as a Petrov-Galerkin weighted residual form of the original FIC equations with non-local weighting functions. The analogy of the variational FIC-FEM formulation with a discontinuous Galerkin method is recognized. Extensions to multidimensional linear elastostatics and diffusion-reaction problems are presented.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Onate_et_al_2006h</guid>
	<pubDate>Wed, 19 Jun 2019 12:29:34 +0200</pubDate>
	<link>https://www.scipedia.com/public/Onate_et_al_2006h</link>
	<title><![CDATA[FIC/FEM formulation with matrix stabilizing terms for incompressible flows at low and high Reynolds numbers]]></title>
	<description><![CDATA[<p>We present a general formulation for incompressible fluid flow analysis using the finite element method. The necessary stabilization for dealing with convective effects and the incompressibility condition is introduced via the Finite Calculus method using a matrix form of the stabilization parameters. This allows a wide range of fluid flow problems at low and high Reynolds numbers to be modeled without introducing a turbulence model. Examples of application to the analysis of incompressible flows with moderate and large Reynolds numbers are presented.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lohner_et_al_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 12:26:01 +0200</pubDate>
	<link>https://www.scipedia.com/public/Lohner_et_al_2006a</link>
	<title><![CDATA[On the simulation of flows with violent free surface motion]]></title>
	<description><![CDATA[<p>A volume of fluid (VOF) technique has been developed and coupled with an incompressible Euler/Navier&ndash;Stokes solver operating on adaptive, unstructured grids to simulate the interactions of extreme waves and three-dimensional structures. The present implementation follows the classic VOF implementation for the liquid&ndash;gas system, considering only the liquid phase. Extrapolation algorithms to obtain velocities and pressure in the gas region near the free surface have been implemented. The VOF technique is validated against the classic dam-break problem, as well as a series of 2D sloshing experiments and results from smoothed particle hydrodynamics (SPH) calculations. These and a series of other examples demonstrate that the present CFD method is capable of simulating violent free surface flows with strong nonlinear behavior.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Arteaga-Gomez_et_al_2006a</guid>
	<pubDate>Wed, 19 Jun 2019 12:21:40 +0200</pubDate>
	<link>https://www.scipedia.com/public/Arteaga-Gomez_et_al_2006a</link>
	<title><![CDATA[Coupling of Feflo with Simpact]]></title>
	<description><![CDATA[<p>This paper describes the coupling of FEFLO, a general purpose compressible and incompressible flow solver based on adaptive unstructured grids, with SIMPACT, a general purpose, large deformation, explicit structural dynamics code developed at the Center for Numerical Methods in Engineering (CIMNE). Details on the codes, as well as the compiling strategy employed, are given. Examples illustrate the possibilities the present fluid-structure capability offers.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Lopez_Onate_2006a</guid>
	<pubDate>Wed, 29 May 2019 14:05:06 +0200</pubDate>
	<link>https://www.scipedia.com/public/Lopez_Onate_2006a</link>
	<title><![CDATA[A variational formulation for the multilayer perceptron]]></title>
	<description><![CDATA[<p>In this work we present a theory of the multilayer perceptron from the perspective of functional analysis and variational calculus. Within this formulation, the learning problem for the multilayer perceptron consists in finding a function which is an extremal of some functional. As we will see, a variational formulation for the multilayer perceptron provides a direct method for the solution of general variational problems, in any dimension and up to any degree of accuracy. In order to validate this technique we use a multilayer perceptron to solve some classical problems in the calculus of variations.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Agelet_de_Saracibar_et_al_2006b</guid>
	<pubDate>Thu, 04 Apr 2019 12:07:10 +0200</pubDate>
	<link>https://www.scipedia.com/public/Agelet_de_Saracibar_et_al_2006b</link>
	<title><![CDATA[Current Developments on the Coupled Thermomechanical Computational Modeling of Metal Casting Processes]]></title>
	<description><![CDATA[<p>In this paper, current developments on the coupled thermomechanical computational simulation of metal casting processes are presented. A thermodynamically consistent constitutive material model is derived from a thermoviscoplastic free energy function. A continuous transition between the initial fluid-like and the final solid-like behaviour is modeled by considering a J2 thermoviscoplastic model. Thus, a thermoelastoviscoplastic model, suitable for the solid-like phase, degenerates into a pure thermoviscous model, suitable for the liquid-like phase, according to the evolution of the solid fraction function. A thermomechanical contact model, taking into account the insulating effects of the air gap caused by thermal shrinkage of the part during solidification and cooling, is introduced. A fractional step method, arising from an operator split of the governing differential equations, is considered to solve the coupled problem using a staggered scheme. Within a finite element setting, using low-order interpolation elements, a multiscale stabilization technique is introduced as a convenient framework to overcome the Babuska-Brezzi condition and avoid the volumetric locking and pressure instabilities arising in incompressible or quasi-incompressible problems. Computational simulations of industrial castings show the good performance of the model.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cervera_Chiumenti_2006b</guid>
	<pubDate>Thu, 04 Apr 2019 11:03:37 +0200</pubDate>
	<link>https://www.scipedia.com/public/Cervera_Chiumenti_2006b</link>
	<title><![CDATA[Smeared crack approach: back to the original track]]></title>
	<description><![CDATA[<p>This paper briefly reviews the formulations used over the last 40 years for the solution of problems involving tensile cracking, with both the discrete and the smeared crack approaches. The paper focuses on the smeared approach, identifying as its main drawbacks the observed mesh-size and mesh-bias spurious dependence when the method is applied &lsquo;straightly&rsquo;. A simple isotropic local damage constitutive model is considered, and the (exponential) softening modulus is regularized according to the material fracture energy and the element size. The continuum and discrete mechanical problems corresponding to both the weak discontinuity (smeared cracks) and the strong discontinuity (discrete cracks) approaches are analysed, and the question of propagation of the strain localization band (crack) is identified as the main difficulty to be overcome in the numerical procedure. A tracking technique is used to ensure stability of the solution, attaining the necessary convergence properties of the corresponding discrete finite element formulation. Numerical examples show that the formulation derived is stable and remarkably robust. As a consequence, the results obtained do not suffer from spurious mesh-size or mesh-bias dependence, comparing very favourably with those obtained with other fracture and continuum mechanics approaches.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cervera_Chiumenti_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 10:53:20 +0100</pubDate>
	<link>https://www.scipedia.com/public/Cervera_Chiumenti_2006a</link>
	<title><![CDATA[Mesh objective tensile cracking via a local continuum damage model and a crack tracking technique]]></title>
	<description><![CDATA[<p>This paper describes a procedure for the solution of problems involving tensile cracking using the so-called smeared crack approach, that is, standard finite elements with continuous displacement fields and a standard local constitutive model with strain-softening. An isotropic Rankine damage model is considered. The softening modulus is adjusted according to the material fracture energy and the element size. The resulting continuum and discrete mechanical problems are analyzed, and correctly predicting the direction of crack propagation is identified as the main difficulty to be overcome in the discrete problem. A crack tracking technique is proposed to attain the desired stability and convergence properties of the corresponding formulation. Numerical examples show that the resulting procedure is well-posed, stable and remarkably robust; the results obtained do not seem to suffer from spurious mesh-size or mesh-bias dependence.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Agelet_de_Saracibar_et_al_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 10:46:17 +0100</pubDate>
	<link>https://www.scipedia.com/public/Agelet_de_Saracibar_et_al_2006a</link>
	<title><![CDATA[On the orthogonal subgrid scale pressure stabilization of finite deformation J2 plasticity]]></title>
	<description><![CDATA[<p>The use of stabilization methods is becoming an increasingly well-accepted technique due to their success in dealing with numerous numerical pathologies that arise in a variety of applications in computational mechanics.</p><p>In this paper a multiscale finite element method technique to deal with pressure stabilization of nearly incompressible problems in nonlinear solid mechanics at finite deformations is presented. A J2-flow theory plasticity model at finite deformations is considered. A mixed formulation involving pressure and displacement fields is used as the starting point. Within the finite element discretization setting, continuous linear interpolation for both fields is considered. To overcome the Babu&scaron;ka&ndash;Brezzi stability condition, a multiscale stabilization method based on the orthogonal subgrid scale (OSGS) technique is introduced. A suitable nonlinear expression of the stabilization parameter is proposed. The main advantage of the method is the possibility of using linear triangular or tetrahedral finite elements, which are easy to generate and, therefore, very convenient for practical industrial applications.</p><p>Numerical results obtained using the OSGS stabilization technique are compared with results provided by the P1 standard Galerkin linear displacements triangular/tetrahedral element, the P1/P1 standard mixed linear displacements/linear pressure triangular/tetrahedral element and the Q1/P0 mixed bilinear/trilinear displacements/constant pressure quadrilateral/hexahedral element for 2D/3D nearly incompressible problems in the context of a nonlinear finite deformation J2 plasticity model.</p>]]></description>
	<dc:creator>María Jesús Samper</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/_2006d</guid>
	<pubDate>Fri, 29 Mar 2019 10:00:32 +0100</pubDate>
	<link>https://www.scipedia.com/public/_2006d</link>
	<title><![CDATA[Reseñas]]></title>
	<description><![CDATA[]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/_2006c</guid>
	<pubDate>Fri, 29 Mar 2019 10:00:21 +0100</pubDate>
	<link>https://www.scipedia.com/public/_2006c</link>
	<title><![CDATA[Apuntes]]></title>
	<description><![CDATA[]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Pedroso_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 10:00:13 +0100</pubDate>
	<link>https://www.scipedia.com/public/Pedroso_2006a</link>
	<title><![CDATA[Joan Brossa and cinema]]></title>
	<description><![CDATA[
<p>The relationship between poetry and cinema has always been very fruitful. A special case is that of Joan Brossa, a Catalan contemporary poet, whose work is rich in poems about cinema and whose artistic production includes the writing of cinematographic scripts. J. Brossa was interested in the avant-garde movements, especially in Dadaism, with its objective poetry and visual creation. His interest in the cinema, both the European avant-garde cinema and the commercial North American cinema, springs from these three sources. In this work it is possible to find poems in which the poetical production (the blank sheet of paper) is put on the same level as the cinematographic one (the blank screen).</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Garcia_2006c</guid>
	<pubDate>Fri, 29 Mar 2019 10:00:03 +0100</pubDate>
	<link>https://www.scipedia.com/public/Garcia_2006c</link>
	<title><![CDATA[Literacy, literature and technologies in favour of special education]]></title>
	<description><![CDATA[
<p>In this paper some contributions that new technologies have made to the world of literacy are analyzed, and special emphasis is given to showing how new technologies can facilitate access to literacy and literature, and the process of learning them, for people who have special education needs. This paper focuses on the benefits of computer science and particularly on the role of the Internet in making literature available to everybody.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Rodriguez_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:56 +0100</pubDate>
	<link>https://www.scipedia.com/public/Rodriguez_2006a</link>
	<title><![CDATA[Alternative means to access mediatic information for people with sensory impairments]]></title>
	<description><![CDATA[
<p>This paper aims to state the difficulties which sensory disabled people face in accessing media information. However, instead of emphasizing and lamenting these difficulties, we try to identify the needs users have in order to gain full access to the media and, consequently, to highlight the media's potential for the social integration of disabled people. The goal is to raise the awareness of the different professionals who produce and/or make use of the media to transmit information, both about the importance of their task and about the possibility of adapting their work, if necessary, for sensory disabled people, whether through technological or procedural adaptations.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Sedeno_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:49 +0100</pubDate>
	<link>https://www.scipedia.com/public/Sedeno_2006a</link>
	<title><![CDATA[University tutoring: reflections about audiovisual mass-media subjects]]></title>
	<description><![CDATA[
<p>Tutoring is one of the most important tasks that the university teaching profession will work on in the coming years. This work must be integrated into learning, at the same level as traditional class teaching. It seems to me that thinking and theorizing about this topic is a first step but, moreover, we have to set up a range of new practical means to re-invent university tutoring work in order to bring it nearer to students.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ambros_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:41 +0100</pubDate>
	<link>https://www.scipedia.com/public/Ambros_2006a</link>
	<title><![CDATA[The education in communication in Catalonia]]></title>
	<description><![CDATA[
<p>The information society in which we are living is changing very quickly, especially the way we communicate with each other. Apart from the written and spoken language, we must be aware of the fact that the most important part of the messages we receive uses the audiovisual language code, so we exist side by side with a multiplicity of languages around us. Where and how do we learn about them? The Spanish and Catalan education programmes are not ready to achieve this aim. For this reason, we suggest discussing the necessity of an education in communication, which two recent Catalan studies take into account from different points of view: «Llibre Blanc: L'educació en l'entorn audiovisual» (White Book: Education in the Audiovisual Environment) from the CAC (Audiovisual Council of Catalonia) and «I Manifest per l'Educació en Comunicació» (First Manifesto for Education in Communication) from AulaMedia.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Gonzalez_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:34 +0100</pubDate>
	<link>https://www.scipedia.com/public/Gonzalez_2006a</link>
	<title><![CDATA[Assessment: strategies to improve the quality in communication processes]]></title>
	<description><![CDATA[
<p>This paper aims to grant evaluation a high-priority role in guaranteeing the quality of communication processes. Conceived as a systematic, deliberate group process, the text presented contributes a series of functional elements used in the assessment of communicative action, as well as a series of techniques, quantitative as much as qualitative, appropriate for achieving the required objectives of excellence.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cascajosa_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:26 +0100</pubDate>
	<link>https://www.scipedia.com/public/Cascajosa_2006a</link>
	<title><![CDATA[Studying at the hellmouth: the school experience in «Buffy, the vampire slayer»]]></title>
	<description><![CDATA[
<p>In this paper we analyze the representation of the school experience, particularly high school and college, in the American television program «Buffy, the vampire slayer». The series tells a story about maturation and responsibility, showing values with which young people can easily identify, but it is also an intelligent and honest representation of school life, using the resource of fantasy to symbolize the fears that teens face during their educational life.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Gallego_Gurpegui_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:18 +0100</pubDate>
	<link>https://www.scipedia.com/public/Gallego_Gurpegui_2006a</link>
	<title><![CDATA[«Cinema and Health» Programme: a public initiative to promote adolescents' health]]></title>
	<description><![CDATA[
<p>The «Cinema and Health» Programme has been created as a tool to improve the quality of health education for teenagers, and it is developed by ESO (compulsory secondary education) teachers. Films delve deeply into emotions, feelings and personal abilities, and they also show everyday life situations to make it easier for young people to reflect on them. Every year, the «Cinema and Health» Programme involves more than 20,000 students from Aragon.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Ortigosa_Ibanez_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:11 +0100</pubDate>
	<link>https://www.scipedia.com/public/Ortigosa_Ibanez_2006a</link>
	<title><![CDATA[Communication in Internet: social constructivism and development of a virtual identity]]></title>
	<description><![CDATA[
<p>Internet Relay Chat (IRC) is a virtual meeting point where people from all over the world can meet and talk. In this context, new strategies for creating shared systems of significance, and strategies for constructing an identity, have evolved. These strategies consist mostly of linguistic resources, since access to visual or auditory features such as appearance or accent, which are significant identity-creating factors in face-to-face interaction, is quite limited. In this paper, following the theoretical framework of social constructivism, we analyze how chat participants may develop and sustain an identity in IRC chatrooms by using various linguistic and/or graphic resources on the web.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Tesouro_Puiggali_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:59:03 +0100</pubDate>
	<link>https://www.scipedia.com/public/Tesouro_Puiggali_2006a</link>
	<title><![CDATA[The virtual school: technology as an educative tool]]></title>
	<description><![CDATA[
<p>The virtual school is restricted neither to a physical place (the school) nor to a fixed timetable. On the contrary, students and teachers interact without spatial or temporal restrictions. Virtual schools provide great opportunities to students who cannot attend regular classes for different reasons, and in some cases technology turns out to be a more successful model than conventional education.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Marcolla_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:58:53 +0100</pubDate>
	<link>https://www.scipedia.com/public/Marcolla_2006a</link>
	<title><![CDATA[Educative and communicative technologies in teacher training programs]]></title>
	<description><![CDATA[
<p>This paper analyses how professors of the teacher training programs at the Federal University of Pelotas (UFPel) view the introduction of educative and communicative technologies (ECT) -in this case, basically, personal computers, interfaces and the Internet- into teacher training programs. In doing so, the paper attempts to articulate theoretical discussions with analytical references to professors' views about the use of technologies in teacher training programs and some contradictions identified in the empirical research.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Santibanez_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:58:46 +0100</pubDate>
	<link>https://www.scipedia.com/public/Santibanez_2006a</link>
	<title><![CDATA[Virtual museums as a teaching and learning tool]]></title>
	<description><![CDATA[
<p>Teachers have in real and virtual museums a didactic tool with innovative and creative potential that allows students to acquire knowledge based on the observation of the natural, historical, artistic, scientific and technical environment. The use of virtual museums as a didactic tool also helps to familiarize students with the cultural and scientific heritage they have inherited and should be able to enrich in the future. The principles that govern every didactic action influence any strategy that we may apply to the use of virtual museums as a didactic tool.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Jimenez_Llitjos_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:58:39 +0100</pubDate>
	<link>https://www.scipedia.com/public/Jimenez_Llitjos_2006a</link>
	<title><![CDATA[Communication processes in virtual cooperative environments]]></title>
	<description><![CDATA[
<p>In this paper we present a teaching resource consisting of a virtual environment that promotes cooperation among students through different user-to-user and environment-to-user communication strategies, and which is free to use for academic and research purposes.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Garcia_2006b</guid>
	<pubDate>Fri, 29 Mar 2019 09:58:31 +0100</pubDate>
	<link>https://www.scipedia.com/public/Garcia_2006b</link>
	<title><![CDATA[A current view of e-learning communities]]></title>
	<description><![CDATA[
<p>This paper presents the different learning communities that are interrelated thanks to the possibilities offered by communication and media technologies. From this point of view, people can take part in different e-learning communities at the same time and help to build, rebuild and share knowledge in the media society.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Galarza_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:58:24 +0100</pubDate>
	<link>https://www.scipedia.com/public/Galarza_2006a</link>
	<title><![CDATA[Coeducation in Andalusian telematic educative network «Averroes»]]></title>
	<description><![CDATA[
<p>This paper makes an analytical approach to the coeducational page developed within the «Averroes» website created by the Junta de Andalucía. The conceptual characteristics and the techniques of this webpage are studied. Coeducation and the use of ICT for its learning are thoroughly examined with a view to achieving the coeducative aim required by the civil government.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Cortinas_Pont_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:58:16 +0100</pubDate>
	<link>https://www.scipedia.com/public/Cortinas_Pont_2006a</link>
	<title><![CDATA[Relationship between journalists and politicians in critical situations: a case study]]></title>
	<description><![CDATA[
<p>This article studies the relationship between journalists and politicians in critical situations. The authors have analyzed the information broadcast by the television networks in Spain (Televisión Española, Antena 3 and Tele 5) on 11, 12 and 13 March 2004, just after the terrorist attack in Madrid. From this research, it has been concluded that the journalists showed little critical capacity: they were in collusion with the politicians, spreading certain badly argued theses and subject matter and using unwarranted expressions.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Moreno_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:58:03 +0100</pubDate>
	<link>https://www.scipedia.com/public/Moreno_2006a</link>
	<title><![CDATA[Magic ingredients and clinical tests in commercials as advertising strategies]]></title>
	<description><![CDATA[
<p>In this paper we analyse several television commercials shown on Spanish national TV channels. There are two different kinds of advertising slogans used to increase the credibility of the advertised products: spots that refer to «magic ingredients», which are subject to trends and designed to make the target audience fantasise about the characteristics of these products; and, secondly, a growing number of advertisements that claim the product has been scientifically tested. However, complaints by consumers' organisations and the scientific community are on the rise concerning the abusive use of scientific terminology in television advertising, as well as the lack of scientific grounding for the claims made in the commercials.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>
<item>
	<guid isPermaLink="true">https://www.scipedia.com/public/Marta_2006a</guid>
	<pubDate>Fri, 29 Mar 2019 09:57:55 +0100</pubDate>
	<link>https://www.scipedia.com/public/Marta_2006a</link>
	<title><![CDATA[Parents' guidance on TV use: a quantitative and qualitative model for content acquisition]]></title>
	<description><![CDATA[
<p>Parents' behavioural patterns as regards TV use define children's interaction with the medium. Parents' supervision helps the child to establish a set of personal criteria concerning both duration and frequency of exposure, as well as age appropriateness. Furthermore, as our research evinces, children in the habit of discussing TV shows with their parents reveal a higher degree of activity in their viewing process, together with a notably finer ability to disclose and decode content in TV messages.</p>
]]></description>
	<dc:creator>Scipedia content</dc:creator>
</item>

</channel>
</rss>