The development of high-speed train lines has increased significantly over the last twenty-five years, leading to more demanding loads on railway infrastructures. Most of these infrastructures were constructed using railway ballast, a layer of granular material placed under the sleepers whose roles are resisting vertical and horizontal loads and withstanding climate action. Moreover, the Discrete Element Method has proved to be an effective numerical method for the calculation of engineering problems involving granular materials. For these reasons, the main objective of this work is the development of a numerical modelling tool based on the Discrete Element Method which allows users to better understand the mechanical behaviour of railway ballast.
The first task was the review of the specifications that ballast material must meet. Then, the features of the available Discrete Element code, called DEMPack, were analysed. After those reviews, it was found that the code needed some improvements in order to reproduce the behaviour of railway ballast correctly and efficiently. The main deficiencies identified in the numerical code were related to the contact between discrete element particles and planar boundaries, and to the geometrical representation of such an irregular material as ballast.
Contact interactions between rigid boundaries and discrete elements are treated using a new methodology called the Double Hierarchy method. This new algorithm is based on characterising contacts between rigid parts (meshed with a Finite Element-like discretisation) and spherical discrete elements. The procedure is described in the course of the work, together with the validation of the method and an assessment of its limitations.
The representation of irregular particles using the Discrete Element Method is a challenging issue that has led to different geometrical approaches. In this work, a thorough review of those approaches was performed, and the most appropriate methods were chosen: spheres with rolling friction and clusters of spheres. The main advantage of spheres is their low computational cost, while clusters of spheres stand out for their geometrical versatility. Some improvements were developed for describing the movement of each kind of particle, specifically the imposition of rolling friction and the integration of the rotation of sphere clusters.
In the course of this work, the way to fill volumes with particles (spheres or clusters) was also analysed. The aim is to properly control the initial granulometry and compactness of the samples used in the calculations.
After checking the correctness of the numerical code with simplified benchmarks, some laboratory tests for evaluating railway ballast behaviour were computed. The aim was to calibrate the ballast material properties and validate the code for the representation of railway ballast. Once the material properties were calibrated, some examples of a real train passing over a ballasted railway track were reproduced numerically. These calculations demonstrated the capabilities of the implemented tool.
This publication is a revised version of the text of the PhD Thesis “Numerical analysis of railway ballast behaviour using the Discrete Element Method” by Joaquín Irazábal, presented at the Technical University of Catalonia on October 6th, 2017.
I would like to thank all people who contributed in one way or the other to the achievement of this work.
First of all I want to express my gratitude to my PhD advisor, Prof. Eugenio Oñate, for his support throughout these years. It has been a pleasure to work with him.
I want to specially thank Fernando Salazar, the Director of CIMNE Madrid. He has been not only my boss but also my colleague. Without his invaluable help and advice this work would not be what it is. I also want to thank my other colleagues in the CIMNE Madrid office, David and Javier, for the good times we have spent and for letting me share my experiences with them, always giving me useful opinions.
Thanks to Miguel Ángel and Salva for their cooperation and assistance and also for making me feel part of the team when I go to Barcelona. I cannot forget to thank the other DEM Team members, Guillermo and Ferran, for their outstanding work, which has been fundamental for the code developments implemented during this work. Thanks to Miquel Santasusana for making our collaboration so enjoyable; we shared an article and many hours of work, but also great moments out of the office. I should also thank Pablo for his patience with me during my first months in CIMNE, helping me start to understand KRATOS; Antonia, who has helped me ever since I started studying the Master; and the colleagues in CIMNE Cuba, Roberto and Carlos, who assisted me from the moment I asked for help. I also want to thank Merce for her kind help with the difficult administrative issues. Maybe because I am in Madrid I am missing some people in Barcelona, but my intention is to thank everybody belonging to the CIMNE family.
Last but not least, thanks to all my friends and family, especially my parents, Juanjo and Charo, and my sister, Lucía, for always being at my side. Finally, I would like to thank the person with whom I have shared the most time and experiences over the last ten years: thanks Patry for supporting me every day but above all for making me happy.
This work was carried out with the financial support of CIMNE via the CERCA Programme / Generalitat de Catalunya and the Spanish MINECO within the BALAMED (BIA2012-39172) and MONICAB (BIA2015-67263-R) projects.
The development of high-speed lines has increased significantly over the last twenty-five years, leading to more demanding loads on railway infrastructures. Most of these infrastructures are built with ballast, a layer of granular material placed under the sleepers whose main functions are: resisting vertical and horizontal loads, distributing them over the platform, and withstanding climate actions. Furthermore, the Discrete Element Method was found to be very effective for the calculation of engineering problems involving granular materials. For these reasons, it was decided that the main objective of the thesis would be the development of a numerical modelling tool based on the Discrete Element Method that allows users to better understand the mechanical behaviour of railway ballast.

The first task was the review of the specifications that ballast must meet. Next, the features of the available Discrete Element code, called DEMPack, were analysed. After those reviews, it was found that the code needed some improvements in order to reproduce the behaviour of railway ballast correctly and efficiently. The main deficiencies identified in the numerical code relate to the contact between particles and planar boundaries and to the geometrical representation of a material as irregular as ballast.

Contacts between rigid boundaries and discrete elements are treated using a new methodology called the Double Hierarchy method. This new algorithm is based on characterising contacts between rigid parts (discretised similarly to finite elements) and spherical discrete elements. The contact search and characterisation procedure is described in detail throughout the thesis. In addition, the validation of the method is shown and its limitations are assessed.

The representation of irregular particles using the Discrete Element Method can be approached from different geometrical perspectives. In this work, a review of these approaches was carried out, and the most suitable methods were chosen: spheres with rolling resistance and clusters of spheres. The main advantage of using spheres is their low computational cost, while sphere clusters stand out for their geometrical versatility. Some improvements have been developed to describe the motion of each type of particle, namely the imposition of rolling resistance and the integration of the rotation of sphere clusters.

In the course of this work, the way to fill volumes with particles (spheres or clusters) was also analysed. The aim is to properly control the initial granulometry and compactness of the samples used in the calculations.

After checking the behaviour of the numerical code with simplified tests, some laboratory tests with railway ballast were emulated numerically. The objective was to calibrate the ballast properties and validate the code to represent its behaviour accurately. Once the material properties were calibrated, some examples of a train passing over a ballasted track were reproduced numerically. These calculations demonstrate the capabilities of the implemented numerical tool.
The railway line system plays an important role in the transport network of any country, and its maintenance is essential. Among other solutions, ballast is traditionally used to support railway tracks as it is relatively inexpensive and easy to maintain.
Before the advent of high-speed trains, more attention was given to the track superstructure (consisting of rails, fasteners and sleepers) than to the substructure (consisting of ballast, subballast and subgrade). However, the maintenance cost of the substructure is far from negligible, and a better knowledge of railway ballast behaviour may allow a more efficient use of it, saving money and increasing rail transport safety.
Besides the importance of studying ballast properties in economic and safety terms, it should be pointed out that a new variable has appeared in the last two decades: the increase in train speed. High-speed trains have improved people's mobility all around the world, at the cost of increasing vibrations and loads on the railway track, making its behaviour more uncertain.
|Figure 1: High-speed train travelling over a ballasted railway track.|
Efficiency, safety and the more demanding loads and vibrations have raised interest in studying railway infrastructures in depth. Within those infrastructures, the ballast layer is one of the elements whose behaviour is most uncertain, due to its granular nature. For this reason, railway ballast laboratory tests are being performed by different construction companies and research teams. However, real tests are difficult and expensive to carry out. In addition, there are parameters that cannot be easily measured in the laboratory.
To overcome these problems, the calculation of railway ballast and other granular materials is being addressed by means of the development of different computational methods. Regarding the computational analysis, granular materials can be considered either as discrete individuals that interact through proper contact laws, or as a continuum, whose behaviour is governed by some constitutive law.
Although, in general, granular materials can be defined as numerous collections of rigid or deformable bodies that interact with one another, they can be classified based on various criteria. There are granular materials whose particles can be of different grain sizes, from nanoscale powders to groups of stones; there are granular materials whose particles appear rounded and others with very sharp particles; and there also exist very compacted granular materials and others whose particles appear disperse. These and other characteristics mean that there is no single predictive model for the behaviour of granular materials, and their calculation must be addressed from diverse numerical perspectives depending on their properties.
In this particular case, the interest lies in the calculation of railway ballast behaviour. Railway ballast refers to the layer of crushed stones placed between and underneath the sleepers whose purpose is to provide drainage and structural support for the dynamic loading applied by trains. Railway ballast forms a dense collection of particles with relatively large grain size, as compared to the depth of the ballast bed. It typically works under high frequency loads exerted by high-speed trains travelling over the rails.
Traditionally, complex geomechanical problems were addressed using refined constitutive models based on continuum assumptions. Although these models may be accurate in the evaluation of the critical state of soils [5,6,7,8], or the flow of bulk material masses, they are not able to represent local discontinuities, which typically play a fundamental role in the behaviour of granular materials. This discontinuous nature induces special features such as anisotropy or local instabilities, which are difficult to understand or model based on the principles of continuum mechanics.
The most natural choice to represent granular materials is the use of discrete techniques, intrinsically appropriate to account for their discontinuity and heterogeneity. The two principal approaches within the discrete framework are: Non-smooth and Smooth Discrete Element Methods.
Within Non-smooth Discrete Element Methods, interactions are described by shock laws and Coulomb friction, written as non-differentiable relations. Meanwhile, Smooth Discrete Element Methods describe the interactions between grains by regular functions involving gaps, relative velocities and reaction forces.
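To illustrate what "regular functions" means in the smooth case, the sketch below implements a generic spring-dashpot contact law: the normal force is a smooth function of the overlap (negative gap) and the relative normal velocity. This is a generic illustration, not code from DEMPack, and all parameter values and function names are hypothetical.

```python
import math

def normal_contact_force(overlap, overlap_rate, kn=1.0e7,
                         mass_eff=0.3, restitution=0.6):
    """Smooth (regularised) normal contact law: a linear spring-dashpot.

    The force is a regular function of the overlap and the relative normal
    velocity. All numeric parameters are illustrative placeholders.
    """
    if overlap <= 0.0:
        # No interpenetration: the smooth law simply returns zero force.
        return 0.0
    # Damping tuned so that a binary impact dissipates energy consistently
    # with the chosen coefficient of restitution.
    log_e = math.log(restitution)
    cn = -2.0 * log_e * math.sqrt(mass_eff * kn / (math.pi**2 + log_e**2))
    return kn * overlap + cn * overlap_rate
```

A non-smooth formulation would instead express the same contact as a non-differentiable complementarity condition (zero gap or zero force), solved with shock laws rather than evaluated as a function.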
The most popular Non-smooth discrete element methods are:
Smooth Discrete Element Methods can also be addressed by different approaches, where two methods can be highlighted:
The main features of the four mentioned methods are summarised in Table 1.
|                                            | ED (Non-smooth) | NSCD (Non-smooth) | MD (Smooth) | DEM (Smooth) |
|--------------------------------------------|-----------------|-------------------|-------------|--------------|
| Valid for dense collections of particles   | ⨉ | ✓ | ⨉ | ✓ |
| Valid for loose collections of particles   | ✓ | ⨉ | ✓ | ✓ |
| Contact mechanism accurately characterised | ⨉ | ✓ | ⨉ | ✓ |
| Allows multiple contacts                   | ⨉ | ✓ | ✓ | ✓ |
| Implicit scheme for time integration       | ⨉ | ✓ | ⨉ | ⨉ |
| Explicit scheme for time integration       | ✓ | ⨉ | ✓ | ✓ |
DEM can be used for the calculation of loose collections of particles, but the computational time would be very high compared to the ED method or MD.
In this research, ED and MD methods are discarded because they are not conceived for the calculation of dense collections of particles and the contact mechanism in both methods is not accurately treated. Between DEM and NSCD the main difference is the use of explicit and implicit time integration schemes respectively. Explicit algorithms are more straightforward than implicit ones, but their computational stability limits the calculation time step. On the other hand, the time step for implicit schemes is not limited by the system stability, at the cost of a higher computational demand.
Many studies [17,18,19] have tried to evaluate whether DEM or NSCD is better for the calculation of dense collections of particles. In all of them the conclusion is that it depends on the case study, so there is no general rule to choose one or the other.
In this particular case, considering that high-speed trains exert high frequency loads over the ballast bed, the use of explicit time integration schemes could be computationally advantageous [20,21]. These schemes do not need to include any additional term to take into account the frequency of the input load. In order to correctly capture the dynamic response, the calculation time step should be several times smaller than the input load period. Under this condition, the explicit integration yields sufficient accuracy and the time step is generally below its stability limits. Another important outcome of the use of an explicit integration is the easier parallelisation of the code and the avoidance of linearisation and of system solvers. These considerations lead to the conclusion that, although NSCD is an interesting method to address this problem, the DEM is more appropriate.
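The explicit-versus-implicit trade-off above can be made concrete with a minimal sketch of an explicit update of the symplectic-Euler type and the associated stability bound on the time step. The helper names and the safety factor are illustrative assumptions, not code from DEMPack.

```python
import math

def critical_time_step(mass, stiffness, safety=0.2):
    """Stability bound for explicit integration of a mass-spring contact.

    For an undamped contact pair the limit is dt_crit = 2 / omega
    = 2 * sqrt(m / k); in practice a safety factor well below 1 is used.
    """
    return safety * 2.0 * math.sqrt(mass / stiffness)

def explicit_step(x, v, force, mass, dt):
    """One symplectic-Euler step: update velocity first, then position."""
    a = force(x) / mass
    v = v + a * dt
    x = x + v * dt
    return x, v
```

An implicit scheme would instead solve a (possibly nonlinear) system at every step, which removes the stability bound on the time step but requires linearisation and a system solver, and is harder to parallelise.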
This work falls within the framework of two research projects funded by the Spanish Ministry of Economy, Industry and Competitiveness (MINECO):
The BALAMED project aimed to develop a numerical tool, based on the DEM, for calculating the response of the rail-sleeper-ballast system under the loads of high-speed trains. It started in 2013 and finished in 2015. Meanwhile, the MONICAB project started in 2016 and will finish in 2018. Its main objective is the development of a numerical modelling tool, based on the DEM, for the analysis of the behaviour of sand-fouled railway ballast.
The current thesis presents most of the progress made within those projects. The research was conducted in parallel with the improvements made to the open-source DEM code developed at CIMNE1, the so-called DEMPack (www.cimne.com/dempack/). The data structures and algorithms used in DEMPack are implemented through the Kratos multiphysics software suite, an open-source framework for the development of numerical methods for solving multidisciplinary engineering problems.
|Figure 2: DEMpack logo.|
From all the above-mentioned, the general objective of this thesis can be stated as follows: develop a numerical tool based on the DEM, within DEMpack, able to reproduce ballast behaviour, including its interaction with other structures. To reach this goal some partial objectives should be fulfilled.
(1) Centre Internacional de Metodes Numerics en Enginyeria.
The work is divided into eight chapters, including the current one:
Regarding particles arrangement, the generation of compacted groups of spherical particles and sphere clusters is also addressed in Chapter 5.
This chapter is divided into three sections. The first one presents a brief introduction to railway infrastructures and a review of their components, from the superstructure to the substructure, focusing on the ballast layer. Section 2.2 describes the main features of ballast and sleepers. Finally, section 2.3 reviews the state of the art of the laboratory tests used to evaluate railway ballast behaviour.
Railway infrastructures can be split into two parts: the superstructure, which transmits the train loads to the substructure, and the substructure, which distributes those loads to the soil. The superstructure is directly in contact with the train wheels, while the substructure is in contact with the superstructure via the sleepers, which are embedded in the ballast layer as shown in Figure 3.
|Figure 3: Scheme showing the position of the rails, the sleepers and the ballast layer.|
Traditionally, the cross-section of the substructure of conventional railway tracks consisted of a simple ballast layer over the platform. Besides distributing traffic loads, the ballast layer contributes to rainwater evacuation, protection of the platform against moisture variations, longitudinal and lateral stabilisation, and the damping of high loads.
The typical cross-section of conventional and high-speed railway tracks is presented in Figure 4. The differences between the two are due to the more demanding loads exerted by high-speed trains. Although the function of the ballast layer is the same in both types of track, in high-speed railway infrastructures the ballast layer is thicker and more layers of various materials (subballast, gravel and sand) are placed over the platform, in order to distribute train loads more efficiently and prevent platform damage due to erosion. These differences lead to a higher stiffness of the substructure in high-speed railway tracks.
|(a) Conventional railway substructure.||(b) High-speed railway substructure.|
|Figure 4: Typical cross-section of conventional and high-speed railway tracks. Scheme including typical values for the vertical stiffness of the platform and subgrades. Source: López Pita.|
The primary role of the railway superstructure is to transmit loads from the train wheels to the substructure. Figure 5 shows its main components.
|Figure 5: Superstructure configuration.|
In addition to the components shown in Figure 5, there is another significant element of the superstructure, called the rubber sole. Rubber soles are elastic elements located under the sleepers that modify track stiffness and increase elasticity. They are normally used in viaducts, where track elasticity should be increased in order to avoid rapid deterioration.
When a train travels over the track, it has three degrees of freedom corresponding to displacement along its principal axes: longitudinal (travelling direction), vertical and lateral. In addition, the train has three further degrees of freedom corresponding to rotation about those axes, called roll (about the longitudinal axis), yaw (about the vertical axis) and pitch (about the lateral axis). Figure 6 shows a diagram of those movements.
|Figure 6: Train translational and rotational degrees of freedom. The vehicle travels in the longitudinal direction.|
These displacements and rotations produce a set of loads that have to be taken into account when dimensioning the railway track. Vertical loads are the main criterion for designing the components of the track, lateral loads determine the maximum vehicle speed, whereas longitudinal loads can cause horizontal buckling of the track.
The high loads to which the railway track is subjected, in any of the three directions, can trigger defects in the railway infrastructure. To measure the impact of these defects, in safety and comfort terms, it is important to quantify the track quality, which can be characterised by the following parameters:
Figure 7 shows schematically some of these defects.
|Figure 7: Typical railway track alignment defects. Source: López Pita.|
Railway stiffness refers to the resistance of the whole track against deformations in the vertical direction. Burrow highlighted that the magnitude of the railway stiffness is mainly influenced by the substructure. Considering only the layers that compose the substructure, ballast is the most relevant material. It should be noted that, comparatively, the platform has a greater influence, but its stiffness is essentially fixed and cannot be adjusted.
From Burrow's developments, it can be concluded that there is an optimal railway stiffness (at least from the theoretical point of view). When railway stiffness is excessively low, too much rail deformation may occur, while, if it is too high, the infrastructure is damaged rapidly. López Pita et al. proposed a range of optimal values for railway stiffness. Their work was based on optimising maintenance costs and energy dissipation costs (due to excessive deformations), using data from the high-speed tracks Paris-Lyon and Madrid-Sevilla. The study concluded that the overall railway stiffness should be between 70 and 80 kN/mm.
The EUROBALT II European project also dealt with the deterioration of railway tracks. Its main objective was to identify the parameters that should be controlled to reduce damage in railway tracks. The findings of this study were that the parameters most influencing track behaviour are the stiffness, the displacement of the rails and the settlement of the different layers.
Railway ballast is the material under consideration in this thesis; it is therefore important to study its properties in depth. In the first part of this section, ballast specifications are described. In the second part, the main features of the sleepers used in Spain are defined. Sleeper characteristics are significant in this research because they interact directly with the ballast. Finally, some aspects of the fasteners are briefly discussed.
Ballast, as a structural element, is formed by a set of particles of different sizes and shapes. The ballast is, thus, a layer of granular material placed under the sleepers, where it performs important roles: resisting the vertical and horizontal loads produced by trains passing over the rails and withstanding climate action.
|Figure 8: Ballast stones.|
The Standard UNE-EN 13450:2015  specifies the technical characteristics of the ballast used as a supporting layer in railway tracks. The Standard defines quality controls to which ballast should be subjected. Those quality controls have to do with five material properties: granulometry, Los Angeles coefficient (CLA)1, micro-Deval coefficient (MDS)2, flakiness index and particle length.
The size distribution of particles established by the Standard UNE-EN 13450:2015 is presented in Table 2. Ballast particle size should be between and . In addition, the presence of rounded stones is discouraged, because that kind of particle slips easily and can obstruct the tamping operation.
|Sieve size (mm)||Cumulative % passing through each sieve|
The tamping operation is performed by the tamping machine, which moves each sleeper and rail into their correct vertical and horizontal position. It then introduces its tines into the ballast layer. When the tines reach the required depth they start to vibrate, in order to arrange the ballast stones under the sleeper in the most compacted manner. The need for this operation lies in the necessity of correcting track misalignments by compacting the ballast under the sleepers, providing a solid foundation. If the track is not rigid enough when a vehicle is travelling over it, two different phenomena can appear simultaneously: vertical deflection and the lifting wave. Vertical deflection affects a rail length of to approximately. Its maximum value at the point of application of the load ranges from to , considering a load of per wheel. The lifting wave, on the other hand, refers to the lift of the front part of the railway track in the direction of movement; its magnitude is usually about the of the vertical deflection.
Regarding the train passage, each train axle produces a sleeper stroke on the ballast layer (on a freight train 150 strokes can be reached) which, together with the increasing weight of the sleepers (about ), may lead to a rapid deterioration of the ballast stones. In order to minimise this deterioration and to resist the tamping operation, a sufficiently hard rock, difficult to break, is needed. According to the Railway Standard UNE 22950-1:1990, this requirement is met if the original rock has a compression resistance of . The Standard UNE-EN 13450:2015 establishes a maximum value for the Los Angeles coefficient, which should be less than . Abrasion requirements are met by requiring the ballast to have a Deval coefficient value over 15.
Particle shape also affects ballast behaviour, which is why the flakiness index and particle length are evaluated. The flakiness index test is performed to determine the proportion of slabs in the granular material used in the construction of railway tracks. Elongated particles can break, leading to modifications of the particle size and decreasing the load capacity. In this context, slabs are the fraction of granular material whose minimum dimension (thickness) is less than 3/5 of the average size of the considered material fraction. The test consists of two sifting operations, the first to separate particles into groups depending on their size, and the second with a parallel-bar sieve. The bars of the second sieve are separated times the size of the first sieve. The flakiness index is expressed as the weight percentage of ballast passing through the bar sieve. According to the regulations, this index must be less than or equal to .
The particle length index is defined as the mass percentage of ballast particles greater than or equal to 100 mm in length, in a ballast sample weighing more than 40 kg. The test is performed by measuring each of the particles with a calliper. According to the legislation, the particle length index must be lower than 4%.
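The particle length index described above reduces to a simple mass-weighted percentage; a minimal sketch follows, where the function names and input format (per-particle calliper lengths paired with per-particle masses) are my own illustrative assumptions, not taken from the Standard.

```python
def particle_length_index(lengths_mm, masses_kg):
    """Mass percentage of particles at least 100 mm long.

    Each entry of `lengths_mm` pairs with the same entry of `masses_kg`,
    mimicking the calliper measurement described in the text.
    """
    total = sum(masses_kg)
    long_mass = sum(m for length, m in zip(lengths_mm, masses_kg)
                    if length >= 100.0)
    return 100.0 * long_mass / total

def complies(lengths_mm, masses_kg, limit_percent=4.0):
    """Acceptance check against the 4% ceiling quoted in the text."""
    return particle_length_index(lengths_mm, masses_kg) < limit_percent
```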
| Action   | Function | Property | Test |
|----------|----------|----------|------|
| Vertical | Elasticity and damping | Elastic modulus | Load plate test |
|          |          | Ballast thickness | Minimum thickness |
|          | Abrasion resistance | Resistance to abrasion | Wet micro-Deval |
|          | Alleviate pressure on the railway platform | Ballast thickness | Minimum thickness |
|          | Withstand rail shocks | Impact resistance | Los Angeles coefficient |
| Climate  | Assist the drainage | Granulometry | |
|          | Ice resistance | Frost resistance | |
Table 3 summarises other features required of materials used as ballast. To fulfil all those functions, the layer thickness should be between 25 and 35 cm. The lower limit is set by the need to achieve the load resistance objectives, since with less ballast the specifications could not be met, while the upper value is set by the need to limit settlements and geometrical defects on the railway track.
From all the above-mentioned, it can be concluded that, in addition to dealing with climate actions, the main functions of the ballast layer are:
(1) Wear coefficient of Los Angeles (CLA). It measures the wear resistance by attrition and impact of aggregates. It is the difference in weight between the initial sample and the material retained by the 1.6 UNE sieve (once subjected to an abrasive and standardised process using iron balls), divided by the initial weight of the sample.
(2) Deval coefficient: determined by the value obtained in the Deval test, which consists of introducing 44 stones of 7 , weighing 5 , into a cylinder that rotates around an inclined axis. The cylinder rotates for 5 hours, completing 10,000 turns. Then the detached material is weighed, obtaining "P" in grams. The Deval coefficient is given by the ratio 400/P.
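The two wear coefficients defined in footnotes (1) and (2) reduce to simple ratios; the sketch below expresses them as functions. The function names, and the convention of expressing the Los Angeles coefficient as a percentage, are my own assumptions for illustration.

```python
def los_angeles_coefficient(initial_mass_g, retained_mass_g):
    """CLA: mass lost in the abrasion drum relative to the initial mass.

    Footnote (1) defines the ratio; it is expressed here as a percentage,
    which is a common convention (an assumption, not from the footnote).
    """
    return 100.0 * (initial_mass_g - retained_mass_g) / initial_mass_g

def deval_coefficient(detached_mass_g):
    """Deval coefficient = 400 / P, with P the detached mass in grams."""
    return 400.0 / detached_mass_g
```

Note that the two coefficients run in opposite directions: a harder rock gives a lower CLA but a higher Deval coefficient, consistent with the limits quoted in the text (CLA below a maximum, Deval above 15).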
The elongated pieces, made of various materials, located between the rails and the ballast layer are called sleepers. Their main functions are:
The variables that most significantly influence sleeper behaviour are: their dimensions, weight and elasticity.
Sleeper dimensions are important because they determine the available supporting area, which can reduce the stresses transmitted to the ballast layer. A high weight contributes to increasing the longitudinal and lateral stability of the track; however, heavier sleepers are more difficult to install in the construction phase. Regarding sleeper elasticity, it provides stability by absorbing mechanical forces and preventing deterioration, which minimises maintenance costs.
Sleepers can be classified according to the material they are made of or based on their shape. Sleepers can be made of wood, steel, cast iron, reinforced concrete, pre-stressed concrete or synthetic materials. According to their shape, sleepers can be monoblock, semi-sleepers or twin block, as shown in Figure 9.
|(a) Monoblock.||(b) Semi-sleeper.||(c) Bi-block.|
|Figure 9: Different types of sleepers depending on their shape. Source: Álvarez and Luque .|
Although the list of different sleepers may seem large, in Spain, basically three types are used:
|(a) Wood sleepers.||(b) Monoblock concrete sleepers.||(c) Twin block concrete sleepers.|
|Figure 10: Different kinds of sleepers, by material and shape.|
Wood sleepers, defined in the Standard NRV 3-1-0.0, may be made of different kinds of trees, such as oak, beech or pine. These sleepers must comply with the established regulations, which date from 1966.
In the first railway tracks, constructed in the nineteenth century, the only material used to manufacture sleepers was wood, since its physical and mechanical properties, added to its abundance, made this material the best choice. However, over time the possibility of using concrete to build sleepers made wood sleepers almost disappear. Nowadays, wood sleepers are used very rarely and never in high-speed tracks. Wood sleepers can be used, for example, in tracks where the stiffness is very high, as on metal bridges.
Prestressed concrete displaced wood as a material for sleeper manufacturing. These are the best-known advantages and drawbacks of concrete sleepers over wood sleepers:
The Standards N.R.V. 3-1-2.1 and N.R.V. 3-1-3.1 describe, respectively, the main features of monoblock and twin block sleepers. In Spain, monoblock sleepers are widely used because of their resistance (monoblock sleepers can be prestressed) and their larger bearing surface, which allows a better distribution of loads. The advantages of twin block sleepers are that they are lighter and their resistance to lateral movement is good (in France they are still used frequently). The main problems of twin block sleepers are:
Regarding the materials used, technical specifications require a high quality cement with high strength and uniform-size siliceous aggregates. The compressive strength of the concrete must be greater than , and the tensile strength of the steel should be about .
There are many types of concrete sleepers depending on their dimensions, which can be modified if necessary. If a new sleeper design is to be used, it should be fully defined in a draft drawing signed by the applicant. The draft drawing must include the basic dimensions determined by the department responsible for the railway infrastructure administration. Sleeper length should be about and the base width in the outer part is equal to , excluding very few exceptions.
For monoblock sleepers, which will be the object of study in this work as they are the most used in Spain, technical specification ET 03.360.571.8  establishes the parameters that should be defined in the draft. Those parameters are displayed in Table 4, and Figure 11 shows them graphically.
|Concrete element total length|
|,||Concrete element lower and upper part thickness|
|Height in each position along the entire concrete element|
|Distance from the outer reference point to the fasteners|
|Distance between the outer reference point and the end of the concrete element|
|Tilting of the rail support plane|
|Flatness of each supporting area to two points far away 150 mm|
|Relative torsion between the supporting planes of both rails|
|Figure 11: Sleeper geometry parameters .|
Direct interaction between ballast and sleepers makes ballast-sleeper friction a key parameter that greatly affects the behaviour of the whole track. Specifically, it is fundamental in the evaluation of the resistance of the track, mainly against loads in the lateral and longitudinal directions. However, it is a property that is very difficult to measure.
Zand and Moraal  compared the variation of the lateral force (the force needed to move the sleeper laterally) with the variation of the vertical load (the weight on the sleeper), obtaining a friction coefficient value of 0.7247. They also compared their results with previous studies, which concluded that the ballast-sleeper friction coefficient usually lies between 0.665 and 0.872.
The interaction between rails and sleepers takes place through the fasteners. There are many different types of fasteners, since each administration uses the most convenient ones. Regardless of the model, they attach each rail to the sleepers at two points. The fasteners are elastic and their main function is fixing the foot of the rail to the sleeper, which has to be prepared for the inclusion of these devices.
Well-defined numerical and laboratory tests are available in the literature. These laboratory tests are suitable for validating the numerical code. In this section, some of the most used tests are briefly described.
One of the problems that may appear in railway infrastructures is lateral buckling, which is one of the most critical problems in railway tracks . It can greatly affect circulation and may cause catastrophic derailments . Lateral buckling can be caused by mechanical or thermal loads, and it is relatively common in countries with large temperature differences between winter and summer. For this reason, the lateral resistance of the track is one of the most important parameters regarding track stability, the design of the ballast layer being a key aspect.
The evaluation of the lateral resistance of the ballast layer has been studied by several researchers. Zand and Moraal  carried out tests that consist of laterally pulling a track panel with five sleepers inside a ballast bed, measuring the force needed to move it. Moreover, Kabo  developed a numerical study using the FEM, evaluating the influence of the geometry of the ballast layer on its lateral resistance. Figure 12 shows one of the analysed geometries.
|Figure 12: Perspective view of FE model of a sleeper embedded inside a ballast bed. Source: Kabo .|
Repeated train loading implies ballast deterioration, which triggers undesired settlements in the railway track and fouling of the ballast layer due to particle breakage. One of the available tests to evaluate the consequences of cyclic loading in the ballast layer is the box compression test (Figure 13).
|(a) Test set-up, top view.||(b) Test set-up, front view.|
|Figure 13: Box compression test set-up. Source: Lim .|
The box compression test consists of applying a cyclic load on a ballast bed inside a rigid box, measuring the ballast deformation as a function of the number of load cycles. Indraratna et al.  reproduced this test using a two-dimensional breakable DE model for ballast.
Figure 14 shows the layout of a typical large-scale direct shear test. This laboratory test consists of introducing ballast inside a box divided into two halves, pulling from the lower box and computing the stress-strain curve of the ballast sample. The ballast can be confined in order to reproduce rail track conditions more accurately.
|Figure 14: Box shear test sample. Source: Huang and Tutumluer .|
This test has already been reproduced using the DEM by Indraratna et al.  and Huang and Tutumluer , among other authors. Numerically, the major interest of this test is that it is performed on a relatively small domain. This implies a moderate computational cost, so it can be very useful for performing multiple tests in the calibration phase.
The most frequently used test to characterize railway ballast is the triaxial test. The reason is that, in the triaxial test, the ballast sample is confined to a certain pressure, simulating its working conditions under the track. There exist several research works in which ballast triaxial tests were developed using the DEM [46,47,1,48].
One of the difficulties in the simulation of triaxial tests using the DEM is the introduction of the confining pressure. In laboratory tests, this pressure is applied on a membrane that confines the sample, as can be seen in Figure 15. Some authors use simplifications to introduce the pressure on the sample, but there are methods that couple discrete elements (for the ballast particles) and finite elements (for the membrane) using explicit schemes . The possibility of considering the DEs and FEs fully coupled would widen the practical applicability of the model.
|Figure 15: Triaxial test device with ballast inside. Source: Harkness et al. .|
Since Cundall and Strack  presented the first ideas of the DEM in 1979, this numerical technique has grown in popularity and is known, nowadays, as a powerful and efficient tool to reproduce the behaviour of granular materials.
For the analysis of granular materials with the DEM, each grain is represented as a rigid particle. In the first DEM approaches, those particles used to be spheres (3D) or circles (2D), but numerous advances have since been developed in order to represent different geometries. Regardless of the geometry of the particles, they interact among themselves in the normal and tangential directions, assuming that the material deformation is concentrated at the points of contact. Material properties are defined by appropriate contact laws, which can be seen as the formulation of the material model at the microscopic level.
In this chapter, the main features of the classical DEM with spheres are presented. The solution strategy is based on three main steps: contact detection, evaluation of forces and integration of motion.
Due to the method formulation, the definition of appropriate contact laws is fundamental, and fast contact detection is of significant importance in DEM calculations. The contact status between two DE particles can be calculated from their relative position in the previous time step and is used for updating the contact forces at the current step. The relative computational cost of contact detection over the total computational cost is high in most DEM simulations and so, the problem of how to recognise all contacts precisely and efficiently has received considerable attention in the literature [50,51].
Contact detection for non-spherical particles may involve the resolution of a non-linear system of equations (see the case of superquadrics [49,52] or polyhedra [53,54,55]); for that reason, the use of spheres is the most efficient choice.
Traditionally, the contact detection is split into two stages:
Cell based algorithms [57,58] divide the entire calculation domain into rectangles (2D) or cuboids (3D). Meanwhile, the target particle is surrounded by a simple bounding box that is used to check rapidly which cells intersect with it. The list of intersecting cells is stored for each particle (see Figure 16). The potential neighbours of every target particle are all the elements stored in the cells to which its bounding box belongs.
|Figure 16: Cell based algorithm scheme in 2D.|
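The cell based strategy above can be sketched as follows. This is a minimal, illustrative implementation (function and variable names are not taken from DEMPack): each sphere is registered in every cell its bounding box overlaps, and candidate pairs are particles sharing at least one cell.

```python
# Sketch of a cell-based global neighbour search (GNS) for spheres,
# given as (centre, radius) tuples. Illustrative names and structure.
from collections import defaultdict
from itertools import product

def build_cell_map(particles, cell_size):
    """Map each cell index to the particles whose bounding box overlaps it."""
    cells = defaultdict(list)
    for idx, (centre, radius) in enumerate(particles):
        # Axis-aligned bounding box of the sphere, in cell coordinates.
        lo = [int((c - radius) // cell_size) for c in centre]
        hi = [int((c + radius) // cell_size) for c in centre]
        for cell in product(*[range(l, h + 1) for l, h in zip(lo, hi)]):
            cells[cell].append(idx)
    return cells

def potential_neighbours(particles, cell_size):
    """Candidate pairs: particles whose bounding boxes share at least one cell."""
    cells = build_cell_map(particles, cell_size)
    pairs = set()
    for members in cells.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                pairs.add((min(members[a], members[b]),
                           max(members[a], members[b])))
    return pairs
```

The candidate pairs still have to be confirmed (or discarded) by the subsequent LCR stage.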
Tree based algorithms [59,60] start by splitting the entire domain into two sub-domains, taking into account the position of a central particle. Then, two particles, one in each of those sub-domains, are used to divide them into smaller regions, alternating the splitting dimension (X, Y and Z in 3D). The process continues until the required number of sub-domains is obtained (see Figure 17). The potential neighbours are determined following the tree in the upward direction.
|Figure 17: Two-dimensional sub-domain decomposition of tree based algorithm.|
Numerical tests showed better performance for the cell based algorithms (D-Cell  and NBS ) than for the tree based ones (ADT  and SDT ), especially for large-scale problems. It should also be noted that the efficiency depends on the cell dimension and, in general, the size distribution can affect the performance. Han et al.  suggest a cell size of three times the average discrete object size for 2D problems and five times for 3D problems. It is worth noting that (using these or other efficient algorithms) the cost of the GNS typically represents less than 5 percent of the total computation, while the total cost of the search can reach values over 75 percent . In this sense, the focus should be placed on the next stage rather than on optimising the GNS algorithms.
Figure 18 shows the LCR stage for spherical particles. It can be seen that the only data needed to completely define the contact are the centres of the spheres and their radii. In the example of Figure 18, contact exists between the target particle $i$ and particle $j$ because the distance between their centres is less than the sum of their radii:

$$\|\mathbf{c}_i - \mathbf{c}_j\| < r_i + r_j$$

where $\mathbf{c}_i$, $\mathbf{c}_j$ are the centres and $r_i$, $r_j$ the radii of particles $i$ and $j$ respectively.
|Figure 18: Local Contact Resolution (LCR).|
On the other hand, Figure 18 shows that other potential contacts found in the GNS stage are discarded in the LCR stage.
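For spheres, the LCR check reduces to the distance criterion above. A minimal sketch (illustrative names), which also returns the indentation used later by the contact laws:

```python
import math

def sphere_contact(c1, r1, c2, r2):
    """LCR check for two spheres: contact exists when the centre distance is
    smaller than the sum of the radii; the (positive) difference is the
    indentation."""
    dist = math.dist(c1, c2)          # Euclidean distance between centres
    indentation = r1 + r2 - dist
    return indentation > 0.0, indentation
```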
It is worth mentioning that, depending on the nature of the calculation, it may be advantageous not to perform contact detection at every time step, due to its computational cost. In most numerical calculations, a contact lasts several time steps because the particle positions vary very little, especially in the computation of dense collections of particles such as railway ballast. In this regard, computational efficiency can be improved by performing contact detection only every several time steps, simply updating the characteristics of the previous contacts (point of contact, normal and tangential force direction and penetration) in between. However, numerical stability should be taken into account because, if contacts are detected late, indentations (and forces) will reach high values, leading to instabilities and inaccurate results.
In this work, the GNS stage is carried out using a basic cell-based algorithm parallelised using OpenMP, while the LCR is performed between spheres, leading to an efficient contact characterisation. The LCR is always performed between spheres because the particles used are spheres and clusters of spheres (rigid particles formed by groups of spheres overlapped in a rigid way, see Section 5.3). Issues related to the contact between DEs and planar boundaries are explained in detail in Chapter 4.
The behaviour of granular materials is governed by grain-grain contact interactions. This is the basis of the DEM approach, where the material is characterised by means of defining the interactions between its constituent particles.
In the basic DEM formulation, standard rigid body dynamics equations define the translational and rotational motion of the particles. For the special case of spherical particles these equations, for the $i$-th particle, can be written as

$$m_i \, \ddot{\mathbf{u}}_i = \mathbf{F}_i$$

$$\mathbf{I}_i \, \dot{\boldsymbol{\omega}}_i = \mathbf{T}_i$$

where $\mathbf{u}_i$, $\dot{\mathbf{u}}_i$ and $\ddot{\mathbf{u}}_i$ are respectively the particle centroid displacement and its first and second derivatives in a fixed coordinate system, $\dot{\boldsymbol{\omega}}_i$ is the angular acceleration, $m_i$ is the particle mass, $\mathbf{I}_i$ is the second order inertia tensor with respect to the particle centre of mass, $\mathbf{F}_i$ is the resultant force, and $\mathbf{T}_i$ is the resultant moment about the central axes.
$\mathbf{F}_i$ and $\mathbf{T}_i$ are computed as the sum of the external applied loads and the contact interactions with the neighbouring particles. From the above-mentioned, $\mathbf{F}_i$ and $\mathbf{T}_i$ can be expressed as

$$\mathbf{F}_i = \mathbf{F}_i^{ext} + \sum_j \mathbf{F}_{ij}$$

$$\mathbf{T}_i = \sum_j \mathbf{r}_{ij}^{c} \times \mathbf{F}_{ij}$$

where $\mathbf{r}_{ij}^{c}$ is the vector connecting the centre of mass of the $i$-th particle and the point of contact with the $j$-th particle (Fig. 19). The contact between the two interacting spheres can be represented by the contact forces $\mathbf{F}_{ij}$ and $\mathbf{F}_{ji}$, which satisfy $\mathbf{F}_{ij} = -\mathbf{F}_{ji}$.
|Figure 19: Contact between two spherical particles.|
The contact force can be decomposed into normal and tangential components at the point of contact. The local reference frame at the point of contact is defined by the normal and tangential unit vectors, as shown in Figure 20.
|Figure 20: Decomposition of the contact force into normal and tangential components.|
The unit normal vector is defined along the line connecting the centres of the two particles, pointing towards the centre of the neighbour particle.
The indentation or inter-penetration $\delta$ is calculated from the distance between the centres of the particles and their radii (Figure 21):

$$\delta = r_i + r_j - \|\mathbf{c}_i - \mathbf{c}_j\|$$
|Figure 21: Indentation or inter-penetration .|
The vectors from the centres of the particles to the point of contact depend on the contact model. Therefore, the position of the point of contact is influenced by the contact model used.
The relative velocity between the two contacting particles at the point of contact is determined by

$$\mathbf{v}_{rel} = \left(\dot{\mathbf{u}}_j + \boldsymbol{\omega}_j \times \mathbf{r}_j^{c}\right) - \left(\dot{\mathbf{u}}_i + \boldsymbol{\omega}_i \times \mathbf{r}_i^{c}\right)$$

which can be decomposed in the local reference frame as

$$\mathbf{v}_{rel} = v_n \mathbf{n} + \mathbf{v}_t$$

The tangential unit vector can be obtained from the tangential component of the relative velocity, $\mathbf{t} = \mathbf{v}_t / \|\mathbf{v}_t\|$.
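The contact kinematics can be sketched as below. This is an illustrative implementation with assumed names and sign conventions (relative velocity taken as the neighbour's contact-point velocity minus the target's, unit normal pointing towards the neighbour):

```python
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def contact_kinematics(ci, ri, vi, wi, cj, rj, vj, wj):
    """Relative velocity at the contact point (particle j minus particle i)
    and its split into a scalar normal part and a tangential vector."""
    n = [b - a for a, b in zip(ci, cj)]
    d = math.sqrt(sum(x * x for x in n))
    n = [x / d for x in n]                          # unit normal, towards j
    # Contact-point velocities include the rotational contribution w x r.
    vci = [v + c for v, c in zip(vi, cross(wi, [ ri * x for x in n]))]
    vcj = [v + c for v, c in zip(vj, cross(wj, [-rj * x for x in n]))]
    vrel = [b - a for a, b in zip(vci, vcj)]
    vn = sum(a * b for a, b in zip(vrel, n))        # negative when approaching
    vt = [a - vn * b for a, b in zip(vrel, n)]      # tangential component
    return vn, vt
```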
The normal and tangential forces are obtained from the contact constitutive model.
Contact between granular material particles is, in general, a complex and highly non-linear problem which, in the DEM framework, is simplified. For spherical particles, the constitutive contact models used depend on a few parameters, such as the relative velocity of the particles, the inter-penetration, the radii and the material properties. To take into account energy losses, some damping parameters need to be defined.
The main parameters used in standard constitutive models in the DEM are shown in Figure 22.
|Figure 22: Standard DEM contact interface [Onate2013].|
Those parameters are:
The first contact model, presented by Cundall and Strack , was the linear spring-dashpot model. It uses, for both the normal and tangential directions, a linear elastic stiffness device and a damper to introduce viscous energy dissipation. Within the commonly used contact constitutive models, this is the simplest one.
Other contact models consider a non-linear relationship between force and displacement. The most popular non-linear contact models come from the Hertz-Mindlin and Deresiewicz theory. Hertz  proposed the relationship between normal force and normal displacement. Mindlin and Deresiewicz  presented a general tangential force model where the force-displacement relationship depends on the whole loading history as well as on the instantaneous rate of change of the normal and tangential force or displacement.
Other contact models can be found in the literature, but the model used in this work is the Hertz-Mindlin model simplified by Thornton et al. , due to its balance between simplicity and accuracy .
Following the Hertzian theory, the normal contact force is decomposed into an elastic part $F_n^{el}$ and a damping contact force $F_n^{d}$:

$$F_n = F_n^{el} + F_n^{d}$$

The elastic part of the normal compressive contact force is calculated as

$$F_n^{el} = \frac{4}{3} E^{*} \sqrt{R^{*}}\, \delta^{3/2}$$

while the contact damping force is assumed to be of viscous type and given by

$$F_n^{d} = c_n \dot{\delta}$$

The damping coefficient $c_n$ is taken as a fraction $\xi$ of the critical damping $c_n^{crit}$, which depends on the equivalent mass of the particles $m^{*}$ and on the contact stiffness $k_n$:

$$c_n = \xi\, c_n^{crit}, \qquad c_n^{crit} = 2\sqrt{m^{*} k_n}$$
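The normal force evaluation can be sketched as follows, assuming the equivalent contact properties ($E^{*}$, $R^{*}$, $m^{*}$) are already available; names are illustrative, not taken from DEMPack:

```python
import math

def hertz_normal_force(delta, delta_dot, E_eq, R_eq, m_eq, xi):
    """Hertzian elastic normal force plus a viscous term scaled by a
    fraction xi of the critical damping. Inputs are the equivalent
    (two-particle) contact properties; delta is the indentation."""
    if delta <= 0.0:
        return 0.0                                   # no contact, no force
    f_el = (4.0 / 3.0) * E_eq * math.sqrt(R_eq) * delta**1.5
    k_n = 2.0 * E_eq * math.sqrt(R_eq * delta)       # tangent normal stiffness
    c_n = xi * 2.0 * math.sqrt(m_eq * k_n)           # fraction of critical damping
    return f_el + c_n * delta_dot
```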
The fraction $\xi$ is related to the coefficient of restitution $e$, a material property that defines the kinetic energy that remains after a collision between two objects. It can also be defined as the ratio of the speeds after and before an impact, $e = v'/v$. Taking this into account, $e$ can be expressed as a function of $\xi$ . However, as the coefficient of restitution is the material parameter while $\xi$ is the parameter to be used in the DEM contact force evaluation, it is more convenient to express $\xi$ as a function of $e$. The solution can be accurately fitted by a curve whose parameters depend on the contact model.
Thornton et al.  defined the appropriate values of those parameters for the linear and Hertzian contact models respectively.
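The fitted curve of Thornton et al. is not reproduced here. As an illustrative stand-in with a similar shape, the closed-form relation of the linear spring-dashpot model can be inverted exactly, which gives a feel for the $e \rightarrow \xi$ mapping:

```python
import math

def damping_fraction(e):
    """Damping fraction xi from the restitution coefficient e, using the
    linear spring-dashpot closed form. Illustrative stand-in only; the
    Hertzian case requires the fitted curve of Thornton et al."""
    if e >= 1.0:
        return 0.0                     # perfectly elastic: no damping
    ln_e = math.log(e)
    return -ln_e / math.sqrt(math.pi**2 + ln_e**2)
```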
The elastic tangential force at the current time step depends on the elastic tangential force at the previous time step. It should be noted that the tangential stiffness will be different depending on whether the normal force is increasing (loading phase) or decreasing (unloading phase). For the loading phase, the relationship between the elastic tangential force and the relative tangential displacement is defined through a regularised Coulomb model. On the other hand, in the unloading phase, the elastic tangential force must be reduced due to the reduction in the contact area, which implies that the previous tangential force can no longer be supported.
The increment of tangential displacement is computed from the integration of the relative displacement of the point of contact in the local frame, due to the particles' translation and rotation.
For the calculation of the tangential contact force, not only the elastic contact force but also the damping contact force should be considered. The tangential damping coefficient is, as in the normal direction, a fraction of the critical damping, $c_t = \xi\, 2\sqrt{m^{*} k_t}$.
After calculating the elastic and damping tangential forces at the current time step, it has to be ensured that the total tangential force does not exceed the Coulomb friction limit:

$$\|\mathbf{F}_t\| \leq \mu \, |F_n|$$
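Enforcing the Coulomb limit amounts to scaling the tangential force vector back onto the friction cone when it is exceeded. A minimal sketch (illustrative names):

```python
import math

def clamp_tangential(ft, fn, mu):
    """Enforce the Coulomb limit: scale the tangential force vector ft
    back when its magnitude exceeds mu * |fn| (sliding contact)."""
    norm = math.sqrt(sum(x * x for x in ft))
    limit = mu * abs(fn)
    if norm <= limit:
        return list(ft)                # sticking: force kept as computed
    s = limit / norm                   # sliding: rescale onto the cone
    return [x * s for x in ft]
```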
Up to this point, the forces have been expressed as functions of the contact parameters. However, when the numerical model is applied to a real case, those contact parameters should be computed from the material properties and the particle geometry. For the calculation of the contact stiffness, the Hertz theory considers the curvature of the contacting surfaces . When two particles with radii $r_i$ and $r_j$ are in contact, the contact area is a circle of radius $a$, as shown in Figure 23.
|Figure 23: Hertz contact model scheme.|
This leads to the definition of the normal and tangential contact stiffnesses $k_n$ and $k_t$:

$$k_n = 2 E^{*} \sqrt{R^{*} \delta}, \qquad k_t = 8 G^{*} \sqrt{R^{*} \delta}$$
Meanwhile, the damping parameters are defined as:

$$c_n = \xi\, 2\sqrt{m^{*} k_n}, \qquad c_t = \xi\, 2\sqrt{m^{*} k_t}$$
Considering the contact between the two spheres $i$ and $j$ from Figure 23, if the materials they are composed of have different Young moduli ($E_i$, $E_j$), Poisson's ratios ($\nu_i$, $\nu_j$), radii ($r_i$, $r_j$) and damping coefficients, the equivalent contact properties are computed as:

$$\frac{1}{E^{*}} = \frac{1-\nu_i^2}{E_i} + \frac{1-\nu_j^2}{E_j}, \qquad \frac{1}{G^{*}} = \frac{2-\nu_i}{G_i} + \frac{2-\nu_j}{G_j}$$

$$\frac{1}{R^{*}} = \frac{1}{r_i} + \frac{1}{r_j}, \qquad \frac{1}{m^{*}} = \frac{1}{m_i} + \frac{1}{m_j}$$
The value of the shear modulus $G$ for each particle can be obtained from its Young modulus $E$ and Poisson's ratio $\nu$:

$$G = \frac{E}{2(1+\nu)}$$
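The computation of the equivalent contact properties can be sketched as follows (illustrative names; the equivalent shear modulus uses the standard Hertz-Mindlin combination):

```python
def equivalent_properties(E1, nu1, r1, E2, nu2, r2):
    """Equivalent Young modulus, shear modulus and radius for a
    two-sphere Hertzian contact."""
    E_eq = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    # Per-particle shear modulus from E and nu.
    G1 = E1 / (2 * (1 + nu1))
    G2 = E2 / (2 * (1 + nu2))
    G_eq = 1.0 / ((2 - nu1) / G1 + (2 - nu2) / G2)
    R_eq = 1.0 / (1 / r1 + 1 / r2)
    return E_eq, G_eq, R_eq
```

For two identical particles, the expressions collapse to $E^{*} = E / (2(1-\nu^2))$ and $R^{*} = r/2$, which is a quick sanity check.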
As introduced in Section 1.1, equations 3.2 and 3.3 are integrated in time using an explicit scheme, where the information at the previous step suffices to predict the solution at the next step. Explicit integration schemes require the time step to be below a certain limit in order to achieve computational stability. The critical time step depends on the material properties, the particle geometry and the constitutive model .
There exist several integration schemes of different order. In this work, low order (first and second) integration schemes are used, since with such small computational time steps the use of higher order methods would not increase the accuracy significantly.
The explicit methods to be described below come from the application of the Taylor series approximation of the equations that describe the problem (second order differential equations of motion 3.2 and 3.3).
It should be mentioned that the time integration of equation 3.3 described here is only applicable to spherical particles, since the inertia tensor of a sphere is independent of its orientation. Furthermore, because of the symmetry of the sphere, the moment of inertia taken about any diameter is

$$I = \frac{2}{5} m r^2$$

and the angular acceleration can be directly computed as

$$\dot{\boldsymbol{\omega}} = \frac{\mathbf{T}}{I}$$
As the orientation of a sphere is not a key parameter, the use of the following explicit integration schemes for calculating rotation increments is acceptable. However, in Section 5.3.1 the weaknesses of these schemes will be highlighted.
The Forward Euler scheme is a first order approximation of the displacements and velocities.
Regarding the rotational motion, the angular velocity is also approximated using a first order approach, as is the vector of incremental rotation.
The Symplectic Euler scheme is a modification of the previous method which uses a backward difference approximation for the derivative of the position and for the increment of rotation.
The Taylor scheme applies a first order integrator for the linear and angular velocities and a second order integrator for the position and rotation increment.
This algorithm is also known as the Central Differences algorithm . In this approach, the approximate velocity at the half step is computed first, in order to calculate the displacement along the time step.
The velocity and the displacement are then used to estimate the force in the following time step.
Finally the velocity is updated.
For the rotational motion, the procedure is the same as for the translational motion.
The torque in the following time step is estimated.
And the angular velocity is updated.
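The translational steps above can be sketched as follows, assuming a force that depends only on position (names are illustrative):

```python
def velocity_verlet_step(x, v, force, m, dt):
    """One Velocity Verlet step: half-step velocity, full-step position,
    force re-evaluation, then the second velocity half-step."""
    a = [f / m for f in force(x)]
    v_half = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
    x_new = [xi + dt * vi for xi, vi in zip(x, v_half)]
    a_new = [f / m for f in force(x_new)]            # force at the new position
    v_new = [vi + 0.5 * dt * ai for vi, ai in zip(v_half, a_new)]
    return x_new, v_new
```

For a constant force (e.g. gravity) the scheme reproduces the analytical trajectory exactly, which is a convenient sanity check.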
Santasusana  developed different benchmark cases in order to evaluate which scheme is better for the translational motion. From this study it was concluded that Symplectic Euler and Velocity Verlet approximate the indentation accurately; however, regarding the velocity, the Velocity Verlet scheme is superior to the other schemes. From the tests concerning numerical stability, the results showed that the Velocity Verlet scheme is the only one with an acceptable performance at the limit of the critical time step, as it is a second order scheme.
Concerning the computational cost, it was found that the difference in computational time between the quickest (Forward Euler) and the slowest (Velocity Verlet) scheme is lower than 3%.
From the above it was concluded that the best performance is obtained with the Velocity Verlet scheme, which is the scheme chosen for the calculations in this work.
Regarding the integration of the rotational motion, these schemes can be assumed accurate enough for calculating the rotation increments of spherical particles in most cases. However, if the orientation of the particle needs to be tracked, the use of specific schemes is recommended. Those numerical schemes are presented in Section 5.3.1.
Up to this point, the basic features of the DEM with spherical particles were explained. However, a greater adaptability of the numerical code is achieved if the contact between planar boundaries and DEs can be computed. In the case of railway ballast, the boundaries represent the soil, the sleeper or, when reproducing laboratory tests, the walls of the corresponding device.
Regarding the characterisation of railway ballast, most of the deformation will occur in the layer of granular material, while only a small deformation will occur in the boundaries interacting with it. Thus, a new methodology for the treatment of the contact interaction between spherical DEs and rigid boundaries is developed. The parts of the rigid body are defined from surfaces meshed with a Finite Element-like discretisation. The detection and calculation of contacts between the DEs and the discretised boundaries is not straightforward and, during the last years, it has been addressed by different approaches.
The algorithm presented in this chapter considers the contact of DEs with the geometric primitives of a Finite Element (FE) mesh (facets, edges or vertices). To do so, the original hierarchical method presented by Horner in 2001  is extended with a new insight, leading to a robust, fast and accurate 3D contact algorithm which is fully parallelisable. The implementation of the method is designed to deal with triangles and quadrilaterals. If the boundaries are discretised with other types of geometries, the method can be easily extended to higher order planar convex polyhedra.
In the following sections, a review of the state of the art of the existing methods for modelling the contact between DEs and rigid boundaries is presented. Next, the proposed strategy for the DE/FE contact search is described. Finally, some validation analysis together with examples of performance in critical situations (where most of the literature methods would fail) are shown.
Most of the ideas presented in this chapter are based on the published article The double hierarchy method. A parallel 3D contact method for the interaction of spherical particles with rigid FE boundaries using the DEM.
Particle-solid interaction problems have been resolved by different approaches. Among the simplest ones is the glued-sphere approach , which approximates any complex geometry (i.e. a rigid body or boundary surface) by a collection of spherical particles, retaining the simplicity of particle-to-particle contact interaction. This approach, however, is geometrically inaccurate and computationally intensive due to the introduction of an excessive number of particles. Another simple approach (used in some numerical codes, e.g. ABAQUS) is to define the boundaries as analytical surfaces. This approach is computationally inexpensive, but it can only be applied in certain specific scenarios, where the use of infinite surfaces does not disturb the calculation. A more complex approach, which combines accuracy and versatility, is to resolve the contact of particles (typically spheres) with a finite element boundary mesh. These methods take into account the possibility of contact with the primitives of the FE mesh surface, i.e. facet, edge or vertex contact.
Horner et al.  and Kremmer et al.  developed the first hierarchical resolution algorithms for contact problems between spherical particles and triangular elements, while Zang et al.  proposed similar approaches accounting for quadrilateral facets. Dang et al.  upgraded the method introducing a numerical correction to improve smoothness and stability. Su et al.  developed a complex algorithm involving polygonal facets under the name of RIGID method which includes an elimination procedure to resolve the contact in different non-smooth contact situations. This approach, however, does not consider the cases when a spherical particle might be in contact with the entities of different surfaces at the same time (multiple contacts) leading to an inaccurate contact interaction. The upgraded RIGID-II method presented later by Su et al.  and also the method proposed by Hu et al.  account for the multiple contact situations, but they have a complex elimination procedure with many different contact scenarios to distinguish, which is difficult to code in practice. Recently, Chen et al.  presented a very simple and accurate algorithm which covers many situations. Their elimination procedure, however, requires a special database which can not be computed in parallel.
Therefore, the proposed Double Hierarchy method consists in a simple contact algorithm based on the FE boundary approach. The method is specially designed to resolve efficiently the intersection of spheres with triangles and planar quadrilaterals, but it can also work with any other higher order planar convex polyhedra. A two layer hierarchy is applied, upgrading the classical hierarchy method presented by Horner : namely, a hierarchy on contact type followed by a hierarchy on distance. The first one classifies the type of contact (facet, edge or vertex) for every contacting neighbour in a hierarchical way, while the distance-based hierarchy determines which of the contacts found are valid or relevant and which ones have to be removed.
Although code parallelisation lies outside the scope of this thesis, it is worth mentioning that the method is designed in a way that allows the code to work efficiently in parallel computations, which is crucial for the practical purposes of a DEM code. This is a clear advantage over most of the above-mentioned publications; the exceptions are Nakashima , whose method is presumably parallelisable, and Zang  and Su , who remark the importance of the future parallelisation of their algorithms.
Summarising, the contact search framework presented is designed to satisfy the following requirements:
|Wide size rate DEs/FEs||-||-||⨉||✓||✓||✓||⨉||✓|
|Contact elem. typologies||⨉||-||✓||✓||✓||⨉||⨉||✓|
|Boundary shape variety||✓||⨉||✓||✓||✓||✓||✓||✓|
|Contact with deform. FEs||⨉||⨉||✓||✓||✓||✓||✓||✓|
Symbol (✓) implies that the method satisfies the property. On the other hand, symbol (⨉) means that the method does not satisfy the property. Symbol (-) denotes that the property does not apply to that method and (✓) means that, although the method satisfies the property, there are some limitations.
Table 5 summarises the strengths and drawbacks of the reviewed contact detection methods. The methods based on storing all the potential contacts and later removing the invalid ones (RIGID-II , Hu et al. , Chen et al.  and the present one) are the most accurate. They treat the cases with large indentations (relative to the size of the FE) and give a solution for the contact continuity in non-smooth regions of the boundary. These methods have, however, some limitations due to the fact that the real deformed geometry of the sphere is not represented in the DEM; the indentation emulates, instead, the local deformation near the contact region. Due to this fact, in concave transitions of the boundary, an error in the contact detection is common to all these methods, including the one presented here. In Section 4.4 this error is analysed within the framework of the Double Hierarchy method.
As already mentioned, within the Double Hierarchy method the boundaries are discretised with a FE-like mesh. However, this discretisation is only for contact detection purposes and each FE entity is, indeed, a rigid element. Therefore, the boundaries are not calculated by means of a FE procedure.
|Figure 24: DE/FE standard contact interface.|
The adaptation of constitutive contact laws for DE/DE contact computation (section 3.2.3) to the case of DE/FE contact is straightforward. It is only necessary to take into account that:
This leads to the following equivalent contact properties:
The application of constitutive contact laws, such as Hertz-Mindlin , requires that the contact surfaces be smooth and present a unique normal at each point. In DE/FE contact, the original geometry usually presents regions where this requirement is not fulfilled. Moreover, even smooth surfaces lose this feature when they are discretised by means of FEs. In these situations, a special treatment of the non-smooth regions should be applied, under certain conditions that ensure reasonable results. Those conditions were analysed in the work by Wellmann :
The use of the present contact algorithm helps the selected contact model to ensure these conditions. Additionally, it should be noted that the method is not mesh dependent, within the limitations detailed in Section 4.4.
The DE/FE contact search differs from the DE/DE one in that the search involves two different groups of geometries whose intersection is generally more complex and computationally more expensive than the trivial sphere-sphere case. The combination of spheres for the DEs and triangles or quadrilaterals for the FEs is advantageous, since this particular check can be performed efficiently. As in the DE/DE contact detection, the contact search is split into a GNS and a LCR.
In other works developed in parallel to this thesis  , the LCR stage is split into a Fast Intersection Test and the Double Hierarchy method. However, the aim of this work is to improve the Double Hierarchy method and prove that it can be used correctly and efficiently as a full LCR algorithm. The contact search scheme is shown in Figure 25.
|Figure 25: Neighbour search scheme.|
For the GNS, a basic cell-based algorithm  is chosen, parallelised with OpenMP and adapted for the DE/FE search. The search is performed over the FE domain (), taking advantage of the fact that the spatial distribution of the FEs is usually more regular than that of the DEs and, in some cases, fixed. Moreover, the search is carried out considering only the FEs belonging to the intersection of the bounding boxes of the DE () and FE () domains. Figure 26 shows how this intersection evolves as the calculation proceeds. On the other hand, only the DEs inside the intersection domain () look for their FE neighbours. This significantly reduces the contact pairs to be checked and, therefore, improves the global search performance.
|Figure 26: Evolution of the bounding box of the intersection .|
In the GNS, every FE and DE has an associated Bounding Box (, as shown in Figures 27a and 27b) that is used to tag the position of the elements on the Search Bins and rapidly check for potential neighbours. This is done using a hash table structure as depicted in Figure 27c which relates each cell to the bounding box that falls into it. Rectangular (in 2D) or hexahedral (in 3D) bounding boxes, encompassing both types of elements, are chosen here. The process continues with the LCR (Figure 27d).
|(a) Bounding box of the FEs .||(b) Bounding box of the DEs .|
|(c) Intersection cells.||(d) Local Contact Resolution.|
|Figure 27: Sketch showing the search algorithm at the basic contact level.|
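The cell-based global search described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (function names, data layout and the cell size parameter are assumptions, not the DEMPack implementation): FE bounding boxes are hashed into bins keyed by integer cell coordinates, and each DE sphere then queries only the cells overlapped by its own bounding box.

```python
from collections import defaultdict
from itertools import product

def aabb_cells(lo, hi, cell):
    """Integer cell coordinates overlapped by an axis-aligned bounding box."""
    return product(*[range(int(l // cell), int(h // cell) + 1)
                     for l, h in zip(lo, hi)])

def global_search(fe_boxes, de_spheres, cell):
    """Cell-based GNS sketch: hash FE bounding boxes into bins, then
    collect, for every DE sphere, the FEs whose boxes share a bin with
    the sphere's bounding box. Returns {de_id: set(fe_ids)}."""
    bins = defaultdict(set)                 # hash table: cell -> FE ids
    for fe_id, (lo, hi) in fe_boxes.items():
        for c in aabb_cells(lo, hi, cell):
            bins[c].add(fe_id)
    potential = {}
    for de_id, (centre, r) in de_spheres.items():
        lo = [x - r for x in centre]
        hi = [x + r for x in centre]
        found = set()
        for c in aabb_cells(lo, hi, cell):
            found |= bins.get(c, set())
        potential[de_id] = found
    return potential

# Usage: one flat triangle box near the origin and a sphere resting on it.
fes = {0: ([0.0, 0.0, 0.0], [1.0, 1.0, 0.1])}
des = {0: ([0.5, 0.5, 0.3], 0.25)}
print(global_search(fes, des, cell=0.5))   # {0: {0}}
```

The pairs returned here are only *potential* neighbours; the expensive geometric checks are deferred to the LCR stage, which is the point of the two-level search.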
As already mentioned, the LCR can be split into two stages, the first being the Fast Intersection Test, which determines which potential FE neighbours are in actual contact with each particle. A detailed description of this procedure can be found in  and .
In this work, the main effort was devoted to improving the second stage, the Double Hierarchy method, as a full LCR check in itself.
The first ideas about the Double Hierarchy method were presented in Irazábal's  work. Since then, the development has focused on improving the efficiency of the characterisation of contacts.
The algorithm is split in two different stages:
The basis of this procedure is that each primitive has hierarchy over its sub-entities, i.e., a facet of a -sided polygon has hierarchy over the edges that compose it. In turn, each edge has hierarchy over its two vertices . Figure 28 outlines the Contact Type Hierarchy for a triangle.
|Figure 28: Contact Type Hierarchy for a triangle: the facet has higher hierarchy than its edges and vertices, while the edges have higher hierarchy than their vertices.|
The algorithm is organised as a sequence of three entity-checking levels. If a particle is in contact with the facet of a FE, the contact search over its edges and vertices, which are at a lower hierarchy level, is discarded (See Figure 29). Otherwise, if contact with the FE facet does not exist, the contact check should continue over the sub-entities.
|Figure 29: Contact with facet. Edges and vertices are discarded from contact check.|
Similarly, at the edge level, any contact with an edge cancels out further contact checks for the two vertices belonging to that edge. As shown in Figure 30, when contact with the edge is found, the check at the lower level for the vertices associated to it, and , is discarded. However, the contact check with the following edges cannot be discarded, since they are at the same hierarchy level in terms of Contact Type.
|Figure 30: Contact with edge. Vertices belonging to that edge are discarded from contact check.|
Figure 31 illustrates why the edge has hierarchy over its two vertices but not over the non-contiguous one .
|Figure 31: Contact with edge and vertex. When contact exists with edge it can also exist with vertex .|
For any of the entities, if the contact is detected and it is valid, the following characteristics are stored:
The procedure to determine each contact type (facet, edge, vertex or no contact), the computation of the position of and the calculation of the FE nodal weights are explained in the following paragraphs.
To know if the DE particle is in contact with a facet, the first check is to determine whether the particle intersects the plane formed by the -th planar finite element . This is represented in Figure 32.
|Figure 32: Intersection of a DE particle with a plane formed by a planar FE.|
The outward-pointing normal of the plane can be calculated with the cross product of any pair of edges taken counter-clockwise. This can be written in the following form, using the permutation tensor on two edges formed, for example, by the three consecutive vertices , , :
which has to be normalised to obtain the normal to the plane
In the case of a zero-thickness wall, which can be contacted from both sides of the FE, the sense of the normal is set such that it points towards the particle centre. Once the normal is defined, the distance of the DE centre to the plane can be determined taking any known point of the plane, namely a vertex , as
and the projection of the centre onto the plane can be calculated as
The distance should be compared to the radius of the DE. If and only if , the contact between the sphere and the FE is possible. If this condition is fulfilled, the next step is to evaluate whether the projection lies inside or outside the FE .
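The sphere-plane check just described can be illustrated with a short sketch (the function name and data layout are assumptions for illustration): the unnormalised normal is the cross product of two edges, the signed distance follows from any vertex of the plane, and the projection of the centre is obtained by subtracting the normal component.

```python
import math

def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def plane_test(v0, v1, v2, centre, radius):
    """Distance from the sphere centre to the plane of triangle (v0,v1,v2)
    and the projection of the centre onto that plane. Contact with the
    plane is possible only if the distance does not exceed the radius."""
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    n = cross(e1, e2)                       # normal from two edges (CCW)
    n = [c / math.sqrt(sum(c*c for c in n)) for c in n]
    d = sum(nc * (cc - vc) for nc, cc, vc in zip(n, centre, v0))
    proj = [cc - d * nc for cc, nc in zip(centre, n)]
    return d, proj, abs(d) <= radius

# Sphere of radius 0.5 hovering 0.3 above the x-y plane: contact is possible.
d, proj, possible = plane_test((0,0,0), (1,0,0), (0,1,0), (0.2, 0.2, 0.3), 0.5)
print(d, proj, possible)   # 0.3 [0.2, 0.2, 0.0] True
```

If the distance check fails, the FE is discarded immediately; otherwise the projection is passed on to the inside-outside check described next in the text.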
|Figure 33: Inside-Outside check of the projection point edge by edge.|
Considering the edge formed by the vertices and ()
the cross product sign , shown in Figure 33, is computed as
If the product is positive, the projection point turns out to be inside the triangle with respect to that edge. The loop proceeds with the next edges. If the same result is found for every edge, the projection lies inside the FE facet and contact with the facet is assured. Figure 34 shows two examples where the projection is inside and outside the FE facet.
|(a) inside the facet.||(b) outside the facet.|
|Figure 34: Example of projection inside and outside the FE facet for a triangle.|
The values of the cross product sign , obtained from equation 4.7 for every edge , are used to obtain the weights of the shape function at the point of contact. The areas needed for the calculation are simply one half of the cross product sign: . The weights of the nodal shape functions on the point of contact are then calculated. For a triangle:
|Figure 35: Triangular areas for the calculation of shape function values in a planar convex quadrilateral.|
The general formula for calculating the nodal weights for -side polygons can be found in Santasusana et al. .
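For a triangle, the inside-outside test and the nodal weights come from the same cross products, since each signed area divided by the total area is the barycentric coordinate of the opposite node. The sketch below (hypothetical names, 2-D coordinates in the plane of the facet, counter-clockwise node ordering assumed) shows both at once:

```python
def triangle_weights_2d(v, p):
    """Inside-outside test and nodal shape-function weights for a point p
    lying in the plane of triangle v = [v0, v1, v2] (2-D coordinates).
    Each edge opposite node i yields a signed area with p; p is inside
    when all signs are non-negative, and the normalised areas are the
    barycentric weights of the nodes."""
    areas = []
    for i in range(3):
        a, b = v[(i + 1) % 3], v[(i + 2) % 3]   # edge opposite node i
        c = (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
        areas.append(0.5 * c)                   # half the cross product
    total = sum(areas)
    inside = all(a >= 0 for a in areas)
    return inside, [a / total for a in areas]

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(triangle_weights_2d(tri, (0.25, 0.25)))  # (True, [0.5, 0.25, 0.25])
print(triangle_weights_2d(tri, (1.0, 1.0)))    # point outside: a sign is negative
```

Note that the weights sum to one by construction, which is what guarantees that the interpolated contact force is transferred without loss.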
If the projection (equation 4.5) lies inside the facet, it becomes the point of contact . Due to the highest hierarchy level of the facet, the Contact Type Hierarchy finishes here for this FE. The Distance Hierarchy is now called and all the necessary contact characteristics are saved.
Checking contact with edges is needed when lies outside the facet. Here, to improve the efficiency of the algorithm, it is considered that edge contact cannot occur with the edges for which was found to be inside. Therefore, it is advisable to test the edges starting from the vertex for which is outside, skipping the previous ones (note that the edge check is the most expensive one). This approach was also used by Chen et al. .
First, the shortest distance between the edge and the particle centre should be calculated and compared to the radius .
The distance is calculated by finding the point of contact , as
and is the distance resulting from the projection of the vector connecting the centre and the vertex onto the edge :
|Figure 36: Intersection of a DE particle with an edge.|
If , the contact with this edge is not possible and the check starts again with the next edge . Otherwise, if , position is determined along the edge using the parameter , defined as
The case of implies edge contact. However, the contact check with the following edges cannot be discarded, since they are at the same hierarchy level in terms of Contact Type. The Distance Hierarchy will determine the validity of the new contact and eliminate or substitute previous ones.
If or , for the current edge , the connecting vertices ( and ) have to be evaluated. A value of indicates that the check has to be done with ; on the other hand, for the vertex to be tested is .
|Figure 37: Weights for an edge contact in a triangle.|
Equation 4.17 shows the values at the nodes connected to the edge . The rest of the nodes have null values for their shape functions.
For the vertex under consideration the squared distance to the DE centre should be calculated:
If , contact exists, the point of contact being the vertex itself with a nodal weight equal to one. Otherwise, the test moves on to the next edge and its subsequent vertices.
As usual, the Distance Hierarchy is called after the contact is detected. If the contact is valid, its characteristics are stored.
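The edge and vertex levels described above can be compressed into one closest-point sketch. The snippet below is an illustration only (it folds the two hierarchy levels into a single function, and all names are assumptions): the projection parameter along the edge decides whether the edge itself or one of its vertices is the candidate entity.

```python
import math

def edge_check(a, b, centre, radius):
    """Closest-point test between a sphere (centre, radius) and the
    segment a-b. Returns (contact_type, point_of_contact, eta), where
    eta parametrises the projection along the edge; eta outside [0, 1]
    redirects the check to the corresponding vertex level."""
    ab = [q - p for p, q in zip(a, b)]
    ac = [q - p for p, q in zip(a, centre)]
    eta = sum(p*q for p, q in zip(ab, ac)) / sum(p*p for p in ab)
    if eta < 0.0 or eta > 1.0:
        vertex = a if eta < 0.0 else b   # candidate vertex for the lower level
        dist2 = sum((c - v)**2 for c, v in zip(centre, vertex))
        return ('vertex' if dist2 <= radius**2 else 'none', vertex, eta)
    poc = [p + eta*q for p, q in zip(a, ab)]
    dist = math.dist(centre, poc)
    return ('edge' if dist <= radius else 'none', poc, eta)

# Sphere above the middle of the edge: edge contact with eta = 0.5.
print(edge_check((0,0,0), (2,0,0), (1.0, 0.3, 0.0), 0.5))
# Sphere beyond vertex b: the check moves down to the vertex level.
print(edge_check((0,0,0), (2,0,0), (2.3, 0.2, 0.0), 0.5))
```

Note that, as in the text, a valid edge contact does not cancel the check of the remaining edges of the same FE; only the Distance Hierarchy decides among contacts found at the same level.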
The scheme of Table 6 presents the Contact Type Hierarchy algorithm. For every DE the first loop is over the potential neighbours found during the Global Search. The check can be performed in parallel for every particle in the model.
|loop over every FE potential neighbour .|
|Perform Facet check.|
|if Contact: Go to Distance Hierarchy (Table 7) and Stop!|
|else: Go to .|
|loop over every edge .|
|Perform the Edge Check.|
|if Contact Go to Distance Hierarchy (Table 7).|
|else if ( and ) Go to (3) with .|
|else if ( and ) Go to (3) with .|
|Continue with the next edge.|
|Perform Vertex check.|
|if Contact Go to Distance Hierarchy (Table 7).|
|Go To Edge level and check next edge.|
A spherical particle can be in contact with many different FE entities. These contacts are a result of the penetrations introduced by the penalty method. In some cases, those contacts give redundant or invalid information and should be eliminated. This is the scenario shown in Figure 38, where contact with elements , and is detected.
In a collision between a sphere and a plane, the force applied by the plane surface to the sphere must have the same direction as the normal of the plane, and its magnitude must depend only on the penetration, independently of the position of the point of contact on the plane. Therefore, the contact forces coming from the edges of elements and of Figure 38 should not be taken into account. This is solved by the Distance Hierarchy, an elimination procedure that takes place every time a new contact entity is found by the Contact Type Hierarchy.
|Figure 38: Contact between a DE and a FE mesh whose elements are smaller than the indentation.|
The procedure basically compares the contact vectors against their mutual projections. The new contact vector is projected onto each previously found contact vector and vice versa. The following expressions are obtained:
The contact check is performed using the algorithm presented in Table 7:
|Given a new found contact by the Contact Type Hierarchy:|
|loop over every existing contact|
|Project on :|
|Project on :|
|if ( ): is an invalid contact.|
|Go to (2) () and break loop.|
|else if ( ): is an invalid contact.|
|Discard ! Continue loop.|
|Go to (2) ().|
|Valid contact ()|
|if ( ): is valid contact! Save contact details.|
|else ( ): is an invalid contact! Discard !.|
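The elimination rule of Table 7 can be sketched as follows. This is an illustrative reading of the procedure, not the DEMPack code: contact vectors are taken from the point of contact to the sphere centre, and a vector whose projection onto another contact reaches that contact's own length (within an assumed tolerance) is considered redundant and discarded.

```python
import math

def _proj(u, onto):
    """Length of the projection of u onto the direction of `onto`."""
    return abs(sum(a*b for a, b in zip(u, onto))) / math.sqrt(sum(b*b for b in onto))

def distance_hierarchy(contacts, new, tol=1e-6):
    """Elimination sketch: `contacts` are the contact vectors kept so far
    (point of contact -> sphere centre) and `new` is the candidate found
    by the Contact Type Hierarchy. Mutual projections decide which
    contacts carry redundant information."""
    kept = []
    for old in contacts:
        if _proj(new, old) >= math.sqrt(sum(c*c for c in old)) - tol:
            return contacts            # new contact is redundant: reject it
        if _proj(old, new) >= math.sqrt(sum(c*c for c in new)) - tol:
            continue                   # old contact is invalidated by new one
        kept.append(old)
    kept.append(new)
    return kept

# A facet contact along +z plus an edge of the same plane: edge is redundant.
print(distance_hierarchy([(0.0, 0.0, 0.3)], (0.2, 0.0, 0.3)))
# Two perpendicular contacts (a corner): both are kept.
print(distance_hierarchy([(0.0, 0.0, 0.3)], (0.3, 0.0, 0.0)))
```

Swapping the order in which the facet and edge contacts are found yields the same surviving vector, which illustrates the consistency property claimed for the method.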
Table 4.3.2 shows an example of how the elimination procedure is performed in two different scenarios. On the left side, all the found contact vectors are represented, together with a graphical interpretation of the projections. On the right side, only the final relevant contact vectors yielded by the Distance Hierarchy are shown.
In the first situation, no contact with the edges of elements and is taken into account, since their projections, and , over the contact vector with element have the same modulus as that contact vector itself.
In the second situation, the sphere has contact with the facet of element , the edge of element and the shared edge of elements and , which appears as two different contact vectors, and , given by the Contact Type Hierarchy stage. These vectors do not appear directly in the figures in Table 4.3.2, but they are calculated by and respectively. First, note that either the contact with or with will be arbitrarily discarded by the elimination procedure, since they are mathematically the same vector. Let us assume that is kept and is discarded. On the other hand, the projection of the contact vector over the contact vector discards contact with element . Finally, contacts with elements and do not discard each other, since their mutual projections have values of and (they form a 90 degree angle) and are therefore smaller than the contact vector lengths. Hence, both contacts are taken into account, as expected.
Contact elimination cases.
|Found Points of Contact and Vectors||Relevant Contact Vectors|
The main advantage of this method lies in its wide generality. It performs well in most of the traditionally conflictive situations where multi-contacts and FE transitions are present. It is consistent, so the order in which the neighbours are found and stored does not affect the final result. The tests carried out in the validation (Section 4.5) show that the force vector always has the appropriate direction.
Although the method was conceived for calculating contacts between DEs and rigid boundaries, it can be easily upgraded to be coupled with deformable FEs. The contact forces can be transferred to the finite element nodes through interpolation based on FE shape functions . For linear triangles, a simple linear interpolation is used as
where is the position of the point of contact and is the area of the triangle formed by the point of contact and the vertices opposite to vertex in the FE, as shown in Figure 39.
|Figure 39: Punctual force and its nodal interpolation to a three noded triangle by means of linear shape functions expressed using its area coordinates.|
However, this linear interpolation may yield inaccurate results if a detailed analysis of strains and stresses in the deformable FEs is conducted. In that case, more precise schemes should be used.
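The nodal transfer by area coordinates can be sketched directly (hypothetical names; 2-D coordinates in the plane of the triangle assumed): each node receives the contact force scaled by its shape function evaluated at the point of contact.

```python
def nodal_forces(tri, poc, force):
    """Distribute a point contact force to the three nodes of a linear
    triangle using area (barycentric) coordinates of the point of
    contact: F_i = N_i(poc) * F."""
    def area2(a, b, c):   # twice the signed area of triangle a-b-c
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    total = area2(*tri)
    weights = [area2(poc, tri[(i+1) % 3], tri[(i+2) % 3]) / total
               for i in range(3)]
    return [[w * f for f in force] for w in weights]

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
# Contact at the centroid: each node takes one third of the force.
print(nodal_forces(tri, (2/3, 2/3), [0.0, 0.0, -9.0]))   # each node ~ [0, 0, -3]
```

Because the weights sum to one, the nodal forces always add up to the original contact force, which is the conservation property the interpolation is meant to preserve.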
Figure 40 shows a scenario where the non-smooth contact detection introduces a calculation error. In Figure 40a the DE is in contact with an edge at time , the indentation is and the contact force is F. At time (Figure 40b), the DE is in contact with two surfaces whose normals are no longer the same. Since the time step size in explicit time integration algorithms is usually very small, it is reasonable to assume that the penetration vectors between the DE and surface 1 () and surface 2 () at time are similar to . Then, the resultant contact force is approximately: F = 2 F.
|(a) Time .||(b) Time .|
|Figure 40: Non-smoothness of the contact force with elastic elements.|
Notwithstanding the importance of this error, it should be mentioned that, for quasi-static calculations with small time steps, where the interest of the analysis lies in the behaviour of the DEs, the error introduced by the non-smoothness can be neglected .
Han et al. [82,83], Wellmann  and Santasusana  developed algorithms to overcome the non-smoothness of the contact between DEs and deformable FEs. The computational cost of these algorithms is higher, so it should be evaluated, for each case, whether using them is worthwhile.
A limitation of this method, which is common to the revised penalty-based contact algorithms, occurs when a DE contacts with a slightly non-convex surface.
The penalty method introduces an indentation which accounts for the local elastic deformation of the DE during a contact event and allows the imposition of the contact condition in a weak form. This indentation is not problematic when the contact is with convex surfaces; however, for non-smooth concave regions the basic assumptions are not met and contact detection errors may arise.
|(a) Error region.||(b) Contact with 2 planes.|
|Figure 41: Error emerging in concave transitions.|
In the example of Figure 41, a sphere moves horizontally along a plane until it reaches a transition with another plane (planes and form an acute angle ). In this situation, a region bounded by the common edge of both planes and the normal of the second plane can be defined. Whenever the sphere centre is in that region, a discontinuity in the forces occurs. The contact with plane is detected only when the centre has a normal projection onto that plane, forming a tangential contact. Figure 41b shows that, when the new contact is detected, some indentation already exists and, therefore, the new contact force introduces a discontinuity.
It was found that, for an indentation of of the radius of the DE, even a small change in the angle ( degrees) produces no error. However, for higher indentations the error increases, being larger for smaller changes of the angle . An error of is reached in the extreme case of a coplanar transition. Fortunately, this very frequent case is handled by the Distance Hierarchy, where a tolerance is used to detect coplanar cases. Note that the error depends only on the geometrical conditions and the indentation, not on the quality of the boundary FE mesh.
Santasusana et al.  studied this error exhaustively, presenting it as a function of the change of and the indentation for different constitutive models.
The tangential force is applied by means of an incremental scheme, which requires keeping track of the forces between the particle and each of its neighbours. The problem arises when a particle moves across two elements and the historical tangential force is reset to zero because the contact is determined with the new element. This is common to all the reviewed contact algorithms. Even though it could be solved with a more complex data structure, the error introduced is very low in practical situations.
The cases with larger error are the ones with sliding, where the tangential force is kept at its maximum (generally the Coulomb friction value) and is much higher than during a rolling event. In this situation, the error can be measured in terms of the missing work in a force-displacement diagram. After studying the error, Santasusana et al.  concluded that a large indentation, equal to of the characteristic size of the FE, yields an error of only about ; if the Hertz-Mindlin contact law is used, the error is very similar.
In this section, several examples are presented to test the performance of the algorithm in critical situations. The following tests do not correspond to practical situations; they serve instead to validate the contact calculation procedure. All benchmarks have been carried out using a Hertzian contact law and a Velocity Verlet integration scheme.
The first benchmark consists of three spheres with low stiffness, in order to achieve large indentations. Each sphere contacts a different boundary structure meshed with triangles. In every case the sphere falls from the same height (1 ) vertically and perpendicular to the contact entity, which is respectively a facet, an edge or a vertex. Figure 42 shows the layout of the three sphere and boundary cases, while the material properties and calculation parameters are detailed in Table 8.
|Figure 42: Benchmark 1 layout.|
|Material properties||Calculation parameters|
|Radius ()||Initial velocity (DE) ()|
|Density ()||Gravity ()|
|Friction coefficient DE/FE||Time step ()|
|Young modulus ()||Neighbour search frequency|
Since there is no damping applied, the energy should be conserved and the ball must return to the initial position after the rebound. The sphere is expected to follow a vertical trajectory with identical results for the three cases.
|(a) Force exerted by the FEs to the DE.||(b) Position of the centre of the DE.|
|Figure 43: Results obtained in the first benchmark test.|
The contact indentation in the three cases is greater than the of the sphere radius, leading to multiple contacts with all kinds of entities. The graph in Figure 43a shows that the force is applied only in the vertical direction (Y direction). From this, it can be concluded that the contact elimination procedure performs correctly. The results are exactly the same in the three different scenarios (facet, edge and vertex contact). It also verifies that there is no energy gain or dissipation, since the maximum rebound height is always the same, as can be observed in Figure 43b. This is a good test to verify that the method works properly for normal contacts of all three types: with a facet, with an edge and with a vertex, independently of the mesh and the indentation achieved (always lower than the radius).
It is essential to ensure the continuity of the contact force in the non-smooth contact regions and FEs transitions. In the following example the continuity of the normal force is presented.
A DE is set to move along the boundary so that its contact transfers from the surface of a triangular element (facet contact) to one of its edges or vertices. In particular, a frictionless and rotation-free sphere has an enforced trajectory path (as shown in Figure 44) so that the indentation is always constant ( either in contact with the facets and or with the edge ). The material properties and calculation parameters are the ones presented in Table 8.
|Figure 44: Scheme of benchmark 2.|
If continuity is met, the force modulus must always remain the same. The direction of the contact force should evolve from vertical (normal to ) to horizontal (normal to ) with a smooth transition. This is achieved because the algorithm gives higher hierarchy to the edge and the vector is calculated joining the point of contact and the centre of the sphere.
|(a) Contact .||(b) Contact .||(c) Contact .||(d) Contact .|
|Figure 45: Force applied by the surface and the edge to the sphere at different instants.|
The results show that no discontinuities arise when the contact evolves from facet contact to edge contact and vice versa, the contact force remaining constant throughout the calculation and equal to 76.063 N, as expected.
The continuity of the normal forces in a concave transition and of the tangential forces across different elements is not fully assured, as discussed in Section 4.4.2. However, the error is very small in practical situations.
The goal of this test is to check that the method correctly handles the case of a sphere contacting more than one element. The set-up of the example consists of three spheres falling onto a plane with three holes of different shapes, as shown in Figure 46. Material properties and calculation parameters are presented in Table 9. In this example damping is applied.
|Figure 46: Geometry of benchmark 3.|
|Material properties||Calculation parameters|
|Density ()||Initial velocity (DE) ()|
|Friction coefficient DE/FE||Gravity ()|
|Young modulus ()||Time step ()|
|Poisson ratio||Neighbour search frequency|
The graph in Figure 47 shows the velocity modulus of each of the DEs involved in the numerical test. It can be seen that the velocity of the spheres after 2.5 seconds of calculation is close to 0. All the spheres reach their final equilibrium position, contacting vertices and edges simultaneously.
|Figure 47: DEs velocity.|
The following example represents two balls sliding on a plane with friction and is used to evaluate whether the algorithm is independent of the mesh. The spheres are set in vertical equilibrium on the plane and a horizontal velocity is imposed. The spheres should start sliding while their angular velocity progressively increases up to a constant value, at which point the sliding event finishes and only rolling occurs thereafter. This is schematically depicted in Figure 48a.
|(a) Problem definition.||(b) Calculation set up.|
|Figure 48: Benchmark 4 layout.|
The analytical solution can be determined to validate the calculation, using the equilibrium equations together with kinematic compatibility conditions and the basic Coulomb friction law. The moment of inertia of a sphere is . In this case a high value of the Young modulus () was chosen in order to avoid large indentations, which allows assuming that the sphere radius is equal to the rotational radius (). The following is obtained for the combined sliding and rotation phase:
Equation 4.23 comes from integrating the angular acceleration for the case of zero initial angular velocity. The constant rolling event occurs when the tangential velocity matches the angular velocity times the radius :
Leading to a critical time equal to:
For time the equations of motion are:
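The quantities above follow from the classical rigid-body result for a sphere launched with velocity v0 and no spin on a frictional plane (I = 2/5 m r^2): the velocity decreases as v(t) = v0 - mu*g*t while the spin grows as omega(t) = 5*mu*g*t/(2*r), giving a critical time t_c = 2*v0/(7*mu*g) at which v = omega*r and pure rolling begins with v = 5*v0/7. A small sketch (function and parameter names are illustrative) evaluates these expressions:

```python
def sliding_rolling(v0, mu, g=9.81, r=1.0):
    """Classical solution for a sphere launched with velocity v0 and no
    spin on a frictional plane (I = 2/5 m r^2): linear deceleration
    a = -mu*g and angular acceleration alpha = 5*mu*g/(2*r) act until
    the rolling condition v = omega*r is met."""
    t_c = 2.0 * v0 / (7.0 * mu * g)        # end of the sliding phase
    v_c = v0 - mu * g * t_c                # = 5*v0/7, kept thereafter
    omega_c = 5.0 * mu * g * t_c / (2.0 * r)
    return t_c, v_c, omega_c

t_c, v_c, omega_c = sliding_rolling(v0=7.0, mu=0.5, g=9.81, r=1.0)
print(t_c, v_c, omega_c)   # v_c = 5 m/s and omega_c * r = v_c
```

These are the reference values against which the single-quadrilateral and 80-triangle meshes are compared in the benchmark.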
The set-up of the numerical test is shown in Figure 48b. Two cases are compared: in one, the sliding takes place on a plane discretised by a single quadrilateral element, while in the other the plane is discretised by 80 triangular elements. The spheres are given an initial velocity of in the direction and the computational time is one second. The other calculation and material parameters are detailed in Table 10.
|Material properties||Calculation parameters|
|Density ()||Initial velocity (DE) ()|
|Friction coefficient DE/FE||Gravity ()|
|Young modulus ()||Time step ()|
|Poisson ratio||Neighbour search frequency|
The numerical results and the analytical solution are plotted in Figure 49. This plot, together with the information presented in Table 11, shows that the difference between meshes turned out to be negligible. Table 11 presents the values of the displacement (), velocity (), angular velocity () and computational error at the end of the calculation ().
|(a) Triangular fine mesh.||(b) Quadrilateral mesh.|
|Figure 49: Numerical results of the displacement and velocity in direction with the angular velocity in direction compared against the theoretical solution for the two different meshes.|
|Meshes difference ()||-|
|Meshes difference ()||-|
|Meshes difference ()||-|
This example shows that the results on the DE are practically independent of the boundary mesh selected. Moreover, for the calculation performed, the numerical results agreed perfectly with the theoretical solution. This case does not show any noticeable discontinuity in the normal and tangential contact forces in the transition between boundary FEs.
Railway ballast is a layer of granular material composed of particles of different sizes with marked irregularities. Those irregularities greatly affect the material behaviour. With this in mind, it may be thought that the use of spherical particles for the representation of this material is too simplistic. However, the computational advantages of using spheres have to be taken into account.
From the point of view of micro-scale analysis, it is essential to represent the exact geometry of the particle. On the other hand, if the interest lies in the behaviour of the granular material as a whole, the geometry of each particle is not thought to be a determining factor. The key point is to find an approach that is accurate enough without being too computationally intensive.
In this chapter, different approaches for characterising the geometry of railway ballast, and other granular materials, are presented. After evaluating each of them in section 5.1, two approaches were chosen, which are spheres with rolling friction and sphere clusters. In sections 5.2 and 5.3 respectively, the way spheres with rolling friction and sphere clusters are implemented in Kratos is described.
DEM has proven to be a very useful tool to obtain complete qualitative information in calculations of groups of particles ; however, the computational cost of contact evaluation between DEs is high and limits the calculation capability. On this subject, it should be noted that particle shape greatly affects the computational cost of contact calculation, spherical DEs being the least computationally demanding particles. This limitation, together with the uncertainty about the real contact mechanics and particle properties , led to different particle shape simplifications [85,86].
In the literature, different algorithms can be found to implement rolling friction. Section 5.2 presents them and describes the new algorithm proposed in this work, which was previously published in the article Numerical modelling of granular materials with spherical discrete particles and the bounded rolling friction model. Application to railway ballast .
|Figure 50: Repose angle of a granular material represented by spheres with rolling friction.|
|Figure 51: Simplification of the geometry of six different ballast particles using sphere clusters.|
It should also be noted that this approach introduces geometric friction due to the undesired cavities between overlapped spheres. This issue will be further analysed in Section 6.3.
|Figure 52: Examples of superquadrics. Source: Chakraborty et al. .|
Although superquadrics are a promising option to represent granular materials with the DEM, the computational cost of contact detection is high. Boon et al.  reported that the computation of the contact between superquadrics is 48 times more expensive than for spheres.
It is worth mentioning that superquadrics are very flexible for representing a large number of different particle shapes; however, they cannot properly reproduce the numerous irregularities and concavities of ballast particles.
An extensive effort was made to use polyhedral particle shapes. Cundall et al. [96,97] developed a technique to detect contact forces between polyhedra, called the common plane method. It is a computationally expensive iterative method that replaces the contact between two polyhedra with two plane-polyhedron contacts. This method was further improved by the fast determination of the common plane . Elias  presented a new method for estimating the contact force between two polyhedra based on calculating the intersecting volume, and applied it to the calculation of railway ballast behaviour. Although the results obtained were promising, the simulations involved only 120 particles, due to computational time issues.
|Figure 53: Oedometric test calculation of a ballast sample carried out with polyhedral particles. Source: Elias .|
Aiming to improve contact detection and force evaluation, Alonso-Marroquín and Wang [99,100] developed the spheropolygons approach in 2D. It is based on sweeping a sphere around a polygon, which leads to an easier force evaluation, and a decrease in LCR computational time. Galindo-Torres and Pedroso  extended it to more complex interactions in 3D, resulting in the spheropolyhedra approach, which was used to predict granular materials behaviour .
|Figure 54: From polyhedron to spheropolyhedron. Source: Galindo-Torres and Pedroso .|
Ahmed et al.  presented a new algorithm called the potential particle shapes approach. It is based on representing the particles as adjustably rounded polyhedra. The limitation of this approach is that it is only able to represent convex particles.
|Figure 55: Ballast sample represented with potential particles. Source: Ahmed et al. .|
Table 12 presents the strengths and weaknesses of each geometrical approach. The most efficient contact calculation is achieved with spheres with rolling friction and with sphere clusters; however, compared to calculations with spherical particles, the computational time using sphere clusters increases proportionally to the number of spheres in the model. For superquadrics, polyhedra, spheropolyhedra and potential particles, the computational time strongly depends on the number of DEs and contacts, which makes them difficult to compare with spherical particles. In any case, it should be noted that the calculations found in published works [95,1,55,103] are limited to a few thousand particles. Regarding particle shape, sphere clusters are the only geometric type that can easily reproduce concave particles (calculations with concave polyhedra dramatically increase the computational cost). On the other hand, as sphere clusters are conglomerates of rigidly joined spheres, they do not ensure tangential contact continuity when the contact changes from one sphere to another belonging to the same cluster, similarly to the case exposed in Section 4.5.2 for planar boundaries. Even so, considering the irregular nature of ballast particles, the error introduced by this fact is acceptable.
| | Spheres with RF | Sphere Clusters | Superquadrics | Polyhedra | Spheropolyhedra | Potential Particles |
|---|---|---|---|---|---|---|
| Low computational cost | ✓ | ⨉ | ⨉ | ⨉ | ⨉ | ⨉ |
A tick (✓) means that the geometrical approach satisfies the property, while a cross (⨉) means that it does not. A qualified tick indicates that the approach satisfies the property, but with some limitations.
Considering all the above, the two types of particles chosen in this work are: spheres with rolling friction, due to their simplicity and lower computational requirements, and sphere clusters, because their calculation is based on an extension of the spherical particle algorithms, yielding an efficient contact calculation. Another important advantage of sphere clusters is their versatility in terms of particle shape: angularities and concavities can be naturally included, which is useful for representing railway ballast stones.
The low computational demand of spheres, compared to other geometrical shapes, makes them very attractive for DEM calculations. On the other hand, the special characteristics of spheres may be an obstacle to reproduce the behaviour of irregular granular materials, such as railway ballast.
The main drawback of using spheres is controlling their excessive rotation; to that end, the rolling friction approach is used, which consists of applying a virtual torque against particle rotation, the so-called rolling resistance torque. In this section, a new approach for the application of this virtual torque, called the Bounded Rolling Friction (BROF) model, is presented. Some validation tests are included in order to check its correctness and robustness.
Rolling friction calculation can be addressed by different formulations. Ai et al. presented the four most popular models:
Models B and D will not be further commented on due to their limitations.
Models A and C are the most commonly used rolling friction model types. In this work, model A has been improved to avoid the inconsistencies appearing in static situations. The main advantage of model A compared to model C is that it only needs one parameter to be defined.
In model type A, the rolling resistance torque on particle $i$ in contact with particle $j$ ($\mathbf{M}^r_{ij}$) is given by

$$\mathbf{M}^r_{ij} = -\eta_{ij}\,\left|\mathbf{F}^n_{ij}\right|\,\frac{\boldsymbol{\omega}_{ij}}{\left|\boldsymbol{\omega}_{ij}\right|} \qquad (5.1)$$

where $\eta_{ij}$ is the resistance parameter that defines the contact rolling friction, which depends on the size and material properties of the particles in contact, $\mathbf{F}^n_{ij}$ is the normal contact force and $\boldsymbol{\omega}_{ij}$ is the relative angular velocity between the two particles. Figure 56 shows schematically the implementation of the rolling friction model type A.
|Figure 56: Scheme of rolling resistance model type A.|
The material property that influences the rolling behaviour of the DE particles is the rolling friction coefficient ($\mu_r$), which depends on the shape of the granular material particles: it will be higher for sharp stones than for pseudo-spherical ones. The rolling resistance parameter, $\eta_{ij}$, depends on the rolling friction coefficient and on the radii of both contacting spheres.
Up to this point, $\eta_{ij}$ was treated as a rolling resistance parameter. However, it can also be interpreted as the eccentricity of the contact. The need for this parameter arises from the fact that, in contacts between non-spherical particles, the line of action of the contact normal force does not pass through the centroid of the particles. In the classical model A, the rolling resistance parameter for the contact between particles $i$ and $j$ ($\eta_{ij}$) is taken as the product of the rolling friction coefficient of the particle under consideration ($\mu_{r,i}$) and the effective rolling radius ($\bar{R}_{ij}$) [84,104], which, for the particles in contact, is calculated as

$$\bar{R}_{ij} = \frac{r_i\, r_j}{r_i + r_j}$$

leading to the expression

$$\mathbf{M}^r_{ij} = -\mu_{r,i}\,\bar{R}_{ij}\,\left|\mathbf{F}^n_{ij}\right|\,\frac{\boldsymbol{\omega}_{ij}}{\left|\boldsymbol{\omega}_{ij}\right|}$$
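As an illustrative sketch, the classical model-A torque with the effective rolling radius can be coded as follows (function and variable names are assumptions for illustration, not taken from the DEMPack code):

```python
import numpy as np

def rolling_resistance_torque_A(mu_r, r_i, r_j, f_n, omega_rel, eps=1e-12):
    """Classical rolling friction model A (illustrative sketch).

    mu_r      : rolling friction coefficient of the particle considered
    r_i, r_j  : radii of the two spheres in contact
    f_n       : magnitude of the normal contact force
    omega_rel : relative angular velocity vector between the two particles
    """
    r_eff = (r_i * r_j) / (r_i + r_j)   # effective rolling radius
    eta = mu_r * r_eff                  # rolling resistance parameter
    w = np.linalg.norm(omega_rel)
    if w < eps:                         # no relative rotation -> no torque
        return np.zeros(3)
    # torque opposing the direction of the relative angular velocity
    return -eta * f_n * omega_rel / w
```

Note that the discontinuity at zero relative angular velocity, handled here with a simple tolerance, is precisely the source of the oscillations in static situations discussed below.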
Figure 57 shows schematically how the rolling friction is applied to particle $i$, considering the eccentricity of the normal contact force.
|Figure 57: Schematic representation of the effect of the rolling friction parameter $\eta_{ij}$.|
Ai et al. outlined that model A should be used with caution in static situations, because rapid oscillations in the rolling resistance torque can appear due to the discontinuity in Eq. 5.1 at $\boldsymbol{\omega}_{ij} = \mathbf{0}$. To avoid this drawback, the BROF model limits the rolling resistance torque ($\mathbf{M}^r_i$) to the moment necessary to stop the sphere rotation in one time step ($\Delta t$):

$$\left|\mathbf{M}^r_i\right| \leq \frac{I_i\,\left|\boldsymbol{\omega}^{n-1}_i\right|}{\Delta t} \qquad (5.5)$$

where $\boldsymbol{\omega}^{n-1}_i$ is the angular velocity of the sphere in the previous time step and $I_i$ is its moment of inertia.
Finally, the BROF model conditions read:

$$\mathbf{M}^r_i = \begin{cases} -\mu_{r,i}\,\bar{R}_{ij}\,\left|\mathbf{F}^n_{ij}\right|\,\dfrac{\boldsymbol{\omega}^{n-1}_i}{\left|\boldsymbol{\omega}^{n-1}_i\right|}, & \text{if } \mu_{r,i}\,\bar{R}_{ij}\,\left|\mathbf{F}^n_{ij}\right| < \dfrac{I_i\left|\boldsymbol{\omega}^{n-1}_i\right|}{\Delta t} \qquad (5.6.\text{a}) \\[2ex] -\dfrac{I_i\,\boldsymbol{\omega}^{n-1}_i}{\Delta t}, & \text{otherwise} \qquad (5.6.\text{b}) \end{cases}$$
This improvement, based on the work of Tasora and Anitescu , avoids undesirable oscillations in the spheres spin.
It should be noted that, within the BROF model, the rolling resistance torque is applied in the direction of the moment necessary to stop the sphere rotation in one time step, and not in the direction of the relative angular velocity of the two particles in contact ($\boldsymbol{\omega}_{ij}$). This choice avoids discrepancies between contacts and makes the algorithm frame-independent.
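A minimal sketch of the BROF conditions described above, assuming the per-sphere moment of inertia and the previous-step angular velocity are available (names are illustrative, not taken from the DEMPack code):

```python
import numpy as np

def brof_torque(mu_r, r_eff, f_n, omega_prev, inertia, dt):
    """Bounded Rolling Friction (BROF) sketch.

    Applies the model-A torque magnitude, but (i) directed against the
    sphere's own angular velocity from the previous time step and
    (ii) capped by the moment that stops the rotation in one time step.
    """
    m_stop = -inertia * omega_prev / dt   # moment stopping rotation in one dt
    m_max = mu_r * r_eff * f_n            # model-A torque magnitude
    m_stop_norm = np.linalg.norm(m_stop)
    if m_stop_norm <= m_max:              # near rest: apply the stopping moment
        return m_stop
    # dynamic regime: model-A magnitude, against the sphere's own rotation
    return m_max * m_stop / m_stop_norm
```

With this capping, the sphere is driven to exactly zero angular velocity in one step once the model-A torque would overshoot, which removes the sign oscillations of the classical model.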
Figure 58 shows how all the particles in contact with particle $i$ apply the rolling resistance torque in the same direction, against the sphere rotation.
|Figure 58: Rolling friction from various particles acting against particle rotation.|
Eq. 5.6.a highlights the differences in the computation of the rolling resistance torque between the classical model A and the BROF model.
Two test cases were selected for the validation of the BROF model. They were chosen from the reference study by Ai et al. Both tests were reproduced with the same material properties and simulation parameters as in the reference tests. It should be noted that, although in the reference study the rolling particle is a disk, its thickness was chosen so as to have the same moment of inertia as a sphere, allowing the comparison between the BROF model results and those presented by Ai et al.
The first test adopted is a single sphere (with rolling friction) rolling over a flat surface. To set up the simulation, a sphere is placed over a rigid surface and allowed to move under its own weight until it reaches equilibrium. Then, an initial translational velocity is applied to the sphere.
|Table 13: Material properties and calculation parameters for test case 1.|

| Material properties | Calculation parameters |
|---|---|
| Young modulus ($E$) | Gravity ($g$) |
| Poisson ratio | Time step ($\Delta t$) |
| Restitution coefficient | Neighbour search frequency |
| Friction coefficient DE/FE | |
| Rolling friction coefficient | |
|Figure 59: Initial layout of test case 1.|
Fig. 60 shows the rolling resistance torque over time using the BROF model, compared to that obtained with the classic model type A. In the dynamic part of the simulation, the rolling resistance torque in both models is constant. However, once the sphere reaches its final position, differences between the models arise. In the classic model A, the torque oscillates between a positive and a negative value of the same magnitude. The BROF model overcomes this inconvenience thanks to the limitation imposed in equation 5.5 and leads to an equilibrium situation where both the rolling resistance torque and the particle angular velocity are zero.
|Figure 60: Comparison between rolling resistance torque obtained applying the classic rolling friction model A  and the BROF model.|
The comparison of the angular velocity of both models is not presented because, although oscillations exist when the classic rolling friction model A is applied, their magnitude is very low. It can be assumed that these angular velocity oscillations do not affect the system response due to their small magnitude; however, if the simulation involves a large number of particles, the kinetic energy introduced by the angular velocity changes may not be negligible.
Differences between the classic model C and the BROF model are displayed in Fig. 61. The first fact to point out about model C is that it is defined by two key parameters: the rolling damping and the rolling stiffness. Damping is necessary to avoid oscillations in static situations. Without damping, the behaviour would be similar to that of model A, but the oscillation frequency would depend on the rolling stiffness and the mass of the sphere rather than on the time step.
The graph in Fig. 61 shows the response of the BROF model and the classic rolling friction model C with a damping ratio . It can be seen that, in model C, some oscillations still appear even though damping is applied. The amplitude of the oscillations decreases gradually with time.
|Figure 61: Comparison between rolling resistance torque obtained applying the classic rolling friction model C with a damping ratio  and the BROF model.|
From the results obtained in test number 1 it can be concluded that the new rolling friction model proposed in this thesis avoids oscillations when the angular velocity of the sphere is almost zero, but it behaves in a similar manner to classic rolling friction models when the sphere is rotating. It is also worth noting that this new approach only needs one parameter to be defined, an advantage compared to the classic rolling friction model C.
The first test case was used to compare the BROF model with the classic model types A and C. The aim of the second test case is to evaluate the influence of varying the rolling friction coefficient in the BROF model.
Test case 2 consists of a sphere rolling up a slope, as shown in Fig. 62. The sphere has the same properties as in test case 1 (see Table 13). In this case, the sphere is positioned over the rigid surface and allowed to move under its own weight, but its movement is restrained in the direction (see Fig. 62). When the sphere comes to rest, the restriction of movement is removed and an initial translational velocity, parallel to the slope, is applied.
|Figure 62: Initial layout of test case 2.|
In this test case, in order to evaluate the influence of the rolling friction coefficient on the sphere response, two other values of $\mu_r$ are considered. When $\mu_r$ is small (less than 0.176, for which the resistance torque cannot balance the gravity torque on the slope), the sphere should roll back downwards after reaching its highest point. When $\mu_r$ is sufficiently large (more than 0.176), the sphere should be stopped by a resistance torque that prevents the downward rolling due to gravity.
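Assuming the common convention that the rolling friction coefficient equals the tangent of a rolling friction angle (an assumption here, not stated explicitly in the text), the quoted threshold of 0.176 can be checked numerically:

```python
import math

# Assumed convention: mu_r = tan(rolling friction angle).
# The threshold mu_r = 0.176 quoted in the text then corresponds
# to a rolling friction angle of roughly 10 degrees.
threshold_mu = 0.176
angle_deg = math.degrees(math.atan(threshold_mu))
print(f"rolling friction angle ~ {angle_deg:.1f} degrees")
```

Under this convention the threshold coincides with the equilibrium condition on the slope, since a sphere at rest requires the resistance torque to balance the gravity torque.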
|Figure 63: Test case 2 results for three different rolling friction coefficients and applying the BROF model.|
Fig. 63a shows the evolution of the rolling resistance torque over time. It is worth noting that, applying the BROF model, the rolling resistance torque in dynamic situations is constant for a given value of the rolling friction coefficient. However, when the particle comes to rest, the rolling resistance torque is set to the torque necessary to stop the sphere rotation in one time step. This feature can be clearly appreciated in the last part of the graph of Fig. 63a for the two largest coefficients, where the rolling resistance torque is the same, independently of the value of the rolling friction coefficient.
From Fig. 63b, which shows the sphere rolling distance over time, it can be concluded that, with a higher rolling friction coefficient, the sphere spin stops sooner. As already noted, for a rolling friction coefficient smaller than 0.176 the sphere rolls back downwards after reaching its highest point. This is confirmed in Figure 63c, where the evolution of the angular velocity over time is displayed.
Fig. 64 presents the results obtained by Ai et al. using the classic rolling friction model C with a damping ratio . From the graphs, it can be concluded that the rolling resistance torques obtained with the BROF model and the classic rolling friction model C are similar, but the BROF model avoids oscillations.
|Figure 64: Test case 2 results for three different rolling friction coefficients and applying the classic rolling friction model C with a damping ratio .|
To summarise, the BROF model, besides providing results similar to those of previous rolling friction models in dynamic situations, includes a limitation on the angular velocity in order to avoid undesirable sphere rotation when the particle is almost at rest, outperforming previous approaches while needing only one parameter ($\mu_r$) to be defined.
One important feature not yet mentioned is the generation of particle packings. Different methods are proposed in the literature to generate these assemblies. Bagi divided them into two types: dynamic and constructive methods.
In this work, the algorithm used to create the sphere assemblies is the method developed by Roselló et al., which is based on the 2D advancing front packing technique, modified to obtain high-density sets of spherical particles in 3D in an efficient way. Advancing front techniques place the particles layer by layer, starting from the boundaries or the interior of the domain, controlling the desired size distribution.
|Figure 65: Example of a cube of side filled with spheres with radii between and . Source: Roselló et al.|
Although the new advancing front approach allows the generation of dense distributions of spherical particles, the result does not always meet the desired compactness for railway ballast. As a result, new alternatives need to be considered. Tran proposed a dynamic method, the gravitational packing technique, to generate DE samples for granular material simulations. It consists in assigning the particles a small friction coefficient value and letting them freely fill the volume under consideration. The friction coefficient is usually zero but, depending on the desired compactness of the sample, other values can be used. This leads to a high particle compactness, though it requires a pre-calculation.
Clusters of spheres are a widely used alternative to spherical particles for representing granular materials. The contact forces and torques are evaluated as usual on the spheres, through equations 3.4 and 3.5. The force and torque contributions from every sphere are then summed up and translated to the centre of mass of the cluster, adding the additional torque arising from the application of each force at the centre of every sphere.
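The summation to the cluster centre of mass described above can be sketched as follows (helper names are assumptions for illustration):

```python
import numpy as np

def cluster_force_and_torque(sphere_centres, forces, torques, com):
    """Sum per-sphere contact forces/torques at the cluster centre of mass.

    Each force applied at a sphere centre contributes an additional torque
    (r - com) x F about the cluster centre of mass.
    """
    total_f = np.zeros(3)
    total_t = np.zeros(3)
    for c, f, t in zip(sphere_centres, forces, torques):
        total_f += f
        total_t += t + np.cross(c - com, f)  # lever-arm contribution
    return total_f, total_t
```

For instance, two equal forces applied symmetrically about the centre of mass produce a net force but no net torque, since the two lever-arm contributions cancel.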
For a body enclosed by the domain , supposing constant density, the centre of mass of the cluster is calculated with the formula: