Abstract

Neural networks (NNs), as an alternative method for the universal approximation of differential equations, have proven to be computationally efficient and still sufficiently accurate compared to established methods such as the finite volume method (FVM). Additionally, analysing weights and biases can give insights into the underlying physical laws. The FVM and NNs are both based upon spatial discretisation. Since a Cartesian and equidistant grid is a raster graphic, image-to-image regression techniques can be used to predict phase velocity fields as well as particle and pressure distributions from simple mass flow boundary conditions. The impact of the convolution layer depth and the number of channels of a Convolution-Deconvolution Regression Network (CDRN) on the prediction performance for internal non-Newtonian multiphase flows is investigated. Parametric training data comprising 2055 sets is computed using the FVM. To capture significant non-Newtonian effects of a particle-laden fluid (e.g. blood) flowing through small and non-straight channels, an Euler-Euler multiphase approach is used. The FVM results are normalised and mapped onto an equidistant grid as the supervised learning target. The investigated NNs consist of n = {3, 5, 7} corresponding encoding/decoding blocks and different skip connections. Regardless of the convolution depth (i.e. the number of blocks), the deepest spatial down-sampling via strided convolution is adjusted to result in a 1 × 1 × f·2ⁿ feature map, with f = {8, 16, 32}. The prediction performance is expressed as the channel-averaged normalised root mean squared error (NRMSE). With an NRMSE of < 2 · 10⁻³, the best performing NN has f = 32 initial feature maps, a kernel size of k = 4, n = 5 blocks and dense skip connections. Average inference with this NN takes < 7 · 10⁻³ s. The worst accuracy, at an NRMSE of approx. 9 · 10⁻³, is achieved without any skips, at k = 2, f = 16 and n = 3, but deployment takes only < 2 · 10⁻³ s. Given adequate training, the prediction accuracy improves with convolution depth, where additional feature maps have a higher impact on deeper NNs. Due to skip connections and batch normalisation, training is similarly efficient regardless of the depth. This is further improved by blocks with dense connections, but at the price of a drastically larger model. Depending on the geometrical complexity, the spatial resolution is critical, as it massively increases the number of learnable parameters and the memory requirements.
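To illustrate the kind of architecture the abstract describes, the following is a minimal PyTorch sketch of an encoder/decoder CDRN with stride-2 down-/up-sampling, batch normalisation, plain concatenation skip connections and a channel-averaged NRMSE metric. The framework, layer ordering, input/output field names, the exact channel progression towards the bottleneck and the NRMSE normalisation (division by the per-channel target range) are assumptions for illustration only, not the authors' implementation; the dense-skip variant mentioned in the abstract is not shown.

import torch
import torch.nn as nn


def conv_block(cin, cout, k):
    # stride-2 convolution halves the spatial resolution (for even input sizes)
    return nn.Sequential(
        nn.Conv2d(cin, cout, k, stride=2, padding=(k - 1) // 2),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True))


def deconv_block(cin, cout, k):
    # stride-2 transposed convolution doubles the spatial resolution
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, k, stride=2,
                           padding=(k - 1) // 2, output_padding=k % 2),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True))


class CDRN(nn.Module):
    def __init__(self, in_ch=1, out_ch=4, f=32, n=5, k=4):
        # in_ch: boundary-condition raster channels (hypothetical: mass flow map)
        # out_ch: predicted fields (hypothetical: phase velocities, particle
        #         fraction, pressure); f, n, k as defined in the abstract
        super().__init__()
        enc_ch = [f * 2 ** i for i in range(n)]
        self.enc = nn.ModuleList()
        ch = in_ch
        for c in enc_ch:
            self.enc.append(conv_block(ch, c, k))
            ch = c
        # decoder mirrors the encoder; concatenated skips double its inputs
        dec_in = [enc_ch[-1]] + [2 * c for c in reversed(enc_ch[:-1])]
        dec_out = list(reversed(enc_ch[:-1])) + [f]
        self.dec = nn.ModuleList(
            deconv_block(ci, co, k) for ci, co in zip(dec_in, dec_out))
        self.head = nn.Conv2d(f, out_ch, kernel_size=1)  # per-pixel regression

    def forward(self, x):
        skips = []
        for block in self.enc:
            x = block(x)
            skips.append(x)
        skips.pop()  # the bottleneck feeds the decoder directly, it is not a skip
        for block in self.dec:
            x = block(x)
            if skips:
                x = torch.cat([x, skips.pop()], dim=1)
        return self.head(x)


def nrmse(pred, target):
    # channel-averaged normalised RMSE; normalising by the per-channel target
    # range is an assumption, the paper may define the normalisation differently
    rmse = torch.sqrt(((pred - target) ** 2).mean(dim=(0, 2, 3)))
    rng = target.amax(dim=(0, 2, 3)) - target.amin(dim=(0, 2, 3))
    return (rmse / rng).mean()

With n = 5 stride-2 blocks, a (hypothetical) 32 × 32 input raster is compressed to a 1 × 1 bottleneck (here with f·2ⁿ⁻¹ channels; the paper's exact channel count of f·2ⁿ may follow a different progression) before being expanded back to the input resolution:

model = CDRN(in_ch=1, out_ch=4, f=32, n=5, k=4)
x = torch.randn(2, 1, 32, 32)   # batch of boundary-condition rasters
y = model(x)                    # -> (2, 4, 32, 32) predicted field maps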


Document information

Published on 11/03/21
Submitted on 11/03/21

Volume 600 - Fluid Dynamics and Transport Phenomena, 2021
DOI: 10.23967/wccm-eccomas.2020.107
Licence: CC BY-NC-SA
