Quantitative Assessment of the Influence of Tensile Softening of Concrete in Beams under Bending by Numerical Simulations with XFEM and Cohesive Cracks

Results of numerical simulations of the size effect phenomenon for concrete are presented in comparison with experimental data. In-plane geometrically similar notched and unnotched beams under three-point bending are analyzed. The eXtended Finite Element Method (XFEM) with a cohesive softening law is used. A comprehensive parametric study with respect to the tensile strength and the initial fracture energy is performed. The sensitivity of the results to the material parameters and the specimen geometry is investigated. Three different softening laws are examined. First, a bilinear softening definition is utilized. Then, an exponential curve is taken. Finally, a rational Bezier curve is tested. The ambiguity in choosing material parameters and softening curve definitions is discussed. Numerical results are compared with experimental outcomes recently reported in the literature. Two error measures are defined and used to quantitatively assess the calculated maximum forces (nominal strengths) against the experimental values as a primary criterion. In addition, the force-displacement curves are analyzed. It is shown that all softening curves produce results consistent with the experimental data. Moreover, with different softening laws assumed, different initial fracture energies should be taken to obtain proper results.


Introduction
During the cracking process in concrete, a so-called fracture process zone is created. Its size is not negligible compared to the specimen's dimensions. As a consequence, the behavior of concrete structures exhibits strong size effects, i.e., small elements have a greater nominal strength than large ones. The size effect also depends on the geometry of the boundaries, e.g., unnotched versus notched beams. All of these difficulties cause problems in the proper determination of concrete material properties and in performing advanced numerical simulations. There is a variety of alternative approaches, and there is still no consensus on describing fracture in concrete. A proper description of cracks is, therefore, crucial for obtaining physically admissible results in simulations of concrete [1,2] and reinforced concrete specimens [3][4][5][6]. Within continuum mechanics, cracks can be defined in a smeared sense [7,8], or discretely with cohesive elements [9,10] or with the eXtended Finite Element Method (XFEM) [11][12][13][14][15][16]. In advanced formulations, these two approaches can be coupled [17,18].
A very important issue in the numerical description of cracks in concrete is a realistic definition of the constitutive law for concrete in tension. The behaviour of concrete under uniaxial tension is strongly nonlinear. Initially, concrete behaves as a linear elastic material (approximately up to 90% of the tensile strength), followed by a hardening phase. After reaching the tensile strength, a gradual decrease of the stress begins (so-called quasi-brittle behaviour). When a mathematical description of concrete under tension is formulated, the hardening region before the peak is usually neglected and the material is assumed to soften directly after reaching the tensile strength. Sener [34], on the basis of their experimental results, verified the universal size effect law (USEL) proposed by Hoover and Bažant [36] and the boundary effect model of Duan et al. [38,39]. They stated that Bažant's Type I size-effect law is reasonably good for beams with small notches, and the Type II size-effect law fits favourably for beams with deep notches. They also observed that the boundary effect model provides a good comparison for unnotched beams. Grégoire et al. [35] compared their experimental outcomes with the USEL proposed by Bažant and Yu [40] and found a consistent match between them.
Several researchers have performed numerical simulations and compared their results with the experimental outcomes cited above. Hoover and Bažant [41] used the crack band model defined as an isotropic damage model with an equivalent strain based on Mazars' proposal [42] and a bilinear softening law. They also tried to fit the experimental data using a linear or an exponential softening law, but without success. They presented, however, no results with linear or exponential softening curves to support this statement. Lorentz [43] formulated a nonlocal gradient model consistent with cohesive fracture. To describe the post-peak behavior of the material, he proposed a combination of a linear polynomial and an exponential function with two parameters. Grégoire et al. [35] took the isotropic damage constitutive law with Mazars' [42] equivalent strain, an exponential softening, and the integral non-local regularization method. They obtained a good agreement with the experiments for the middle-size specimens, but much worse results were achieved for the smallest and largest beams. Feng and Wu [44] used a phase-field regularized cohesive zone model with a very small length scale to simulate the notched and unnotched concrete beams experimentally tested by Grégoire et al. [35] and Hoover et al. [32]. In the softening regime, they adopted the exponential curve proposed by Reinhardt et al. [25]. Barbat et al. [45] used a local version of an isotropic damage constitutive law with an exponential law to simulate both experimental campaigns by Grégoire et al. [35] and Hoover et al. [32]. They reported a good agreement between numerical simulations and experimental outcomes. Parrilla Gomez et al. [46] simulated these two experimental series again with a graded damage model and the Thick Level Set (TLS) method. Zhang et al. [47] used a localizing gradient damage model to numerically reproduce both the Hoover et al. [32] and Grégoire et al. [35] experiments. Wosatko et al. [48] examined the behaviour of two constitutive laws, consistency viscoplasticity and the gradient damage model, against the experimental outcomes of Grégoire et al. [35]. Marzec and Bobiński [49] adopted an elasto-plastic model with the Rankine criterion in tension, enriched by an integral non-local regularisation approach, to simulate the Hoover et al. [32] tests.
Havlásek et al. [50] used an isotropic damage model with an equivalent strain corresponding to the Rankine criterion with a round-off in the multiaxial tension region. An exponential curve was adopted in the softening regime. The integral non-local theory was used as a regularisation method. They studied standard and distance-based averaging methods. In the second approach, the characteristic length in points lying near the boundary decreased with decreasing distance to the specimen's edge. They found that the distance-based method was able to reproduce the experimental results well for all notch lengths and beam sizes, while standard averaging significantly overestimated the nominal strength of the small notched beams. The distance-based approach, by decreasing the characteristic length, reduces the localisation zone width. As a consequence, the fracture energy in the material points lying in the boundary layer is also reduced. This idea can be related to the nonlocal boundary layer model proposed by Bažant et al. [51] to overcome numerical problems in treating boundaries in fracture analysis when an integral non-local averaging algorithm is used as a regularisation technique. Vořechovský [52] used a similar idea of a 'weakened boundary layer' to explain the experimental results of van Vliet and van Mier [53,54], who tested dog-bone specimens of different sizes subjected to uniaxial tension. Surprisingly, the smallest specimen turned out not to be the strongest one (with respect to the nominal strength).
The above review shows that there is no clear consensus on modeling the softening phase of mode I cracks in concrete. The aim of the paper is therefore to examine the performance of the discrete cohesive crack model equipped with different softening curve definitions in simulating the size effect phenomenon in concrete beams under bending, and to verify the claimed superiority of bilinear softening curves over alternative formulations. Contrary to the smeared crack formulations used in the simulation studies mentioned above, a discrete approach is used here within the eXtended Finite Element Method (XFEM). Attention is paid to the ability of different softening laws, bilinear, exponential, and based on a rational Bezier curve, to properly reflect the experimental outcomes. The influence of the reduction of the fracture energy in the boundary layer is also examined. Error measures are introduced to quantitatively estimate the calculated peak loads. In none of the papers cited above has such a quantitative assessment been done; the obtained force-crack mouth opening displacement (CMOD) curves are compared with experimental diagrams only qualitatively (visually). Analyses determining the best initial fracture energies are performed. The main goal of the presented numerical results is the comparison with experiments, therefore no fitting to different size effect laws is done here.
The paper is organized as follows. Section 2 outlines the main ideas of the paper. Section 3 presents the experimental research performed by Hoover et al. [32]. Their results serve as verification data to assess the material parameters used in the FE calculations. Section 4 provides information about the formulation of the eXtended Finite Element Method and the constitutive law used to describe a discrete crack. Some implementation details are also given. Numerical results are extensively presented in Section 5. The final conclusions and future plans are listed in Section 6.

Significance of Research
The current research, concerning a series of geometrically scaled concrete beams with and without a notch under three-point bending, is an extension of our previous investigations of concrete fracture properties [49]. Attention is paid to the influence of different softening curve definitions and material parameters of concrete (i.e., the fracture energy and the initial fracture energy) on the specimen strength under bending. This knowledge is important to better understand the fracture phenomenon and to provide a reliable numerical tool to describe the size effect in concrete members. Thus, the main objective of this study, which also represents its novelty, is a detailed and quantitative (not only qualitative) assessment of the efficiency of different softening curves in the numerical prediction of the size effect for concrete beams under bending.

Geometry of the Beams
The fracture process is one of the fundamental phenomena in concrete. Its adequate description is a crucial issue, and it is essential to formulate a proper, physically meaningful constitutive law. There have been several experimental campaigns on fracture in concrete. One of the most recently reported was conducted by Hoover et al. [32]. They examined 128 unnotched and notched concrete beams under three-point bending. The geometry of a beam is shown in Figure 1. Four different sizes with five notch dimensions were analysed. The beam height D was taken as 500, 215, 93 and 40 mm for the huge, large, medium, and small specimens, respectively. The total length of the beam was 1200, 516, 223.2 and 96 mm for the huge, large, medium, and small specimens, respectively. The notch to depth ratio α0 (relative length of the notch with respect to the beam height D) was 0.0 (no notch), 0.025, 0.075, 0.15 and 0.30. In total, 18 geometries were defined (beams with the height D = 93 mm and D = 40 mm with α0 = 0.025 were not cast). The span length L was equal to 2.167D for all beams. The thickness was set to B = 40 mm. The width of the notch was 1.5 mm. In addition, 36 companion samples were prepared to determine the compressive strength, modulus of rupture, Young's modulus, and Poisson's ratio. All specimens were cast within three hours from the same batch of concrete. They were kept in identical curing and environmental conditions. As a consequence, a low scatter of results was achieved, and all beams had virtually the same material properties.

Experimental Results
All tests were carried out under opening displacement control. Two points at the bottom edge, lying symmetrically with respect to the vertical axis of symmetry of the beam, were chosen and a gauge was placed between them. The gauge length was scaled with the beam size. Steel loading blocks with dimensions 60 × 40 × 40, 25.8 × 17.2 × 40, 11.1 × 7.4 × 40 and 4.8 × 3.2 × 40 mm for the huge, large, medium, and small specimens, respectively, were placed under the load and supports. The nominal strength σN was calculated for all results from the maximum force Pu. Table 1 presents the mean nominal strengths σN^Exp for each beam geometry (after [30]). In this paper, a correction factor Cf was calculated for each geometry. This factor accounts for the real dimensions measured in the experiments, which are usually different from the nominal beam dimensions B and D. It is defined in terms of Pu^Exp, the averaged maximum force for a given geometry (calculated from the maximum force values given in [32]). The averaged maximum forces Pu^Exp and the correction factors Cf are presented in Table 1. The correction factor of the unnotched medium beam (D = 93 mm) was rather unrealistic and deviated significantly from the other values (bold number in Table 1); therefore, a value of 1.0 was assumed instead.

General Formulation
Cohesive cracks are described here via the eXtended Finite Element Method (XFEM). This approach is based on the Partition of Unity concept [55]. It allows adding extra terms to the standard Finite Element (FE) displacement field approximation to better capture displacement discontinuities. The key point is to enrich only selected nodes with additional degrees of freedom (locally, "near" a crack) and to keep the remaining part of the specimen standard. As the crack propagates during the FE calculations, the number of enriched nodes increases dynamically. Using XFEM, cracks can pass through finite elements without any remeshing; they do not have to follow the edges of the elements.
The formulation used in this paper generally follows the classical idea presented by Wells and Sluys [11]. The only fundamental difference is the application of the shifted-basis enrichment (Zi and Belytschko [56]). In a body Ω cut by a discontinuity Γd (Figure 2), the displacement field u at a point x is approximated (in a finite element format) using the shape function NI of a node I, where Ntot is the set of all nodes, Nenr is the set of enriched nodes (nodes of the elements cut by the crack), aI are the standard displacements (in a node I), bI are the enriched displacements in a node I, and ψ denotes a (generalized) step function (a sign function). In the original formulation [11], the Heaviside step function was used. The shift does not change the approximating basis, but it simplifies the formulation of the method. The enrichment is equal to zero in all elements not cut by the crack; as a consequence, only two types of finite elements have to be defined. Moreover, the total displacements in the nodes are equal to the standard ones. The weak form of equilibrium, the discretized equations, and the details of the finite element derivations can be found in Zi and Belytschko [56] and Tejchman and Bobiński [57].
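The shifted-basis approximation referred to above can be written out explicitly; the following is a reconstruction based on Zi and Belytschko [56], consistent with the symbols defined in this section, rather than a verbatim copy of the paper's equation:

```latex
\mathbf{u}(\mathbf{x}) \;=\; \sum_{I \in N_{tot}} N_I(\mathbf{x})\, \mathbf{a}_I
\;+\; \sum_{I \in N_{enr}} N_I(\mathbf{x}) \left[ \psi(\mathbf{x}) - \psi(\mathbf{x}_I) \right] \mathbf{b}_I ,
\qquad
\psi(\mathbf{x}) \;=\;
\begin{cases}
 +1 & \text{on one side of } \Gamma_d , \\
 -1 & \text{on the other side.}
\end{cases}
```

With this shift, the enrichment term vanishes at the enriched nodes themselves, which is why the total nodal displacements coincide with the standard ones.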

Bulk Material Description
Within this approach, two material laws have to be defined. The first one describes the behavior of the material in the solid (bulk) body. In the uncracked continuum, a linear elastic constitutive relationship between strains ε and stresses σ is assumed, with a linear elastic material matrix De. Assuming plane stress conditions, the matrix De is calculated from Young's modulus E and Poisson's ratio ν.
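As an illustration, the plane-stress elastic matrix can be assembled as follows. This is a minimal sketch (function and variable names are ours), using the standard plane-stress expression and the elastic constants quoted later in the Input Data section:

```python
import numpy as np

def plane_stress_matrix(E, nu):
    """Linear elastic plane-stress matrix D_e (Voigt notation: xx, yy, xy)."""
    c = E / (1.0 - nu**2)
    return c * np.array([[1.0, nu, 0.0],
                         [nu, 1.0, 0.0],
                         [0.0, 0.0, (1.0 - nu) / 2.0]])

# Stress from strain, sigma = D_e @ eps
De = plane_stress_matrix(41.24e9, 0.172)   # E and nu of the tested concrete
eps = np.array([1e-4, 0.0, 0.0])           # illustrative uniaxial strain state
sigma = De @ eps
```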

Discrete Crack Definition
The second constitutive relationship defines the behavior of the cohesive crack. A new crack segment can be created if the Rankine criterion (plane stress case) is fulfilled for the principal stresses σ1 and σ2 and the tensile strength ft. This inequality is checked in all integration points in the finite element in front of the crack tip. The crack grows if Equation (7) is true in at least one integration point. Due to the symmetry of the problem (three-point bending test) and the isotropic and homogeneous material definition, a fixed vertical direction of crack propagation is assumed. A new segment is defined from one element side to another (the crack tip cannot be placed inside a finite element). Segment end points cannot be placed at element vertices. For integration purposes, a cracked element is divided into three sub-triangles with one-point Gauss quadrature, while two integration points are defined along the crack segment (Asferg et al. [58]). Along the crack, a cohesive traction vector t is defined (it is not a stress-free crack formulation). The traction vector t is related to the displacement jump u. Both quantities are defined in a local coordinate system and have normal (index n) and tangential (index s) components. Due to the tension-dominated nature of the problem, a loading function f is assumed in terms of an internal variable κ, equal to the largest value of the normal displacement jump un obtained during the loading history. Active loading occurs for f ≥ 0 and unloading (reloading) is indicated by f < 0. During active loading, the normal traction is equal to the yield traction ty. Three (basic) softening curves are used to calculate the yield traction ty.
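The principal-stress check behind the crack-initiation criterion can be sketched as follows; this is a hedged illustration (function names are ours), evaluating the Rankine inequality exactly as stated above:

```python
import math

def principal_stresses(sxx, syy, sxy):
    """Principal stresses of a 2-D (plane stress) state, sigma_1 >= sigma_2."""
    c = 0.5 * (sxx + syy)                  # center of Mohr's circle
    r = math.hypot(0.5 * (sxx - syy), sxy) # radius of Mohr's circle
    return c + r, c - r

def rankine_violated(sxx, syy, sxy, f_t):
    """True if the Rankine criterion allows a new crack segment."""
    s1, _ = principal_stresses(sxx, syy, sxy)
    return s1 >= f_t

# e.g., pure shear of 5 MPa exceeds a tensile strength of 4.8 MPa
```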
First, a bilinear curve is defined (Figure 3a), where tk and κk are the traction and the value of the internal variable κ at the kink point, respectively, and κu is the value of the internal variable (crack opening) κ at which the traction tn is reduced to zero. The parameters κk and κu can be calculated using the total fracture energy GF and the initial fracture energy Gf (the area under the initial tangent line from the peak point in Figure 3, marked as a green area). The equivalent relationship (in a continuum format) was used by Hoover and Bažant [41].
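Under these definitions (Gf is the area under the initial tangent from the peak, GF the total area under the bilinear diagram), κk and κu follow from ft, tk, Gf and GF. The sketch below is our own derivation under those definitions, not code reproduced from the paper:

```python
def bilinear_params(f_t, t_k, G_f, G_F):
    """Kink opening kappa_k and final opening kappa_u of the bilinear law."""
    kappa_i = 2.0 * G_f / f_t              # intercept of the initial tangent
    kappa_k = kappa_i * (1.0 - t_k / f_t)  # kink point lies on that tangent
    # total area = trapezoid up to the kink + triangle beyond it = G_F
    kappa_u = kappa_k + (2.0 * G_F - (f_t + t_k) * kappa_k) / t_k
    return kappa_k, kappa_u

def bilinear_traction(kappa, f_t, t_k, kappa_k, kappa_u):
    """Yield traction t_y(kappa) of the bilinear softening curve."""
    if kappa <= kappa_k:
        return f_t + (t_k - f_t) * kappa / kappa_k
    if kappa <= kappa_u:
        return t_k * (kappa_u - kappa) / (kappa_u - kappa_k)
    return 0.0
```

With the paper's values ft = 4.8 MPa, tk = 0.15 ft, Gf = 40 N/m and GF = 70 N/m, the area under the resulting diagram recovers GF.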
As the second alternative, an exponential function is chosen (Figure 3b). The area under this curve is equal to the total fracture energy GF, and the ratio between the total fracture energy GF and the initial fracture energy Gf is equal to 2 (a fixed value; it does not depend on the curve parameters ft and GF).
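A curve with precisely these two properties is the classical exponential ty(κ) = ft·exp(−ft κ/GF): its area equals GF, and the triangle under its initial tangent has area GF/2, so GF/Gf = 2 regardless of ft and GF. The paper's exact expression is not reproduced here; the sketch below simply verifies both properties numerically:

```python
import math

F_T, G_F = 4.8e6, 70.0   # tensile strength (Pa) and total fracture energy (N/m)

def exp_traction(kappa):
    """Exponential softening: the area under the curve equals G_F."""
    return F_T * math.exp(-F_T * kappa / G_F)

# Initial tangent: slope -F_T**2 / G_F, intercept at kappa = G_F / F_T,
# so the tangent triangle (the initial fracture energy) has area G_F / 2.
G_f = 0.5 * F_T * (G_F / F_T)
```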
The last proposal is formulated using a rational Bezier curve based on the bilinear softening definition (Figure 3c,d) to allow a smooth transition between the two segments (without a sudden change of direction at the kink point). It is defined via two parametric equations with a parameter t and a weight w attached to the kink point. The weight w controls the shape of the curve: for w = 0 a linear softening is obtained, and for w = ∞ it coincides with the bilinear diagram (Figure 3a). Here the "total" fracture energy used to define the kink point on the underlying bilinear softening curve is denoted as GB, and it is not equal to the total fracture energy GF (except for the case with w = ∞). The term GF − GB can be interpreted as the area between the rational Bezier curve and the bilinear curve (marked as a red area in Figure 3c). The larger the value of the weight w, the smaller this difference becomes. Given a basic softening curve tn, the yield traction is defined with a correction term Df calculated from a drop factor df (Cox [59]). The presence of the correction term Df improves the convergence in cases where a transition between tension and compression occurs, resulting in sudden stiffness changes. With an increasing value of the drop factor df, the correction term Df tends to one and the original softening definition is recovered.
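The rational quadratic Bezier with the weight attached to the kink point can be sketched as follows. This is the standard parametric form that reproduces the two limit cases named above (w = 0 gives the straight line between the end points; large w pulls the curve onto the bilinear polygon); the control points below are illustrative, not the paper's calibrated values:

```python
def rational_bezier(t, P0, P1, P2, w):
    """Point on a rational quadratic Bezier curve; weight w sits on the kink P1."""
    b0 = (1.0 - t) ** 2
    b1 = 2.0 * w * t * (1.0 - t)
    b2 = t ** 2
    den = b0 + b1 + b2
    x = (b0 * P0[0] + b1 * P1[0] + b2 * P2[0]) / den
    y = (b0 * P0[1] + b1 * P1[1] + b2 * P2[1]) / den
    return x, y

# Normalized softening diagram: peak (0, 1), kink (0.2, 0.15), end (1, 0)
P0, P1, P2 = (0.0, 1.0), (0.2, 0.15), (1.0, 0.0)
```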
During unloading, the secant stiffness is used with a return to the original configuration (damage format). In compression, a penalty stiffness Tn is taken; it is calculated as the derivative of the yield traction (Equation (14)) at κ = 0. In the tangential direction, a linear dependence on the current yield traction is assumed, with the initial shear stiffness Ts. This ensures that the shear traction decreases to zero as the crack opens. The idea is close to the exponential softening postulated by Wells and Sluys [11].

Boundary Layer
In order to verify the necessity of introducing the weaker boundary layer postulated by some researchers to obtain physically consistent results (Vořechovský [52]), some extra simulations are carried out. Figure 4 presents the geometry of the weaker boundary layer zone of width b along the specimen's edges (including the notch). Within this zone, the initial and total fracture energies are calculated as βGf and βGF, where the reduction coefficient β depends on d(x), the distance of the point x to the nearest edge, and β0, the value of the reduction coefficient at the boundary. A linear decrease is assumed here, but other relationships, e.g., an exponential one [50], can also be used. Although such a reduction was not performed by Hoover and Bažant [41] or Lorentz [43], it was essential for obtaining experimentally consistent results by Havlásek et al. [50].
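The linear profile can be written explicitly; the sketch below is our reading of the definition above (β equals β0 at the edge and returns linearly to 1 at distance b from it):

```python
def reduction_coefficient(d, b, beta0):
    """Linear reduction coefficient beta(x) inside a boundary layer of width b.

    d: distance of the point to the nearest edge; beta0: value at the edge.
    Outside the layer (d >= b) no reduction is applied.
    """
    if d >= b:
        return 1.0
    return beta0 + (1.0 - beta0) * d / b
```

The fracture energies at a point are then simply beta * G_f and beta * G_F.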

Implementation
Numerical calculations have been performed using the commercial program Abaqus Standard [60]. Although it includes XFEM procedures, this implementation has some limitations: only quadrilateral finite elements are allowed, and there is no possibility to define user crack propagation direction criteria. Therefore, the Abaqus user-defined element procedure (UEL) is utilized to implement a finite element within XFEM. An independent module in Fortran 95 has been written to handle the model data and the needed subroutines. Within this module, nodal coordinates, element connectivity data, and information from the integration points are kept. The module is then called from the UEL subroutine. Such an approach gives access to the gathered model data from each finite element (which is not possible by default). The convergence criteria taken from Abaqus [60] are applied:

r_max ≤ 0.01 q and c_max ≤ 0.01 ∆u_max (20)

where r_max is the largest residual in the balance force vector (right-hand-side vector), q is an overall time-averaged value of all element force vectors and external loads, c_max stands for the largest correction (change between the last two iterations) of the unknown displacements, and ∆u_max denotes the largest change of the unknown displacements in the increment.
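Equation (20) amounts to a two-part test on each iteration; a minimal sketch (names are ours):

```python
def converged(r_max, q, c_max, du_max):
    """Convergence check of Equation (20): the force residual and the
    displacement correction must both be small relative to their references."""
    return (r_max <= 0.01 * q) and (c_max <= 0.01 * du_max)
```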
New crack segments can be created only in a converged configuration. Then, a restart procedure is applied and the current increment is repeated to find a converged configuration again. A crack can be extended by one segment (in one finite element) only between two converged configurations. This procedure is repeated as long as new crack segments are created in the current increment. If no new crack segments have been added, the next increment starts (after convergence). In order to implement this idea in Abaqus, an independent convergence algorithm has been developed. Information about residual forces and displacement corrections is gathered (independently from Abaqus) in the Fortran module (it is transferred from the user elements). One-node user elements with a zero stiffness matrix and a zero force vector are manually defined in the input file at nodes with imposed boundary conditions. These elements transfer information on the defined displacement/boundary conditions to exclude the appropriate degrees of freedom from the convergence check algorithm. As a consequence, converged iterations can be detected independently within the Fortran module. Information about the simulation process (e.g., the start of a new increment, the execution of a next iteration, or the creation of a new crack segment and restart of the current increment) is "passed" to Abaqus by defining a user element with a very large label (so that it is called as the last finite element in an iteration). This element (not attached to the model analyzed) returns a very large force vector in iterations when the start of the next increment is not permitted, or a zero force vector in the opposite case.
The arc-length method has been used to control the simulation process. Generally, in arc-length methods, a system of balance equations in an iteration i is written in terms of the global stiffness matrix K, the residual force vector r and the vector of external loads f. The corrections δur and δuf form the correction of the total displacements, together with the multiplier correction δλ. Abaqus Standard includes only one arc-length method, namely the modified Riks procedure. This approach is suitable for global buckling analysis, but it is not efficient in cases when deformations concentrate in small regions with elastic unloading in the remaining part. The so-called indirect displacement control method is used here [61]. In order to implement it in Abaqus, some modifications are required. First, the set containing all nodes of the model is copied and a new set is created with the same number of nodes and their original coordinates. The first and the second sets of nodes store the ur and uf displacements, respectively. The definition of a user element contains a subset of nodes from the first set followed by an analogous subset of nodes from the second set. The total number of nodes defining the element is doubled, and the element stiffness matrix and element force vector are extended using the idea presented in Equation (21). Note that all displacements ur, uf and ut contain both the standard and the enriched displacement terms aI and bI, respectively. Another user element with a zero stiffness matrix and force vector is defined with nodes located at the ends of the gauge. It is responsible for modifying the value of the λ multiplier based on the displacement values in the nodes. Its label is manually set to one to ensure its call as the first element in the iteration.
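The essence of indirect displacement control can be sketched as one corrector step: both partial corrections are solved from the same stiffness matrix, and the multiplier correction δλ is then chosen so that the monitored gauge opening (CMOD) reaches its prescribed value. The sketch below is a schematic illustration of that idea (names and the selection-vector mechanics are ours, not the paper's Fortran implementation):

```python
import numpy as np

def indirect_displacement_step(K, r, f, c, cmod_target, u):
    """One corrector step of indirect displacement control (sketch).

    c is a selection vector extracting the controlled gauge opening (CMOD)
    from the displacement vector; d_lambda is chosen so that the updated
    CMOD equals cmod_target.
    """
    du_r = np.linalg.solve(K, r)   # correction from the residual forces
    du_f = np.linalg.solve(K, f)   # correction from the unit external load
    d_lambda = (cmod_target - c @ (u + du_r)) / (c @ du_f)
    du = du_r + d_lambda * du_f    # total displacement correction
    return du, d_lambda
```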
Abaqus is not able to visualize user elements in the Complete Abaqus Environment (CAE). Therefore, a third set of nodes is defined (again with the same original coordinates). Based on information from the original mesh, a set of built-in standard finite elements is created on these nodes. A zero-stiffness (and zero-stress) material is assigned to these elements. Information from the user elements about strains and stresses is passed via the module written in Fortran (Intel Company, Santa Clara, CA, USA) and then exported as state variables in these elements. In each node from the third set, a one-node user element is created with the unit matrix as the stiffness matrix and a force vector with the appropriate terms from the total displacement vector ut. In this way, global displacements can be visualized. However, this trick does not allow for presenting the crack pattern. That is achieved by creating Postscript files with the deformed (and cracked) FE mesh at selected simulation times. Alternatively, each built-in standard finite element defined to visualize results in which a crack occurs is replaced with three or four built-in standard triangular elements. These elements are defined on the standard nodes from the 'master' element and two additional pairs of nodes located at the crack segment ends. This enables visualization of the growth of the crack during the simulation. However, this method requires some extra modifications of the input file and re-execution of the simulation (information about the crack geometry is available only after the completion of the job).

Input Data
The performance of all 18 beams is simulated, following the geometry data provided in Section 3. Steel blocks are also created, with the load/support points defined in the middle of the horizontal edge. The indirect displacement control method described in Section 4.5 is used to apply the load. The gauge length varies for the different beam sizes and shapes (between 12.7 mm and 162 mm) and is taken directly from the experiment [32]. The ultimate elongation of the gauge is set to ∆ = 0.3 mm. A requirement of at least 1000 and 250 increments is imposed to complete the job when simulating unnotched and notched beams, respectively. The starting point for a discrete crack is defined manually in the middle of the horizontal edge at the top of the notch. The numerical calculations are carried out assuming plane stress conditions. Triangular constant-strain finite elements are used. The FE mesh in the central region above the notch is refined along the height, with a maximum finite element side length of about 2 mm. The total number of finite elements is between 4981 and 11,660, depending on the beam geometry.
The elastic constants are the same as determined in the experiment. The Young's modulus is taken as E = 41.24 GPa and the Poisson's ratio is ν = 0.172 [32,41]. Note that a slightly different value of Young's modulus was assumed in [50]: E = 35.6 GPa, based on the initial slope of the experimental load-displacement curve obtained for the largest unnotched specimen. The total fracture energy is fixed at GF = 70 N/m (after [41]), although initially larger values GF = 96.94 N/m and GF = 110.09 N/m were reported [33]. Lorentz [43] assumed GF = 75 N/m and Barbat et al. [45] took GF = 90 N/m. In the calculations with the bilinear softening or the rational Bezier curve, the traction at the kink point is always taken as tk = 0.15 ft (after [41]). The shear stiffness is taken as Ts = 0.0 N/m³; companion calculations with other values of Ts showed no difference in the results. The remaining parameters, the tensile strength ft, the initial fracture energy Gf, the weight w (for the rational Bezier curve) and the type of the softening curve, vary throughout the simulations. The behavior of the steel loading plates is simulated by a linear elastic constitutive law with Young's modulus Es = 200 GPa and Poisson's ratio νs = 0.3. If not explicitly stated, no boundary layer reduction is applied.

Error Measures
In order to quantitatively estimate the quality of the simulation results, several error measures are used. A relative error Err0 is used to evaluate a single simulation; here σN^FEM is the nominal strength calculated from the FE results (using Equation (1)) with the maximum force from the simulation. The whole set of N results (usually N = 18) is rated using the mean percentage error Err1 or the mean absolute percentage error Err2.
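Written out, the three measures read as follows (sign convention assumed: overestimation of the experimental strength gives a positive error, which matches the discussion of the results below):

```python
def err0(sigma_fem, sigma_exp):
    """Relative error (%) of a single simulation."""
    return 100.0 * (sigma_fem - sigma_exp) / sigma_exp

def err1(fem, exp):
    """Mean percentage error (%) over the whole result set."""
    return sum(err0(f, e) for f, e in zip(fem, exp)) / len(fem)

def err2(fem, exp):
    """Mean absolute percentage error (%) over the whole result set."""
    return sum(abs(err0(f, e)) for f, e in zip(fem, exp)) / len(fem)
```

Note that Err1 can vanish while Err2 stays large, since over- and underestimations cancel in the mean percentage error.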

Bilinear Softening
The choice of the values of the material parameters in softening that give the best fit is not an easy task. Havlásek et al. [50] calculated an error measure considering six force values for the huge beams (D = 500 mm) and the peak loads for the remaining sizes. Lorentz [43] identified the tensile parameters by simulating medium (D = 93 mm) and large (D = 215 mm) beams with notch to depth ratios α0 = 0.15 and α0 = 0.30. Then he used these values to predict the behaviour of the unnotched medium and large beams, and of the huge (D = 500 mm) unnotched and notched (α0 = 0.15 and α0 = 0.30) specimens. It is interesting to note that he did not simulate the performance of the small (D = 40 mm) beams at all. Grégoire et al. [35] calibrated their model parameters on the smaller beam sizes and used them to simulate the behaviour of the largest specimens.

Huge and Small Beams
The first series of FE calculations is performed using the bilinear softening law (Equation (10)). Here two "extreme" cases are investigated. After some initial studies, two values of the tensile strength are chosen for further calculations: ft = 4.8 MPa and ft = 5.2 MPa. A discrete set of initial fracture energies Gf in the range of 30 to 50 N/m, with an increment of 2 N/m, is assumed. The remaining parameters are kept fixed at their initial values specified in Section 5.1. Figures 5 and 6 present the obtained nominal strengths σN^FEM for the huge (D = 500 mm) and small (D = 40 mm) unnotched and notched specimens, respectively. In the simulations of the huge beams, the best results are obtained for the initial fracture energy Gf = 48 N/m (Err1 = 0.07% and Err2 = 6.20%) and Gf = 42 N/m (Err1 = −0.34%, Err2 = 2.72%) taking the tensile strength ft = 4.8 MPa and ft = 5.2 MPa, respectively. Taking the tensile strength ft = 4.8 MPa, the larger the assumed notch to depth ratio α0, the smaller the initial fracture energy Gf that gives the best results. The same conclusion holds for the results with the tensile strength ft = 5.2 MPa, with the exception of the unnotched beams. It is interesting to remark that the sensitivity of the nominal strengths σN^FEM with respect to the initial fracture energy Gf increases with increasing notch to depth ratio α0; e.g., the error Err0 is between 0.73% and 3.72% for the notch to depth ratio α0 = 0.0 and between −10.13% and 9.76% for α0 = 0.30 (with the tensile strength ft = 5.2 MPa). Calculations of the small beams give the best results for the initial fracture energy Gf = 40 N/m (Err1 = 0.32% and Err2 = 6.91%) and Gf = 32 N/m (Err1 = −0.03%, Err2 = 5.84%) taking the tensile strength ft = 4.8 MPa and ft = 5.2 MPa, respectively. Both simulation sets confirm the previous observation that the "best" initial fracture energy Gf decreases with increasing notch to depth ratio α0.
The sensitivity of the calculated nominal strengths for the small beams with respect to the notch to depth ratio α 0 is smaller compared with the results for the huge beams. The error Err 0 is between −4.17% and 4.90% and between 4.08% and 17.56% for the notch to depth ratio α 0 = 0.0 and α 0 = 0.30, respectively (with the tensile strength f t = 5.2 MPa). Graphically, this fact may be seen by comparing the line inclinations in Figures 5 and 6 for different notch to depth ratios α 0 . Analogous parametric studies for the large (D = 215 mm) and medium (D = 93 mm) beams confirm the observation that the "best" initial fracture energy G f decreases with decreasing specimen size (e.g., the initial fracture energy with f t = 5.2 MPa is found to be G f = 38 N/m for both the large and medium beams).
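The bilinear law itself (Equation (10)) is not reproduced in this section, but its role in the parametric study can be illustrated with a short sketch. The snippet below implements a generic bilinear traction-separation curve in which the initial fracture energy G f is the area under the extended initial branch and the total fracture energy G F is the area under the whole curve; the stress ratio at the kink point is an assumed value, not a parameter taken from the paper.

```python
import numpy as np

def bilinear_softening(w, ft, Gf, GF, sigma_k_ratio=0.25):
    """Generic bilinear traction-separation law (a sketch, not Equation (10)).

    The initial branch is extrapolated so that the triangle under it encloses
    the initial fracture energy Gf; the whole curve encloses the total
    fracture energy GF. sigma_k_ratio sets the stress at the kink point
    (an assumed value, not a parameter from the paper).
    """
    w1 = 2.0 * Gf / ft                   # opening intercept of the initial branch
    sk = sigma_k_ratio * ft              # stress at the kink point
    wk = w1 * (1.0 - sigma_k_ratio)      # opening at the kink (on the initial branch)
    # critical opening chosen so that the total area equals GF
    area_to_kink = 0.5 * (ft + sk) * wk
    wf = wk + 2.0 * (GF - area_to_kink) / sk
    w = np.asarray(w, dtype=float)
    return np.where(w <= wk,
                    ft - (ft - sk) * w / wk,
                    np.maximum(sk * (wf - w) / (wf - wk), 0.0))

# consistency check: numerical integration of the curve recovers GF
ft, Gf, GF = 5.2e6, 42.0, 70.0           # Pa, N/m, N/m (values used for the huge beams)
w = np.linspace(0.0, 2.0e-4, 200001)     # crack opening in m
s = bilinear_softening(w, ft, Gf, GF)
GF_num = float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(w)))
```

With f t = 5.2 MPa and G f = 42 N/m (the best values found above for the huge beams), the numerical integral of the curve recovers the assumed total fracture energy G F = 70 N/m.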

All Beam Geometries
The definition of objective quality measures allows for choosing the best parameter set. Of course, such an analysis should be performed bearing in mind the stochastic nature of the experimental results. The average values of the maximum forces obtained experimentally (which serve to assess the simulation results) can also introduce some errors into the analysis due to the finite (low) number of realizations (specimens tested). In the experiments by Hoover et al. [32], the coefficients of variation were approximately 5% (they can be interpreted as the scatter of the results). However, in order to limit the number of simulations executed, no stochastic analysis is performed here, and the averaged values of the experimental maximum forces are treated as 'perfect' ones.
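The exact formulas for Err 0 , Err 1 and Err 2 are given earlier in the paper and are not repeated here; the sketch below only illustrates one plausible reading, in which Err 0 is the signed relative error of a single beam, Err 1 the average of Err 0 over all beams, and Err 2 its root-mean-square value (all in %). These assumed definitions are consistent with how the measures are used in the discussion, but they are not quoted from the paper.

```python
import numpy as np

def error_measures(sigma_fem, sigma_exp):
    """Assumed forms of the error measures (not quoted from the paper):
    Err0 -- signed relative error per beam, Err1 -- average of Err0,
    Err2 -- root-mean-square of Err0 (all in %)."""
    sigma_fem = np.asarray(sigma_fem, dtype=float)
    sigma_exp = np.asarray(sigma_exp, dtype=float)
    err0 = 100.0 * (sigma_fem - sigma_exp) / sigma_exp
    return err0, float(err0.mean()), float(np.sqrt(np.mean(err0 ** 2)))

# toy example: one beam 5% above and one 5% below the experimental value
err0, err1, err2 = error_measures([1.05, 0.95], [1.00, 1.00])
```

With this reading, a symmetric over- and underestimation yields Err 1 = 0% while Err 2 = 5%, which is why the signed measure Err 1 can be close to zero even when individual deviations remain.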
On the basis of the results from Section 5.3.1, four sets of parameters are chosen to perform calculations with all 18 beams: set S1: f t = 4. Table 2 presents the calculated nominal strengths obtained with sets S1-S4. A graphical comparison of the experimental and numerical nominal strengths is depicted in Figure 7. The following errors are obtained: set S1: Err 1 = 2.02% and Err 2 = 3.59%, set S2: Err 1 = 2.57% and Err 2 = 3.34%, set S3: Err 1 = −2.60% and Err 2 = 3.65%, and set S4: Err 1 = −4.47% and Err 2 = 5.36%. In the calculations with sets S1 and S2, the largest errors are obtained for the small beam with the notch to depth ratio α 0 = 0.3 (Err 0 = 11.22% and Err 0 = 12.97% for sets S1 and S2, respectively). The absolute values of the error Err 0 do not exceed 7% for the remaining geometries. Generally, the nominal strengths are overestimated for the small beams, while the agreement with the experimental outcomes is very good for the other specimens (especially for set S2). On the contrary, the results with sets S3 and S4 generally underestimate the experimental peak loads for the larger beams (especially when parameter set S4 is assumed). The obtained results reveal some problems in determining optimum material parameters for concrete. Sets S1, S2 and S3 give similar errors Err 2 . Sets S1 and S2 overestimate the peak loads, while set S3 underestimates them on average, but the absolute values of the errors Err 1 are similar. Only the calculations with set S4 produce larger values of both errors. An analysis of the nominal strengths obtained from the FE-calculations reported in [41] returns errors Err 1 = −3.76% and Err 2 = 4.43%. These values are larger than the errors obtained with sets S1, S2 and S3, but smaller than the errors from the results with set S4. Taking the material parameters from [41] and using the approach described here (XFEM), much larger errors are obtained: Err 1 = −7.29% and Err 2 = 7.52%. The second comment should be made about the value of the tensile strength f t .
The values between 4.8 MPa and 5.2 MPa are taken here, while Hoover and Bažant [41] assumed the tensile strength equal to f t = 3.92 MPa. On the other hand, Havlásek et al. [50] found the optimum uniaxial tensile strength as f t = 4.984 MPa. Lorentz [43] used a similar value, namely f t = 5.0 MPa. Note also that this high value of the tensile strength corresponds nicely with the nominal strength for very large structures f r,∞ = 5.27 MPa determined in [33].
A comment should be made about the application of XFEM to simulate unnotched beams under bending with only one crack. The exact solution should consider a region in the middle of the bottom edge of the beam with several initial cracks. At the beginning, a region of diffuse damage is formed. Upon increasing the loading force, cracks develop, but one crack dominates (in the case of the problem analysed here, with the axis of symmetry, it is the crack located along this axis). Planas et al. [62] showed numerically that in unnotched beams under bending, several cracks start to develop but only one crack dominates at the peak. Moreover, the errors obtained in calculations of unnotched beams with only one crack defined are similar to the values obtained for the beams with α 0 = 0.025, and they confirm the general trends observed in Section 5.3.1. Even though two different size effect laws can be postulated to describe unnotched (Type I) and notched (Type II) beams, XFEM with this simplified approach is able to capture both phenomena numerically. Therefore, this simplification (one crack instead of a bundle of cracks) is justified. The same simplification was made by Feng and Wu [44]. In their simulations of the unnotched beams they also assumed only one crack starting from the midpoint of the bottom edge of a beam. At the peak, the calculated crack lengths correspond to L crack /D = 0.16-0.18 for the large beams and to 3.9-4.9 cm (L crack /D = 0.08-0.10) for the huge beams. The larger the beam, the smaller the relative crack length obtained [63]. For the small beams a crack is always fully developed (a large horizontal plateau), while for the huge beams a crack is still propagating (no horizontal plateau). In general, at the peak a crack is far from being fully formed.

Boundary Layer
In the next phase, the influence of the boundary layer is examined. Material parameters from sets S1 and S2 are adopted. Figure 9 shows the calculated nominal strengths compared with the experimental values, and the calculated nominal strengths are listed in Table 3. The set of force-crack mouth opening displacement (CMOD) curves for all 18 beams is presented in Figure 10 (set S7), with grey areas between the extreme experimental curves (here the numerical force values are not corrected with the factor C f ). It can be seen that all numerical results fit within the experimental limits. The error measures are equal to: Err 1 = 0.09% and Err 2 = 2.92%, Err 1 = −0.12% and Err 2 = 2.90%, Err 1 = 0.30% and Err 2 = 2.27%, and Err 1 = 0.08% and Err 2 = 2.28% for sets S5, S6, S7 and S8, respectively. In all cases the largest error Err 0 is obtained for the small beam with the notch to depth ratio α 0 = 0.3 and it is about 9%. The errors Err 0 for the remaining beams do not exceed 5.5% in absolute value. Compared with simulations S1-S4, the presence of the boundary layer decreases the error measures, especially for the sets with the tensile strength equal to f t = 5.2 MPa (set S2 versus sets S7 and S8). On the other hand, the input parameters (S1 and S2) already produce relatively small errors. By further parametric studies even better parameters can be found. For instance, FE-calculations with the following parameters (set S9): f t = 5.0 MPa and G f = 48 N/m give the errors Err 1 = 0.67% and Err 2 = 2.69%, comparable with the errors obtained for the sets with a declared boundary layer. Therefore, the definition of the boundary layer cannot be treated as a significant improvement of the results. The simulation results also do not allow for a unique identification of the boundary layer thickness.
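The boundary layer model is described earlier in the paper; as a hedged illustration only, the linear reduction of the fracture energies near a free surface can be sketched as an interpolation from a surface factor β 0 to 1 over the layer thickness b (the interpolation form is assumed here; b = 1 cm and β 0 = 0.6 are the values quoted later for set S11):

```python
def boundary_layer_scale(d, b=0.01, beta0=0.6):
    """Assumed linear reduction of the fracture energies near a free surface:
    scaled by beta0 at the surface (d = 0), growing linearly back to 1.0 at
    depth d >= b. Defaults b = 1 cm and beta0 = 0.6 follow set S11."""
    if d >= b:
        return 1.0
    return beta0 + (1.0 - beta0) * d / b
```

Under this assumed form, the initial and total fracture energies of a cohesive segment at depth d would simply be multiplied by this factor, e.g., G f (d) = boundary_layer_scale(d) · G f .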

Notched Beams
As an alternative approach, notched beams with the notch to depth ratio equal to α 0 = 0.3 are used to determine the fracture parameters. Based upon parametric studies, the following sets are defined: set N1 with f t Figure 11 shows the obtained force-displacement curves. In general, comparable agreement with the experimental outcomes is achieved for all parameter sets. For the smallest beam, the maximum load is obtained with set N4 and the minimum load with set N1, while for the largest beam the opposite holds. The force-displacement diagrams for the small beam also suggest that the total fracture energy assumed here, G F = 70 N/m, is too small. This observation is consistent with the larger value of the total fracture energy (G F = 96.94 N/m) calculated by Hoover and Bažant [33] on the basis of the experimental results. On the other hand, the simulations of all beam sizes with sets N1-N4 generally underestimate the experimental results. They return errors Err 1 = −6.09% and Err 2 = 6.51%, Err 1 = −5.53% and Err 2 = 5.85%, Err 1 = −3.53% and Err 2 = 4.19%, and Err 1 = −2.91% and Err 2 = 4.32%, for sets N1, N2, N3 and N4, respectively.

Exponential Softening
So far, only the bilinear softening law has been used. The same curve was used by Hoover and Bažant [41], who postulated that a bilinear shape of the softening curve is a fundamental property of concrete. They also stated that neither linear nor exponential softening functions allowed fitting the numerical results to the experiments. Nevertheless, Havlásek et al. [50] and Grégoire et al. [35] used exponential relationships in their simulations with isotropic damage models with non-local softening (to be precise, different formulas involving an exponential function were used in the two papers). Lorentz [43] also proposed a formula containing an exponential function to describe the post-peak behavior. In order to clarify this issue, FE-calculations with the exponential softening law (Equation (12)) are performed. The initial fracture energy is equal to G f = 35 N/m (50% of the total fracture energy). The tensile strength is assumed as f t = 5.2 MPa (set S10). The obtained nominal strengths are presented in Figure 12a and in Table 4. The error measures are equal to Err 1 = 4.46% and Err 2 = 4.80%. They are larger than those obtained with sets S1-S3, but comparable with the errors from the original simulations (Hoover and Bažant [41]). The application of the boundary layer model (set S11 with f t = 5.2 MPa, G f = 35 N/m, b = 1 cm and β 0 = 0.6) significantly improves the results. The error measures are equal to Err 1 = 0.61% and Err 2 = 3.10%. A comparison between the numerical and experimental nominal strengths is made in Figure 12b. The family of force-CMOD diagrams is shown in Figure 13. Generally, all numerical curves fall within the experimental limits. Again, the application of the boundary layer is not necessary if better parameters are found. Taking f t = 4.8 MPa and G f = 35 N/m, the errors are calculated as Err 1 = 0.20% and Err 2 = 3.65% (set S12).
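Equation (12) is not reproduced in this section, so the snippet below uses a generic exponential cohesive law σ(w) = f t exp(−w/w c ) as an assumed stand-in. For this form, the area under the initial tangent equals exactly half of the total area under the curve, which is consistent with the statement above that the initial fracture energy G f = 35 N/m is 50% of the total fracture energy.

```python
import numpy as np

ft, Gf = 5.2e6, 35.0        # Pa, N/m (set S10)
GF = 2.0 * Gf               # for this law the total energy is twice Gf
wc = GF / ft                # characteristic opening in m

def exp_softening(w):
    """Assumed exponential cohesive law: sigma = ft * exp(-w / wc)."""
    return ft * np.exp(-np.asarray(w, dtype=float) / wc)

# the initial tangent has slope -ft / wc, reaches zero stress at w = wc and
# encloses the triangle area 0.5 * ft * wc = GF / 2 = Gf
Gf_tangent = 0.5 * ft * wc

# numerical integration of the full curve recovers GF
w = np.linspace(0.0, 30.0 * wc, 300001)
s = exp_softening(w)
GF_num = float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(w)))
```

This 50% relation holds for any pure exponential law, independently of f t , which explains why the ratio between G f and G F quoted in the text is fixed once this softening shape is chosen.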

Bezier Rational Curve
Finally, simulations with the softening law based on a rational Bezier curve (Equation (13)) are executed. Set S13 is defined with the tensile strength f t = 5.2 MPa, the initial fracture energy G f = 35 N/m and the weight w = 4 (parameters chosen based on some initial simulations). The fracture energy G B is equal to 59.52 N/m. The obtained numerical nominal strengths are shown in Figure 14 and listed in Table 4, while the force-CMOD diagrams are depicted in Figure 15. The calculated error measures are Err 1 = 0.38% and Err 2 = 2.51% (values comparable with the improved parameter sets S4-S8). All numerical curves fit within the experimental limits. Interestingly, the use of the boundary layer method (set S14: f t = 5.2 MPa, G f = 35 N/m, w = 4, b = 1 cm, β 0 = 0.9) decreases the error Err 1 to −0.13%, but slightly increases the error Err 2 to 2.55%. Table 4 also lists the results for the exponential (sets S10-S12) and rational Bezier (sets S13-S14) softening curves.
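Equation (13) and the exact control-point layout are not reproduced here, so the following sketch only illustrates one possible rational quadratic Bezier softening curve: the start point lies on the stress axis, the middle control point (carrying the weight w) at the intercept of the initial tangent (so that the initial branch encloses G f ), and the end point at an assumed critical opening w f = 4 w 1 . With this guessed geometry the computed area G B need not match the value of 59.52 N/m reported above; the code only demonstrates the construction and the numerical evaluation of the fracture energy.

```python
import numpy as np

ft, Gf, wgt = 5.2e6, 35.0, 4.0   # Pa, N/m, Bezier weight (set S13 values)
w1 = 2.0 * Gf / ft               # opening intercept of the initial tangent
wf = 4.0 * w1                    # critical opening -- an assumed value

# Control points (a guessed layout, not Equation (13)): P0 on the stress
# axis, P1 at the initial-tangent intercept, P2 at the critical opening.
P0 = np.array([0.0, ft])
P1 = np.array([w1, 0.0])
P2 = np.array([wf, 0.0])

def bezier_point(t):
    """Rational quadratic Bezier with weight wgt on the middle control point."""
    b0, b1, b2 = (1.0 - t) ** 2, 2.0 * wgt * t * (1.0 - t), t ** 2
    return (b0 * P0 + b1 * P1 + b2 * P2) / (b0 + b1 + b2)

# fracture energy as the area under the curve, i.e., the integral of
# sigma dw evaluated parametrically with the trapezoidal rule
t = np.linspace(0.0, 1.0, 20001)
pts = np.array([bezier_point(ti) for ti in t])
x, y = pts[:, 0], pts[:, 1]
GB = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
```

Because the tangent at the start point aims at the middle control point, the initial branch of any such curve encloses G f , while increasing the weight w pulls the curve towards the axis and reduces the total area G B towards G f .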

Conclusions
Numerical simulations of unnotched and notched geometrically similar concrete beams of different sizes and different notch to depth ratios have been presented. The obtained results have been compared with experimental data [32]. Two error measures have been defined and used to quantitatively assess the calculated maximum forces. The influence of the softening law has been investigated. Three alternatives have been examined: bilinear, exponential and rational Bezier curves. All analysed softening curves turn out to be equally good: they give results with comparable error measures, consistent with the experiments. This conclusion contradicts the hypothesis of the supremacy of the bilinear definition postulated by Hoover and Bažant [41]. At the same time, the use of different softening laws results in different values of the best initial fracture energies. This fact reveals some limitations of the initial fracture energy definition when a nonlinear relationship is assumed instead of a piecewise linear function. The linear reduction of the initial and total fracture energies in the boundary layer did not significantly improve the results. The assumed value of the total fracture energy G F = 70 N/m, based on the analysis of the experimental curves performed by Hoover and Bažant [33], was correct. The obtained force-displacement diagrams fit within the experimental curves.
Simulations with the cohesive crack model were the first step. The aim of the ongoing research is to define an equivalence of the initial fracture energy definition for different softening laws, especially for non-linear relationships. This will lead to a more unique definition of this quantity and to a better understanding of the fracture process.