Introduction
The presence of cracks in hydrocarbon pipelines poses one of the most complex challenges to structural integrity, not only because of their potential for brittle failure but also because they are difficult to characterize accurately and are sensitive to real operating conditions.
These cracks can originate from various mechanisms: manufacturing defects in longitudinal welds, stress corrosion cracking (SCC), fatigue from cyclic pressure, or a combination of these. In all cases, their technical evaluation requires rigorous analysis, especially when they appear in high-consequence areas or in sections where previous incidents have been recorded.
Our Vision: Towards a Non-Deterministic Approach
At GIE, we believe that complementing traditional deterministic analyses with probabilistic tools enriches the understanding of risk associated with crack-type defects.
Currently valid technical standards—such as API 579 Part 9—enable evaluations based on single values: depth, length, thickness, yield strength, fracture toughness. This logic has been the basis for integrity management for decades. However, we know that all these parameters are subject to uncertainty, both due to the origin of the data and the natural variability of the system.
Incorporating this variability is not about replacing the regulations, but rather using them with greater awareness of the statistical context of each variable.
Where Does Uncertainty Come From?
Each of the critical parameters in the crack assessment model can be represented as a random variable instead of an exact constant:
Crack depth and length: Derived from internal inspections (ILI) like UTCD, whose technical specifications declare typical errors of ±10% of the wall thickness, at 80% confidence.
Wall thickness and outer diameter: Both have manufacturing tolerances specified in API 5L, which allow their dispersion to be modeled as normal distributions.
Specified Minimum Yield Strength (SMYS): Although the minimum value specified by API 5L is often used, it can be represented with a distribution that reflects the actual statistical dispersion of the batch.
Fracture Toughness (KIC): Probably one of the variables with the highest uncertainty, which can come from Charpy tests, typical empirical values by steel type, or historical results from destructive tests.
These sources of variability are recognized in standards such as API 1176, which highlight the impact of uncertainty and measurement error on the reliability of assessments.
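As an illustration, the input distributions described above can be sketched in Python. All numeric values below (nominal depth, wall thickness, yield strength, toughness, and their spreads) are hypothetical placeholders chosen for the example, not data from any specific inspection or batch:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000  # number of Monte Carlo samples

# ILI sizing error: ±10% of wall thickness at 80% confidence.
# For a two-sided 80% band on a normal distribution, z ≈ 1.2816,
# so sigma = 0.10 * WT / 1.2816.
wt_nom = 7.9  # nominal wall thickness, mm (hypothetical)
sigma_depth = 0.10 * wt_nom / 1.2816

depth  = rng.normal(loc=3.0,    scale=sigma_depth,   size=N)  # reported depth, mm
length = rng.normal(loc=50.0,   scale=5.0,           size=N)  # reported length, mm
wt     = rng.normal(loc=wt_nom, scale=0.03 * wt_nom, size=N)  # API 5L tolerance (assumed sigma)
smys   = rng.normal(loc=415.0,  scale=20.0,          size=N)  # yield strength, MPa (assumed batch spread)
kic    = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=N)  # toughness, MPa·sqrt(m)
```

A lognormal is used for toughness because it is strictly positive and right-skewed, which is a common (though not mandatory) modeling choice for fracture toughness data.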
Monte Carlo Simulations: From Point Value to Result Distribution
Instead of entering fixed values into the Failure Assessment Diagram (FAD) model, at GIE we have developed a tool that implements Monte Carlo simulations, where each parameter is represented as a statistical distribution.
In each iteration, values for depth, length, thickness, KIC, σ_y, etc., are randomly sampled according to their respective distributions, and it is evaluated whether the point (Kr, Lr) falls within the acceptable domain of the FAD according to API 579 Part 9.
The result is not a single point, but a statistical cloud of possible scenarios. Those that exceed the acceptance limits are interpreted as failures.
The error in the input data naturally propagates to the result: because the input variables are dispersed, the simulated failure pressure also follows a distribution. That distribution need not be computed explicitly; the probability of failure is simply the fraction of simulated scenarios in which the failure pressure falls below the operating pressure.
Thanks to improvements in current computing capacity, we can run millions of evaluations in seconds or minutes, within timeframes compatible with any integrity analysis program.
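The sampling loop described above can be sketched as follows. This is a simplified, vectorized illustration, not GIE's actual tool: the FAD curve is the standard API 579 Level 2 expression, but the stress-intensity factor uses a deliberately simplified sigma·sqrt(pi·a)·Y surface-crack formula and a net-section reference stress, and every distribution parameter is a hypothetical placeholder:

```python
import numpy as np

def fad_limit(lr):
    """API 579 Level 2 FAD curve: limiting Kr as a function of Lr."""
    return (1.0 - 0.14 * lr**2) * (0.3 + 0.7 * np.exp(-0.65 * lr**6))

def pof_monte_carlo(n=1_000_000, seed=0):
    """Sketch of a Monte Carlo FAD evaluation with hypothetical inputs."""
    rng = np.random.default_rng(seed)

    # Sampled inputs (all distribution parameters are illustrative)
    a    = np.clip(rng.normal(3.0e-3, 0.6e-3, n), 1e-4, None)  # crack depth, m
    wt   = rng.normal(7.9e-3, 0.24e-3, n)                      # wall thickness, m
    sy   = rng.normal(415.0, 20.0, n)                          # yield strength, MPa
    kmat = rng.lognormal(np.log(100.0), 0.15, n)               # toughness, MPa·sqrt(m)

    p_op  = 7.0    # operating pressure, MPa (hypothetical)
    d_out = 0.508  # outer diameter, m

    sigma_h = p_op * d_out / (2.0 * wt)  # hoop stress (Barlow), MPa

    # Simplified stress intensity: flat-plate edge-crack factor Y = 1.12
    k_applied = 1.12 * sigma_h * np.sqrt(np.pi * a)  # MPa·sqrt(m)

    # Simplified reference stress: hoop stress amplified by net-section loss
    sigma_ref = sigma_h / (1.0 - a / wt)

    kr = k_applied / kmat
    lr = sigma_ref / sy
    lr_max = 1.0  # conservative cutoff (material-dependent in the standard)

    # A scenario fails if its (Kr, Lr) point leaves the acceptable FAD domain
    failed = (kr > fad_limit(lr)) | (lr > lr_max)
    return failed.mean()  # probability of failure = fraction of failed scenarios
```

The probability of failure is then `pof_monte_carlo()`; with NumPy vectorization, a million iterations of this kind run in well under a second on ordinary hardware.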

Figure 1: Crack Evaluation by Monte Carlo Simulation
This visualization shows that even when the average case is acceptable, a fraction of the simulated scenarios may fall outside the safe domain.
Risk Accumulation: When Many Small PoFs Make a Large One
One of the advantages of the probabilistic approach is that it allows for the estimation of the accumulated risk in a pipeline segment containing multiple anomalies. This is especially relevant when hundreds of minor defects are combined, each with a low individual probability of failure.
In this context, the probability that at least one will fail within a segment is estimated as:
$$ POF_{Segment} = 1 - \prod_{i=1}^{n} (1 - POF_i) $$
Where:
n is the number of defects in the segment,
POF_i is the individual probability of failure for each crack.
This phenomenon cannot be captured by traditional deterministic evaluations that analyze each defect in isolation. In contrast, the probabilistic approach allows for the detection of critical zones due to defect density, even if none of them are, by themselves, critical.
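This combination rule (which, as written, assumes the defects fail independently) is straightforward to compute. The defect count and individual PoF values below are hypothetical, chosen only to show how many small probabilities accumulate:

```python
def segment_pof(pofs):
    """Probability that at least one of the defects fails:
    PoF_segment = 1 - prod(1 - PoF_i), assuming independence."""
    survival = 1.0
    for p in pofs:
        survival *= (1.0 - p)
    return 1.0 - survival

# Hypothetical example: 200 small cracks, each with PoF = 1e-4
pofs = [1e-4] * 200
print(segment_pof(pofs))  # ≈ 0.0198: individually negligible, jointly ~2%
```

Two hundred defects that would each pass an isolated screening at PoF = 10⁻⁴ jointly yield a segment PoF near 2%, which is exactly the density effect the deterministic defect-by-defect view misses.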
Conclusion
Current standards remain the basis for crack assessment. But when a deeper analysis is required—due to defect density, operating context, or regulatory or corporate requirements—the probabilistic approach provides an additional layer of technical understanding, which allows for better-informed decisions.
More importantly, it allows for the identification of unacceptable risk scenarios that would otherwise not have been considered.
At GIE, we develop and implement this methodology as part of our integrity analysis tools, applicable to the evaluation of crack-type defects detected by ILI.