This entry describes the differences between systematic and random failures. It goes on to explain the relevance of these types of failure to hardware and software.
Faults, which lead to failures within a system, can be classified as one of two types:
• Random Faults
• Systematic Faults
Random Faults are due to physical causes and apply only to the simple hardware components within a system. This type of fault is caused by effects such as corrosion, thermal stress and wear-out. Because of their random nature, statistical information about this type of fault can be derived from testing and historical data. Thus the average probability, and hence the risk, associated with the occurrence of a random fault can be calculated.
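As a minimal sketch of how such a calculation might look, the following assumes a constant failure rate (the exponential life model commonly used for random hardware failures); the component name and figures are hypothetical, not taken from the text:

```python
import math

def failure_probability(failure_rate_per_hour: float, mission_hours: float) -> float:
    """Probability of at least one random failure during the mission,
    assuming a constant failure rate (exponential life model):
    P = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-failure_rate_per_hour * mission_hours)

# Hypothetical component: failure rate of 1e-5 failures/hour,
# assessed over a 1,000-hour mission.
p = failure_probability(1e-5, 1_000)
```

Under these assumptions the result (roughly a 1% chance of failure over the mission) could then feed into a quantified risk assessment; real assessments would draw the failure rate from test or field data for the actual component.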
Systematic Faults are produced by human error during system development and operation. They can be created at any stage of the system’s life, including specification, design, manufacture, operation, maintenance and decommissioning. Once a systematic fault has been created, it will reappear whenever exactly the same circumstances arise, until it is removed. However, it is difficult to predict the occurrence of systematic faults and their effect on the safety of a system, because it is difficult to predict when the same “circumstances” will arise.
Failures of simple hardware are primarily random rather than systematic in nature. While hardware can be subject to systematic failures, the low complexity of simple hardware means that its failures are predominantly random. However, this is changing with the growing complexity of processors and the use of Application-Specific Integrated Circuits (ASICs). The use of hardware within safety critical systems significantly predates the use of software within safety critical systems. Historically, this led to safety critical systems being assessed using quantified risk assessment based upon statistical calculations of failure rates.
All software faults are systematic, so demonstrating the safety of software relies upon assessing the likelihood of this type of fault. Software within safety critical systems is growing in both size and extent of use, and the risks associated with software systematic faults are therefore becoming more prevalent. The level of authority given to, and the complexity associated with, software within safety critical applications make it extremely important to be able to assess and argue about the effects of software on system safety. Because it is not possible to statistically predict the probability of systematic faults, the associated risks for software cannot be quantified. Instead, most current approaches to arguing the acceptability of software are based upon appeal to the suitability of the development processes followed, as recommended by standards, and upon the development of a software safety argument.