Predictive Failure Analysis (PFA) refers to computer mechanisms that analyse trends in corrected errors to predict future failures of hardware components and proactively enable mechanisms to avoid them. Predictive Failure Analysis was originally the term for a proprietary IBM technology for monitoring the likelihood of hard disk drives to fail, although the term is now used generically for a variety of technologies for judging the imminent failure of CPUs, memory and I/O devices. See also first failure data capture.
Disks
IBM introduced the term PFA and its technology in 1992 with reference to its 0662-S1x drive (1052 MB Fast-Wide SCSI-2 disk which operated at 5400 rpm).
The technology relies on measuring several key (mainly mechanical) parameters of the drive unit, for example the flying height of the heads. The drive firmware compares the measured parameters against predefined thresholds and evaluates the health status of the drive. If the drive appears likely to fail soon, the system sends a notification to the disk controller.
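The sketch below illustrates this kind of threshold-based health evaluation in Python. The parameter names, threshold values and notification path are hypothetical illustrations of the general technique, not IBM's actual firmware interface.

```python
# Illustrative sketch only: parameters and thresholds are hypothetical,
# not IBM's actual PFA firmware values.
from dataclasses import dataclass

@dataclass
class DriveParameters:
    head_flying_height_nm: float   # measured head flying height, nanometres
    spin_up_time_ms: float         # time for the spindle to reach speed
    reallocated_sectors: int       # sectors remapped due to media defects

# Hypothetical predefined thresholds the firmware compares against.
THRESHOLDS = {
    "head_flying_height_nm": 10.0,   # below this, head-crash risk rises
    "spin_up_time_ms": 9000.0,       # above this, the spindle motor may be degrading
    "reallocated_sectors": 50,       # above this, media wear is accelerating
}

def evaluate_health(p: DriveParameters) -> bool:
    """Return True if the drive appears likely to fail soon."""
    return (
        p.head_flying_height_nm < THRESHOLDS["head_flying_height_nm"]
        or p.spin_up_time_ms > THRESHOLDS["spin_up_time_ms"]
        or p.reallocated_sectors > THRESHOLDS["reallocated_sectors"]
    )

def poll_and_notify(p: DriveParameters) -> None:
    # The original PFA interface was binary and one-way: the drive either
    # raised a notification to the controller or stayed silent.
    if evaluate_health(p):
        print("PFA notification: drive predicted to fail soon")

poll_and_notify(DriveParameters(8.5, 7200.0, 12))
```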
The major drawbacks of the technology included:
- the binary result - the only status visible to the host was the presence or absence of a notification
- the unidirectional communication - only the drive firmware could send a notification; the host could not query the drive's health
The technology merged with IntelliSafe to form the Self-Monitoring, Analysis, and Reporting Technology (SMART).
Processor and Memory
High counts of intermittent RAM errors corrected by ECC can be predictive of future DIMM failures, so automatic offlining of affected memory and CPU caches can be used to avoid future errors. For example, under the Linux operating system the mcelog daemon will automatically remove from use memory pages showing excessive corrected errors, and will remove from use processor cores whose caches show excessive correctable errors.
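A minimal sketch of this counting-and-offlining approach follows, assuming a hypothetical per-page corrected-error threshold and sliding time window; it is an illustration of the idea, not mcelog's actual configuration or kernel interface.

```python
# Illustrative sketch of threshold-based page offlining, in the spirit of
# what a daemon such as mcelog does; threshold, window and offlining call
# are hypothetical simplifications.
import time
from collections import defaultdict, deque

CE_THRESHOLD = 10            # corrected errors tolerated per page...
WINDOW_SECONDS = 24 * 3600   # ...within this sliding time window

corrected_errors = defaultdict(deque)   # page frame number -> event timestamps
offlined_pages = set()

def offline_page(pfn: int) -> None:
    # A real daemon would ask the kernel to soft-offline the page; here we
    # only mark it as removed from use.
    offlined_pages.add(pfn)
    print(f"page frame {pfn:#x} removed from use after excessive corrections")

def record_corrected_error(pfn: int) -> None:
    """Record one ECC-corrected error for a page and offline the page if the
    count within the window exceeds the threshold."""
    now = time.time()
    events = corrected_errors[pfn]
    events.append(now)
    # Drop events that fell out of the sliding window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    if len(events) > CE_THRESHOLD and pfn not in offlined_pages:
        offline_page(pfn)

# Example: the same page reports corrected errors repeatedly.
for _ in range(12):
    record_corrected_error(0x1a2b3)
```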
See also
- mcelog - Linux daemon for processing x86 machine checks for predictive failure analysis