Thoughts on ALARP
Abstract
Chris and Fred discuss another question from a listener about 'as low as reasonably practicable,' or ALARP. This term is used a lot in risk management and analysis … but what does it actually mean?
Key Points
Join Chris and Fred as they discuss the safety concept of reducing risks and the basic approach implied by 'ALARP,' which stands for 'as low as reasonably practicable.' This is an often-quoted 'goal' for risk management frameworks, where we look at a system and try to reduce risks to ALARP. But what does this mean? … what is 'practicable?'
Topics include:
- What is 'reasonably practicable?' This changes over time. At one point in time, driving while intoxicated was not a 'big deal.' Now it is illegal, and you can go to jail for it. At one point in time, there were no real limits on what we could do to the environment. That is not the case anymore. So what is 'reasonably practicable' changes as society changes. And of course, 'reasonable' is subjective. What is reasonable to you might not be reasonable to someone else … or to the jury working out how much your company needs to fork out in damages.
- ALARP often devolves into us continually asking 'are we there yet?', which in turn becomes a 'box-ticking' exercise. Why? Because when we keep asking 'are we there yet?' we simply convey impatience. And if we are impatient, we want to 'be there' already. So impatient engineers, designers, managers and manufacturers simply want to make their decision defensible, not genuinely balance risks and work out what society deems 'reasonably practicable.' We stop looking at what might go wrong and start hoping that everything is right.
- … so it sometimes makes us use this 'business case' approach to everything. But by then it is too late. Most organizations that do really well in the fields of quality, reliability and safety invest a 'tiny' amount of resources into making sure their first design is a quality/reliable/safe design. Why are these resources 'tiny?' Because it often costs next to nothing to incorporate key design features into the first iterations of a design, as opposed to having to redesign them in later. So we can incorporate lots of quality, reliability and safety design features from the start without paying much at all. The problem is that building a business case for each individual design characteristic costs more than simply making it happen in the first place.
- … which leads into balancing risks and responding to known risks. Whenever we are ticking boxes on a checklist to come up with a defensible reason why we don't have to worry about a risk anymore, we invariably focus on known risks and not unknown risks. And it can get even worse than that. The blowout preventer on the Deepwater Horizon offshore oil drilling rig that failed to stop a catastrophic oil spill was 'built to standards.' But the standards were out of date and didn't include a well-known failure mechanism that pushes the drill string to the side under pressure. There was a rush to be able to say it was safe, as opposed to making sure it was safe.
- 'Safe' is simply a word that says we can use or sell something. Nothing else. We like to think that something that is 'safe' is unlikely to cause harm. This is not the case. If something is declared 'safe,' it means it is able to be used or sold. And that is different.
- Many organizations don't worry about 'testing' that something is safe because they have 'designed' it to be safe. This sounds shocking. But in reality, the organizational cultures that do best at things like quality, reliability and safety are those that rarely measure how well they are doing. They invest all their time into improving the design of their system, not into measuring how compliant that design is.
Enjoy an episode of Speaking of Reliability, where you can join friends as they discuss reliability topics. Join us as we discuss topics ranging from design for reliability techniques to field data analysis approaches.