by Dianna Deeney

QDD 015 Using the Pareto Principle and Avoiding Common Pitfalls

The Pareto Principle can be likened to Murphy’s Law and the Peter Principle: it’s a curious phenomenon. So, how did it make its way into quality? If we use it to make decisions, there are some common pitfalls that can lead to delays in fixing a problem or even misdirect our efforts. So, what is it, and how can we use it for design?

Get to know the Pareto Chart. If it’s built and applied properly, it can help us prioritize root cause analysis, prioritize new design features based on user input, or tackle a problem that just seems too big to even start (to name a few examples).

We review the Pareto Principle, what a Pareto Chart is, what we need to consider when building one, and how we need to be careful when interpreting its results.

 

View the Episode Transcript

NSW Government & Clinical Excellence Commission. “Pareto Charts & 80-20 Rule.”

To conclude:

  • A Pareto Chart is a useful tool for planning when trying to address the root cause of a problem or when prioritizing. It’s been used for many years and can help us decide on a strategy of action.
  • The results of a Pareto Chart might not be textbook. It is based on a phenomenon, not a proven scientific law. It’s more of a compass than a statistical result.
  • We need to be careful to construct it with the right bins: low enough in the causal chain for us to act on, and following the MECE Principle.
  • We need to think about and interpret its results to make a decision. We can adjust for categories that don’t carry the same weight in severity, occurrence, or even difficulty to resolve. Some ways we talked about addressing this are applying a weight or factor, or creating a 3-D chart.

Citations

Something just for fun: “12 Fun Laws, Rules and Principles You Really Ought to Know.” Oxford Royale Academy, www.oxford-royale.com/articles/12-fun-laws-principles/

Video: Harvard X introduces the basic construction and use of a Pareto Chart. – Harvard X & Institute for Healthcare Improvement. (2017). “How to Use a Pareto Chart.” YouTube. https://youtu.be/ltBw6kwD3_o

Article: Ms. Bhalla explores the common issues when using a Pareto Chart to make decisions. – Bhalla, Aditya. “Don’t Misuse the Pareto Principle: Four Common Mistakes Can Lead You to the Wrong Conclusions.” Six Sigma Forum Magazine, vol. 8, no. 3, May 2009, pp. 15-18.

Presentation: Mr. Stang added a z-axis to the Pareto Chart to evaluate other project parameters (like project cost and difficulty to implement) to make more informed decisions. – Stang, Eric. “Quarterbacking a Quality Risk Decision.” World Conference on Quality and Improvement, ASQ, May 26, 2021.

Article: Mr. Stevenson explores ways to multiply Pareto results by factors to address other project parameters (like project cost and difficulty to implement). – Stevenson, William J. “Supercharging Your Pareto Analysis: Frequency Approach Isn’t Always Appropriate.” Quality Progress, Oct. 2000, pp. 51-55.

Episode Transcript

The Pareto Principle can be likened to Murphy’s Law and the Peter Principle: it’s a curious phenomenon. So, how did it make its way into quality? If we use it to make decisions, there are some common pitfalls that can lead to delays in fixing a problem or even misdirect our efforts. So, what is it, and how can we use it for design? Coming up after this brief introduction…

Hello and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. My name is Dianna. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in, and then join the conversation at qualityduringdesign.com.

A Pareto Chart is a pretty commonly used tool in quality, and also in project management, corporate finance, and economics. In quality, it is one of the seven basic quality tools that Mr. Juran proposed. In Six Sigma, it is used a lot in the Measure phase. And there is an ASQ magazine called Quality Progress that regularly features a comic strip called “Mr. Pareto Head” (which is a play on “Mr. Potato Head,” or so I read in an interview with the artist). The term “Pareto Chart” was coined by Mr. Juran and is based on the Pareto Principle, which is a general observation. It’s not mathematically sound or scientifically accurate. It’s a curious thing that seems to happen repeatedly, but it’s not guaranteed to happen. It’s part of a list of other fun laws like the ones I mentioned in the introduction (Murphy’s Law and the Peter Principle). There’s a fun list that I found on the internet about these laws, so I’ll attach a link to it in this podcast blog.

The Pareto Principle is named after an Italian economist, Vilfredo Federico Damaso Pareto. He published a paper in the 1890s showing that about 80% of the land in Italy was owned by 20% of the population. Lore has it that he started seeing that ratio everywhere, even down to the pea plants in his garden. Other people noticed that the ratio seemed to carry through to other things, too, and it became the 80/20 rule. So the Pareto Principle is, generally, that 80% of the output is caused by 20% of the input. Or, 80% of the consequences come from 20% of the causes, and that 20% of the causes is dubbed “the vital few.”

Why would we want to apply this in our engineering and design processes? Well, it’s a tool to separate the “vital few” factors from the “trivial many”. Or in plain speak, we want to spend the least amount of effort we need in order to make the biggest effect. A Pareto Chart helps us to identify if we’ve got a cause or a short list of causes that we can work hard to solve to fix most of the problems.

Typical use cases of a Pareto chart are:

  • time spent on tasks
  • sales revenue from the number of customers
  • problems versus defects
  • number of complaints from the number of customers
  • percent of funding from the number of investors

The curious thing about the Pareto Principle is that it spans lots of things. But remember, it’s just a principle or a curiosity.

What is a Pareto Chart? It’s a type of histogram, but it ranks things. It’s a combination bar chart and line chart. The bar chart shows a count: how often something happens, or how often a cause creates an event. Each bar represents the count for a bin or bucket, like an age range or a cause. The bars are organized from most frequently occurring to least, so we can imagine the chart stepping down: peaked at the left axis and trailing downward. Then a line graph is superimposed on top of that histogram, showing the cumulative percentage of occurrences. The line arches up to 100% at the right side of the graph. In this podcast blog, at qualityduringdesign.com, I’ll provide a link to a Harvard X YouTube video about how to construct one, and I’ll also include a picture of a Pareto Chart.
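
For readers who want to try building one, here is a minimal sketch in Python (pandas and matplotlib), not from the episode, of the chart just described: bars sorted from most to least frequent, with a cumulative-percentage line on a second axis. The defect categories and counts are hypothetical.

```python
# Minimal Pareto Chart sketch (hypothetical data, not from the episode).
import pandas as pd
import matplotlib.pyplot as plt

counts = pd.Series({"Scratches": 42, "Misalignment": 27, "Wrong label": 11,
                    "Porosity": 6, "Other": 4}).sort_values(ascending=False)
cum_pct = counts.cumsum() / counts.sum() * 100  # cumulative percentage of occurrences

fig, ax1 = plt.subplots()
ax1.bar(counts.index, counts.values, color="steelblue")  # ranked bars, most to least
ax1.set_ylabel("Count")

ax2 = ax1.twinx()  # second y-axis for the cumulative % line
ax2.plot(counts.index, cum_pct.values, color="darkred", marker="o")
ax2.axhline(80, color="gray", linestyle="--")  # 80% reference line
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 105)

ax1.set_title("Pareto Chart of defect categories (hypothetical)")
fig.tight_layout()
plt.show()
```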

Something people like about the Pareto Chart is that you don’t need to know a lot about statistics to construct it. However, it is not a thoughtless exercise. We need to build the chart and interpret the data correctly in order to make the right decision. If not, it could lead to delays in fixing a problem because of a misunderstanding, or we could misdirect our efforts in trying to solve the problem.

Something that goes a long way in creating a good Pareto Chart is the design of its histogram, specifically the design of the data collection categories, or bins.

  • We need to make sure that the bins and categories align with meaningful activities that can be done to address the issue. A problem that practitioners have noticed is that teams don’t drill down far enough to get to the root cause.
  • The bins for a Pareto Chart also need to follow the MECE Principle, which is an acronym for Mutually Exclusive and Collectively Exhaustive. What it really means is that a data point should only be able to go into one bin, and that we have enough bins to cover the entire scope of the problem. For example, if we’re creating a histogram or Pareto Chart about people’s ages, we wouldn’t want to label one bin 0 to 20 and the next bin 20 to 40. If one of our results was a 20-year-old, which bin would they go in? Those bins are not mutually exclusive (a minimal binning sketch follows this list).
  • As far as covering the entire scope of the problem, we can think about counting foods: we would want to ensure that we have a bin for each food group present. We wouldn’t want categories that cover only vegetables, grains, and dairy. Where would the fruits go? Including all the subgroups would be collectively exhaustive.
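
As a quick illustration of mutually exclusive bins, here is an assumed example (not from the episode) using pandas: each value is assigned to exactly one interval, so a 20-year-old can only be counted once.

```python
# Mutually exclusive age bins (hypothetical data): each age lands in exactly one bin.
import pandas as pd

ages = pd.Series([5, 19, 20, 33, 40, 41, 58, 72])
edges = [0, 20, 40, 60, 80]                      # intervals are [0, 20], (20, 40], ...
labels = ["0-20", "21-40", "41-60", "61-80"]
binned = pd.cut(ages, bins=edges, labels=labels, include_lowest=True)
print(binned.value_counts().sort_index())        # a 20-year-old falls only in "0-20"
```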
Specific to Pareto Chart bins, there are some assumptions that need to be made:

  • each category or bin should be of equal importance
  • the potential occurrence of each category should be the same
  • the risks, and their severities and occurrences, are the same

If the Pareto Chart meets these assumptions, then we can more confidently take it at face value. If not, there are some other things that we can do to make sure that we are analyzing the data properly.

When we’re interpreting the results of a Pareto chart, we have to be careful of just looking at ‘frequency’.

  • We need to note that our 80%/20% mix may not be exactly 80/20. We could end up with 78% or 83% of the issues being caused by 23% of the causes. And another thing to note: the mix may not add up to 100, and that’s OK.
  • A mistake that can happen when we’re using a Pareto Chart to make decisions is picking factors as dominant when they’re not. If we get a flat histogram (in other words, there isn’t a lot of variation in the counts of our categories), then we can’t conclude that any one category is dominant. We may need to re-classify the data or investigate other factors. Maybe our data didn’t follow the MECE Principle.
  • We need to make sure that we’re asking the right question: “Are 20% of the factors contributing to 80% of the issues?” The wrong question is, “How many factors contribute to 80% of the issues?” Remember that we’re looking for the ‘vital few’ causes, so that our initial work on the problem gets us the furthest toward our goal.
  • Another mistake is trying to address only the top contributor anyway, when the 80/20 rule doesn’t apply. Practitioners have noticed that you may not get the results you want: you may be working really hard with your team on that top-priority problem and not getting very far in fixing it.
  • Another mistake is focusing only on ‘frequency’ and ignoring the cost or effort to resolve the problem. This gets back to our assumptions when creating the Pareto Chart. If our categories don’t have equal importance, or the potential occurrence of each category is not the same, there are some things we can do (a minimal weighted-Pareto sketch follows this list):
      • To adjust for categories with different levels of severity or importance, we can apply a weight: multiply the results by a factor to get a weighted frequency, then reprioritize and redo our Pareto Chart. The weight can be qualitative, based on assumptions or someone’s judgment call, or it can be based on the cost savings when the fix is implemented.
      • To adjust for categories with different likelihoods of occurrence, we can calculate the rate of the issue and turn frequencies into rates of occurrence.
      • We can combine both severity and occurrence factors if we have a situation with both.
      • Lastly, we can create a 3-D Pareto Chart where we include other factors that are important to us on a third, z-axis. Instead of multiplying by a factor, we get a visual of the factor across our different categories and can decide on the best course of action from there.
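
Here is the weighted-frequency idea as a minimal sketch (hypothetical categories and judgment-call weights, not from the episode): multiply each raw count by a severity weight, re-rank, and recompute the cumulative percentage.

```python
# Weighted Pareto sketch: re-rank categories by count x severity weight (hypothetical data).
import pandas as pd

counts = pd.Series({"Cosmetic scratch": 50, "Loose connector": 20,
                    "Cracked housing": 12, "Seal leak": 8})
severity = pd.Series({"Cosmetic scratch": 1, "Loose connector": 3,
                      "Cracked housing": 5, "Seal leak": 9})  # qualitative weights

weighted = (counts * severity).sort_values(ascending=False)   # weighted frequency
cum_pct = (weighted.cumsum() / weighted.sum() * 100).round(1)

summary = pd.DataFrame({"count": counts, "weight": severity,
                        "weighted": weighted, "cumulative %": cum_pct})
print(summary.sort_values("weighted", ascending=False))
```

The same pattern works for rates: divide each count by an exposure measure (such as units produced or hours of operation) before ranking.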
To conclude:

  • A Pareto Chart is a useful tool for planning when trying to address the root cause of a problem. It’s been used for many years and can help us decide on a strategy of action.
  • The results of a Pareto Chart might not be textbook. It is based on a phenomenon, not a proven scientific law. It’s more of a compass for a root cause than a statistical result.
  • We need to think about and interpret its results to make a decision.
  • Like building histograms, we need to be careful to construct it with the right bins: low enough in the causal chain for us to take action against, following the MECE Principle, and adjusting for categories that don’t carry the same weight in severity, occurrence, or even difficulty to resolve.

What can you do with what we’ve been talking about today? Get to know the Pareto Chart. If it’s built and applied properly, it can help us prioritize root cause analysis for things like complaint investigations or V&V failures, prioritize new design features based on user input, and tackle a problem that just seems too big to even start.

Please visit this podcast blog and others at qualityduringdesign.com. Subscribe to the weekly newsletter to keep in touch. If you like this podcast or have a suggestion for an upcoming episode, let me know. You can find me at qualityduringdesign.com, on LinkedIn, or you can leave me a voicemail at 484-341-0238. This has been a production of Deeney Enterprises. Thanks for listening!

Filed Under: Quality during Design, The Reliability FM network

About Dianna Deeney

Dianna is a senior-level Quality Professional and an experienced engineer. She has worked over 20 years in product manufacturing and design and is active in learning about the latest techniques in business.

Dianna promotes strategic use of quality tools and techniques throughout the design process.
