by Dianna Deeney

QDD 089 Next Steps after Surprising Test Results

During product development, we’re consistently looking for ways to learn more about the product so we can make design decisions. Some of that learning comes from testing.

What do we do when our test results are…surprising?

We talk about some next steps I typically take when test results surprise us.

View the Episode Transcript

Some steps to get us started after surprising test results: what did we learn?

  1. Revisiting the purpose. Did we answer the question we originally started with, or not?
  2. Understanding failure modes at test vs. what we expected. We can go back to our definition of a failure mode from requirements and our FMEAs (failure mode and effects analyses). How do our test results compare to those? What have we learned?
  3. Finding a root cause by following a process, with the help of FMEA causes – things we thought could cause the failure mode. Does the actual cause match one we listed? What has changed? (See the sketch after this list.)
  4. Deciding what to do next, depending on where we are in the development process. We also refer back to the FMEA and update it based on our test results. Has our risk analysis changed? Has our concluding risk statement changed? We may find that it has, in which case we need to assess what to do about that with our team. If we’ve identified a new risk, we can work to eliminate or control it.
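
For teams that keep their FMEA as structured data, the cross-check in steps 2 and 3 lends itself to a small script. The sketch below is an illustration added here, not something from the episode; the record fields and the matching rule are hypothetical and would follow your own FMEA template.

```python
from dataclasses import dataclass, field

@dataclass
class FmeaEntry:
    failure_mode: str
    effect: str
    severity: int                      # e.g., 1 (negligible) to 10 (hazardous)
    causes: list[str] = field(default_factory=list)

def cross_check(observed_mode: str, verified_cause: str,
                fmea: list[FmeaEntry]) -> str:
    """Was the observed failure mode anticipated in the FMEA,
    and was the verified root cause among its listed causes?"""
    for entry in fmea:
        if entry.failure_mode == observed_mode:
            if verified_cause in entry.causes:
                return ("Anticipated mode and cause: revisit the occurrence "
                        "ranking with the new test evidence.")
            return ("Anticipated mode, new cause: add the cause and "
                    "re-evaluate the risk analysis with the team.")
    return "New failure mode: add it to the FMEA and assess its risk."

# Hypothetical example
fmea = [FmeaEntry("seal leak", "fluid loss", severity=8,
                  causes=["material creep"])]
print(cross_check("seal leak", "surface contamination", fmea))
```

The three outcomes map directly onto step 4: update an occurrence ranking, add a cause, or add a new failure mode and assess its risk.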

Citations:

Other Quality during Design podcast episodes you might like:

How to Handle Competing Failure Modes

Remaking Risk-Based Decisions: Allowing Ourselves to Change our Minds

5 Aspects of Good Reliability Goals and Requirements

The Way We Test Matters

Episode Transcript

We’re in new product development, we’ve decided we want to learn more about our product through test, and our test results come back not quite what we expected. What does that mean, and how can we move forward? Let’s talk more about some steps we can take, after this brief introduction.

Hello and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. Each week we talk about ways to use quality during design, engineering, and product development. My name is Dianna Deeney. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in and then join us. Visit qualityduringdesign.com.

Do you know the 12 things you should have before a design concept makes it to the engineering drawing board, where you’re setting specifications? I’ve got a free checklist for you, and you can do some assessments of your own.

Where do you stack up with the checklist? You can log into a learning portal to access the checklist and an introduction to more information about how to get those 12 things. To get this free information, just sign up at qualityduringdesign.com. On the homepage, there’s a link in the middle of the page. Just click it and say, “I want it.” Something standard that we do during new product development, or any development, is run some tests. It’s just part of the engineering cycle; it’s part of the scientific method. We’re using our creative energies to come up with new ideas and design new things, then we develop tests and requirements against which to test them, and then we test them and look at the results. The results are not usually so clean-cut and straightforward. Despite our best efforts at defining clear requirements from the beginning, sometimes our test results are a little messy.

Now we can get frustrated about this or we can look at it as an opportunity to learn more about our product. When I am working with test results that are messy or someone approaches me with test results that they’re not sure what to do with next, there’s some standard things that I tend to do, so let me share what those are with you and why I look into those.

The first thing I do is go back and revisit the purpose of the test. Sometimes these are based on product requirements or user needs. What was it that we were really trying to test against, to verify, or to learn? Did we learn what we intended to learn from this test, or did something new come up? If we’re performing a reliability test, what were our reliability goals? They should define what a failure mode is, even if it’s worded as a success in our requirements. Our full requirement might be spread across different sections of our requirements document, but it should be there, and we should have based our test method on it. If it’s a reliability requirement, we could be looking at a measure of time, or reliability at specific points in time. And no matter what we’re testing, we’re going to be defining a desired confidence level, the definition of a failure, and the kind of operating and environmental conditions we expect our product to be able to perform within. When we’re looking at the results of a test, we always go back to what the original intent was.
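
One concrete place where the reliability target, confidence level, and failure definition come together is in sizing a zero-failure (success-run) demonstration test. This Python sketch is an added illustration, not part of the episode; it applies the standard success-run relation n ≥ ln(1 − C) / ln(R).

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Units to test with zero failures to demonstrate `reliability`
    at `confidence`, from the success-run relation:
        n >= ln(1 - C) / ln(R)
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Demonstrating 90% reliability at 90% confidence takes 22 units:
print(success_run_sample_size(0.90, 0.90))  # -> 22
```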

The next thing I look at is: what was the failure at test, exactly? Does how the product failed match what we expected it to do, or did something unexpected happen? We may also be dealing with something called competing failure modes, something I get into in a different episode, and I’ll link to that. Even if this failure mode isn’t exactly what we were expecting when we started the test, did we understand that this was even a possibility? We can go back to the FMEAs that we’ve done in our preliminary concept development, and we can see if we listed that failure mode within the FMEA and what effect was listed. What was associated with that failure mode, and how severe was it?

We’re not even really looking at the numbers yet, and we’re already starting to learn a lot about the results of this test. We’re thinking about the original intent and we’re looking at the different failure modes that have occurred and comparing that against what we expected the product to do.

Now is where we can start taking it to the next step, which is verifying the root cause. We want to get to the root cause so that we can learn the most from our product. Why did it fail the way it did, or why was it performing the way that it was? Is it because of the product itself, or was there variation introduced in making the product that we hadn’t accounted for, or was it even the test method? Sometimes the way that we choose to test the product, or handle it or store it between tests, can introduce variables that produce failure modes we didn’t expect. While we’re investigating the root cause, we can go back to the FMEA again: what causes are associated with the failure mode that we saw in our test results? Referencing the FMEA this way may help us with our root cause analysis. Once we’ve gotten to the root cause, we can check that yes, we did have it in there, or no, we didn’t and we have to add it. We can also look at the occurrence: now that we’ve tested it, we better understand what we think the occurrence of that failure mode due to that cause could be. I understand that it’s easy to say, “Go find the root cause.” It usually takes several iterations of different investigations and different test confirmations to be able to verify that you did get the root cause. But getting to the root cause is an important part of investigating a test failure.
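
An added aside, not from the episode: once a test yields k failures out of n units, a one-sided binomial upper bound is one defensible way to put a number on occurrence before re-ranking it in the FMEA. Here is a sketch using SciPy’s exact (Clopper-Pearson) bound; the function name is our own.

```python
from scipy.stats import beta

def occurrence_upper_bound(failures: int, n_tested: int,
                           confidence: float = 0.95) -> float:
    """One-sided Clopper-Pearson upper confidence bound on the
    probability of this failure, given `failures` out of `n_tested`.
    Gives test-based evidence for an FMEA occurrence ranking."""
    if failures >= n_tested:
        return 1.0
    return float(beta.ppf(confidence, failures + 1, n_tested - failures))

# e.g., 2 failures in 30 units: about a 0.20 upper bound at 95% confidence
print(round(occurrence_upper_bound(2, 30), 3))
```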

So where we are now: we’ve had a test with results. We’ve revisited the purpose of the test. Did our results line up with the purpose of our test, or did we learn something new? We looked at the failure mode. Was our failure mode something we expected, or did we learn something new? And then we’ve gotten to the root cause, which is probably where we’ve learned the most from our test.

After all of this, we can decide what to do next. What did we learn about the product through the test and the test results? Is it acceptable or not? Did it meet the requirement, and is it going to meet the need, or did we find a new failure mode that we need to address? There’s a whole product development world of different questions we could be asking ourselves within “Is this acceptable or not?” We should have clearly defined acceptance criteria at the beginning of the test, and where we are in the product development cycle will determine how we can react to this question.

We want to learn as much about the product as early in the development process as we can. So if this is an early test, then we may have lots of options: redesign things, reconfigure things, choose different components. Another prevention method we could use would be changing the manufacturing process. We could also decide to add detection controls, something like in-process testing or inspection, to actively look for the root cause that we’ve discovered.

If we’re learning lots of new things after the product’s already developed, then that can be more challenging. It will be more difficult or impossible to make a design change late in the development process. Sometimes our tests pan out this way and our project may get scrapped.

What’s today’s insight to action? No matter where in the development process we’re working, we can look at product test as learning about the product itself. If we strive to learn as much as we can early in development, we’ll have the best chance of making the product right. We can work with our quality and reliability engineering friends to help us develop tests early in development. And exploring potential issues through FMEA (failure mode and effects analysis) with our team can highlight where to test, and can also help us decide how to react to test results and what next steps we should take.

If you like this topic or the content in this episode, there’s much more on our website including information about how to join our signature coaching program, the Quality during Design Journey. Consistency is important, so subscribe to the weekly newsletter. This has been a production of Deeney Enterprises. Thanks for listening!

Filed Under: Quality during Design

About Dianna Deeney

Dianna is a senior-level Quality Professional and an experienced engineer. She has worked over 20 years in product manufacturing and design and is active in learning about the latest techniques in business.

Dianna promotes strategic use of quality tools and techniques throughout the design process.
