
by Dianna Deeney

QDD 002 My product works. Why don’t they want it?


Have you ever designed a product that works but that customers just don’t want to use?

We’ve put a product on the market only to find that users just don’t want to use it. They’re buying it, so there’s perceived value in it. It’s functional, it does what we say it will do, it really works! But they’re not repeat buyers, and they’re not recommending it to others (it’s sort of making the company look bad).

What went wrong? In some cases, there are warning signs to watch for during your design development. And, if you’re looking to the design process for an answer, there are tools and strategies to help prevent this from happening before product launch, or to serve as a starting point when you plan for your version 2.0.

This episode will review those strategies and pitfalls, and how to avoid them:

  • Get the right level of detail in your user process flow (or user task analysis).
  • Follow through on early warning signs that you might not have the right level of detail.

I’ll also share a memorable phrase that will help you remember to apply what you’ve learned.

View the Episode Transcript

There are some actions you can take today. If you’re in the development phase of something now, reacquaint yourself with your user profile, process, and use scenarios. Make sure you and your team agree that they’re at the right level of detail. And whenever you get feedback from a customer, take another, discerning look at those user files to make sure they’re sufficient (all of them, not just the part related to the feedback).

Once you’ve had a chance to listen, I want to hear from you. Share your answers to one of these questions in the comments section.

What are your stories of designs that customers just didn’t like or want to use? Can you tell us about a specific detail and some of the history of what you did to resolve it, or what you would do differently next time?

If you are new to process flowcharting, get in touch with your local Quality Professional, or see this resource from ASQ: What is a Flowchart? Process Flow Diagrams & Maps | ASQ
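If you’d rather keep a process flow in a form that’s easy to revise and version alongside your other design documents, the diagram can be generated from a short script. Here is a minimal sketch using the graphviz Python package; this tooling choice and the five example steps are my own illustrative assumptions, not something prescribed in the episode:

```python
# Minimal sketch: a user process flow rendered as a generated diagram.
# Assumes the graphviz Python package and the Graphviz binaries are installed.
# The steps are hypothetical placeholders; substitute your own user process.
from graphviz import Digraph

flow = Digraph("user_process", comment="High-level user process flow")

steps = [
    ("1", "Unpack the product"),
    ("2", "Set it up"),
    ("3", "Use it"),
    ("4", "Clean it"),
    ("5", "Store it"),
]

# One node per top-level step in the user's process.
for node_id, label in steps:
    flow.node(node_id, label)

# Connect the steps in sequence.
for (a, _), (b, _) in zip(steps, steps[1:]):
    flow.edge(a, b)

# Writes user_process_flow.png to the working directory.
flow.render("user_process_flow", format="png", cleanup=True)
```

Each node can later be decomposed into sub-steps, which is exactly the level-of-detail question this episode is about.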


Episode Transcript

 

Note: This is not a word-for-word transcript of the podcast episode. I wrote it before I recorded it, that’s all!

Have you ever designed a product that works but that customers just don’t want to use? This episode will review some tools and strategies to help prevent that from happening before product launch, or to help as a starting point when you plan for your version 2.0.

Our problem is that we put a product to market only to find that the users just don’t want to use it. They’re buying it, so there’s perceived value in it! It’s functional, it does what we say it will do, it really works! But they’re not repeat buyers, and they’re not recommending it to others. It’s sort of making the company look bad. What went wrong? We put it through a design development or design control process. We had user needs and requirements that were validated and verified, successfully.

When looking to the design process, ONE of the first places I would look is how the user procedure was documented and acted on. At what level of detail is the user’s process flow? It may not have been detailed enough. If it lacks enough detail, then some important user needs and requirements may have been missed.

There’s a balance of detail that needs to be considered: 1) we need to keep it simple enough to execute or act against (if it’s so detailed that it’s big, cumbersome, and overwhelming… well, we don’t want to hinder ourselves from releasing a great, helpful product), and 2) we can’t make it so simple that we miss important information. If it’s too high-level, we won’t know how to design for exceptional user experiences. To design for those, we need to understand and represent our users and their use scenario and process as best as possible, even if that makes it more complicated for us.

So, how and at what point could you tell that your user process flow was not detailed enough? In some cases, there are warning signs to watch for during your design development, especially during the prototype evaluation phases. Any time a user provides feedback that a design is inadequate, we should not just zero in on that one feature of complaint, but take another, discerning look at the user profile, process, and use scenarios for our project. It doesn’t have to end with a released product that people buy but never use again.

Let’s talk through a simple example that we can visualize, and maybe one that you can easily peg in your memory. We’re designing a winter nightshirt for a child. Within our design development process, we have some user needs: a shirt with long sleeves made of cozy material. It’s a simple product: the kid needs to wear it to bed. So, that’s how we’ll define our use information: a child wears the shirt to bed. We follow the design process from identifying the idea, to defining the needs and requirements, to planning, and then designing. We even consult with our user group along the way. We’re ready for the prototype phase.

We have some representative users try on our prototype shirt. Their heads wouldn’t fit. We decide to just fix the one problem that our users identified: we modify the shirt by cutting a little slit in the neck hole, then continue with production.

But, wait a minute. We failed the prototype evaluation. This should have been a first indication that something bigger might be wrong with the design. When we’re performing validations or getting customers’ feedback on our prototypes and something goes wrong (like their head not fitting through the head hole), there are options for how we react to this information. One option is to go ahead and fix that one problem and keep moving. Or we can recognize that this may be a symptom of potentially larger problems with our design: maybe we don’t really understand our users and their use scenario and process as well as we should. Maybe we don’t have enough detail in our user process, and we’ve missed some important requirements. What was our user information for this nightshirt project, again? Ah, yes, the kid had to wear it to bed… period.

Let’s say we fix the head hole and continue forward through our design development. Our users could now get the shirt over their heads, but getting their arms into the sleeves was a nightmare! They tried it ‘this way’ and then ‘that way’… the kid nearly needed to be a contortionist just to get into it. The user was able to wear it after some help from an adult. They don’t like it. We know it’s not right, and they know it’s not right. But we had spent so much time and money on it that we didn’t want to give up on it. So, we started to sell it. “Well, that’s OK, we did get it on! And, look, see how it fits nicely? And isn’t it nice and cozy?”

Our shirt design was a failure. The user doesn’t want to use our product. It’s too hard to use. It did meet what we captured as user needs and requirements: it was cozy, it did fit, and our user could put it on and wear it. And it performed its function (it could be worn to bed). But, the customer is not happy.

Our user process was not detailed enough. And, because of that, we missed important needs and requirements. Remember, our user information was “a child wears the shirt to bed”. A more appropriately detailed user process could have been:

  1. Put the shirt on
  2. Walk and move in it
  3. Wear it to bed
  4. Take it off
  5. Launder it

We could break it down even further with a few more steps and details. For example, step 1, put the shirt on, could be broken down into: 1a. pull the shirt down over the head, 1b. push arms through the sleeves, one at a time, and finally 1c. grab and pull the bottom hem of the shirt down toward the floor. If we’d had more detail in our user process from the start, it could have forced us to stop and think through more of the use scenarios and design. Even if a user process seems intuitive and simple (sometimes especially so), we benefit from documenting it so we can more clearly see it, communicate it to the rest of the team, and understand it.
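To make the level-of-detail idea concrete, here is a minimal sketch of that same breakdown captured as a small data structure, with a check that flags any step with no design requirement traced to it. The Step class, the requirement strings, and the traceability check are my own illustrative assumptions, not a method from the episode:

```python
# Minimal sketch: a hierarchical user process with requirements traced to
# each step. Flagging steps with no linked requirement is one way to spot
# missing user needs early. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    substeps: list["Step"] = field(default_factory=list)
    requirements: list[str] = field(default_factory=list)

def unmapped_steps(step: Step, path: str = "") -> list[str]:
    """Return leaf steps that have no design requirement traced to them."""
    label = f"{path} / {step.name}" if path else step.name
    if step.substeps:
        found: list[str] = []
        for sub in step.substeps:
            found.extend(unmapped_steps(sub, label))
        return found
    return [] if step.requirements else [label]

# The nightshirt user process from the episode, with step 1 decomposed.
process = Step("Child's nightshirt", substeps=[
    Step("Put the shirt on", substeps=[
        Step("Pull the shirt down over the head",
             requirements=["Neck opening fits over a child's head"]),
        Step("Push arms through the sleeves, one at a time",
             requirements=["Sleeves are easy to find and enter"]),
        Step("Pull the bottom hem down toward the floor"),
    ]),
    Step("Walk and move in it"),
    Step("Wear it to bed", requirements=["Long sleeves", "Cozy material"]),
    Step("Take it off"),
    Step("Launder it"),
])

# Prints the steps still missing requirements: "Walk and move in it",
# "Take it off", "Launder it", and the hem sub-step -- each one a prompt
# to ask whether we've missed a user need.
print(unmapped_steps(process))
```

Even a lightweight check like this forces the same pause the episode recommends: every step the user actually performs should trace to something we deliberately designed for.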

If at any point a user evaluates our prototype product and has feedback, we won’t just address that feedback. We’ll take another, discerning look at our user profile, process, and use scenarios and ensure that we’ve captured them at the right level of detail. If we find ourselves justifying a design because the user isn’t doing it right, well, we need to be prepared to conclude that maybe we didn’t design it for the user. We designed it to function, but we didn’t design it for exceptional user experiences, because we didn’t plan appropriately and understand enough about our user process and scenario. Maybe we’ll be given an opportunity to fix it, and we’ll go back to our user information and adjust our needs and requirements. Maybe we can’t do anything about it, but it’s never too late to communicate our lessons learned for the next time.

What actions can you take today? If you’re in the development phase of something now, reacquaint yourself with your user profile, process, and use scenarios. Make sure you and your team agree that they’re at the right level of detail. And whenever you get feedback from a customer, take another, discerning look at those user files to make sure they’re sufficient. And maybe peg in your memory the “can’t fit the head through the head hole” story to remind you to watch out for those types of things during user evaluations of your designs.


About Dianna Deeney

Dianna is a senior-level Quality Professional and an experienced engineer. She has worked for over 20 years in product manufacturing and design, and stays active in learning about the latest techniques in business.

Dianna promotes strategic use of quality tools and techniques throughout the design process.
