
by Dianna Deeney

QDD 011 The Designer’s Important Influence on Monitoring After Launch

Because of your role as a designer in product development, you have great input into planning what field (or real-use) data should be monitored for your product. We talk about this as post-market surveillance, a term typically used for medical devices. This episode covers how the post-market surveillance engine follows the PDSA (plan-do-study-act) continuous improvement cycle, some expectations of post-market surveillance systems, and what inputs designers have in its planning.

 


 

What’s today’s insight to action?

Acknowledge that a great post-market surveillance plan will include more than monitoring for complaints. Consider case studies, published reviews, surveys, and other field assessments of our product. But we need to base the plan on our particular design and market.

Understand that because of our role as designers in product development, we have great input into the planning for post-market surveillance of our products. Before there’s a release to market, we can take another look at the product performance results from development, the usability engineering file, and the risk management file so that we can pull out what’s really important to monitor in the field for this product.

Once you’ve had a chance to listen, I want to hear from you. Share your answers in the comments section.

What are some examples of post-market surveillance activities for products you’ve been involved with?


Episode Transcript

Our design is finally done and going to be released to the market. It’s exciting. But before we hand off our design and move on to the next thing, there’s something we need to consider: do we have a good plan in place to capture data about the device, post-market? In other words, do we have a good post-market surveillance plan? Let’s talk about this more after this brief introduction.

Hello and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. My name is Dianna. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in, and then join the conversation at qualityduringdesign.com.

Designers have great input into the design of a post-market surveillance plan for their products. You have been involved in usability engineering, performance tests, and risk management throughout the design process, and you know your product inside and out. Now, linking post-market surveillance to things like risk management and product performance is not a new concept. Continuous improvement cycles like Plan, Do, Study, Act have been around for many years, and when using such a cycle for product design, part of the cycle is taking field data and feeding it back into the requirements or specs. However, I’m seeing it become highly regulated in the medical device industry: what’s being reported, how often it’s reported, and the level of third-party oversight have all increased. In the European Union, the sale of medical devices requires a CE mark, and getting a CE mark means compliance with the Medical Device Regulation (MDR). There was a Medical Devices Directive, released in 1993, but it was repealed and replaced by the Medical Device Regulation, released in 2017. And the transition period for medical device manufacturers to comply with this new MDR ends at the end of May 2021.

Now, regulations are laws, and laws require compliance, which in the medical device world means third-party audits, mandatory reporting, and other oversight. If you don’t comply, you can’t sell your products in the EU. The MDR put the medical device manufacturing community into a bit of a tailspin because of the increased post-market surveillance requirements (really, the reporting requirements for post-market surveillance). Companies needed to change their internal policies and procedures in order to meet the demands of the MDR. When we get into business systems, there is no one-size-fits-all fix: policies and procedures depend on a company’s existing structure, the types of products it sells, its markets, its users, and even its company culture. Because of the changes introduced by the MDR, companies had to redo the way they had been doing business.

So, what is a post-market surveillance system? It’s really an engine of information, information that leaders need to use to make decisions about the acceptability of their products in the field. How does this engine work? It’s a lot like a Plan, Do, Study, Act cycle (the Deming wheel, that continuous improvement cycle), so much so that I don’t really see a need to make up something new for it. If we visualize a wheel with four spokes, each spoke has a letter designation: P for plan, D for do, S for study, and A for act. (A short code sketch of this cycle follows the list below.)

  • In our post-market surveillance engine scenario, planning is the design of the product and the management of its risks. Planning is associated with the product design performance, or the claims of what the design can do: the expected performance we’ve learned from doing verification, validation, and usability engineering. Planning is also managing risks: identifying them, estimating their effects, putting controls into place, and making an acceptability decision.
  • On the second spoke of the wheel is Do, which is releasing our product to the field.
  • The third spoke of our wheel is Study, which covers the actual post-market surveillance activities.
  • And the last spoke of our wheel is Act. This has to do with the responsibility of the company to evaluate the post-market surveillance data and make a decision about the product. Is it performing the way we intended? Are people using it as directed? Is there a problem serious enough that we need to recall it?
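To make the wheel concrete, here’s a minimal sketch in Python; the names and structure are my own illustration, not from any standard or regulation:

```python
from enum import Enum

class Spoke(Enum):
    """The four spokes of the PDSA wheel, mapped to the post-market
    surveillance engine described above (labels are illustrative)."""
    PLAN = "design performance claims and risk management"
    DO = "release the product to the field"
    STUDY = "post-market surveillance activities"
    ACT = "evaluate field data and decide about the product"

def next_spoke(spoke: Spoke) -> Spoke:
    """Advance the wheel; Act feeds back into Plan, so the cycle never ends."""
    order = list(Spoke)
    return order[(order.index(spoke) + 1) % len(order)]

# Walk the wheel once around and back to Plan.
spoke = Spoke.PLAN
for _ in range(5):
    print(f"{spoke.name}: {spoke.value}")
    spoke = next_spoke(spoke)
```

The point of modeling it this way is the wrap-around in next_spoke: Act is not an endpoint, it feeds field learnings back into the next round of planning.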

Product designers affect the planning stage the most.

The most obvious effect that a designer has on the planning part of this engine is the design performance. What are the performance claims we’re making about the product and can it deliver? In other words, is it reliable and dependable?

Usability engineering is also a big part of this. Usability engineering starts early in the design as an input; it’s part of the design process and includes studies with user groups. It answers questions like: Can the design be used? Can the user carry out the task? Is it intuitive? Where are the places of possible misuse, and can they be prevented? Do users understand how to use the device safely? Do they understand the instructions for the product’s use, maintenance, and proper disposal? Are the instructions accessible? And do users require training before they use the device?

Risk management is also part of the planning phase of our post-market surveillance engine. It starts early in the design as an input as well. It’s also part of the design process: it features analyses and risk controls, and it drives action items for improvement. Risk management is an iterative system. It’s never really finished, and the analysis can be considered a ‘living analysis’. It answers questions like: What can go wrong? How bad would it be if it did go wrong? How often would this bad thing happen? What are we doing to prevent this from happening? What are we checking to make sure it doesn’t happen? What are we telling our users about it, and what are we expecting them to do? And, after all those questions, finally: given all that we know about the risk of our product failing or doing harm to people or the environment, is it worth manufacturing and selling? Does the benefit of our product outweigh the risks it introduces? Risk management also interfaces with the design, performance, and usability engineering efforts, so it affects the design and test methods.
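As a simplified illustration of those questions, one entry in such a living risk analysis might look like the following Python sketch. The field names, 1-to-5 scales, and acceptability threshold are assumptions made for the example, not values from ISO 14971 or any other standard:

```python
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    failure_mode: str                       # What can go wrong?
    severity: int                           # How bad would it be? (assumed 1-5 scale)
    occurrence: int                         # How often would it happen? (assumed 1-5 scale)
    prevention_controls: list[str] = field(default_factory=list)  # What prevents it?
    detection_controls: list[str] = field(default_factory=list)   # What checks for it?
    user_information: str = ""              # What are we telling users about it?

    def risk_index(self) -> int:
        # One simple way to rank risks: severity times occurrence.
        return self.severity * self.occurrence

    def acceptable(self, threshold: int = 8) -> bool:
        # The acceptability decision, made against a team-chosen threshold.
        return self.risk_index() <= threshold

item = RiskItem("seal leaks under load", severity=4, occurrence=2,
                prevention_controls=["tighter seal spec"],
                detection_controls=["inline leak test"],
                user_information="inspect seal before each use")
print(item.risk_index(), item.acceptable())  # 8 True
```

Because the analysis is living, entries like this get revisited when field data arrives, which is exactly where the Study spoke connects back to it.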

Back to our post-market surveillance engine: we are now at the Do part, where we are releasing our product to the market. After the release to market, we can move on to Study, and this is where our post-market surveillance planning comes into effect. We’re collecting information about what’s happening in the field, and we’re asking questions like: How does it compare with our product performance? How successful is the design? Does it perform as advertised? How does it compare with our usability engineering file? How is it being used? Is it being used for something other than what’s intended? Are there use-related issues that were not captured in the usability engineering process? And we can compare to our risk analysis: Are there unexpected failures occurring? Is their effect worse than we estimated? Are they occurring more often than we thought?
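For instance, the “more often than we thought” question can be checked directly against the estimate recorded in the risk file. Here’s a minimal sketch of that comparison; it’s a bare point-estimate check with made-up numbers, and a real review would also account for confidence bounds and time in service:

```python
def occurrence_exceeds_estimate(complaints: int, units_fielded: int,
                                estimated_rate: float) -> bool:
    """Flag when the observed field failure rate runs above the occurrence
    rate we estimated during risk management (illustrative only)."""
    observed_rate = complaints / units_fielded
    return observed_rate > estimated_rate

# Example: 12 complaints across 8,000 fielded units vs. an estimated
# 1-in-1,000 occurrence rate -> 0.0015 > 0.001, so flag it for the team.
print(occurrence_exceeds_estimate(12, 8_000, 0.001))  # True
```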

Finally, in our post-market surveillance engine, we get to Act. What actions are we taking based on our study of post-market surveillance information? In other words, what are we going to do with what we’ve learned? We need a cross-functional team to review this information, just like design inputs include other experts: field ops, quality engineering, reliability engineering, quality assurance, and marketing. Medical device manufacturers have the additional responsibility to decide if their therapy is as good as or better than other therapies out there, available for patients. They need to investigate other devices and therapies used to treat the same illness. What have been their failures, and how many issues are there? How do those products compare with the use of our product? They need to make a decision on whether the therapy they’re selling is really the best option for the patient, and they need to decide: should they continue to sell it? For some businesses, this would be a huge shift in business focus, wouldn’t it?

So, what’s today’s insight to action?

Well, first, we can acknowledge that a great post-market surveillance plan will include more than monitoring for complaints. Consider case studies, published reviews, surveys, and other field assessments of our product. But we need to base the plan on our particular design and market.
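One way to capture that breadth is to treat the plan itself as data. Here’s a minimal sketch listing the sources named above; the review cadences are placeholders I made up, which a team would set for its own design and market:

```python
# Surveillance sources beyond complaint monitoring, each paired with an
# assumed review cadence in months (placeholders, not recommendations).
surveillance_plan = {
    "complaint trending": 3,
    "case studies": 12,
    "published reviews": 12,
    "user surveys": 6,
    "other field assessments": 6,
}

for source, months in surveillance_plan.items():
    print(f"Review {source} every {months} months")
```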

Another thing we can do today is to understand that because of our role as designers in product development, we have great input into the planning for post-market surveillance of our products. We’ve been involved in performance tests, usability engineering, and risk management throughout the design process, and we know our product inside and out. So, before there’s a release to market, we can take another look at the product performance results from development, the usability engineering file, and the risk management file. And we study these so that we can pull out what’s really important to monitor in the field for this product. We can consider what a team would need to monitor to ensure the product remains worthy of customers’ use. We can use all that work we did during design development to help make a meaningful plan for post-market surveillance.

I’d like to hear from you now. What are some examples of post-market surveillance activities for products you’ve been involved with? Please reach out to me on LinkedIn (I’m Dianna Deeney), or you can leave me a voicemail at 484-341-0238. I get all the messages, and I might include yours in an upcoming episode. If you like this podcast or have a suggestion for an upcoming episode, let me know. And share this podcast with your designing peers. This has been a production of Deeney Enterprises. Thanks for listening!

Filed Under: Quality during Design, The Reliability FM network

About Dianna Deeney

Dianna is a senior-level Quality Professional and an experienced engineer. She has worked over 20 years in product manufacturing and design and is active in learning about the latest techniques in business.

Dianna promotes strategic use of quality tools and techniques throughout the design process.
