Isn’t it enough to estimate the age-specific field reliability functions for each of our products and their service parts? Of course we quantify uncertainties in estimates: sample uncertainties and population uncertainties due to changes or evolution. That’s information to forecast service requirements, recommend spares, optimize diagnostics, plan maintenance, warranty reserves, recalls, etc. What else could we possibly need or do?
[Read more…]
A listing in reverse chronological order of articles on Tools & Techniques by:
- Dennis Craggs — Big Data Analytics series
- Perry Parendo — Experimental Design for NPD series
- Dev Raheja — Innovative Thinking in Reliability and Durability series
- Oleg Ivanov — Inside and Beyond HALT series
- Carl Carlson — Inside FMEA series
- Steven Wachs — Integral Concepts series
- Shane Turcott — Learning from Failures series
- Larry George — Progress in Field Reliability? series
- Gabor Szabo — R for Engineering series
- Matthew Reid — Reliability Engineering Using Python series
- Kevin Stewart — Reliability Reflections series
- Anne Meixner — Testing 1 2 3 series
- Ray Harkins — The Manufacturing Academy series
What Is a Standard Deviation and How Do I Compute It?
Most manufacturers would rate product quality as a key driver of their overall ability to satisfy customers and compete in a global market. Poor quality is simply not tolerated. It follows that manufacturers require objective measures of their product quality. While many companies still think of quality as “being in specification,” progressive companies focus on reducing variation to minimize waste and produce products that perform consistently well over time. Quality may be thought of as inversely proportional to variation: as variation increases, product quality decreases. [Read more…]
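As a quick illustration of the computation the title asks about, here is a minimal sketch of the sample standard deviation computed directly from its definition; the data values are hypothetical:

```python
import math

def sample_std_dev(data):
    """Sample standard deviation: sqrt(sum of squared deviations / (n - 1))."""
    n = len(data)
    mean = sum(data) / n
    squared_deviations = sum((x - mean) ** 2 for x in data)
    return math.sqrt(squared_deviations / (n - 1))

fill_weights = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0]  # hypothetical measurements
print(round(sample_std_dev(fill_weights), 3))  # → 0.216
```

Dividing by n − 1 rather than n (the “sample” rather than “population” form) is the usual choice when estimating variation from a sample of production data.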
What Is Equivalence Testing & When Should We Use It?
Most quality professionals are familiar with basic hypothesis tests such as the 2-sample t test. However, depending on the goals of the study, another type of test, called an equivalence test, may be utilized instead of traditional hypothesis tests. This article will review statistical hypothesis testing in general and then introduce equivalence testing and its application. To illustrate the differences between traditional hypothesis tests and equivalence tests, we will focus on the case of comparing 2 independent samples. The concepts may be easily extended to other situations (such as comparing a sample to a target or paired comparisons). [Read more…]
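The standard equivalence procedure for two independent samples is the two one-sided tests (TOST) approach: declare the means equivalent if the difference is significantly above −margin and significantly below +margin. The sketch below uses a large-sample normal approximation for simplicity (a real study would use t quantiles), and the data are hypothetical:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def tost_equivalence(sample1, sample2, margin, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two independent means.
    Large-sample normal approximation; illustrative only."""
    diff = mean(sample1) - mean(sample2)
    se = sqrt(stdev(sample1) ** 2 / len(sample1) + stdev(sample2) ** 2 / len(sample2))
    z = NormalDist()
    p_lower = 1 - z.cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = z.cdf((diff - margin) / se)      # H0: diff >= +margin
    p_value = max(p_lower, p_upper)
    return p_value, p_value < alpha

line_a = [50.0 + 0.1 * i for i in range(10)]   # hypothetical measurements
line_b = [50.05 + 0.1 * i for i in range(10)]
print(tost_equivalence(line_a, line_b, margin=0.5)[1])  # → True
```

Note the reversal of roles relative to a traditional 2-sample t test: here the null hypothesis is "not equivalent," so a small p-value supports a claim of equivalence.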
How Undetected Process Changes Can Impair Product Reliability
SPC and Reliability
We often think of Statistical Process Control as a tool to help drive product quality by informing us when process changes occur. By systematically detecting (and rectifying) sources of special cause variation upstream in the process, the important process outcomes become predictable. Furthermore, a focus on reducing common cause variation drives higher levels of process capability and more consistent product performance. [Read more…]
Optimizing Product Target Weights of Foods and Beverages
In order to maximize profitability while complying with government regulations regarding net package contents, food manufacturers and packagers must achieve an optimal balance. Consistent overfilling to minimize risk is inefficient and sacrifices profitability, while aggressive filling practices result in significant risks of non-compliance with net contents regulations leading to potential penalties, loss of reputation, and impaired customer relations. Statistical process control and process capability methods may be utilized to determine optimal targets for product fill weights or volumes for a given process. Subsequent focused efforts to minimize variation will allow the target to be further optimized, resulting in less waste without compromising risk. [Read more…]
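The core trade-off above can be sketched in a few lines: assuming normally distributed fill weights, the smallest safe target mean is the declared content plus a z-multiple of the process standard deviation. The numbers below are hypothetical:

```python
from statistics import NormalDist

def optimal_fill_target(declared, sigma, underfill_risk):
    """Smallest mean fill such that P(fill < declared content) <= underfill_risk,
    assuming normally distributed fill weights."""
    z = NormalDist().inv_cdf(1 - underfill_risk)
    return declared + z * sigma

# 500 g declared contents, 3 g process standard deviation, 0.5% underfill risk
print(round(optimal_fill_target(500, 3, 0.005), 2))  # → 507.73
```

The formula makes the article's point concrete: reducing the process standard deviation directly lowers the required target, so every gram of variation removed is overfill (and cost) recovered.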
Facilitation Skill #3: Asking Probing Questions
FMEA facilitators can generate deep discussion and stimulate creative ideas by asking probing questions.
“A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of idea.” – John Ciardi
The Oxford English Dictionary defines “probe” as “seek to uncover information about something.” [Read more…]
Where Do the Typical Control Chart Signals Come From?
The purpose of control charting is to regularly monitor a process so that significant process changes may be detected. These process changes may be a shift in the process average (Xbar) or a change in the amount of variation in the process. The variation observed when the process is operating normally is called common cause variation. When a process change occurs, then special cause variation occurs. [Read more…]
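The most basic signal is a subgroup mean falling beyond the 3-sigma limits of an Xbar chart. As a minimal sketch (using the standard Shewhart A2 constants; the subgroup data are hypothetical):

```python
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}  # standard Shewhart chart constants

def xbar_chart_signals(subgroups):
    """Indices of subgroup means beyond the 3-sigma Xbar chart limits,
    with limits estimated from the average subgroup range (Rbar) and A2."""
    n = len(subgroups[0])
    means = [sum(g) / n for g in subgroups]
    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    grand_mean = sum(means) / len(means)
    ucl = grand_mean + A2[n] * r_bar
    lcl = grand_mean - A2[n] * r_bar
    return [i for i, m in enumerate(means) if m > ucl or m < lcl]

subgroups = [  # hypothetical n=4 subgroups; the fourth has a shifted mean
    [9.8, 10.1, 10.0, 10.2],
    [10.0, 9.9, 10.1, 10.0],
    [9.9, 10.2, 10.0, 9.9],
    [10.4, 10.6, 10.5, 10.5],
    [10.1, 9.8, 10.0, 10.1],
]
print(xbar_chart_signals(subgroups))  # → [3]
```

Because the limits are built from within-subgroup ranges (common cause variation only), a subgroup mean outside them indicates special cause variation rather than noise.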
Help! They Lost the Data
What can we do without reliability function estimates? FMEA? FTA? RCA? RCM? Argue about MTBFs and availability? Weibull? Keep a low profile? Run Admirals’ tests? Look for a new, well-funded project far from the deliverable stage?
Ask for field data; there should be enough to estimate reliability and make reliability-based decisions, even if some data are missing. Field data might even be population data!
[Read more…]
Ten Ways to Improve Your Measurement Systems Assessments
Why Measurement Systems Assessment (MSA)?
Effective use of data to drive decision making requires adequate measurement systems. For example, when implementing statistical process control charts, we assume that a signal represents a significant change in the process and we react as such. However, inadequate measurement systems may result in inappropriate signals or even worse, charts that fail to detect important process changes. Thus, it is incumbent upon us to ensure that measurement systems are adequate for their intended use via proper assessments prior to their use. Only capable measurement systems should be utilized in data based methods such as Statistical Process Control, Design of Experiments, Inspection activities, etc. [Read more…]
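One simple adequacy check is the precision-to-tolerance (P/T) ratio: gauge spread as a fraction of the tolerance. The sketch below estimates only repeatability from repeat readings on one part (a full gauge R&R study would also include reproducibility across appraisers); the readings and tolerance are hypothetical:

```python
from statistics import stdev

def precision_to_tolerance(repeat_measurements, tolerance_width):
    """P/T ratio: 6 * sigma_gauge / tolerance width. Here sigma_gauge is
    estimated from repeated measurements of one part (repeatability only)."""
    return 6 * stdev(repeat_measurements) / tolerance_width

# hypothetical: one part measured six times; total tolerance width 0.5 mm
readings = [5.01, 5.02, 5.00, 5.01, 5.02, 5.00]
print(round(precision_to_tolerance(readings, 0.5), 3))  # → 0.107
```

By the common rule of thumb, a P/T ratio of about 10% or less is acceptable, while 30% or more indicates the measurement system should be improved before the data are trusted.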
Estimation of a Hidden Service-Time Distribution of an M(t)/G/∞ Self-Service System
(This is chapter 5 of User Manual for Credible Reliability Prediction – Field Reliability (google.com), cleaned up and typeset for the AccendoReliability Weekly Update.)
The nonparametric maximum likelihood estimator for an M/G/∞ self-service time distribution function G(t) extends to nonstationary, time-dependent, Poisson arrival process M(t)/G/∞ systems, under a condition. A linearly increasing Poisson rate function satisfies the condition. The estimator of 1-G(t) is a reliability function estimate, from population ships and returns data required by generally accepted accounting principles.
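To give a feel for the idea, here is the simplest special case: a single shipment cohort, where the age-specific reliability estimate is one minus cumulative returns over units shipped. This is only an illustration of how ships-and-returns data yield a reliability function; it is not the M(t)/G/∞ nonparametric maximum likelihood estimator described in the chapter, and the counts are hypothetical:

```python
def cohort_reliability(units_shipped, returns_by_age):
    """Age-specific reliability for a single shipment cohort:
    R(t) = 1 - (cumulative returns through age t) / (units shipped).
    A simplified illustration, not the full M(t)/G/inf nonparametric MLE."""
    reliability, cumulative = [], 0
    for r in returns_by_age:
        cumulative += r
        reliability.append(1 - cumulative / units_shipped)
    return reliability

# hypothetical: 1000 units shipped; returns in months 1-4 of service
print([round(r, 3) for r in cohort_reliability(1000, [5, 12, 20, 30])])
# → [0.995, 0.983, 0.963, 0.933]
```

The full estimator handles many overlapping cohorts with a nonstationary arrival rate, but the inputs are the same: ships and returns counts already kept for accounting purposes.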
[Read more…]
How Does SPC Complement My Automatic Inspection System?
Background
More companies are leveraging high speed vision systems to inspect multiple quality characteristics on their products.
For example, in a high volume baking operation, a vision system can test for bun height, bun length, slice thickness, topping distribution, surface color, and more. This happens automatically on the line at high speeds. In bottling or other plastic manufacturing, a vision system may inspect multiple dimensions and surface properties. [Read more…]
How Do I Implement SPC for Short Production Runs (Part II)?
In Part I of this article, we introduced the concept of utilizing Deviation from Nominal (DNOM) control charts for short production runs. These charts allow us to monitor process characteristics over time even when the units being controlled have varying nominal values. DNOM charts assume that the process variability (i.e. standard deviation) does not vary significantly by part type. However, often this assumption does not hold. Characteristics with larger nominal values tend to have more variation than characteristics with smaller nominal values. In Part II we discuss how to test whether or not significant differences in variability exist and if so, how to modify the DNOM methods and charts to handle this situation. [Read more…]
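When variability does differ by part type, one standard remedy is a standardized DNOM chart: divide each deviation from nominal by that part type's own standard deviation, so all parts share one chart with fixed limits (e.g., ±3). A minimal sketch with hypothetical nominals and sigmas:

```python
def standardized_dnom(measurements, nominal, sigma_part):
    """Standardized deviation-from-nominal: z = (x - nominal) / sigma_part.
    Dividing by each part type's own sigma puts parts with different
    nominals AND different variability on a single chart."""
    return [(x - nominal) / sigma_part for x in measurements]

# hypothetical part types with different nominals and standard deviations
small_part = standardized_dnom([25.2, 24.9, 25.1], nominal=25.0, sigma_part=0.2)
large_part = standardized_dnom([100.5, 99.0, 101.2], nominal=100.0, sigma_part=0.8)
print([round(z, 2) for z in small_part + large_part])
# → [1.0, -0.5, 0.5, 0.62, -1.25, 1.5]
```

The per-part sigma estimates would come from historical runs of each part type; testing whether they truly differ is the subject of the article.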
Analyzing the Experiment (Part 6) – Prediction Uncertainty and Model Validation
In the last article, we explored the use of contour plots and other tools (such as a response optimizer) to help us quickly find solutions to our models. In this article, we will look at the uncertainty in these predictions. We will also discuss model validation to ensure that the technical assumptions inherent in the modeling process are satisfied. [Read more…]
Facilitation Skill #2: Controlling Discussion
“It was impossible to get a conversation going, everybody was talking too much.” – Yogi Berra
Surveys of FMEA team leaders show that their most common concern is how to control discussion during team meetings. This article provides insight into this critical facilitation skill and is a companion to the previous article in this series, Facilitation Skill #1: Encouraging Participation.
Analyzing the Experiment (Part 5) – Contour Plots and Optimization
In the last article, we learned how to work with predictive models to find solutions that solve for desired responses. We used some basic algebra to solve for solutions and looked at the use of contour plots to quickly visualize many solutions at a glance.
In this article, we further explore the use of contour plots and other tools to help us quickly find solutions to our models. We start by revisiting the battery life DOE example that was discussed in the previous article. The statistical output below shows the coded model that contains only the statistically significant (main and interaction) effects. [Read more…]