Each of us has seen product life or component reliability claims on product literature or data sheets. We may even have received such claims stated as goals and been asked to support the claim with some form of an experiment. Standards bodies such as ANSI, BSI, ISO, and IEC, along with others from around the world, provide standard methods for testing products, including, in some cases, product life testing.
I have found that naive use of industry standards leads to over- and under-testing, overconfidence, missed critical flaws, and wasted resources. The standards can be useful, yet not when used without some thought and consideration of the product's failure mechanisms. For those who regularly read my posts, you know my view: our work requires critical thinking to be successful.
We tend to rely on standards to save time, to satisfy customer and contract obligations, or to help ensure we evaluate a broad range of stresses and environmental conditions. A lot of work typically goes into the creation of standards, and the intent is to provide a useful set of guidelines. Standards also provide a means to create results that are verifiable and easily understood.
What we should be doing, with or without standards, is providing meaningful information and insights into product design weaknesses or reliability. Any experimentation should have an express purpose: to determine design margins, robustness, or durability. The intent is to explore the design for flaws or to estimate the ability of the product to meet reliability and maintainability objectives. The testing results should provide value and meaningful information. To do otherwise is wasteful.
Let’s take a simple example: the 85°C/85%RH, 1000-hour test (it is common across many industries for some unknown reason). Originally, this test came from a larger study of the bonding of epoxy overmolding for early plastic encapsulated modules. The study found that if the bonding was not done properly, it would lead to premature failure due to the ingress of contaminants through the failed bond between the epoxy and lead frame. If the samples passed the testing criteria, the process would produce units that would last at least 5 years in normal use. It was a threshold created from a body of work relating accelerated testing conditions to use conditions, for a very specific set of materials and processes.
This group of engineers found the 85/85 test useful to evaluate changes to the process, and later changes to the materials. They didn’t repeat the entire set of accelerated tests with each change in materials or process, as each change was seen as minimal. Step by step, the testing has been applied to solar panels, polymer housings, gaskets, and a myriad of other products. It reminds me of the story relating rail gauge to the width of two horses’ rear ends [snopes.com discussion].
Yes, using previous work does make sense and saves time. Yes, when there is a small change and the basic underlying failure mechanisms do not appreciably change, it does make sense. The issue becomes a problem when that line of reasoning continues until there is little or no connection between the testing and the product under test.
Unfortunately, not all standards define the conditions under which the testing is appropriate. They often do not list the boundaries of the environmental, material, or design parameters to which the testing applies. And very few list the specific failure mechanisms. Of course there are exceptions, yet many do not. So, when you find a standard that does not provide an explicit connection to a failure mechanism or application, it is time to ask the question:
How does this standard test relate to the specific failure mechanisms in this product?
There are more questions, yet that is the primary one to ask. It assumes that you have (and you should have) explored the most likely failure mechanisms that will occur in your product, and that you understand the materials, processes, and design sufficiently to grasp the range of possible relevant failure mechanisms.
Another aspect of standard testing is that many standards are based on success testing: we run 3 samples from each of 3 production lots, and if all ‘pass,’ production must be good. Yes, samples and testing are expensive. Yes, more samples may take more testing resources and time to provide results. Yet, what is it you are learning or accomplishing?
A standard test with a standard sample size may or may not provide sufficient information. Even if the testing is meaningful for a relatively high-risk failure mechanism, using the standard’s sample size without thinking through the meaning of a result is likewise possibly foolhardy.
We could go into the statistics and the information required to analyze the meaning of the result, yet I’ll leave that to the interested reader. Basically, if you don’t know, you may well be running a test that is incapable of detecting a large failure rate in the population. The ‘pass’ could well be meaningless. The opposite is also possible: over-testing that produces false failures, although I’ve seen this only rarely.
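As a rough illustration of the sample-size point (a minimal sketch, not taken from any particular standard), the classic success-run relation says that n units passing with zero failures demonstrates reliability R = (1 − C)^(1/n) at confidence C:

```python
import math

# Success-run relation: n units tested for the full required life, zero failures.
# Demonstrated reliability at confidence C is R = (1 - C)**(1/n).
def demonstrated_reliability(n_samples: int, confidence: float) -> float:
    return (1.0 - confidence) ** (1.0 / n_samples)

# 3 samples from each of 3 lots = 9 units, all passing, at 90% confidence:
print(demonstrated_reliability(9, 0.90))   # ~0.774 -- cannot rule out a ~23% failure rate

# Zero-failure samples needed to demonstrate 99% reliability at 90% confidence:
n_required = math.ceil(math.log(1.0 - 0.90) / math.log(0.99))
print(n_required)                          # 230 units
```

The sketch assumes each unit is tested for exactly the required life with no acceleration; the point is simply that a small “all passed” result may say very little about the population failure rate.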
One improvement to consider is to run the testing to failure. Learn something about how and when your design fails. Understand the root cause and implement improvements; some call this continuous improvement. For example, dropping a cell phone 10 times on various edges is likely to ‘pass’ at a selected height. Either continue to drop or increase the height until failure occurs. Is there a pattern to what fails? Is there a marked change over various weeks of production (or different lines, different supplied parts, etc.) in the number of drops and/or the height to first failure? Can you see the difference?
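As a simple illustration of what “seeing the difference” might look like (a sketch with made-up numbers, not data from any real drop test), compare the height to first failure across two hypothetical production weeks:

```python
# Hypothetical drop heights (cm) at first failure for units sampled from two
# production weeks. Values are illustrative only.
week_12 = [150, 165, 170, 180, 185, 190, 200, 210]
week_13 = [110, 125, 130, 140, 150, 155, 160, 175]

def summarize(label, heights):
    n = len(heights)
    mean = sum(heights) / n
    print(f"{label}: n={n}, mean height to first failure = {mean:.0f} cm, "
          f"min = {min(heights)} cm, max = {max(heights)} cm")

summarize("week 12", week_12)
summarize("week 13", week_13)
# A clear downward shift in first-failure height between weeks points to a change
# in parts, process, or assembly worth chasing down to root cause.
```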
In summary, connect your testing, whether standards-based or not, to learning something useful about your product. Be sure the testing relates to failure mechanisms that are relevant to your product and its intended use. Be sure the sample size and other testing parameters are meaningful and provide the intended ability to detect what you are seeking. Consider testing to failure; often the most useful bit of information we can provide the rest of the team is how and why a design fails.
I’m interested in your experience with standards and how it has led to testing value, or not so much value. Standards are part of our landscape, and they can provide useful guidance; it is up to us to use them properly.
Dan Conrad says
Fred, very insightful and true.
Dustin Aldridge says
The most useful standards are those that provide a guideline for the process of test definition, options for model choices, assistance in determining which may be the most appropriate, the assumptions involved, and typical values for model factors and constants; more than simple platitudes about what should be done, but how to do it properly, resources, etc.
The least useful, but often cited, are those for contractual compliance with standard boilerplate of some historically performed test. Often these produce little knowledge of field performance unless someone has measured the real environment or usage and has related the standard test to the field requirement.
The auto industry, in my mind, has done a good job of this, with many OEMs measuring customers and fleet vehicles around the world. In the defense industry there are good standards that define the process properly as well as outline the theory, models, constants, and exponents, but the programs often specify a carryover requirement with old boilerplate. These end up being primarily of administrative value, contractually necessary, but not generating much knowledge.
The sample size aspect is unfortunately a reality, particularly in defense, due to the costs involved. Testing to failure and documenting the design limits does produce information, but many are still hung up about exceeding the requirement, sometimes exceeded with accelerated testing levels. One has to overcome this by instilling confidence in the program that the benefits outweigh the risk; develop competent reliability and test professionals with knowledge and humility. Humility because we are not omniscient; it is simply the best we know how to do. We provide an inkling of the degree of risk so that reasonable business decisions can be made. It is seductive to say a failure is a “one off,” but in my experience this is very rarely true. If you fail something, it is real; if you succeed on a standard test, you have not learned much, unless you know its relation to the field need. The field need is a distribution of stress severity too, so this is where a seasoned reliability professional can help management better understand the impact of these risks.
Oleg Ivanov says
Just a simple example of the mismatch of a standard.
There is a practice of lifetime testing of critical parts by testing to a “triple lifetime.”
The reliability development process is finished if two samples of the critical part pass this test without failures.
Really, this gives us R = 0.99 with CL = 0.99 for a lognormal lifetime with a standard deviation of the lifetime logarithm of 0.3.
It works for flexural fatigue of metal and creep of metal, and it does not work for other critical parts: http://www.slideshare.net/Oleg_I/lc-sim3
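A minimal sketch of one way to check those figures (assuming 0.3 is the standard deviation of the natural log of lifetime and that “pass” means both units survive three times the required life with no failures):

```python
import math
from statistics import NormalDist

sigma = 0.3       # std. dev. of ln(lifetime), as stated above
ratio = 3.0       # test duration = 3x the required life
R = 0.99          # reliability to be demonstrated at the required life
n = 2             # units tested, zero failures allowed

norm = NormalDist()
z_req = norm.inv_cdf(1.0 - R)        # the required life sits at this quantile, ~ -2.326
# Probability that one unit from a just-barely-R population survives 3x the required life:
p_survive = 1.0 - norm.cdf(z_req + math.log(ratio) / sigma)
p_pass = p_survive ** n              # both units survive
print(f"P(pass | R = {R}) = {p_pass:.4f}, confidence ~ {1.0 - p_pass:.3f}")
# -> P(pass) ~ 0.008, confidence ~ 0.992, consistent with R = 0.99 at CL = 0.99
```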