
by Gabor Szabo

Density Curves (With a Reliability Engineering Example)

Today we look at a couple different ways to visualize the distribution of your data.

Understanding the distribution of your data can be useful for engineers undertaking various tasks. There are many different ways to get an idea of the distribution of the data you’re interested in, and one of them is density curves.

Let’s look at a scenario where density curves come in handy, one that is common in reliability engineering: comparing two distributions, one describing the strength of a product and the other describing a source of stress the product may be subjected to over its lifetime. Comparing the two gives you an idea of both product performance and product reliability over time. More specifically, you do not want the two distributions to overlap, and ideally there is a safety margin between them that accounts for changes over the product’s lifetime, e.g. decay in strength.
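As a quick aside, here is a minimal sketch of the classical parametric version of that stress-strength comparison, assuming (purely for illustration) that both quantities are normally distributed. The means and standard deviations below are made up and are not the example data we create next.

# Hypothetical parameters, for illustration only
mu_strength <- 38
sd_strength <- 5
mu_stress   <- 20
sd_stress   <- 2

# Under normality, strength - stress is also normally distributed, so the
# interference probability P(stress > strength) has a closed form
p_interference <- pnorm((mu_stress - mu_strength) / sqrt(sd_strength^2 + sd_stress^2))
p_interference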

Let’s look at a dataset of strength values of a product as well as values of some stress factor that can be quantified. For demonstration purposes, we will create an example dataset called product_stress, see below:

# WEEK 007: DATA DISTRIBUTION PLOTTING WITH KERNEL DENSITY ESTIMATION

# 0. INSTALL PACKAGE AND LOAD LIBRARIES ----

install.packages("overlapping")

library(tidyverse)
library(sherlock)
library(overlapping)


# 1. SET.SEED() FOR REPRODUCIBILITY ----
set.seed(132535)

# 2. CREATE DATASET ----
product_stress <- tibble(strength = rnorm(n = 100, mean = 38, sd = 5),
                         stress   = rlnorm(n = 100, meanlog = 3, sdlog = 0.1))

# 2.1 TRANSFORM DATASET FOR PLOTTING ----
product_stress_pivoted <- product_stress %>% 
    pivot_longer(cols = everything(), names_to = "type", values_to = "value")
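To sanity-check the reshaped data before plotting, you can take a quick look at it (this step is not part of the original code, just a convenience):

# Quick look at the pivoted data: one "type" column and one "value" column
product_stress_pivoted %>% glimpse()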

When we plot the values using sherlock’s draw_categorical_scatterplot() function, it becomes clear that there is overlap between the two distributions.

# 3. PLOTTING ----

# 3.1 ----
product_stress_pivoted %>% 
    draw_categorical_scatterplot(y_var = value, 
                                 grouping_var_1 = type, 
                                 group_color    = TRUE, 
                                 alpha          = 0.3)
A scatterplot showing the stress and strength data.
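If you don’t have the sherlock package installed, a rough equivalent of this scatterplot (not the original plot) can be built with plain ggplot2:

# Rough ggplot2 equivalent of the categorical scatterplot above
product_stress_pivoted %>% 
    ggplot(aes(x = type, y = value, color = type)) +
    geom_jitter(width = 0.1, alpha = 0.3) +
    theme_minimal()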

Let’s plot this in a different way. How about density curves? Enter kernel density estimation, which is really just a fancy name for a smoothing method that produces density plots following the actual distribution of the data. Have you ever seen those silly normal curves plotted over a histogram of clearly non-normal, skewed data? I have, and I do not like it very much, to put it lightly.

With kernel density estimation, you can plot density curves that follow the actual distribution of the data.
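If you are curious about the estimator itself, base R’s density() function computes a kernel density estimate directly; this is just a peek under the hood and not part of the example:

# Kernel density estimate of the strength values using base R;
# the bandwidth argument (bw) controls how much smoothing is applied
kde_strength <- density(product_stress$strength)
plot(kde_strength, main = "Kernel density estimate of strength")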

The geom_density() plotting function from ggplot2 does exactly what I just described, so let’s see how to do it:

# 3.2 DENSITY PLOT ----
product_stress_pivoted %>% 
    ggplot(aes(value)) +
    geom_density(aes(fill = type), alpha = 0.2, color = "grey80") +
    scale_x_continuous(limits = c(0, 60), labels = scales::number_format(accuracy = 1)) +
    theme_sherlock() +
    scale_fill_sherlock()
A plot of the two datasets with a kernel density estimation smoothing the density plot to show the distribution of the actual data.

Looks pretty smooth, right? (pun intended!)

We’ve got an overlap problem, which is not really good news for product performance and reliability, even if these are simply estimates.

One might be tempted to calculate the probability of such an event happening, and if I were you, I would want to as well. Here’s how you do it using the overlapping package.

# 4. JOINT PROBABILITY CALCULATION (OVERLAPPING AREA) ----

two_distributions <- as.list(product_stress)

overlapping::overlap(two_distributions)

joint_probability <- overlapping::overlap(two_distributions)$OV

joint_probability

resulting in an output of [1] 0.01393566

OK, so the probability of such an event happening given the estimates we are working with is about 1.39%.
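As a side note, the overlapping area is related to, but not the same as, the chance that a randomly drawn stress value exceeds a randomly drawn strength value. If you also want that number, a quick Monte Carlo sketch using the same assumed distributions could look like this:

# Monte Carlo sketch (not part of the original example): simulate many draws
# from the assumed strength and stress distributions and count how often
# stress exceeds strength
set.seed(132535)
n_sim        <- 1e6
strength_sim <- rnorm(n_sim, mean = 38, sd = 5)
stress_sim   <- rlnorm(n_sim, meanlog = 3, sdlog = 0.1)
mean(stress_sim > strength_sim)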

Tweaking the code for the density curves will then give us a plot where the calculated probability is also displayed. Pretty neat: everything in one plot.

# 5. ADDING JOINT PROBABILITY TO DENSITY PLOT ----
product_stress %>% 
    pivot_longer(cols = everything(), names_to = "type", values_to = "value") %>% 
    
    ggplot(aes(value)) +
    geom_density(aes(fill = type), alpha = 0.2, color = "grey80") +
    scale_x_continuous(limits = c(0, 60), labels = scales::number_format(accuracy = 1)) +
    theme_sherlock() +
    scale_fill_sherlock() +
    annotate(geom = "text", x = 40, y = 0.09, 
             label = str_glue("Joint probability: {joint_probability %>% scales::percent(accuracy = 0.01)}"), 
             color = "grey50", size = 5)
The sample density plot with the joint probability annotated on the plot.

This was just one of the many, many examples where density curves can be really useful for you, and now you know how to make them in R. I would love to hear from you about what other uses you could have for this in your line of work.

In this week’s edition we learned how to create density plots using the geom_density() function and went over a brief reliability engineering example. Hope you enjoyed this edition.

Resources for this week’s edition:

  • Code
  • sherlock package
  • geom_density() documentation
  • overlapping package

Resources for learning R:

  • R for Data Science: a very thorough reference book by Hadley Wickham, the creator of the tidyverse. Absolutely free of charge and full of relevant examples and practice tests.
  • ggplot2 reference book: a super detailed online book on the ggplot2 plotting package.
  • My favorite R course, Business Science DS4B101-R: I learned R mainly through this course. Highly recommended if you want to get up to speed and beyond in a relatively short time! It has everything one will need from data cleaning to data visualization to modeling. This course is especially useful for engineers trying to learn or get good at R as it heavily focuses on the fundamentals but goes way beyond just that. Note: this is an affiliate link, meaning you get a hefty discount if you purchase a course, and I receive a small commission.

Filed Under: Articles, on Tools & Techniques, R for Engineering

About Gabor Szabo

Gabor is a quality engineering and data professional and has over 15 years of experience in quality having worked in the medical device, automotive and other manufacturing industries. He holds a BS in Engineering Management.

Gabor's specialties and interests include problem solving, statistical engineering and analysis through the use of data, and developing others.


Comments

  1. Larry George says

    June 19, 2023 at 11:11 AM

    Your model, P[Fail] = P[Stress > Strength], extends to multi-component, multivariate examples where failure is defined by a structure function g(basic events) in which each basic event is Stress > Strength. It accounts for dependence among stresses (e.g., earthquake) or strengths (similar components). https://lucas-accendo-site-speed.sprod01.rmkr.net/what-to-do-with-obsolescent-nuclear-engineers/#more-461419. I coded nuclear power plant risk analysis in FORTRAN (it used to be available on the Internet: https://ftaassociates.files.wordpress.com/2018/12/J.-Wells-L.-George-and-G.-E.-Cummings-Phase-1-Report%E2%80%94Systems-Analysis-Project-VII-Seismic-Safety-Margins-Research-Program-NUREGCR-2015-Vol.-8-Rev-1-November-1983.pdf/) and implemented it in an Excel workbook. I’ll translate it to R if anyone wants and has a real need. Loss-of-load probability on an electrical distribution system? Network reliability?
