E6S-056 Attribute Agreement - Rule out the Ruler part 4B
Intro: Welcome to the E6S-Methods podcast with Jacob and Aaron, your source for expert training, coaching, consulting, and leadership in Lean, Six Sigma, and continuous improvement methods. In this episode number 56, “Rule out the Ruler – Part 4B,” we continue our previous discussions on the Attribute Agreement Analysis and highlight some of the dangers and biases possible during the assessment process. Here we go.
I How to perform & interpret MSA data (The official way)
a. Select 30 parts from the process – 50/50 pass/fail
b. Select marginally good and bad samples
c. Select fully trained inspectors
d. Each inspector inspects the parts in random order twice, recording the results
e. Analyze the data, decide if adequate
i. AIAG rules for Discrete (Kappa)
1. A coefficient indicating the level of agreement relative to what pure chance alone would produce
a. -1 perfect disagreement, much worse than by chance alone
b. +1 perfect agreement, much better than by chance alone
2. Some varying rules of thumb:
a. if Kappa > 0.7, it’s good.
f. Implement any improvements
g. Re-run Gauge Study
h. Document findings
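The Kappa coefficient described above can be sketched in a few lines of Python. This is a minimal illustration of Cohen’s kappa for two raters; the inspector ratings below are hypothetical example data, not figures from the episode.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two lists of categorical (e.g. pass/fail) ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of parts both raters scored the same.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by pure chance, from each rater's marginal rates.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    # (p_o - p_e) / (1 - p_e): +1 = perfect agreement, 0 = chance,
    # negative = worse than chance alone.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 10 parts judged pass/fail by two inspectors.
a = ["pass", "pass", "fail", "pass", "fail",
     "pass", "fail", "pass", "pass", "fail"]
b = ["pass", "pass", "fail", "fail", "fail",
     "pass", "fail", "pass", "pass", "pass"]

print(round(cohens_kappa(a, b), 2))
```

Here the two inspectors agree on 8 of 10 parts, but after discounting chance agreement the kappa comes out noticeably lower than the raw 80% match rate, which is exactly why kappa, not raw agreement, is the AIAG yardstick.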
II Tips & Experiences
a. Select 30 parts from the process – 50/50 pass/fail
i. Modify if not doing a “pass/fail,” “yes/no” binary assessment
1. Decisions on call, complaint or defect type classifications
ii. More parts help shrink the confidence intervals. Aaron has done assessments with as few as 10 parts; sometimes gaps and disagreements are easy to spot even without a full-on blind study.
b. Select marginally good and bad samples
i. Be careful of bias. Much like a survey, inspectors may start giving you what they think you want to see; bias can also be introduced during part selection.
ii. Aaron often just chooses parts randomly, then adds some clearly known good and known bad parts so the sample represents the process.
c. Select fully trained inspectors
i. Side note: Attribute agreement is a good measure for qualifying a new inspector, and a good ongoing recalibration exercise.
d. Each inspector inspects the parts in random order twice, recording the results
i. Perhaps even a third time if there aren’t enough samples and/or it’s easy to remember previous responses
e. Analyze the data
i. Beware the destructive forces of inspection
1. M&M visual quality fades each time you handle them
2. Cannot taste test the same M&M twice
3. Reclassifying the same call out of its original call-order context can introduce bias
4. Also consider that sample prep is a factor in attribute agreement, not just the assessment of the final sample.
f. Implement any improvements
i. Retraining
ii. Updating standards & tools
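The earlier tip that more parts shrink the confidence intervals can be illustrated with a normal-approximation interval for an observed agreement rate. The 80% agreement figure and the sample sizes below are assumptions chosen for illustration, not data from the episode.

```python
import math

def agreement_ci(agree, n, z=1.96):
    """Approximate 95% normal (Wald) CI for an observed agreement proportion."""
    p = agree / n
    half = z * math.sqrt(p * (1 - p) / n)
    # Clip to the valid [0, 1] range for a proportion.
    return max(0.0, p - half), min(1.0, p + half)

# Same 80% observed agreement, increasing numbers of parts:
for n in (10, 30, 100):
    lo, hi = agreement_ci(int(0.8 * n), n)
    print(f"n={n:3d}: 80% agreement, 95% CI about ({lo:.2f}, {hi:.2f})")
```

With only 10 parts, the interval is so wide that “80% agreement” is barely distinguishable from a coin flip; at 30 or more parts the interval tightens considerably, which is the statistical reason behind the 30-part guideline.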
Outro: Thanks for listening to episode 56 of the E6S-Methods Podcast. Stay tuned for episode number 57 for the Variable Gauge R&R. We give an overview of several types of the Gauge R&R in Part 5A of our “Rule Out the Ruler” series. If you would like to be a guest on the podcast, contact us through our website. Follow us on Twitter @e6sindustries or join a discussion on LinkedIn. Subscribe to past and future episodes on iTunes or stream us on-demand with Stitcher Radio. Find outlines and graphics for all shows and more at www.E6S-Methods.com. “Journey Through Success”