lean-math - Measurement
http://leanmath.com/tags/measurement
Cpk and the Mystery of Estimated Standard Deviation [guest post]
http://leanmath.com/blog-entry/cpk-and-mystery-estimated-standard-deviation-guest-post
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>It all started when my colleague and I noted that we had used the same data to calculate <em>Cpk</em>, but ended up with different results. This led us down an Alice in Wonderland-like path of Google searching, Wikipedia reading, and blogosphere scanning. After several days of investigation, we determined that there was no consensus on how to properly calculate estimated standard deviation. Convinced that there must be a misunderstanding, and that this should be a purely scientific matter, we decided to get to the bottom of it. We also decided that there was a need for a simple, accurate tool that anyone could use and afford. We wanted to break the economic and educational barriers that got in the way of conducting needed process capability studies. More on that in a bit. Our investigation revealed that the biggest confusion out there was with the following two symbols. <a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi1.png"><img alt="levi1" height="89" width="193" class="media-image aligncenter size-full wp-image-1222 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi1.png" /></a>Or, regular sample standard deviation vs. estimated standard deviation (sporting that little hat over the sigma). Regular sample standard deviation is used to calculate process performance, or <em>Pp/Ppk</em>. It is based on the data your process has actually produced, i.e., its performance in current reality (overall performance). Estimated standard deviation is used to calculate process capability, or <em>Cp/Cpk</em>. In other words, what is your process capable of when at its current “best” state (within subgroups)? This leads us to the simple tool that I referenced above.</p>
<h1>There’s an App for that</h1>
<p>The creation of the “<em>Cpk</em> Calculator App” has been a long and winding road with a lot of research and validation (also known as PDCA). But in the end we created a tool that automatically calculates standard deviation in one of three ways, depending on data set characteristics (the biggest dilemma on the web):</p>
<p style="padding-left: 30px;">1. If data is in one large group, we use the regular sample standard deviation calculation:</p>
<p style="padding-left: 30px;"><a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi2.png"><img alt="levi2" height="118" width="227" class="media-image aligncenter wp-image-1223 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi2.png" /></a>Many people use the calculation above and call the result <em>Cpk</em>, when in reality what they are calculating is <em>Pp</em> or <em>Ppk</em>, since they are not using estimated standard deviation. <em>Ppk</em> is the more conservative of the two because it is based on the actual standard deviation, but for whatever reason <em>Cpk</em> has become the more famous.</p>
<p style="padding-left: 30px;">And, they are often confused.</p>
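The formula shown for method 1 is the ordinary sample standard deviation with the n&minus;1 correction. As a minimal illustration (not the app's code; the five thickness readings are made up):

```python
import math

def sample_std_dev(data):
    """Sample standard deviation with the n-1 (Bessel) correction,
    as used for Pp/Ppk (overall performance)."""
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Example: five thickness readings
print(round(sample_std_dev([1.02, 0.98, 1.01, 0.99, 1.00]), 4))   # → 0.0158
```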
<p style="padding-left: 30px;">2/3. If you collect your data in subgroups, there are two preferred methods of estimating standard deviation using unbiasing constants:</p>
<p style="padding-left: 30px;"><a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi3.png"><img alt="levi3" height="83" width="493" class="media-image aligncenter size-full wp-image-1224 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi3.png" /></a></p>
<p style="padding-left: 30px;"><em>R</em>bar / <em>d</em>2 is used to estimate standard deviation when the subgroup size is at least two, but not more than four. The average of the subgroup ranges is divided by the <em>d</em>2 constant. This calculation is best when you tend to have many small subgroups of data.</p>
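A sketch of the Rbar / d2 method, assuming all subgroups share one size and using the standard SPC table values of d2 for subgroup sizes 2 through 4 (the subgroup data here is hypothetical):

```python
# d2 unbiasing constants for subgroup sizes 2-4 (standard SPC tables)
D2 = {2: 1.128, 3: 1.693, 4: 2.059}

def sigma_rbar_d2(subgroups):
    """Estimate sigma as R-bar / d2; assumes all subgroups share one size (2-4)."""
    n = len(subgroups[0])
    r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    return r_bar / D2[n]

groups = [[9.9, 10.1], [10.0, 10.3], [9.8, 10.0]]   # three subgroups of size 2
print(round(sigma_rbar_d2(groups), 4))   # → 0.2069
```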
<p style="padding-left: 30px;"><a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi4.png"><img alt="levi4" height="239" width="241" class="media-image aligncenter size-full wp-image-1225 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi4.png" /></a>The calculations shown above reflect another way to estimate standard deviation, used when subgroups are unequal in size or larger than four data points.</p>
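The post doesn't spell the formula out in text, but a common choice for unequal or larger subgroups is the pooled standard deviation divided by the c4 unbiasing constant; a sketch under that assumption:

```python
import math

def c4(k):
    """c4 unbiasing constant for 'sample size' k."""
    return math.sqrt(2.0 / (k - 1)) * math.gamma(k / 2.0) / math.gamma((k - 1) / 2.0)

def sigma_pooled(subgroups):
    """Pooled-standard-deviation estimate of sigma for unequal subgroup sizes."""
    dof = sum(len(g) - 1 for g in subgroups)   # total degrees of freedom
    ss = 0.0
    for g in subgroups:
        m = sum(g) / len(g)
        ss += sum((x - m) ** 2 for x in g)     # within-subgroup sum of squares
    s_pooled = math.sqrt(ss / dof)
    return s_pooled / c4(dof + 1)              # unbias with c4(dof + 1)

# Hypothetical subgroups of unequal size
print(round(sigma_pooled([[1.0, 1.2, 0.9, 1.1, 1.0], [0.95, 1.05, 1.0]]), 4))
```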
<p>
<video class="media-image media-element file-media-large" controls="controls" data-fid="182" data-media-element="1" height="360" width="480"><source src="http://www.youtube.com/watch?v=kpg_Havl8Qc" type="video/youtube"></source></video><br />
Please see these links for the <em>Cpk</em> Calculator App on <a href="https://play.google.com/store/apps/details?id=com.admapps.cpkcalculator">Google Play</a> (for Android) and the <a href="https://itunes.apple.com/us/app/cpkcalculator/id913749703?mt=8">Apple App Store</a>.</p>
<h1>More about <em>Cp</em>, <em>Cpk</em> vs.<em> Pp</em>,<em> Ppk</em></h1>
<p><em>Pp</em> and <em>Ppk</em> are based on actual, “overall” performance regardless of how the data is subgrouped, and use the normal standard deviation calculation of all data (n-1). <em>Cp</em> and <em>Cpk</em> are based on variation within subgroups, and use estimated standard deviation. <em>Cp</em> and <em>Cpk</em> show statistical capability based on multiple subgroups. Without getting into too much detail on the difference in calculations, think of the estimated standard deviation as the average of all of the subgroups’ standard deviations, and ‘regular’ standard deviation as the standard deviation of all data collected. <strong><em>Cp</em> (process capability)</strong>. The amount of variation that you have versus how much variation you’re allowed, based on statistical capability. It doesn’t tell you how close you are to the center, but it tells you the range of variation. Note that nowhere in this formula is the average of your actual data referenced. <a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi5.png"><img alt="levi5" height="91" width="241" class="media-image aligncenter size-full wp-image-1226 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi5.png" /></a><strong><em>Cpk</em> (process capability index).</strong> Tells you how centered your process capability range is in relation to your specification limits. This only accounts for variation within subgroups and does not account for differences between subgroups. <em>Cpk</em> is “potential” capability because it presumes that there is no variation between subgroups (how good you are when you’re at your best). When your <em>Cpk</em> and <em>Ppk</em> are the same, it shows that your process is in statistical control. 
<a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi6.png"><img alt="levi6" height="519" width="807" class="media-image aligncenter size-full wp-image-1227 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi6.png" /></a><strong><em>Pp</em> (process performance).</strong> The amount of variation that you have versus how much variation you’re allowed, based on actual performance. It doesn’t tell you how close you are to the center, but it tells you the range of variation. <a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi7.png"><img alt="levi7" height="91" width="253" class="media-image aligncenter size-full wp-image-1228 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi7.png" /></a><strong><em>Ppk</em> (process performance index).</strong> <em>Ppk</em> indicates how centered your process performance range is in relation to your specification limits (how well you are performing currently). <a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi8.png"><img alt="levi8" height="135" width="372" class="media-image aligncenter size-full wp-image-1229 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi8.png" /></a></p>
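Given specification limits, all four indices reduce to a few lines; the only difference between the C and P families is which sigma you pass in (within-subgroup estimate vs. overall sample standard deviation). A minimal sketch with illustrative numbers:

```python
def cp(usl, lsl, sigma_within):
    """Cp: spec width over six within-subgroup sigmas; ignores centering."""
    return (usl - lsl) / (6 * sigma_within)

def cpk(usl, lsl, mean, sigma_within):
    """Cpk: distance from the mean to the nearer spec limit, in 3-sigma units."""
    return min((usl - mean) / (3 * sigma_within),
               (mean - lsl) / (3 * sigma_within))

# Pp and Ppk use the same formulas with the overall (n-1) sample standard deviation
pp, ppk = cp, cpk

# Illustrative values: USL = 10.5, LSL = 9.5, process mean 10.1, sigma 0.1
print(round(cp(10.5, 9.5, 0.1), 2), round(cpk(10.5, 9.5, 10.1, 0.1), 2))   # → 1.67 1.33
```

Note how the off-center mean drags Cpk below Cp, which is exactly the distinction the formulas above encode.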
<h1>What’s a "Good"<em> Cpk</em>?</h1>
<p>A <em>Cpk</em> of 1.00 will produce a 0.27% fail rate, or a theoretical 2,700 defects per million parts produced. A <em>Cpk</em> of 1.33 will produce roughly a 0.007% fail rate, or about 66 theoretical defects per million parts produced. In reality, the acceptable <em>Cpk</em> depends on your particular industry standard. As a rule of thumb, a <em>Cpk</em> of 1.33 is traditionally considered a minimum standard.</p>
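These theoretical fail rates come from the two-sided normal tail beyond 3·Cpk sigmas, assuming a centered, normally distributed process; a quick check:

```python
import math

def expected_ppm(cpk):
    """Theoretical two-sided defect rate (parts per million) for a centered,
    normally distributed process with the given Cpk."""
    z = 3.0 * cpk                                 # sigmas to the nearer spec limit
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))    # one-sided normal tail P(Z > z)
    return 2.0 * tail * 1e6

print(round(expected_ppm(1.00)))   # → 2700
print(round(expected_ppm(1.33)))
```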
<h1>Confidence Interval</h1>
<p>A confidence interval shows the statistical range of your capability (<em>Cpk</em>) based on sample size: the larger the sample size, the tighter the range. The confidence interval says that there is an x% confidence that your capability is between “a” and “b.” The higher the confidence level, the wider the interval. For example, if we report a <em>Cpk</em> of 1.26, what we are really saying is something like, “I don’t know the true <em>Cpk</em>, but based on a sample of n=145, I am 95% confident that it is between 1.10 and 1.41.” Therefore, the more data you collect, the more accurate your measurement of actual process capability, or performance. In most calculations 90% or 95% confidence is required, but a confidence interval can be calculated at any level; just remember that the fewer the data points, the wider the range. <a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/levi9.png"><img alt="levi9" height="86" width="564" class="media-image aligncenter size-full wp-image-1230 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/levi9.png" /></a></p>
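One common normal-approximation formula for a Cpk confidence interval reproduces the n=145 example; treat it as a sketch (an approximation, not an exact interval):

```python
import math

def cpk_confidence_interval(cpk, n, z=1.96):
    """Approximate confidence interval for Cpk (z = 1.96 for 95% confidence),
    using the common normal-approximation standard error."""
    se = math.sqrt(1.0 / (9.0 * n) + cpk ** 2 / (2.0 * (n - 1)))
    return cpk - z * se, cpk + z * se

lo, hi = cpk_confidence_interval(1.26, 145)
print(f"{lo:.2f} {hi:.2f}")   # → 1.10 1.42
```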
<h1>Real Life Application</h1>
<p>During the creation and testing of the <em>Cpk</em> Calculator App, we had the opportunity to test every scenario that we encountered in the real world. In one real-life scenario, a routine hourly check of a “widget’s” thickness determined that the part was out of specification. After 15 minutes of data collection and testing on the floor using the app, we found that our process, which normally had a <em>Cpk</em> of 1.3, now reflected a <em>Cpk</em> of 0.80. This led us to discover that the machine operator had reduced the cutting machine’s cycle time in an attempt to improve throughput and productivity. With that in mind, we reset the machine to its original settings to confirm that we had found the root cause. Subsequently, we used the <em>Cpk</em> calculator as we gradually reduced cycle time as much as possible without negatively affecting process capability. In the end, we confirmed the root cause and implemented a new and improved cycle time for the piece of equipment. ________________________________________________________ <em><a href="/sites/lean-math/files/blog/wp-content/uploads/2014/11/Levi-Head-Shot-2.jpg"><img alt="Levi Head Shot 2" height="135" width="163" class="media-image alignleft wp-image-1213 size-full media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/11/Levi-Head-Shot-2.jpg" /></a>This post was authored by Levi McKenzie, a continuous improvement kind of guy who enjoys exploring new facets of lean methodology, facts, data, and making things faster and better. Levi is a co-founder of Brown Belt Institute, a mobile app development company</em> <em>that focuses on providing useful lean six sigma tools that are inexpensive and easy to use for the "blue collar brown belt" sector.</em></p>
</div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/tags/measurement" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Measurement</a></div></div></div>Fri, 07 Nov 2014 01:16:01 +0000MarkRHamel122 at http://leanmath.comhttp://leanmath.com/blog-entry/cpk-and-mystery-estimated-standard-deviation-guest-post#commentsCargo Cult Statistics
http://leanmath.com/blog-entry/cargo-cult-statistics
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>Did first-class passengers on the Titanic get preferential treatment during the evacuation? James Cameron’s movie certainly seems to suggest so, but let’s look at the data.</p>
<p> </p>
<table><tbody><tr><td width="111"></td>
<td width="152"><strong>Survived</strong></td>
<td width="134"><strong>Died</strong></td>
</tr><tr><td width="111"><strong>First class</strong></td>
<td width="152">203</td>
<td width="134">122</td>
</tr><tr><td width="111"><strong>Third class</strong></td>
<td width="152">178</td>
<td width="134">528</td>
</tr></tbody></table><p> </p>
<p>The data is compelling: 75% of the third-class passengers perished, compared to only 38% of the first-class passengers. The statistically inclined among you might run a Chi-squared test to confirm these observations, and not surprisingly the results will be statistically significant. The difference in the proportion of first-class passengers that perished versus the proportion of third-class passengers that perished is unlikely to have occurred by chance. Well, that must be the end of the story. An analyst might create some pie charts or stacked bar charts to illustrate the results, but that is the end of the story… right? Not quite.</p>
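For the curious, the Chi-squared statistic for the 2x2 table above can be computed by hand; a small sketch (3.84 is the 5% critical value for one degree of freedom):

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n           # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# First class: 203 survived, 122 died; third class: 178 survived, 528 died
print(round(chi_squared_2x2(203, 122, 178, 528), 1))   # → 132.5, far above 3.84
```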
<p> </p>
<p>Consider a more detailed breakdown of the same data:</p>
<table><tbody><tr><td width="176"></td>
<td width="152"><strong>Survived</strong></td>
<td width="134"><strong>Died</strong></td>
</tr><tr><td width="176"><strong>First class</strong></td>
<td width="152"></td>
<td width="134"></td>
</tr><tr><td width="176">Men</td>
<td width="152">57</td>
<td width="134">118</td>
</tr><tr><td width="176">Women and children</td>
<td width="152">146</td>
<td width="134">4</td>
</tr><tr><td width="176"><strong>Third class</strong></td>
<td width="152"></td>
<td width="134"></td>
</tr><tr><td width="176">Men</td>
<td width="152">75</td>
<td width="134">387</td>
</tr><tr><td width="176">Women and children</td>
<td width="152">103</td>
<td width="134">141</td>
</tr></tbody></table><p>Now the data suggests a possibly different story. With this data, it is evident that 79% of the men died, compared to 37% of the women and children. So which was it? Was it class privilege or chivalry? Or was it something else?</p>
<p>These are questions of history, and there are many lessons to be learned from history. And there is much to be learned from the over simplistic analysis that suggested the cause was class privilege:</p>
<ul><li>Just because the numbers are overwhelming, it doesn’t mean your hypothesis is true.</li>
<li>When analyzing data, it is wise to remember the words of Sherlock Holmes: “when you have eliminated the impossible, whatever remains, however improbable, must be the truth.”</li>
</ul><p>The failure in the initial analysis was not a failure of mathematics or statistics, but a failure of the analyst, who failed to consider other alternatives. Richard Feynman described this error in his essay “Cargo Cult Science,” in which he recommends, among other things, that we should not fool ourselves and we should not fool others. We accomplish these goals with a profound honesty, by challenging ourselves to look for other explanations, and by carefully performing and re-performing experiments. And while the systems that we study may be more complex and more dynamic than the systems that a physicist studies, there is no excuse for cargo cult statistics.</p>
</div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/tags/measurement" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Measurement</a></div><div class="field-item odd"><a href="/tags/thoughts" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Thoughts</a></div></div></div>Fri, 10 Oct 2014 03:18:07 +0000drmike121 at http://leanmath.comhttp://leanmath.com/blog-entry/cargo-cult-statistics#commentsPick's Theorem (or Pick an Area, Any Area)
http://leanmath.com/blog-entry/picks-theorem-or-pick-area-any-area
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>Pick’s Theorem is a simple way to calculate area. This theorem is particularly useful when calculating the reduction of square feet (or square meters) achieved by improving a process layout. To use Pick’s Theorem, overlay a sketch of the area that you want to calculate onto a square grid of points. The grid of points should be fine enough that any bend on the boundary coincides with a grid point. For example: <a href="/sites/lean-math/files/blog/wp-content/uploads/2014/05/pick1.png"><img alt="pick1" height="280" width="342" class="media-image aligncenter size-full wp-image-1090 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/05/pick1.png" /></a>To calculate the area, simply take the number of points on the boundary divided by two, plus the number of interior points, minus one. Or, in shorthand: <em>A</em> = ½<em>B</em> + <em>I</em> − 1. For the above example, the number of boundary points is 21, and the number of interior points is 17. Therefore, the area is: <em>A</em> = ½(21) + 17 − 1 = 26.5 sq. ft. This method is exact, but it's not always necessary. Don't forget that in many cases you can get an accurate estimate of floor space by simply counting the number of floor tiles (or ceiling tiles) and multiplying by the area of each tile. This method is fast, easy, and accurate enough for most situations. And, a word of caution: Pick’s Theorem works for lattice polygons, but breaks down if there are any holes in the polygon. For example, in the shaded figure below the number of boundary points is 18 and there are 2 interior points, so using Pick’s Theorem the area works out to be 10 sq. units. But clearly the actual area is 11 sq. units! The issue is the hole. 
<a href="/sites/lean-math/files/blog/wp-content/uploads/2014/05/pick2.png"><img alt="pick2" height="133" width="171" class="media-image aligncenter size-full wp-image-1092 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/wp-content/uploads/2014/05/pick2.png" /></a>For cases like these, we recommend calculating the total area without the holes and then subtracting the area of the holes.</p>
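The theorem is essentially one line of code; a minimal sketch reproducing both examples above:

```python
def picks_area(boundary_points, interior_points):
    """Area of a lattice polygon (no holes) via Pick's Theorem: A = B/2 + I - 1."""
    return boundary_points / 2 + interior_points - 1

print(picks_area(21, 17))   # → 26.5
print(picks_area(18, 2))    # → 10.0, wrong for the holed figure (true area is 11)
```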
</div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/tags/measurement" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Measurement</a></div></div></div>Fri, 02 May 2014 18:19:45 +0000drmike110 at http://leanmath.comhttp://leanmath.com/blog-entry/picks-theorem-or-pick-area-any-area#commentsQuick and Easy Continuous Variable Gage R&R Test
http://leanmath.com/blog-entry/quick-and-easy-continuous-variable-gage-rr-test
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even" property="content:encoded"><p>Looking for something to do? Why not run a continuous variable gage repeatability and reproducibility test? Our experience is that while many organizations have their key measurement devices on a calibration schedule, calibration simply isn’t enough. Gage R&R tests provide insights into how users interact with measuring equipment and can uncover issues such as bias, linearity, accuracy, and, of course, repeatability and reproducibility. </p>
<p>The quick and easy way to run a continuous variable gage R&R test is to ask several operators to measure the same parts several times each. Including a part with a known size allows you to check for accuracy.</p>
<p>There are sophisticated tools for analyzing the results, but the quick and easy way is to make a scatter plot of the results as well as an X-bar and R chart. This approach works well in most circumstances.</p>
<p>The results might look as follows (Part No. 3 has a known size of 1.00 inches):</p>
<p><a href="/sites/lean-math/files/blog/wp-content/uploads/2013/04/Gage-R-and-R1.png"><img alt="Gage R and R" height="1518" width="1027" class="media-image aligncenter size-full wp-image-530 media-element file-media-large" typeof="foaf:Image" src="http://www.leanmath.com/sites/lean-math/files/blog/wp-content/uploads/2013/04/Gage-R-and-R1.png" /></a> What issues has our quick and easy gage R&R test uncovered?</p>
<ul><li>There appears to be an accuracy issue. Part number 3 has a known size of 1.000” but the operators are measuring it to have an approximate size of 1.106” on average.</li>
<li>There appears to be a linearity issue. As the part size gets bigger, the range of the measurements generally gets bigger.</li>
<li>There appears to be an unusual pattern: the first measurement is always the smallest, the second is larger, and the third is the largest.</li>
<li>There also appears to be a repeatability issue. The range of the measurements is a bit large. </li>
</ul><p>The criticality of the measurement system and the part dimension would determine which issue we would tackle first. But we gained a lot of insight from a quick and easy test!</p>
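The tallying behind such a quick check can be sketched in a few lines; the readings below are hypothetical, chosen to mimic the bias observed on Part No. 3 (known size 1.000"):

```python
def part_summaries(measurements):
    """Per-part mean and range from repeated measurements.
    `measurements` maps part id -> list of readings (all operators, all trials)."""
    return {part: (sum(vals) / len(vals), max(vals) - min(vals))
            for part, vals in measurements.items()}

# Hypothetical readings of Part No. 3, which has a known size of 1.000"
data = {3: [1.101, 1.105, 1.112, 1.099, 1.106, 1.111]}
mean, rng = part_summaries(data)[3]
print(round(mean, 3), round(rng, 3))   # → 1.106 0.013
```

Comparing the per-part mean against the known size flags bias; the per-part range feeds the R chart and the repeatability assessment.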
</div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-hidden"><div class="field-items"><div class="field-item even"><a href="/tags/measurement" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Measurement</a></div></div></div>Tue, 09 Apr 2013 19:32:11 +0000drmike68 at http://leanmath.comhttp://leanmath.com/blog-entry/quick-and-easy-continuous-variable-gage-rr-test#comments