Gage R&R (Gage Repeatability and Reproducibility) is the amount of measurement variation introduced by a measurement system, which consists of the measuring instrument itself and the individuals using the instrument. A Gage R&R study is a critical step in manufacturing Six Sigma projects, and it quantifies three things:
1. Repeatability: variation from the measurement instrument itself
2. Reproducibility: variation from the individuals using the instrument
3. Overall Gage R&R: the combined effect of (1) and (2)
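Because repeatability and reproducibility are independent variance components, they combine as a root-sum-of-squares rather than a simple sum. A minimal sketch, using made-up standard deviations (not from a real study):

```python
import math

# Hypothetical variance components from a gage study (units: mm).
# The values below are illustrative assumptions, not real study data.
repeatability_sd = 0.0006    # equipment variation, item (1)
reproducibility_sd = 0.0004  # appraiser variation, item (2)

# Variances add, so the combined Gage R&R standard deviation is the
# root-sum-of-squares of the two components:
grr_sd = math.sqrt(repeatability_sd**2 + reproducibility_sd**2)
print(round(grr_sd, 6))
```

Note that the combined value is less than the arithmetic sum of the two components, which is why the components must be combined through their variances.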
The overall Gage R&R is normally expressed as a percentage of the tolerance for the CTQ (critical-to-quality characteristic) being studied, and a value of 20% Gage R&R or less is considered acceptable in most cases. Example: for a 4.20mm to 4.22mm specification (0.02mm total tolerance) on a shaft diameter, an acceptable Gage R&R value would be 20 percent of 0.02mm (0.004mm) or less.
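The percent-of-tolerance arithmetic for the shaft-diameter example works out as follows (a small sketch; the 0.004mm Gage R&R spread is the value assumed in the example above):

```python
# Percent-of-tolerance check for the shaft-diameter example.
usl, lsl = 4.22, 4.20      # specification limits (mm)
tolerance = usl - lsl      # 0.02 mm total tolerance
grr = 0.004                # Gage R&R spread (mm), from the example

percent_tolerance = round(100 * grr / tolerance, 6)
print(percent_tolerance)   # 20.0 -> right at the acceptance limit
acceptable = percent_tolerance <= 20
```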
The Difference Between Gage R&R and Accuracy
A Gage R&R study quantifies the inherent variation in the measurement system (the combination of items 1 and 2 noted above), but measurement system accuracy must be verified through a calibration process. For example, when reading an outdoor thermometer, we might find a total Gage R&R of five degrees, meaning that we will observe up to five degrees of temperature variation, independent of the actual temperature at a given time. However, the thermometer itself might also be calibrated ten degrees to the low side, meaning that, on average, the thermometer will read ten degrees below the actual temperature. The effects of poor accuracy and a high Gage R&R can render a measurement system useless if not addressed.
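The thermometer example can be made concrete with a quick simulation. This is an illustrative sketch, assuming the Gage R&R spread behaves as a uniform scatter of width five degrees around the biased mean; the specific numbers mirror the paragraph above:

```python
import random

random.seed(0)
true_temp = 70.0   # actual outdoor temperature (degrees), assumed
bias = -10.0       # calibration error: reads ten degrees low
grr_spread = 5.0   # total Gage R&R: readings vary within five degrees

# Bias shifts every reading by the same amount; Gage R&R scatters
# individual readings around the (shifted) mean.
readings = [true_temp + bias + random.uniform(-grr_spread / 2, grr_spread / 2)
            for _ in range(1000)]

mean_reading = sum(readings) / len(readings)
# The mean lands near 60 degrees (an accuracy problem that only
# calibration can reveal), while individual readings still scatter
# across roughly five degrees (the Gage R&R problem).
```

The two defects are independent: averaging many readings washes out the Gage R&R scatter but leaves the ten-degree calibration bias fully intact.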
Measurement system variation is often a major contributor to the observed process variation, and in some cases it is found to be the number-one contributor. Remember, Six Sigma is all about reducing variation.
Think about the possible outcomes if a high-variation measurement system is not evaluated and corrected during the Measure phase of a DMAIC project: there is a good chance that the team will be mystified by the variation they encounter in the Analyze phase, as they search for variation causes outside the measurement system.
Measurement system variation is inherently built into the values we observe from a measuring instrument, and a high-variation measurement system can completely distort a process capability study (not to mention the effects of false accepts and false rejects from a quality perspective). The following graph shows how an otherwise capable process (Cpk = 2.0: this is a Six Sigma process) is portrayed as marginal or poor as the Gage R&R percentage increases:
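The distortion follows from the fact that observed variance is the sum of the true process variance and the measurement system variance. A small sketch of that relationship, using the shaft-diameter tolerance from earlier and assuming a centered process with a true Cpk of 2.0 (the Gage R&R percentages are expressed as a 6-sigma spread over the tolerance, one common convention):

```python
import math

# Illustrative only: a true Six Sigma process (Cpk = 2.0) on the
# 4.20-4.22 mm shaft specification, centered at nominal.
usl, lsl = 4.22, 4.20
target = (usl + lsl) / 2
true_sd = (usl - target) / (3 * 2.0)   # sigma that yields true Cpk = 2.0

for grr_pct in (0, 10, 30, 50):
    # Gage R&R percentage interpreted as (6 * gage sigma) / tolerance.
    gage_sd = (grr_pct / 100) * (usl - lsl) / 6
    # Variances add: observed spread = process spread + gage spread.
    observed_sd = math.sqrt(true_sd**2 + gage_sd**2)
    observed_cpk = (usl - target) / (3 * observed_sd)
    print(grr_pct, round(observed_cpk, 2))
```

Under these assumptions, the observed Cpk falls from 2.0 toward roughly 1.4 as Gage R&R climbs to 50 percent, even though the underlying process never changed.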