Criticisms of Six Sigma
Origin
Robert Galvin did not really "invent" Six Sigma in the 1980s, but would more correctly be said to have applied methodologies that had been available since the 1920s and were developed by luminaries like Shewhart, Deming, Juran, Ishikawa, Ohno, Shingo, Taguchi and Shainin. All tools used by and for Six Sigma are actually a subset of the Quality Engineering discipline and can be considered to be a part of the ASQ Certified Quality Engineer body of knowledge. The goal of Six Sigma, then, is to use the old tools in concert, for a greater effect than a sum-of-parts approach.
The use of "Black Belts" as itinerant change agents is controversial as it has created a cottage industry of training and certification which arguably relieves management of accountability for change; pre-Six Sigma implementations, exemplified by the Toyota Production System and Japan's industrial ascension, simply used the technical talent at hand — Design, Manufacturing and Quality Engineers, Toolmakers, Maintenance and Production workers — to optimize the processes.
The expansion of the various "Belts" to include "Green Belt", "Master Black Belt" and "Gold Belt" is commonly seen as a parallel to the various "Belt Factories" that exist in martial arts. Additionally, there is criticism from the martial arts community over the appropriation of the term "Black Belt" for a non-martial-arts use. This was used as a joke in the comic strip Dilbert.
The term Six Sigma
Sigma (the lower-case Greek letter σ) is used to represent the standard deviation (a measure of variation) of a population; the lower-case Latin letter s denotes an estimate based on a sample. The term "six sigma process" comes from the notion that if you have six standard deviations between the mean of a process and the nearest specification limit, you will make practically no items that exceed the specifications. This is the basis of the Process Capability Study, often used by quality professionals. The term "Six Sigma" has its roots in this tool, rather than in simple process standard deviation, which is also measured in "sigmas". Criticism of the tool itself, and of the way the term was derived from the tool, often sparks criticism of Six Sigma.
The widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO).[1] A process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided Capability Study). So 3.4 DPMO corresponds to 4.5 sigmas, not six. Anyone with access to Minitab or QuikSigma can quickly confirm this by running a Capability Study on data with a mean of 0, a standard deviation of 1, and an upper specification limit of 4.5. So, how is this truly 4.5 sigma process transformed to a 6 sigma process? By arbitrarily adding 1.5 sigmas to the calculated result, the "1.5 sigma shift" (SBTI Black Belt material, ca 1998). Dr. Donald Wheeler, one of the most respected authors on the topics of Control Charts, Capability Studies, and Designed Experiments, dismisses the 1.5 sigma shift as "goofy".[8]
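For readers without Minitab or QuikSigma, the same check can be sketched in a few lines of Python (assuming SciPy is available; only the one-sided tail area of the standard normal is needed):

# Sketch: one-sided normal tail areas expressed as defects per million opportunities.
from scipy.stats import norm

def dpmo_beyond(sigma_level):
    # DPMO beyond a one-sided limit placed sigma_level standard deviations from the mean.
    return norm.sf(sigma_level) * 1_000_000

print(dpmo_beyond(4.5))        # roughly 3.4 -- the famous "six sigma" figure
print(dpmo_beyond(6.0))        # roughly 0.001 -- what a true six sigma limit would give
print(dpmo_beyond(6.0 - 1.5))  # adding the 1.5 sigma shift recovers roughly 3.4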
In a Capability Study, sigma refers to the number of standard deviations between the process mean and the nearest specification limit, rather than to the standard deviation of the process, which is also measured in "sigmas". As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, the Process Capability sigma number goes down, because fewer standard deviations will then fit between the mean and the nearest specification limit (see Cpk Index). The notion that, in the long term, processes usually do not perform as well as they do in the short term is correct; it implies that a Process Capability sigma based on long-term data should be less than or equal to an estimate based on short-term data. However, the original use of the 1.5 sigma shift is as shown above, and it implicitly assumes the opposite.
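A minimal sketch of the calculation, with invented specification limits and data, shows how the capability sigma number (and Cpk, which is simply that number divided by three) drops as the mean moves off-centre:

# Sketch: capability sigma and Cpk for a centred and an off-centre process (invented data).
import numpy as np

def capability(data, lsl, usl):
    # Capability sigma = distance from the mean to the nearest spec limit,
    # measured in process standard deviations; Cpk = that value / 3.
    mean = np.mean(data)
    s = np.std(data, ddof=1)
    sigma_level = min(usl - mean, mean - lsl) / s
    return sigma_level, sigma_level / 3.0

rng = np.random.default_rng(0)
centred = rng.normal(loc=100.0, scale=1.0, size=200)
shifted = rng.normal(loc=101.5, scale=1.0, size=200)
print(capability(centred, 97.0, 103.0))  # roughly (3.0, 1.0)
print(capability(shifted, 97.0, 103.0))  # roughly (1.5, 0.5) -- fewer sigmas now fit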
As sample size increases, the error in the estimate of the standard deviation shrinks much more slowly than the error in the estimate of the mean (see confidence interval). Even with a few dozen samples, the estimate of the standard deviation drags an alarming amount of uncertainty into the Capability Study calculations. It follows that estimates of defect rates can be greatly influenced by uncertainty in the estimate of the standard deviation, and that the defective-parts-per-million figures produced by Capability Studies often ought not to be taken too literally.
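One way to see the size of the problem is the standard chi-square confidence interval for sigma (which itself assumes normal data); the sketch below uses an invented sample size and shows how far the implied defect-rate estimate swings across that interval:

# Sketch: 95% confidence interval for sigma from n samples, and its effect on the
# one-sided DPMO estimate for a limit 4.5 sample standard deviations from the mean.
from scipy.stats import chi2, norm

n, s = 30, 1.0  # hypothetical sample size and sample standard deviation
lo = s * ((n - 1) / chi2.ppf(0.975, n - 1)) ** 0.5
hi = s * ((n - 1) / chi2.ppf(0.025, n - 1)) ** 0.5
print(lo, hi)   # roughly 0.80 to 1.34

margin = 4.5    # distance from the mean to the spec limit, in units of s
for sigma in (lo, s, hi):
    print(norm.sf(margin * s / sigma) * 1e6)  # swings from ~0.01 to ~400 DPMO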
Estimates of the number of defective parts per million produced also depend on knowing something about the shape of the distribution from which the samples are drawn. Unfortunately, there is no way to prove that data belong to any particular distribution; normality is merely assumed, on the basis that no evidence to the contrary has been found. Estimating defective parts per million down into the hundreds or tens on the strength of such an assumption is wishful thinking, since actual defects are often precisely the deviations from normality that have been assumed not to exist.
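As a sketch of this sensitivity, a Student-t distribution with 10 degrees of freedom, rescaled to the same mean and standard deviation as the normal, would be hard to tell apart from it in an ordinary sample, yet its 4.5 sigma tail is roughly a hundred times heavier (t with 10 degrees of freedom is an arbitrary stand-in for mild non-normality):

# Sketch: same mean and standard deviation, very different far tails.
from scipy.stats import norm, t

df = 10
scale = (1.0 - 2.0 / df) ** 0.5       # rescales t(10) so its standard deviation is 1
limit = 4.5                           # spec limit, in standard deviations from the mean

print(norm.sf(limit) * 1e6)           # roughly 3.4 DPMO under the normality assumption
print(t.sf(limit / scale, df) * 1e6)  # a few hundred DPMO under the heavier-tailed model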
The +/-1.5 Sigma Drift
Everyone with a Six Sigma program knows about the +/-1.5 sigma drift of the process mean, said to be experienced by all processes. What this is saying is that if we are manufacturing a product that is 100 +/- 3 cm (97–103 cm; presumably the +/-3 cm spread represents three standard deviations, so a 1.5 sigma drift is 1.5 cm), then over time the output may drift up to 98.5–104.5 cm or down to 95.5–101.5 cm. That might be of concern to our customers. So where does the "+/-1.5" come from?
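Taking the example at face value (a sketch which assumes, as above, that the +/-3 cm spread corresponds to three standard deviations of 1 cm each), the effect of such a drift on the out-of-specification rate is easy to compute:

# Sketch: parts per million outside 97-103 cm before and after a 1.5 sigma drift upward.
from scipy.stats import norm

lsl, usl, sigma = 97.0, 103.0, 1.0  # assumed: the +/-3 cm spread is 3 standard deviations

def ppm_out_of_spec(mean):
    return (norm.cdf(lsl, mean, sigma) + norm.sf(usl, mean, sigma)) * 1e6

print(ppm_out_of_spec(100.0))  # roughly 2,700 ppm for the centred process
print(ppm_out_of_spec(101.5))  # roughly 66,800 ppm once the mean has drifted up 1.5 cm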
The +/-1.5 shift was introduced by Mikel Harry. Where did he get it? Harry refers to a paper written in 1975 by Evans, "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts". The paper is about tolerancing, that is, how the overall error in an assembly is affected by the errors in its components. Evans refers to a paper by Bender in 1962, "Benderizing Tolerances – A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups". Bender looked at the classical situation of a stack of disks and how the overall error in the size of the stack relates to the errors in the individual disks. Based on "probability, approximations and experience", he suggests:
v = 1.5 × sqrt(Var X)
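In Bender's tolerance stack-up setting the factor simply inflates the root-sum-square combination of the component variations; a sketch with invented disk tolerances (and nothing to do with Harry's later use of the number) looks like this:

# Sketch: worst-case, root-sum-square, and "Benderized" stack-up of component tolerances.
import math

tolerances = [0.10, 0.05, 0.08, 0.12]  # hypothetical +/- tolerances of the stacked disks

worst_case = sum(tolerances)
rss = math.sqrt(sum(tol * tol for tol in tolerances))
benderized = 1.5 * rss                 # Bender's empirical inflation factor

print(worst_case, rss, benderized)     # 0.35, roughly 0.18, roughly 0.27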
What has this got to do with monitoring the myriad processes that people are concerned about? Very little. Harry then takes things a step further. Imagine a process where 5 samples are taken every half hour and plotted on a control chart. Harry considered the "instantaneous" initial 5 samples as being "short term" (Harry's n=5) and the samples throughout the day as being "long term" (Harry's g=50 points). Because of random variation in …
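The setup Harry describes (n = 5 samples every half hour, g = 50 subgroups over the day) is easy to simulate; the sketch below, with an invented slow drift in the subgroup means, shows how the within-subgroup "short term" estimate of sigma comes out smaller than the pooled "long term" estimate:

# Sketch: "short term" (within-subgroup) vs "long term" (pooled) sigma for
# g = 50 subgroups of n = 5 samples, with an invented drift in the subgroup means.
import numpy as np

rng = np.random.default_rng(1)
g, n = 50, 5
drift = np.linspace(-1.0, 1.0, g)                 # hypothetical wandering of the mean
data = rng.normal(loc=100.0 + drift[:, None], scale=1.0, size=(g, n))

short_term = data.std(axis=1, ddof=1).mean()      # average within-subgroup standard deviation
long_term = data.std(ddof=1)                      # standard deviation of all 250 values pooled
print(short_term, long_term)                      # the long-term estimate comes out larger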