Chapter 12: Problem 5
Why does Six Sigma assume a process that is shifted by 1.5 sigma from the midpoint of the specification interval? What effect does this have on the quality level implied by Six Sigma?
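The quality level the question refers to can be computed directly: with two-sided specification limits at plus or minus six standard deviations and the process mean shifted 1.5 sigma off center, the defect probability under a normal distribution works out to roughly 3.4 defects per million opportunities. A minimal Python sketch (function names are my own, not from the text), assuming a normally distributed process:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def dpmo(sigma_level, shift=1.5):
    # Defects per million opportunities for a process whose mean is
    # shifted `shift` standard deviations away from the center of the
    # spec interval, with spec limits at +/- sigma_level standard
    # deviations from that center.
    p_defect = (1.0 - phi(sigma_level - shift)) + phi(-(sigma_level + shift))
    return p_defect * 1e6

# With the 1.5-sigma shift, a "six sigma" process yields about 3.4 DPMO;
# a perfectly centered six-sigma process would yield only about 0.002 DPMO.
print(dpmo(6.0))        # shifted process
print(dpmo(6.0, 0.0))   # centered process
```

Comparing the two calls shows why the assumed shift matters: the 1.5-sigma allowance for long-run drift in the process mean is what turns the nearly defect-free centered case into the familiar 3.4 DPMO figure.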
The defect rate in Six Sigma is defined as the number of defects divided by the number of opportunities to create defects. (a) Some practitioners define the number of opportunities as the number of inspections and/or tests. Why is this not a valid way to determine defect rate? (Hint: the best manufacturers tend to do very little test and inspection.) (b) Another school of quality thought defines opportunities as value-added transformations. That is, a product or service is changed by the process, the change matters to the customer (i.e., if a step removes scratches from a previous step, it doesn't count), and only first-time operations count (i.e., rework steps are not opportunities). Will this lead to a more reliable measure of defect rate than the previous definition? How might an unscrupulous practitioner manipulate the calculation of opportunities to make the defect rate look better than it actually is?
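The manipulation the question asks about is purely arithmetic: since the defect rate divides a fixed defect count by the opportunity count, padding the denominator (for example, by counting every inspection, test, or rework pass as an "opportunity") makes the same process look better. A small illustration with made-up numbers, assuming the standard DPMO convention:

```python
def dpmo(defects, opportunities):
    # Defects per million opportunities: defects divided by
    # opportunities, scaled to one million.
    return defects / opportunities * 1e6

defects = 50
honest_opps = 1_000    # first-time, value-added transformations only
padded_opps = 10_000   # every inspection, test, and rework pass counted

# The same 50 defects appear ten times better with the padded count.
print(dpmo(defects, honest_opps))   # honest measure
print(dpmo(defects, padded_opps))   # inflated denominator
```

This is why defining opportunities as first-time, value-added transformations gives a more reliable measure: it ties the denominator to what the customer actually cares about rather than to how much checking the producer chooses to do.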
Why is quality so difficult to define? Provide your own definition for a specific operation of your choosing.
How might improved internal quality make scheduling a production system easier?
Why is it important to detect quality problems as early in the line as possible?
Why do the operational consequences of rework become more severe as the length of the rework loop increases?