Statistical Quality Control Questions and Answers

Quality Definition: The quality of a product or service is the fitness of that product or service for meeting or exceeding its intended use as required by the customer.

Quality means fitness for use.
Quality is inversely proportional to variability.
Three aspects are usually associated with the definition of quality: quality of design, quality of conformance, and quality of performance.

    • Quality of Design: Quality of design deals with the stringent conditions that a product or service must minimally possess to satisfy the requirements of the customer. It implies that the product or service must be designed to meet at least minimally the needs of the consumer.
    • Quality of Conformance: Quality of conformance implies that a manufactured product or a service rendered must meet the standards selected in the design phase.
    • Quality of Performance: Quality of performance is concerned with how well a product functions or a service performs when put to use. It measures the degree to which the product or service satisfies the customer.
  • Quality Characteristics: One or more elements define the intended quality level of a product or service. These elements, known as quality characteristics, can be categorized into the following groupings:
    • Structural characteristics include such elements as the length of a part, the weight of a can, the strength of a beam, the viscosity of a fluid, and so on;
    • Sensory characteristics include the taste of good food, the smell of a sweet fragrance, and the beauty of a model, among others;
    • Time-oriented characteristics include such measures as a warranty, reliability, and maintainability; and
    • Ethical characteristics include honesty, courtesy, friendliness, and so on.
  • Variables and Attributes: Quality characteristics fall into two broad classes: variables and attributes. Characteristics that are measurable and are expressed on a numerical scale are called variables. The waiting time in a bank before being served, expressed in minutes, is a variable, as are the density of a liquid in grams per cubic centimeter and the resistance of a coil in ohms.

Prior to defining an attribute, we should define nonconformity and a nonconforming unit. A nonconformity is a quality characteristic that does not meet its stipulated specifications. Let's say that the specification on the fill volume of soft drink bottles is 750 ± 3 milliliters (mL). If we have a bottle containing 745 mL, its fill volume is a nonconformity. A nonconforming unit has one or more nonconformities such that the unit is unable to meet the intended standards and is unable to function as required. An example of a nonconforming unit is a cast iron pipe whose internal diameter and weight both fail to satisfy specifications, thereby making the unit dysfunctional.
A quality characteristic is said to be an attribute if it is classified as either conforming or nonconforming to a stipulated specification. A quality characteristic that cannot be measured on a numerical scale is expressed as an attribute. For example, the smell of a cologne is characterized as either acceptable or is not; the color of a fabric is either acceptable or is not. However, there are some variables that are treated as attributes because it is simpler to measure them this way or because it is difficult to obtain data on them. Examples in this category are numerous. For instance, the diameter of a bearing is, in theory, a variable. However, if we measure the diameter using a go/no-go gage and classify it as either conforming or nonconforming (with respect to some established specifications), the characteristic is expressed as an attribute. The reasons for using a go/no-go gage, as opposed to a micrometer, could be economic; that is, the time needed to obtain a measurement using a go/no-go gage may be much shorter and consequently less expensive. Alternatively, an inspector may not have enough time to obtain measurements on a numerical scale using a micrometer, so such a classification of variables would not be feasible.

  • Defects: A defect is associated with a quality characteristic that does not meet certain standards. Furthermore, the severity of one or more defects in a product or service may cause it to be unacceptable (or defective). The modern term for defect is nonconformity, and the term for defective is nonconforming item.
  • Standard or Specification:  Since the definition of quality involves meeting the requirements of the customer, these requirements need to be documented. A standard, or a specification, refers to a precise statement that formalizes the requirements of the customer; it may relate to a product, a process, or a service. For example, the specifications for an axle might be 2 ± 0.1 centimeters (cm) for the inside diameter, 4 ± 0.2 cm for the outside diameter, and 10 ± 0.5 cm for the length. This means that for an axle to be acceptable to the customer, each of these dimensions must be within the values specified.
  • Specification: a set of conditions and requirements, of specific and limited application, that provide a detailed description of the procedure, process, material, product, or service for use primarily in procurement and manufacturing. Standards may be referenced or included in a specification.
  • Standard: a prescribed set of conditions and requirements, of general or broad application, established by authority or agreement, to be satisfied by a material, product, process, procedure, convention, test method; and/or the physical, functional, performance, or conformance characteristic thereof. A physical embodiment of a unit of measurement (for example, an object such as the standard kilogram or an apparatus such as the cesium beam clock).

Acceptable bounds on individual quality characteristics (say, 2 ± 0.1 cm for the inside diameter) are usually known as specification limits, whereas the document that addresses the requirements of all the quality characteristics is labeled the standard.

  • Quality control: Quality control may generally be defined as a system that maintains a desired level of quality, through feedback on product/service characteristics and implementation of remedial actions, in case of a deviation of such characteristics from a specified standard. This general area may be divided into three main subareas: off-line quality control, statistical process control, and acceptance sampling plans.
  • Off-Line Quality Control: Off-line quality control procedures deal with measures to select and choose controllable product and process parameters in such a way that the deviation between the product or process output and the standard will be minimized. Much of this task is accomplished through product and process design. The goal is to come up with a design within the constraints of resources and environmental parameters such that when production takes place, the output meets the standard. Thus, to the extent possible, the product and process parameters are set before production begins. Principles of experimental design and the Taguchi method provide information on off-line process control procedures.
  • Online Statistical Process Control: Information is gathered about the product, process, or service while it is functional. When the output differs from a determined norm, corrective action is taken in that operational phase. It is preferable to take corrective action on a real-time basis for quality control problems. This approach attempts to bring the system to an acceptable state as soon as possible, thus minimizing either the number of unacceptable items produced or the time over which undesirable service is rendered. Control charts and process capability studies are frequently used online statistical process control techniques. One question that may come to mind is: Shouldn't all procedures be controlled on an off-line basis? The answer is "yes," to the extent possible. The prevailing theme of quality control is that quality has to be designed into a product or service; it cannot be inspected into it. However, despite taking off-line quality control measures, there may be a need for online quality control, because variation in the manufacturing stage of a product or the delivery stage of a service is inevitable. Therefore, some rectifying measures are needed in this phase. Ideally, a combination of off-line and online quality control measures will lead to a desirable level of operation.
  • Acceptance Sampling Plans: Acceptance sampling plans involve inspection of a product or service. When 100% inspection of all items is not feasible, a decision has to be made as to how many items should be sampled or whether the batch should be sampled at all. The information obtained from the sample is used to decide whether to accept or reject the entire batch or lot. In the case of attributes, one parameter is the acceptable number of nonconforming items in the sample. If the number of nonconforming items observed is less than or equal to this number, the batch is accepted. This is known as the acceptance number. In the case of variables, one parameter may be the proportion of items in the sample that are outside the specifications. This proportion would have to be less than or equal to a standard for the lot to be accepted. A plan that determines the number of items to sample and the acceptance criteria of the lot, based on meeting certain stipulated conditions (such as the risk of rejecting a good lot or accepting a bad lot), is known as an acceptance sampling plan.
  • Quality Assurance: The message here is that quality is not just the responsibility of one person in the organization. Everyone involved directly or indirectly in the production of an item or the performance of a service is responsible. Unfortunately, something that is viewed as everyone's responsibility can fall apart in the implementation phase and become no one's responsibility. This behavior can create an ineffective system where quality assurance exists only on paper. Thus, what is needed is a system that ensures that all procedures that have been designed and planned are followed. This is precisely the role and purpose of the quality assurance function.

              The objective of the quality assurance function is to have in place a formal system that continually surveys the effectiveness of the quality philosophy of the company. The quality assurance team thus audits the various departments and assists them in meeting their responsibilities for producing a quality product.

  • Quality circles: A quality circle is typically an informal group of people that consists of operators, supervisors, managers, and so on, who get together to improve ways to make a product or deliver a service. The concept behind quality circles is that in most cases, the persons who are closest to an operation are in a better position to contribute ideas that will lead to an improvement in it. Thus, improvement-seeking ideas do not come only from managers but also from all other personnel who are involved in the particular activity. A quality circle tries to overcome barriers that may exist within the prevailing organizational structure so as to foster an open exchange of ideas.

              A quality circle can be an effective productivity improvement tool because it generates new ideas and implements them. Key to its success is its participative style of management. The group members are actively involved in the decision-making process and therefore develop a positive attitude toward creating a better product or service. They identify with the idea of improvement and no longer feel that they are outsiders or that only management may dictate how things are done. Of course, whatever suggestions that a quality circle comes up with will be examined by management for feasibility. Thus, members of the management team must understand clearly the workings and advantages of the action proposed. Only then can they evaluate its feasibility objectively.

 


12. Quality improvement team: A quality improvement team is another means of identifying feasible solutions to quality control problems. Such teams are typically cross functional in nature and involve people from various disciplines. It is not uncommon to have a quality improvement team with personnel from design and development, engineering, manufacturing, marketing, and servicing. A key advantage of such a team is that it promotes cross-disciplinary flow of information in real time as it solves the problem. When design changes are made, the feasibility of equipment and tools in meeting the new requirements must be analyzed. It is thus essential for information to flow between design, engineering, and manufacturing. Furthermore, the product must be analyzed from the perspective of meeting customer needs. Do the new design changes satisfy the unmet needs of customers? What are typical customer complaints regarding the product? Including personnel from marketing and servicing on these teams assists in answering these questions.
13. Quality and productivity: A misconception that has existed among businesses (and is hopefully in the process of being debunked) is the notion that quality decreases productivity. On the contrary, the relationship between the two is positive: Quality improves productivity. Making a product right the first time lowers total costs and improves productivity. More time is available to produce defect-free output because items do not have to be reworked and extra items to replace scrap do not have to be produced. In fact, doing it right the first time increases the available capacity of the entire production line. As waste is reduced, valuable resources (people, equipment, material, time, and effort) can be utilized for added production of defect-free goods or services. The competitive position of the company is enhanced in the long run, with a concomitant improvement in profits.
14. Statistical Process Control: A control chart is one of the primary techniques of statistical process control (SPC). This chart plots the averages of measurements of a quality characteristic in samples taken from the process versus time (or the sample number). The chart has a center line (CL) and upper and lower control limits (UCL and LCL). The center line represents where this process characteristic should fall if there are no unusual sources of variability present. The control limits are determined from some simple statistical considerations. Classically, control charts are applied to the output variable(s) in a system. However, in some cases they can be usefully applied to the inputs as well.
The control chart is a very useful process monitoring technique; when unusual sources of variability are present, sample averages will plot outside the control limits. This is a signal that some investigation of the process should be made and corrective action to remove these unusual sources of variability taken. Systematic use of a control chart is an excellent way to reduce variability.
15. Design of Experiments: A designed experiment is extremely helpful in discovering the key variables influencing the quality characteristics of interest in the process. A designed experiment is an approach to systematically varying the controllable input factors in the process and determining the effect these factors have on the output product parameters. Statistically designed experiments are invaluable in reducing the variability in the quality characteristics and in determining the levels of the controllable variables that optimize process performance. Often significant breakthroughs in process performance and product quality also result from using designed experiments.
16. Total Quality Management: Total quality management (TQM) is a strategy for implementing and managing quality improvement activities on an organization wide basis. TQM began in the early 1980s, with the philosophies of Deming and Juran as the focal point. It evolved into a broader spectrum of concepts and ideas, involving participative organizations and work culture, customer focus, supplier quality improvement, integration of the quality system with business goals, and many other activities to focus all elements of the organization around the quality improvement goal. Typically, organizations that have implemented a TQM approach to quality improvement have quality councils or high-level teams that deal with strategic quality initiatives, workforce-level teams that focus on routine production or business activities, and cross-functional teams that address specific quality improvement issues.
Or
Total quality management revolves around three main themes: the customer, the process, and the people. At its core are the company vision and mission and management commitment. They bind the customer, the process, and the people into an integrated whole. A company's vision is quite simply what the company wants to be. The mission lays out the company's strategic focus. Every employee should understand the company's vision and mission so that individual efforts will contribute to the organizational mission. When employees do not understand the strategic focus, individuals and even departments pursue their own goals rather than those of the company, and the company's goals are inadvertently sabotaged. The classic example is maximizing production with no regard to quality or cost.
Management commitment is another core value in the TQM model. It must exist at all levels for the company to succeed in implementing TQM. Top management compares customer expectations with the satisfaction actually delivered; taking measures to eliminate such discrepancies is known as gap analysis.
The second theme in TQM is the process. Management is responsible for analyzing the process to improve it continuously. In this framework, vendors are part of the extended process, as advocated by Deming. As discussed earlier, integrating vendors into the process improves the vendors' products, which leads to better final products. Because problems can and do span functional areas, self-directed cross-functional teams are important for generating alternative feasible solutions—the process improves again. Technical tools and techniques along with management tools come in handy in the quest for quality improvement. Self-directed teams are given the authority to make decisions and to make appropriate changes in the process.
The third theme deals with people. Human "capital" is an organization's most important asset. Empowerment—involving employees in the decision-making process so that they take ownership of their work and the process—is a key factor in TQM. It is people who find better ways to do a job, and this is no small source of pride. With pride comes motivation. There is a sense of pride in making things better through the elimination of redundant or non-value-added tasks or combining operations. In TQM, managing is empowering.
17. Why SPC?
Success in the global market depends on quality. Companies that produce high quality consistently will succeed; those who do not will ultimately fail.
The emphasis here is on consistent high quality. It isn’t enough to produce quality sporadically; one bad product can hurt a company’s future sales. Inconsistent quality is also more expensive, since bad parts have to be reworked or even scrapped. On the other hand, when quality improves, productivity improves, costs drop, and sales go up.
Companies don’t design poor quality; it is usually the result of a variation in some stage of production. Therefore, product quality depends on the ability to control the production process. This is where statistical process control, SPC, comes in. SPC uses statistics to detect variations in the process so they can be controlled.
18. Process: A process is the transformation of a set of inputs, which can include materials, actions, methods and operations into desired outputs, in the form of products, information, services or – generally – results. In each area or function of an organization there will be many processes taking place. Each process may be analysed by an examination of the inputs and outputs. This will determine the action necessary to improve quality.
19. Quality function deployment (QFD): It is a planning tool that focuses on designing quality into a product or service by incorporating customer needs. It is a systems approach involving cross-functional teams (whose members are not necessarily from product design) that looks at the complete cycle of product development. This quality cycle starts with creating a design that meets customer needs and continues on through conducting detailed product analyses of parts and components to achieve the desired product, identifying the processes necessary to make the product, developing product requirements, prototype testing, final product or service testing, and finishing with after-sales troubleshooting.
QFD is customer driven and translates customers' needs into appropriate technical requirements in products and services. It is proactive in nature. Also identified by other names—house of quality, matrix product planning, customer-driven engineering, and decision matrix—it has several advantages. It evaluates competitors from two perspectives, the customer's perspective and a technical perspective. The customer's view of competitors provides the company with valuable information on the market potential of its products. The technical perspective, which is a form of benchmarking, provides information on the relative performance of the company with respect to industry leaders. This analysis identifies the degree of improvements needed in products and processes and serves as a guide for resource allocation.
QFD reduces the product development cycle time in each functional area, from product inception and definition to production and sales. By considering product and design along
20. Six Sigma: The focus of six-sigma is reducing variability in key product quality characteristics to the level at which failure or defects are extremely unlikely.
Suppose that the specification limits are three standard deviations on either side of the mean of a normally distributed quality characteristic. In this situation the probability of producing a product within these specifications is 0.9973, which corresponds to 2700 parts per million (ppm) defective. This is referred to as three-sigma quality performance, and it actually sounds pretty good.
However, suppose we have a product that consists of an assembly of 100 independent components or parts and all 100 of these parts must be non-defective for the product to function satisfactorily. The probability that any specific unit of product is non-defective is
0.9973 x0.9973 x. . . x0.9973 (0.9973)100 0.7631
That is, about 23.7% of the products produced under three-sigma quality will be defective. This is not an acceptable situation, because many products used by today’s society are made up of many components. Even a relatively simple service activity, such as a visit by a family of four to a fast-food restaurant, can involve the assembly of several dozen components. A typical automobile has about 100,000 components and an airplane has between one and two million!
The Motorola six-sigma concept is to reduce the variability in the process so that the specification limits are at least six standard deviations from the mean. Then there will only be about 2 parts per billion defective. Under six-sigma quality, the probability that any specific unit of the hypothetical product above is non-defective is 0.9999998, or 0.2 ppm, a much better situation.
When the six-sigma concept was initially developed, an assumption was made that when the process reached the six-sigma quality level, the process mean was still subject to disturbances that could cause it to shift by as much as 1.5 standard deviations off target. Under this scenario, a six-sigma process would produce about 3.4 ppm defective.
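The defect rates quoted above can be reproduced from the normal distribution. Below is a minimal sketch, assuming a normally distributed characteristic with symmetric specification limits; SciPy is used only for the normal tail areas.

```python
from scipy.stats import norm

def ppm_defective(sigma_level, mean_shift=0.0):
    """Parts per million outside symmetric specification limits placed at
    +/- sigma_level standard deviations from the target, when the process
    mean is shifted off target by mean_shift standard deviations."""
    p_out = norm.sf(sigma_level - mean_shift) + norm.cdf(-sigma_level - mean_shift)
    return p_out * 1e6

print(ppm_defective(3))        # three-sigma, centered: about 2700 ppm
print(ppm_defective(6))        # six-sigma, centered: about 0.002 ppm (~2 ppb)
print(ppm_defective(6, 1.5))   # six-sigma with a 1.5-sigma mean shift: about 3.4 ppm
```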
The goals of six-sigma, a 3.4 ppm defect level, may seem artificially or arbitrarily high, but it is easy to demonstrate that even the delivery of relatively simple products or services at high levels of quality can lead to the need for six-sigma thinking. For example, consider the visit to a fast-food restaurant mentioned above. The customer orders a typical meal: a hamburger (bun, meat, special sauce, cheese, pickle, onion, lettuce, and tomato), fries, and a soft drink. This product has ten components. Is 99% good quality satisfactory? If we assume that all ten components are independent, the probability of a good meal is
P{Single meal good} = (0.99)^10 = 0.9044
which looks pretty good. There is better than a 90% chance that the customer experience will be satisfactory. Now suppose that the customer is a family of four. Again, assuming independence, the probability that all four meals are good is

P{All meals good} = (0.9044)^4 = 0.6690.
This isn’t so nice. The chances are only about two out of three that all of the family meals are good. Now suppose that this hypothetical family of four visits this restaurant once a month (this is about all their cardiovascular systems can stand!). The probability that all visits result in good meals for everybody is
P{All visits during the year good} = (0.6690)^12 = 0.0080.
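The arithmetic behind these figures is simply repeated multiplication of independent probabilities; a short check of the numbers quoted above:

```python
# The figures above are products of independent probabilities.
p_part = 0.9973
print(p_part ** 100)          # 100-component product at three-sigma quality: ~0.763

p_component = 0.99
p_meal = p_component ** 10    # meal with 10 components, each 99% good: ~0.904
p_family = p_meal ** 4        # four meals all good: ~0.669
p_year = p_family ** 12       # twelve monthly visits all good: ~0.008
print(p_meal, p_family, p_year)
```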
21. A quality improvement program has been instituted in an organization to reduce total quality costs. Discuss the impact of such a program on prevention, appraisal, and failure costs.
Answer: With the advent of a quality improvement program, typically prevention and appraisal costs will increase during the initial period. Usually, as quality improves with time, appraisal costs should decrease. As the impact of quality improvement activities becomes a reality, it will cause a reduction in internal failure and external failure costs, with time. In the long term, we would expect the total quality costs to decrease. The increase in the prevention and appraisal costs should, hopefully, be more than offset by the reduction in internal failure and external failure costs.
22. Explain how it is feasible to increase productivity, reduce costs, and improve market share at the same time.
Answer: It is quite possible to increase productivity, reduce costs, and improve market share at the same time. Through quality improvement activities, one could eliminate operations and thereby reduce production costs as well as production time. When production time is reduced, it leads to improved efficiency, which in effect increases capacity. Thus productivity is improved and costs are reduced. Additionally, with an improvement in quality, customer satisfaction is improved, which leads to an increase in market share through an expanded customer base.
23. Explain why it is possible for external failure costs to go up even if the first-pass quality level of a product made by a company remains the same.
Answer: External failure costs are influenced by the degree of customer satisfaction with the product or service offered. Such influence is impacted not only by the level of operation of the selected organization, but also its competitors, and the dynamic nature of customer preferences. Hence, even if a company maintains its current level of efficiency, if it does not address the changing needs of the customer, external failure costs may go up since the company does not keep up with the dynamic customer needs. Furthermore, if the company begins to trail more and more relative to its competitors, even though it maintains its current level of first-pass quality, customer satisfaction will decrease, leading to increased external failure costs.
24. Discuss the impact of technological breakthrough on the prevention and appraisal cost and failure cost functions.
Answer: The impact of a technological breakthrough is to shift the location of the total prevention and appraisal cost function, leading to a decrease in such total costs for the same level of quality. This cost function usually increases in a non linear fashion with the quality level q. Additionally, the slope of the function will also reduce at any given level of quality with a technological breakthrough. Such breakthroughs may eventually cause a change in the slope of the prevention and appraisal cost function from concave to convex in nature, beyond a certain level of quality. The failure cost function (internal and external failures) is influenced not only by the company, but also by its competitors and customer preferences. Assuming that, through the breakthroughs, the company is in a better position to meet customer needs and has improved its relative position with respect to its competitors and has approached (or become) the benchmark in its industry, the failure cost function will drop, for each quality level, and its slope may also decrease, at each point, relative to its former level. Such changes may lead to a target level of nonconformance to be zero.
25. What are the reasons for mass inspection not being a feasible alternative for quality improvement?
Answer: Inspection is merely a sorting mechanism that separates the nonconforming items from the conforming ones. It does not determine the causes of nonconformance or identify appropriate remedial actions. Hence, it is not a viable alternative to quality improvement. Depending on mass inspection to ensure quality does not ensure a defect-free product. If more than one inspector is involved in mass inspection, an inspector assumes that others will find what he/she has missed. Inspector fatigue also causes defective products to be passed. The fundamental point is that mass inspection does not improve or change the process, and so it is not an alternative to quality improvement.
26. Explain the organizational barriers that prevent a company from adopting the quality philosophy. Describe some specific action plans to remove such barriers. 

Answer: Organizational barriers prevent or slow down the flow of information internally, between departments or between the employee and the supervisor. External barriers impede the flow of information between the company and its vendors, the company and its customers, the company and its investors, and the company and the community in which it resides.
To improve communication channels, there needs to be a change in the organizational culture of the company. Free and open channels of communication need to be established by management. There should be no punitive measures or repercussions for employees who provide feedback on products/processes, with employees being able to express their opinions honestly. Management can demonstrate this only by example or through implementation of such practices.
A second approach could be to promote a team effort in improving products/processes. While individual skills are important, a variety of persons are involved with multiple operations in making the product or rendering the service. It is the joint impact of all these people that influences quality. The reward structure, created by management, could be established in terms of the output quality of the team unit.
The adoption of cross-functional teams to identify product/process changes for quality improvement will definitely promote open channels of communication and reduce existing barriers. A system to accept suggestions from employees at all levels (including managerial personnel) could also be adopted by senior management. Further, a system that rewards the person/team when a proposed idea is implemented, will definitely boost morale and provide an inducement for fresh ideas.
27. What is the difference between quality control and quality improvement? Discuss the role of management in each of these settings.
Answer: Quality control deals with identification of the special causes, determining remedial actions, and implementing these actions so that the special causes are eliminated from the system. These causes are sporadic in nature. Frequently, the remedial actions can be determined at the operator level or lower line-management level. Quality improvement, on the other hand, deals with identification of the common causes that are inherent to the system and determining appropriate actions to eliminate or, more usually, reduce their impact on the variation in the product/service. These decisions are usually made at the management level and involve changes in the system. They require decisions on resource allocation that usually are not made at the operator/lower management level. For example, replacement of major equipment for processing in order to reduce variation is an item of quality improvement. Alternatively, use of an incorrect form that caused a delay in order processing could be an issue of quality control. Usually, quality control issues are handled first, followed by quality improvement.
28. Describe the total quality management philosophy. Choose a company and discuss how its quality culture fits this theme.
Answer: There are three major themes in the total quality management philosophy - customer, process, and people. Satisfying and exceeding the needs of the customer is the foremost objective. The core values of the company are management commitment and a directed focus of all employees towards a common vision and mission. Senior management creates a strategic plan, while mid-management develops operations plans accordingly. Implementation of plans requires an organizational culture that empowers people to suggest innovations through open channels of communication. Further, focus is on process improvement, where suppliers, customers, and investors are integrated into the extended process. The selection of a company will vary with the individual. Each should identify the particular quality culture of the selected company and the manner in which it fits the general themes previously discussed.
29. Describe Motorola's concept of six sigma quality and explain the level of non- conforming product that could be expected from such a process. 

Answer: Motorola's concept of six-sigma quality, even though it may have started out as a metric for evaluation of quality (say, parts per million of nonconforming product), could be viewed as a philosophy or as a methodology for continuous improvement. In terms of a metric, Motorola's assumption is that the distribution of the quality characteristic is normal and that the process spread is much smaller than the specification spread. In fact, it is assumed that, initially, the specification limits are six standard deviations from the mean. Subsequently, shifts in the process mean may take place to the degree of 1.5 standard deviations on a given side of the mean. Here, the assumption is that larger shifts in the process mean will be detected by process controls that are in place and corresponding remedial actions will be taken. Thus, the nearest specification limit is 4.5 standard deviations from the mean, while the farthest specification limit is 7.5 standard deviations from the mean, after the process shift. Using normal distribution tables, it can be shown that the proportion of nonconforming product (outside the nearest specification limit) is 3.4 parts per million. The proportion nonconforming outside the farthest specification limit is negligible, yielding a total nonconformance rate of 3.4 ppm.

 

As a philosophy, the six sigma concept is embraced by senior management as an ideology to promote the concept of continuous quality improvement. It is a strategic business initiative, in this context. When six sigma is considered as a methodology, it comprises the phases of define, measure, analyze, improve, and control, with various tools that could be utilized in each phase. In the define phase, attributes critical to quality, delivery, or cost are identified. Metrics that capture process performance are of interest in the measure phase. In the analyze phase, the impact of the selected factors on the output variable is investigated through data analytic procedures. The improve phase consists of determining level of the input factors to achieve a desired level of the output variable. Finally, methods to sustain the gains identified in the improve phase are used in the control phase. Primarily, statistical process control methods are utilized.
30. Explain the difference between accuracy and precision of measurements. How do you control for accuracy? What can you do about precision?
Answer: Accuracy refers to the bias of the measuring instrument. This is the difference between the average of the measured values and the true value. Precision refers to the variation in the measured values. Accuracy is controlled through calibration. Precision is a function of the measuring instrument. Purchasing an instrument with a higher precision is an alternative.
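A small sketch of the distinction, using hypothetical repeated measurements of a reference part whose true value is known (the numbers are illustrative only): the bias estimates accuracy, and the standard deviation of repeated readings estimates precision.

```python
import statistics

# Hypothetical repeated measurements of a reference part whose true value is 25.00 mm.
true_value = 25.00
readings = [25.03, 25.05, 25.02, 25.04, 25.03, 25.06]

bias = statistics.mean(readings) - true_value   # accuracy: systematic offset (corrected by calibration)
spread = statistics.stdev(readings)             # precision: repeatability of the instrument

print(f"bias = {bias:.3f} mm, standard deviation = {spread:.3f} mm")
```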
31. For each situation, define a type I and a type II error in the appropriate context. Consider the costs of such errors and discuss the implications.
a. The Postal Service wishes to prove that the mean delivery time for packages is less than 5 days.
b. A financial institution believes that it has an average loan processing time of less than 10 days.
c. A marketing firm believes that the average contract for a customer exceeds $50,000.
d. A Web-order company wishes to test if it has improved its efficiency of operations by reducing its average response time.
e. A manufacturer of consumer durables believes that over 70% of its customers are satisfied with the product.
Answer:
a. A type I error here implies concluding the mean delivery time is less than 5 days, when in fact it is not. A type II error implies concluding that the mean delivery time is 5 or more days, when in fact it is less. In the first situation, the postal service would be advertising something that they cannot deliver. It may lead to dissatisfied customers. In the second situation, the postal service may miss an opportunity to promote its services. The type I error could be more serious as regards the customer.
b. A type I error implies concluding that the average loan processing time is less than 10 days, when, in fact, it is not. A type II error implies concluding that the average loan processing time is 10 or more days, when in fact it is less. In the first situation, the institution would be raising its customers' expectations, when it may not be able to meet them. It may result in dissatisfied customers. In the second situation, the institution may miss an opportunity to promote itself. The type I error could be more serious as regards the customer.

c. A type I error implies concluding that the average contract amount exceeds $50,000, when in fact it does not. A type II error implies concluding that the average contract amount is no more than $50,000, when in fact it is more. In the first situation, the firm falsely over-projects its customer contracts. If contracts are subject to federal or state restrictions, it could impact them. In the second situation, the firm is under-selling itself. A type I error could be serious under the guidelines of truth-in-advertising. A type II error, in this case, could hurt the firm's chances of obtaining new contracts. 

d. A type I error implies concluding that the company has improved its efficiency, when in fact it has not. A type II error implies concluding that the company has not improved its efficiency, when it has. A type I error here could be serious under the guidelines of truth-in-advertising. A type II error here could lead to missed opportunities by failing to publicize its efficient operations. 

e. A type I error implies concluding that the proportion of consumers satisfied exceeds 70%, when in fact it does not. A type II error implies concluding that the proportion of satisfied customers does not exceed 70%, when in fact it does. A type I error could be serious in the context of guidelines in truth-in-advertising. A type II error here could lead to missed opportunities. 

32. A 95% confidence interval for the mean thickness of a part in millimeters is (10.2, 12.9). Interpret this interval.
Answer: We are 95% confident that the interval (10.2, 12.9) mm contains the true mean thickness. That is, if a large number of such intervals were constructed from repeated samples, about 95% of them would enclose the true mean thickness.
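The repeated-sampling interpretation can be illustrated by simulation. Below is a minimal sketch, assuming a normal process with a known standard deviation (all values hypothetical): roughly 95% of the intervals constructed this way cover the true mean.

```python
import numpy as np

# Simulate many samples from a process with a known mean and sigma, build a
# 95% z-interval from each, and count how often the interval covers the true mean.
rng = np.random.default_rng(0)
true_mean, sigma, n, trials = 11.5, 1.0, 30, 10_000

covered = 0
for _ in range(trials):
    xbar = rng.normal(true_mean, sigma, n).mean()
    half_width = 1.96 * sigma / np.sqrt(n)
    if xbar - half_width <= true_mean <= xbar + half_width:
        covered += 1

print(covered / trials)   # close to 0.95
```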
33. Explain the relationship between a type I error, power, degree of difference that one wishes to detect in a parameter value, and sample size. How can a type I error be reduced and the power be increased, for a given difference in the parameter? 

Answer: The various parameters and sample size are related. For example, for a given type I error and power, as the degree of difference that one wishes to detect in a parameter decreases, the sample size increases and vice versa. For a given difference in the parameter, the type I error can be reduced and the power increased by increasing the sample size.
34. Explain type I and type II errors in the context of sampling from customers' accounts to identify billing errors in a large retail store. What are the associated costs of these two types of errors?
Answer: H0: No billing errors; Ha: Billing errors. A type I error implies concluding that there are billing errors in a customer account when there are none. This could result in a wasted effort and cost on the part of auditors to detect billing errors. A type II error implies concluding that there are no billing errors when, in fact, they exist. Here, customers who find errors in their bills could be very dissatisfied, leading to loss of future market share. 

35. Population: A population is the set of all items that possess a certain characteristic of interest. A parameter is a characteristic of a population, something that describes it.
36. Sample: A sample is a subset of a population. Realistically, in many manufacturing or service industries, it is not feasible to obtain data on every element in the population. Measurement, storage, and retrieval of large volumes of data are impractical, and the costs of obtaining such information are high. Thus, we usually obtain data from only a portion of the population—a sample. A statistic is a characteristic of a sample. It is used to make inferences on the population parameters that are typically unknown.
37. Statistics: Statistics is the science that deals with the collection, classification, analysis, and making of inferences from data or information. Statistics is subdivided into two categories: descriptive statistics and inferential statistics.
Descriptive statistics describes the characteristics of a product or process using information collected on it. Suppose that we have recorded service times for 500 customers in a fast-food restaurant. We can plot this as a frequency histogram where the horizontal axis represents a range of service time values and the vertical axis denotes the number of service times observed in each time range, which would give us some idea of the process condition. The average service time for 500 customers could also tell us something about the process.
Inferential statistics draws conclusions on unknown product or process parameters based on information contained in a sample. Let's say that we want to test the validity of a claim that the average service time in the fast-food restaurant is no more than 3 minutes (min). Suppose we find that the sample average service time (based on a sample of 500 people) is 3.5 min. We then need to determine whether this observed average of 3.5 min is significantly greater than the claimed 3 min; drawing such a conclusion about the process from the sample is the task of inferential statistics.
38. Random Variable: Data on quality characteristics are described by a random variable and are categorized as continuous or discrete.
Continuous Variable A variable that can assume any value on a continuous scale within a range is said to be continuous. Examples of continuous variables are the hub length of lawn mower tires, the viscosity of a certain resin, the specific gravity of a toner used in photocopying machines, the thickness of a metal plate, and the time to admit a patient to a hospital. Such variables are measurable and have associated numerical values.
Discrete Variable Variables that can assume a finite or countable infinite number of values are said to be discrete. These variables are counts of an event. The number of defective rivets in an assembly is a discrete random variable. Other examples include the number of paint blemishes in an automobile, the number of operating capacitors in an electrical instrument, and the number of satisfied customers in an automobile repair shop.
39. Errors in Hypothesis Testing: There are two types of errors in hypothesis testing: type I and type II. In a type I error, the null hypothesis is rejected when it is actually true. The probability of a type I error is denoted by alpha, the level of significance of the test. Thus, alpha = P(type I error) = P(rejecting H0 | H0 is true). For example, in testing (H0: mu >= 30) against (Ha: mu < 30), suppose that a random sample of 36 parts yields a sample average length of 28 mm when the true mean length of all parts is really 30 mm. If our rejection region is Xbar < 29.452, we must reject the null hypothesis even though it is true; this is a type I error. The magnitude of such an error can be controlled by selecting an acceptable level of alpha.
In a type II error, the null hypothesis is not rejected even though it is false. The probability of a type II error is denoted by beta. Thus, beta = P(type II error) = P(not rejecting H0 | H0 is false). For example, let's test (H0: mu >= 30) against (Ha: mu < 30) with a rejection region of Xbar < 29.452. Now, suppose that the true population mean length of all parts is 28 mm and a sample of 36 parts yields a sample mean of 29.8 mm. In this case, we do not reject the null hypothesis (because 29.8 does not lie in the region Xbar < 29.452). This is a type II error.
Calculating the probability of a type II error requires information about the population parameter (or at least an assumption about it). In such instances, we predict the probability of a type II error based on the actual or assumed parameter value; this prediction serves as a measure of the goodness of the testing procedure and the acceptability of the chosen rejection region. The values of alpha and beta are inversely related. If all other problem parameters remain the same, beta will decrease as alpha increases, and vice versa. Increasing the sample size can reduce both alpha and beta.
The power of a test is the complement of beta and is defined as
power = 1 - beta = P(rejecting H0 | H0 is false)
The power is the probability of correctly rejecting a null hypothesis that is false. Obviously, tests with high powers are the most desirable.
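For the length example above, these quantities can be computed directly once the process standard deviation is assumed (the text does not state it; sigma = 2 mm is assumed here, which is consistent with the rejection cutoff of 29.452 for n = 36 at roughly the 5% significance level).

```python
from math import sqrt
from scipy.stats import norm

mu0, sigma, n = 30.0, 2.0, 36     # sigma = 2 mm is an assumed value
se = sigma / sqrt(n)
cutoff = 29.452                   # reject H0: mu >= 30 when the sample mean < cutoff

alpha = norm.cdf((cutoff - mu0) / se)    # P(reject H0 | mu = 30): about 0.05
beta = norm.sf((cutoff - 28.0) / se)     # P(fail to reject | mu = 28): very small
power = 1 - beta

print(alpha, beta, power)
```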
40. CAUSES OF VARIATION
Variability is a part of any process, no matter how sophisticated, so management and employees must understand it. Several factors over which we have some control, such as methods, equipment, people, materials, and policies, influence variability. Environmental factors also contribute to variability. The causes of variation can be subdivided into two groups: common causes and special causes. Control of a process is achieved through the elimination of special causes. Improvement of a process is accomplished through the reduction of common causes.
Special Causes: Variability caused by special or assignable causes is something that is not inherent in the process. That is, it is not part of the process as designed and does not affect all items. Special causes can be the use of a wrong tool, an improper raw material, or an incorrect procedure. If an observation falls outside the control limits or a nonrandom pattern is exhibited, special causes are assumed to exist, and the process is said to be out of control. One objective of a control chart is to detect the presence of special causes as soon as possible to allow appropriate corrective action. Once the special causes are eliminated through remedial actions, the process is again brought to a state of statistical control.
Deming believed that 15% of all problems are due to special causes. Actions on the part of both management and employees will reduce the occurrence of such causes.
Common Causes: Variability due to common or chance causes is something inherent to a process. It exists as long as the process is not changed and is referred to as the natural variation in a process. It is an inherent part of the process design and affects all items. This variation is the effect of many small causes and cannot be totally eliminated. When this variation is random, we have what is known as a stable system of common causes. A process operating under a stable system of common causes is said to be in statistical control. Examples include inherent variation in incoming raw material from a qualified vendor, the vibration of machines, and fluctuations in working conditions.
Management alone is responsible for common causes. Deming believed that about 85% of all problems are due to common causes and hence can be solved only by action on the part of management. In a control chart, if quality characteristic values are within control limits and no nonrandom pattern is visible, it is assumed that a system of common causes exists and that the process is in a state of statistical control.
41. Errors in Making Inferences from Control Charts
Making inferences from a control chart is analogous to testing a hypothesis. Suppose that we are interested in testing the null hypothesis that the average diameter of a part from a particular process is 25 mm. This situation is represented by the null hypothesis H0: mu = 25; the alternative hypothesis is Ha: mu ≠ 25. The rejection region of the null hypothesis is thus two-tailed. The control limits are the critical points that separate the rejection and acceptance regions. If a sample value (sample average diameter, in this case) falls above the upper control limit or below the lower control limit, we reject the null hypothesis. In such a case, we conclude that the process mean differs from 25 mm and the process is therefore out of control. Types I and II errors can occur when making inferences from control charts.
Type I Errors: Type I errors result from inferring that a process is out of control when it is actually in control. The probability of a type I error is denoted by alpha. Suppose that a process is in control. If a point on the control chart falls outside the control limits, we assume that the process is out of control. However, since the control limits are a finite distance (usually, 3 standard deviations) from the mean, there is a small chance (about 0.0027) of a sample statistic falling outside the control limits. In such instances, inferring that the process is out of control is a wrong conclusion. The probability alpha is the sum of the two tail areas outside the control limits.
Type II Errors: Type II errors result from inferring that a process is in control when it is really out of control. If no observations fall outside the control limits, we conclude that the process is in control. Suppose, however, that a process is actually out of control. Perhaps the process mean has changed (say, an operator has inadvertently changed a depth of cut or the quality of raw materials has decreased). Or, the process could go out of control because the process variability has changed (due to the presence of a new operator). Under such circumstances, a sample statistic could fall within the control limits, yet the process would be out of control—this is a type II error.
42. Operating Characteristic Curve: An operating characteristic (OC) curve is a measure of a control chart's ability to detect changes in process parameters. Specifically, it is a plot of the probability of the type II error versus the shift of a process parameter value from its in-control value. OC curves enable us to determine the chances of not detecting a shift of a certain magnitude in a process parameter on a control chart. The shape of an OC curve is similar to an inverted S. For small shifts in the process mean, the probability of nondetection is high. As the change in the process mean increases, the probability of nondetection decreases; that is, it becomes more likely that we will detect the shift. For large changes, the probability of nondetection is very close to zero. The ability of a control chart to detect changes quickly is indicated by the steepness of the OC curve and the quickness with which the probability of nondetection approaches zero. Calculations for constructing an operating characteristic curve are identical to those for finding the probability of a type II error.
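Below is a minimal sketch of how points on such an OC curve can be computed for an X-bar chart with 3-sigma limits (the subgroup size n = 5 is an assumed, illustrative value): the probability of nondetection is the probability that the next sample mean still falls between the control limits after the shift.

```python
from math import sqrt
from scipy.stats import norm

def prob_nondetection(shift_sigmas, n=5, k=3.0):
    """Probability that the first subgroup mean taken after the process mean
    shifts by shift_sigmas (in units of the process standard deviation) still
    plots inside the k-sigma control limits of an X-bar chart (subgroup size n)."""
    d = shift_sigmas * sqrt(n)
    return norm.cdf(k - d) - norm.cdf(-k - d)

for shift in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(shift, round(prob_nondetection(shift), 4))   # decreases as the shift grows
```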
43. Average Run Length: An alternative measure of the performance of a control chart, in addition to the OC curve, is the average run length (ARL). This denotes the number of samples, on average, required to detect an out-of-control signal. Suppose that the rule used to detect an out-of-control condition is a single point plotting outside the control limits; the ARL is then 1/p, where p is the probability that a single sample statistic plots outside the limits.
44. What are the benefits of using control charts?
Answer: Benefits include knowing when to take corrective action and when to leave a process alone, indicating the type of remedial action necessary, providing information on process capability, and serving as a benchmark for quality improvement.

45. Explain the difference between common causes and special causes. Give examples of each. 

Answer: Special causes are not inherent in the process. Examples are an inexperienced operator or poor-quality raw materials and components. Common causes are part of the system. They cannot be totally eliminated. Examples are variation in processing time between qualified operators, or variation in quality within a batch received from a qualified supplier.
46. Explain the rationale behind placing the control limits at 3 standard deviations from the mean.
Answer: A normal distribution of the quality characteristic being monitored (for example average strength of a cord) is assumed. For a normal distribution, control limits placed at 3 standard deviations from the mean ensure that about 99.73% of the values will plot within the limits, when no changes have taken place in the process. This implies that very few false alarms will occur.
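A brief sketch of such 3-standard-error limits for an X-bar chart computed from subgroup data, using the common R-bar method with the tabulated constant A2 = 0.577 for subgroups of size 5 (the measurements are hypothetical):

```python
import numpy as np

# Hypothetical measurements: 6 subgroups of size 5 from a process.
subgroups = np.array([
    [10.1, 10.3,  9.9, 10.2, 10.0],
    [10.2, 10.1, 10.0,  9.8, 10.1],
    [ 9.9, 10.0, 10.2, 10.1, 10.3],
    [10.0, 10.2, 10.1, 10.0,  9.9],
    [10.3, 10.1, 10.0, 10.2, 10.1],
    [10.0,  9.9, 10.1, 10.2, 10.0],
])

xbar_bar = subgroups.mean(axis=1).mean()   # grand average (center line)
r_bar = np.ptp(subgroups, axis=1).mean()   # average subgroup range
A2 = 0.577                                 # tabulated constant for subgroups of size 5

UCL = xbar_bar + A2 * r_bar                # roughly 3 standard errors above the CL
LCL = xbar_bar - A2 * r_bar
print(LCL, xbar_bar, UCL)
```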
47. Define and explain type I and type II errors in the context of control charts. Are they related? 
How does the choice of control limits influence these two errors?
Answer: A type I error occurs when we infer that a process is out of control when it is really in control. A type II error occurs when we infer that a process is in control when it is really out of control. The placement of the control limits influences these two errors. As the control limits are placed further out from the center line, the probability of a type I error decreases, but the probability of a type II error increases, when all other conditions remain the same, and vice versa. An increase in the sample size may lead to reducing both errors. 

48. What are warning limits, and what purpose do they serve?
Answer: Warning limits are limits placed at 2 standard deviations from the centerline. Using the properties of the normal distribution, the probability of an observation falling between the warning limit and the control limit on a given side is about 2.15% if the process is in control. These limits serve as an alert to the user that the process may be going out of control. In fact, one rule states that if 2 out of 3 successive sample statistics fall between the warning and control limits on a given side, the process may be out of control.
49. What is the utility of the operating characteristic curve? How can the discriminatory power of the curve be improved? 

Answer: The operating characteristic (OC) curve associated with a control chart indicates the ability of the control chart to detect changes in process parameters. It is a measure that indicates the goodness of the chart through its ability to detect changes in the process parameters when there are changes. A typical OC curve for a control chart for the mean will be a graph of the probability of non-detection on the vertical axis versus the process mean on the horizontal axis. As the process mean deviates more from the hypothesized (or current) value, the probability of non-detection should decrease. The discriminatory power of the OC curve may be improved by increasing the sample size.
50. Describe the role of the average run length (ARL) in the selection of control chart parameters. Explain how ARL influences sample size. 

Answer: The average run length (ARL) is a measure of goodness of the control chart and represents the number of samples, on average, required to detect an out-of-control signal. For a process in control, the ARL should be high, thus minimizing the number of false alarms. For a process out of control, the ARL should be small, indicating the sensitivity of the chart. As the degree of shift from the in-control process parameter value increases, the ARL should decrease. Desirable values of the ARL, for both in-control and out-of-control situations, may be used to determine the location of the control limits. Alternatively, from predetermined ARL graphs, the sample size necessary to achieve a desired ARL, for a certain degree of shift in the process parameter, may be determined.
51. Discuss the relationship between ARL and type I and II errors.

Answer: The ARL is linked to the probability that a given sample produces an out-of-control signal. If Pd represents this probability of detection, then ARL = 1/Pd. For an in-control process, Pd = alpha = P(type I error); so, for 3-sigma control limits, ARL = 1/0.0027, or about 370 samples between false alarms. For an out-of-control process, Pd = 1 - P(type II error) = 1 - beta; hence, ARL = 1/(1 - beta).
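A short numerical sketch of these relationships (Python; the subgroup size and shift used in the example are hypothetical):

```python
import numpy as np
from scipy.stats import norm

alpha = 2 * (1 - norm.cdf(3))        # false-alarm probability for 3-sigma limits
print(f"alpha = {alpha:.4f}, in-control ARL = {1 / alpha:.0f}")   # ~370

def arl_after_shift(delta, n, L=3.0):
    """ARL to detect a shift of `delta` process sigmas on an Xbar chart with
    L-sigma limits and subgroup size n."""
    beta = norm.cdf(L - delta * np.sqrt(n)) - norm.cdf(-L - delta * np.sqrt(n))
    return 1 / (1 - beta)

print(f"ARL to detect a 1-sigma shift with n = 4: {arl_after_shift(1.0, 4):.1f}")
```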

52. How are rational samples selected? Explain the importance of this in the total quality systems approach.

Answer: The selection of rational samples or subgroups hinges on the concept that samples should be chosen such that the variation within a sample is due only to common causes, representing the inherent variation in the process. Further, samples should be chosen such that the variation between samples is able to capture any special causes that prevail. This concept of rational samples is important in the total quality systems approach because the control limits are set on the basis of the inherent variation that exists in the process: the variability within samples is used to estimate that inherent variation, which in turn determines the control limits.

53. State and explain each rule for determining out-of-control points. 

Answer: Rule 1 - A single point plots outside the control limits. Rule 2 - Two out of 3 consecutive points plot beyond the two-sigma limits on the same side of the centerline. Rule 3 - Four out of 5 consecutive points fall beyond the one-sigma limit on the same side of the centerline. Rule 4 - Nine or more consecutive points fall on one side of the centerline. Rule 5 - A run of 6 or more consecutive points is steadily increasing or decreasing. All of these rules are formulated on the concept that, if the process is in control, the chance of the particular event happening is quite small; this provides protection against false alarms.
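These rules lend themselves to a simple programmatic check. The sketch below (Python/NumPy) applies the five rules as stated above to a sequence of plotted statistics; the function name and the synthetic data are assumptions for illustration, and details differ across published rule sets (e.g., the Western Electric rules):

```python
import numpy as np

def out_of_control_signals(x, mean, sigma):
    """Flag indices that violate the five rules listed in the answer above.
    `x` is the sequence of plotted statistics; `mean` and `sigma` are the
    centerline and standard deviation of the plotted statistic."""
    x = np.asarray(x, dtype=float)
    z = (x - mean) / sigma
    signals = []
    for i in range(len(z)):
        if abs(z[i]) > 3:                                                 # Rule 1
            signals.append((i, "rule 1: point beyond 3-sigma limits"))
        if i >= 2 and (np.sum(z[i-2:i+1] > 2) >= 2 or np.sum(z[i-2:i+1] < -2) >= 2):
            signals.append((i, "rule 2: 2 of 3 beyond 2-sigma, same side"))  # Rule 2
        if i >= 4 and (np.sum(z[i-4:i+1] > 1) >= 4 or np.sum(z[i-4:i+1] < -1) >= 4):
            signals.append((i, "rule 3: 4 of 5 beyond 1-sigma, same side"))  # Rule 3
        if i >= 8 and (np.all(z[i-8:i+1] > 0) or np.all(z[i-8:i+1] < 0)):
            signals.append((i, "rule 4: 9 in a row on one side"))            # Rule 4
        if i >= 5:
            d = np.diff(x[i-5:i+1])
            if np.all(d > 0) or np.all(d < 0):
                signals.append((i, "rule 5: 6 in a row trending"))           # Rule 5
    return signals

# Example: in-control data with an injected upward shift near the end
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(10, 1, 20), rng.normal(12.5, 1, 5)])
for idx, msg in out_of_control_signals(data, mean=10, sigma=1):
    print(idx, msg)
```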
54. What are some reasons for a process to go out of control due to a sudden shift in the level? 

Answer: Some reasons could be the addition of a new machine or a new operator, or a change to a different vendor supplying the raw material.

55. Explain some causes that would make the control chart pattern follow a gradually increasing trend. 

Answer: Typical causes could be tool wear in a machining operation, or learning on the job, where performance gradually changes as more time is spent on the task.
56. What are the advantages and disadvantages of using variables rather than attributes in control charts?

Answer: Variables provide more information than attributes, since attributes do not show the degree of conformance. Variables charts are usually applied at the lowest level (for example, at the operator or machine level). Sample sizes are typically smaller for variables charts, and the pattern of the plot may suggest the type of remedial action to take, if any is necessary. On the other hand, the cost of obtaining variables data is usually higher than that for attributes.
57. Describe the use of the Pareto concept in the selection of characteristics for control charts
Answer: The Pareto concept is used to select the "vital few" from the "trivial many" characteristics that are candidates for monitoring through control charts. The Pareto analysis could be based, for example, on the impact on company revenue; those characteristics with a high impact on revenue would be selected.
58. Discuss the preliminary decisions that must be made before you construct a control chart. What concepts should be followed when selecting rational samples?
Answer: A variety of preliminary decisions are necessary. These involve the selection of rational samples, the sample size, the frequency of sampling, the choice of measuring instruments, and the design of data-recording forms, as well as the type of computer software to use. In selecting rational samples, effort must be made to minimize variation within samples so that it represents the inherent variation due to common causes that exists in the system. Conversely, samples must be chosen so as to maximize the chance of detecting differences between samples, which are likely due to special causes.
59. What are some considerations in the interpretation of control charts based on standard values? Is it possible for a process to be in control when its control chart is based on observations from the process but to be out of control when the control chart is based on a specified standard? Explain.
Answer: One has to be careful in drawing conclusions from a control chart based on standard values. The process could show out-of-control signals, for example through observations plotting outside the control limits, even when no special causes are present. It could be that the process is in control but not capable of meeting the imposed standard. In this situation, management will need to address the common causes and identify means of process improvement.
60. Explain the difference in interpretation between an observation falling below the lower control limit on an X-chart and one falling below the lower control limit on an R-chart. Discuss the impact of each on the revision of control charts in the context of response time to fire alarms.
Answer: On an Xbar chart, an observation falling below the LCL implies an unusually fast response to a fire alarm. On an R-chart, an observation plotting below the LCL implies that the spread in the response times is small. For the Xbar chart, a point plotting below the LCL is desirable; if we can identify the special conditions that facilitated its occurrence, we should attempt to adopt them, and if that is feasible we may not delete the observation during the revision process. For the observation below the LCL on the R-chart, the process variability is small for that situation. Since reducing variation is a goal for everyone, it is worthwhile to look into the conditions that led to its occurrence and emulate them in the future; if this is feasible, we may not delete the observation during the revision process.
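For reference, the Xbar and R chart limits themselves are computed from the subgroup averages and ranges using tabulated factors. A minimal sketch, assuming subgroups of size 5 and hypothetical response-time data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fire-alarm response times (minutes), 20 subgroups of 5 observations
subgroups = rng.normal(loc=8.0, scale=1.0, size=(20, 5))

A2, D3, D4 = 0.577, 0.0, 2.114     # commonly tabulated factors for subgroup size n = 5

xbar = subgroups.mean(axis=1)
r = subgroups.max(axis=1) - subgroups.min(axis=1)
xbarbar, rbar = xbar.mean(), r.mean()

print("Xbar chart:", round(xbarbar - A2 * rbar, 3), round(xbarbar, 3), round(xbarbar + A2 * rbar, 3))
print("R chart:   ", round(D3 * rbar, 3), round(rbar, 3), round(D4 * rbar, 3))
```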
61. Selection of Rational Samples for Control charts: The manner in which we sample the process deserves our careful attention. The sampling method should maximize differences between samples and minimize differences within samples. This means that separate control charts may have to be kept for different operators, machines, or vendors.
Lots from which samples are chosen should be homogeneous. If our objective is to determine shifts in process parameters, samples should be made up of items produced at nearly the same time. This gives us a time reference and will be helpful if we need to determine special causes. Alternatively, if we are interested in the nonconformance of items produced since the previous sample was selected, samples should be chosen from items produced since that time.
62. Sample Size decision for control charts: Sample sizes are normally between 4 and 10, and it is quite common in industry to have sample sizes of 4 or 5. The larger the sample size, the better the chance of detecting small shifts. Other factors, such as cost of inspection or cost of shipping a nonconforming item to the customer, also influence the choice of sample size.
63. Frequency of Sampling for control charts: The sampling frequency depends on the cost of obtaining information compared to the cost of not detecting a nonconforming item. As processes are brought into control, the frequency of sampling is likely to diminish.
64. Distinguish between a nonconformity and a nonconforming item. Give examples of each in the following contexts:
a. Financial institution
b. Hospital
c. Microelectronics manufacturing
d. Law firm
e. Nonprofit organization
Answer:
a. Examples of nonconformities are errors in customer monthly statements or errors in a loan processing application. On the other hand, rather than count errors, if we define a customer statement as either error-free or not, or a loan processing application as either error-free or not, it would be an example of a nonconforming item.
b. Examples of nonconformities include number of medication errors or errors in laboratory analysis. Nonconforming items include whether a patient is not satisfied or whether a hospital bed is not available.
c. Examples of nonconformities include the number of defective solder joints on a circuit board, while a nonconforming item could be a circuit board that is not defect-free.
d. An example of nonconformity is the number of unsubstantiated references in a legal document, while a nonconforming item could be a case that is lost in court.
e. An example of a nonconformity is the number of errors in allocating funds, while a nonconforming item could be the improper distribution of a certain donor's gift.
65. What are the advantages and disadvantages of control charts for attributes over those for variables?
Answer: Certain characteristics are measured as attributes, for example, the performance of a staff member. The number of control charts required could be smaller when using an attribute chart; for example, several characteristics could be lumped together such that the item is classified as acceptable only when all criteria are satisfied. Further, attribute charts can be used at various levels in the organization, whereas variables charts are used at the lowest levels (individual person or operator). One disadvantage is that attribute charts do not provide as much information as variables charts. Also, the response time to detect a shift in the process is usually slower, and the sample sizes required for similar levels of protection are larger than those for variables charts.
66. Discuss the significance of an appropriate sample size for a proportion-nonconforming chart.
Answer: For a p-chart, the choice of an appropriate sample size is critical. The sample size must be large enough that nonconforming items have a reasonable chance of appearing in each sample. For example, if a process has a nonconformance rate of 1%, a sample size of around 400 or 500 is necessary.
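Two common rules of thumb for sizing the sample are sketched below in Python; the 1% nonconformance rate is the value assumed in the answer above:

```python
import math

p = 0.01   # assumed process nonconformance rate

# Rule of thumb 1: make the expected count of nonconforming items per sample
# at least about 4 or 5, i.e. n >= 5 / p.
print("n so that n*p >= 5:", math.ceil(5 / p))                   # 500

# Rule of thumb 2: make n large enough that the 3-sigma LCL is positive,
# which requires n > 9 * (1 - p) / p.
print("n so that LCL > 0: ", math.floor(9 * (1 - p) / p) + 1)    # 892
```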
67. The CEO of a company has been charged with reducing the proportion nonconforming of the product output. Discuss which control charts should be used and where they should be placed.
Answer: A p-chart for the proportion nonconforming should be used and should be at the overall organization level. Thus, if the CEO has responsibility for 5 plants, the p-chart should measure product output quality over these plants. Hopefully, through such monitoring, one could obtain an indication of a specific plant(s) which does not perform up to expectations.
68. How does changing the sample size affect the centerline and the control limits of a p-chart?
Answer: A change in the sample size does not affect the centerline on a p-chart. The control limits are drawn closer to the centerline with an increase in the sample size.
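This follows directly from the limit formula pbar ± 3·sqrt(pbar(1 − pbar)/n). A small sketch, with an assumed centerline of 0.05:

```python
import numpy as np

pbar = 0.05   # assumed average proportion nonconforming (centerline)

for n in (50, 100, 400):
    s = np.sqrt(pbar * (1 - pbar) / n)
    print(f"n={n:4d}: LCL={max(0.0, pbar - 3 * s):.4f}  CL={pbar}  UCL={pbar + 3 * s:.4f}")
# The centerline stays at pbar; the limits are drawn closer as n grows.
```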
69. What are the advantages and disadvantages of the standardized p-chart as compared to the regular proportion nonconforming chart?
Answer: Since the proportion-nonconforming values are standardized in a standardized p-chart, the control limits remain constant even though the subgroup size may vary; these limits are at ±3 (in standard deviation units). Also, the tests for detecting out-of-control patterns using runs are easier to apply than on a regular p-chart whose limits change with the subgroup size.
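A sketch of the standardization, using hypothetical subgroup sizes and nonconforming counts:

```python
import numpy as np

n_i = np.array([80, 120, 100, 150, 90])   # hypothetical subgroup sizes
d_i = np.array([ 4,   9,   5,  12,  3])   # hypothetical nonconforming counts

p_i = d_i / n_i
pbar = d_i.sum() / n_i.sum()              # weighted centerline

# Standardized values: plotted against constant limits at -3 and +3
z_i = (p_i - pbar) / np.sqrt(pbar * (1 - pbar) / n_i)
print(np.round(z_i, 2))
```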
70. Discuss the assumptions that must be satisfied to justify using a p-chart. How are they different from the assumptions required for a c-chart?
Answer: The assumptions for a p-chart are those associated with the binomial distribution. This implies that the probability of occurrence of a nonconforming item remains constant for each item, and the items are assumed to be independent of each other (in terms of being nonconforming). The assumptions for a c-chart, which deals with the number of nonconformities, are those associated with the Poisson distribution. The opportunity for occurrence of nonconformities could be large, but the average number of nonconformities per unit must be small. Also, the occurrences of nonconformities must be independent of each other, and the chance of occurrence of nonconformities should be the same from sample to sample.
71. Is it possible for a process to be in control and still not meet some desirable standards for the proportion nonconforming? How would one detect such a condition, and what remedial actions would one take?
Answer: When the p-chart is constructed from data collected on the process, it is quite possible for the process to be in control and still not meet desirable standards. For example, if the desired standard is very stringent, say a 0.001% nonconformance rate, the current process may not be able to meet it without major changes. Remedial actions would involve systemic changes that reduce the proportion nonconforming, for instance a change in equipment, training of personnel, or greater scrutiny in the selection of vendors. Such a condition could be detected by calculating the centerline and control limits from the desired standard and plotting data from the current process on that chart.
72. Discuss the role of the customer in influencing the proportion-nonconforming chart. How would the customer be integrated into a total quality systems approach?
Answer: Customer satisfaction or “acceptance” of the product or service may influence the p-chart. A survey of the customers may indicate their needs, based on which management of the organization could design the product/service appropriately. Customer feedback becomes crucial for determining the ultimate acceptance of the product/service. In a total quality system approach, the customer is part of the extended process. Determining customer expectations provides valuable information to product and process design.
73. Discuss the impact of the control limits on the average run length and the operating characteristic curve.
Answer: If the control limits are expanded further out from the centerline, the chance of a false alarm (type I error) will decrease, implying that the ARL, for an in-control process, will increase. The operating characteristic curve represents the probability of failing to detect a process change, when a process change has taken place, which is the probability of a type II error. So, in this situation with the expanded control limits, the ARL to detect a change, for an out-of-control process, will also increase.
74. Explain the conditions under which a u-chart would be used instead of a c-chart.
Answer: In monitoring the number of nonconformities, when the sample size (the number of inspection units per sample) changes from sample to sample, a u-chart is used to monitor the number of nonconformities per unit.
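A minimal u-chart sketch, with hypothetical inspection-unit counts and nonconformity totals; note that the limits vary with the sample size:

```python
import numpy as np

n_i = np.array([5, 8, 6, 10, 7])      # hypothetical number of inspection units per sample
c_i = np.array([12, 20, 9, 25, 15])   # hypothetical nonconformities found per sample

u_i = c_i / n_i                        # nonconformities per unit
ubar = c_i.sum() / n_i.sum()           # centerline

lcl = np.maximum(0.0, ubar - 3 * np.sqrt(ubar / n_i))
ucl = ubar + 3 * np.sqrt(ubar / n_i)
for lo, u, hi in zip(lcl, u_i, ucl):
    print(f"LCL={lo:.2f}  u={u:.2f}  UCL={hi:.2f}")
```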
75. Explain why a p- or c-chart is not appropriate for highly conforming processes.
Answer: For highly conforming processes, the occurrence of nonconformities or nonconforming items is very rare. Hence, extremely large sample sizes would be necessary to observe any at all in order to construct a p-chart or a c-chart, which is often not feasible. An alternative is to monitor the time, or the number of items inspected, until a nonconformity is observed. Further, when the proportion nonconforming is very small, the normal distribution is not a good approximation to the binomial distribution, so a p- or c-chart may show an increased false-alarm rate and may also fail to detect a process change when one takes place. In addition, when the proportion nonconforming is very small, the computed LCL is negative and is set to zero, so process improvement cannot be detected by points falling below the LCL.

76. Distinguish between 3σ limits and probability limits. When would you consider constructing probability limits?
Answer: Three-sigma limits are based on the assumption of normality of the distribution of the statistic being monitored. When the distribution of the statistic cannot be reasonably approximated by the normal distribution, probability limits, based on the actual distribution of the statistic, should be used. For example, the time to observe a defect could have an exponential distribution; in that case the exponential distribution should be used to find the lower and upper control limits (say, at the 0.13% and 99.87% points of the distribution). These limits need not be symmetric about the centerline.
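For the exponential example, the probability limits can be taken as the 0.13% and 99.87% points of the fitted distribution, mirroring the tail areas of 3-sigma limits. A sketch, assuming a hypothetical mean time of 40 between defects:

```python
from scipy.stats import expon

mean_time = 40.0   # assumed mean time between defects (hypothetical units)

lcl = expon.ppf(0.0013, scale=mean_time)
ucl = expon.ppf(0.9987, scale=mean_time)
print(f"LCL = {lcl:.2f}, centerline = {mean_time}, UCL = {ucl:.2f}")
# The limits are clearly not symmetric about the centerline.
```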
77. Explain the setting under which a U-chart would be used. How does the U-chart incorporate the user's perception of the relative degree of severity of the different categories of defects?
Answer: When defects or nonconformities have different degrees of severity, a U-chart representing demerits per unit is used. The degree of severity of a defect could be influenced by the corresponding user of the product/service; what one customer perceives as "poor" service in a restaurant could differ from what another perceives. Therefore, based on the context of use of the product/service, appropriate severity ratings should be established.
78. Explain the difference between specification limits and control limits. Is there a desired relationship between the two?
Answer: Specification limits are determined by the needs of the customer. These are bounds placed on product or service characteristics to ensure adequate functioning of the product or to meet the service expectations of the consumer. Control limits, on the other hand, represent the variation between samples or subgroups of the statistic being monitored (such as the sample average). There is no functional relationship between control limits and specification limits; however, it is desirable that the inherent spread of the process output, when the process is in control, fall well within the specification limits.
79. Explain the difference between natural tolerance limits and specification limits. How does a process capability index incorporate both of them? What assumptions are made in constructing the natural tolerance limits?
Answer: Natural tolerance limits represent the inherent variation, when special causes are removed, in the quality characteristic of individual product/service items. Natural tolerance limits are usually found based on the assumption of normality of distribution of the characteristic. Ideally, they should be found based on the actual distribution of the characteristic. A process capability index incorporates both the specification limits and the natural tolerance limits. It determines the ability of the process to meet the specification limits, thus indicating a measure of goodness of the process.
80. What are statistical tolerance limits? Explain how they differ from natural tolerance limits.
Answer: Statistical tolerance limits define the bounds of an interval that contains a specified proportion (1 - alpha) of the population with a given level of confidence (gamma). These bounds are found using sample statistics, for example, the sample mean and sample standard deviation. As the sample size becomes large, the statistical tolerance limits approach the values found using the population parameters (the population mean and population standard deviation, for example). Statistical tolerance limits are usually found based on a normal distribution or using nonparametric methods. Natural tolerance limits, in contrast, represent for an in-control process the coverage such that just about all, or 99.73% under the normality assumption, of the distribution of individual values is contained within the bounds.
81. Is it possible for a process to be in control and still produce nonconforming output? Explain. What are some corrective measures under these circumstances?
Answer: It is possible for a process to be in control, when only common causes prevail, and still produce nonconforming output that does not meet the specification limits. This implies that the inherent variation of the quality characteristic in the process, when it is in control, as determined by the spread between the natural tolerance limits, exceeds the spread between the specification limits. Some corrective measures could be to explore if the customer is willing to loosen the specification limits, reduce the process spread through better equipment, better raw material, or better personnel, or in the short run to shift the process average so as to reduce the total cost of nonconformance (which could be the cost of rework and scrap in a product situation).
82. What are the advantages of having a process spread that is less than the specification spread? What should the value of Cp be in this situation? Could Cpk be < 1 here?
Answer: When the process spread is less than the specification spread, one main advantage is that if the process mean does not change and is centered between the specification limits, just about all of the items will be acceptable to the customer (assuming normality of the distribution of the quality characteristic). Cp should be greater than 1. It is still possible for Cpk to be less than or equal to 1 if the process mean lies within 3σ of one of the specification limits.
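A numerical sketch of the two indices, using hypothetical specification limits and process parameters:

```python
lsl, usl = 74.0, 80.0     # hypothetical specification limits
mu, sigma = 76.0, 0.8     # hypothetical in-control process mean and standard deviation

cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# Cp > 1: the process spread fits within the specification spread.
# Cpk < 1 here because the mean lies within 3*sigma of the lower specification limit.
```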
83. Discuss the advantages and disadvantages of sampling.
Answer: Some advantages of sampling are the following: for destructive inspection, sampling is the only alternative; it reduces inspection error, since in high-quantity repetitive inspection, inspector fatigue can cause errors; and, if a decision to reject an entire batch is based on the results of sampling inspection, it provides a motivation to improve quality. Some disadvantages are: there is a risk of rejecting "good" lots (producer's risk) or of accepting "poor" lots (consumer's risk); the information content of sampling is less than that of 100% inspection; and time and effort are needed to plan and administer the sampling plan.
84. Distinguish between producer's risk and consumer's risk. In this context, explain the terms acceptable quality level and limiting quality level. Discuss instances for which one type of risk might be more important than the other.
Answer: Producer's risk refers to the chance of rejecting "good" lots. The acceptable quality level (AQL) is the quality level of "good" lots, associated with the producer's risk, that we prefer not to reject. Consumer's risk is the chance of accepting "poor" lots. The limiting quality level (LQL) is the quality level of "poor" lots, associated with the consumer's risk, that we prefer not to accept. The type of use of the product and the cost consequences associated with the risks influence the relative importance of each. For example, for a valve in the braking mechanism of an automobile, a critical component, the consumer's risk of accepting a poor lot is more important than the producer's risk of rejecting a good lot.
85. What is the importance of the OC curve in the selection of sampling plans? Describe the impact of the sample size and the acceptance number on the OC curve. What is the disadvantage of having an acceptance number of zero?
Answer: The OC curve shows the probability of acceptance of the lot as a function of the lot quality, and thus the discriminatory power of the sampling plan; that is, as lot quality worsens, how rapidly does the probability of lot acceptance diminish? For a given sample size, as the acceptance number decreases, the discriminatory power of the sampling plan increases. Likewise, for a given acceptance number, as the sample size increases, the discriminatory power of the sampling plan increases. For an acceptance number of zero, the OC curve has a convex shape in which the probability of lot acceptance begins to decrease rapidly even for good levels of lot quality, which means that the producer's risk will be high.
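The (type B) OC curve of a single sampling plan follows directly from the binomial distribution. A sketch, with hypothetical plan parameters:

```python
from scipy.stats import binom

def prob_accept(p, n, c):
    """Probability of accepting a lot of quality p under a single sampling plan
    (inspect n items, accept if the number nonconforming is <= c)."""
    return binom.cdf(c, n, p)

n = 50   # hypothetical sample size
for c in (0, 1, 2):
    print(f"c={c}:", [round(prob_accept(p, n, c), 3) for p in (0.005, 0.01, 0.02, 0.05)])
# The c = 0 plan drops off sharply even at good quality levels (high producer's risk).
```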
86. Discuss the relative advantages and disadvantages of single, double, and multiple sampling plans.
Answer: A single sampling plan is the simplest and has the lowest administrative cost, while the converse is true for multiple sampling plans. However, the average number of items inspected to reach a decision is largest for single sampling plans, implying that inspection costs will be highest for single sampling plans and lowest for multiple sampling plans. In terms of information content, which is a function of the sample size, single sampling plans provide the most information and multiple sampling plans the least.
87. Distinguish between average outgoing quality and acceptable quality level. Explain the meaning and importance of the average outgoing quality limit.
Answer: The average outgoing quality (AOQ) is the average quality level of a series of batches that leave the inspection station, assuming rectifying inspection, when they enter the inspection station at some quality level (say, p). As a function of the incoming quality level p, the AOQ may initially increase, reach a peak, and then decrease as p increases. This is because, for very good batches, lots are accepted on the basis of the sampling results, so the outgoing quality level is very similar to the incoming quality level. For very poor batches, however, the sampling plan will usually reject the lot, which then goes through screening (100% inspection); it is assumed that in 100% inspection all nonconforming items are detected and replaced with conforming items, so the average outgoing quality improves as lot quality worsens. The average outgoing quality limit (AOQL) is the peak of the AOQ curve. It tells us the worst average quality that will leave the inspection station, assuming rectification, regardless of the incoming lot quality; hence, the AOQL is used as a performance measure of sampling plans.
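Under rectifying inspection, the AOQ for a single sampling plan is commonly approximated by AOQ = Pa · p · (N - n)/N, and the AOQL is its peak. A sketch with hypothetical plan parameters:

```python
import numpy as np
from scipy.stats import binom

N, n, c = 2000, 50, 1              # hypothetical lot size and single sampling plan

p = np.linspace(0.001, 0.15, 300)  # incoming lot quality levels
pa = binom.cdf(c, n, p)            # probability of acceptance
aoq = pa * p * (N - n) / N         # rejected lots are screened and rectified

print(f"AOQL ~ {aoq.max():.4f} at incoming p ~ {p[aoq.argmax()]:.4f}")
```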
88. If you were interested in protection for acceptance of a single lot from a vendor with whom you do not expect to conduct much business, what criteria would you select, and why?
Answer: To obtain protection in accepting a single lot from a vendor, we would consider a lot-by-lot attribute sampling plan indexed by the associated producer's risk, consumer's risk, or both. For a defined good quality level associated with the producer's risk, we would want to accept such batches with high probability; similarly, for a defined poor quality level associated with the consumer's risk, we would want only a small probability of accepting such batches. Since little repeat business is expected, protection against accepting a single poor lot (the consumer's risk at the limiting quality level) would usually be the primary criterion.
89. Explain the difference between average sample number and average total inspection. State any assumptions made.
Answer: The average sample number (ASN) represents the average number of items inspected per lot, for a series of incoming lots of a specified quality, in order to reach a decision on lot disposition; it reflects the sampling plan alone, with no screening assumed. The average total inspection (ATI) represents the average number of items inspected per lot when rectifying inspection is used; it is assumed that lots rejected by the sampling plan go through screening (100% inspection).
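For a single sampling plan with rectification, ATI = n·Pa + N·(1 - Pa), while the ASN is simply n. A sketch with hypothetical parameters:

```python
from scipy.stats import binom

N, n, c = 2000, 50, 1              # hypothetical lot size and single sampling plan

def ati(p):
    """Average total inspection per lot: accepted lots incur only the sample;
    rejected lots are screened in full."""
    pa = binom.cdf(c, n, p)
    return n * pa + N * (1 - pa)

for p in (0.005, 0.02, 0.05):
    print(f"p={p}: ATI = {ati(p):.0f} items (ASN = n = {n})")
```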
90. If rectification inspection is used, discuss possible criteria to use in choosing sampling plans.
Answer: For lots that are rejected, if rectifying inspection such as screening is used, some criteria to choose sampling plans could be the average outgoing quality limit (AOQL) or average total inspection (ATI).
91. Discuss the context in which minimizing the average sample number would be a feasible criterion. Which type of sampling plan (single, double, or multiple) would be preferable, and what factors would influence your choice?
Answer: When inspection costs are high, the average number of items inspected to reach a decision (ASN) could be the chosen criterion, and we would prefer to minimize it. For a single sampling plan, the ASN is constant as a function of lot quality. For a double sampling plan, the ASN may initially increase, remain approximately constant, and then decrease as lot quality worsens; a multiple sampling plan may follow a similar shape. Thus, the incoming quality of lots may be a deciding factor in the selection among single, double, and multiple sampling plans. Usually, for very good lots and very poor lots, multiple sampling plans will have a smaller ASN than the other two; for lots of intermediate quality, however, it is possible for single sampling plans to have a smaller ASN.

 
