six sigma full report
#1



SIX SIGMA -Quality Improvement Program

ABSTRACT:
Six Sigma is a smarter way to manage a business or a department. It is a vision of quality that equates to only 3.4 defects per million opportunities for each product or service transaction, and it strives for perfection.
We believe that defect-free products can be achieved in any organization implementing Six Sigma. In this paper, we present an overview of the process, which explains how Six Sigma breaks the overall quality improvement task into a series of project management stages: Define, Measure, Analyse, Innovate, Improve and Control. We describe the dependence of Six Sigma on normal distribution theory and on process capability, give a short note on the assumptions made in the Six Sigma problem-solving methodology and the key elements involved, and provide a brief view of Defects Per Million Opportunities (DPMO) analysis.
The ultimate objectives of the methodology are to solve problems and to improve quality, profitability and customer satisfaction.
INTRODUCTION:
The main objective of any business is to make a profit. To increase profit, the selling price must rise and/or the manufacturing cost must come down. Since the price is decided by competition in the market, the only way to increase profit is to cut the manufacturing cost, which can be achieved only through continuous improvement in the company's operations. Six Sigma quality programs provide an overall framework for continuous improvement in the processes of an organization. Six Sigma uses facts, data and root-cause analysis to solve problems.
EVOLUTION OF SIX SIGMA:
Six Sigma's background stretches back eighty-plus years, from management science concepts developed in the United States, to Japanese management breakthroughs, to the total quality efforts of the 1970s and 1980s. But the real impact can be seen in the waves of change and positive results sweeping such companies as GE, MOTOROLA, JOHNSON & JOHNSON and AMERICAN EXPRESS.
CONCEPTS:
Six Sigma is defined as a customer-oriented, structured, systematic, proactive and quantitative company-wide approach for continuous improvement of manufacturing, services, engineering, suppliers and other business processes. It is a statistical measure of the performance of a process or a product. It measures the degree to which the process deviates from its goals and then drives efforts to improve the process to achieve total customer satisfaction.
Six sigma efforts target three main areas:
• Improving customer satisfaction.
• Reducing cycle time.
• Reducing defects.
Three key characteristics separate Six Sigma from quality programs of the past:
1. Six Sigma is customer focused.
2. Six sigma projects produce major returns on investments.
3. Six sigma changes how management operates.
6 SIGMA = 3.4 defects per million
Six Sigma equates to 3.4 defects for every million parts made or process transactions carried out. This quality level equates to 99.99966% defect-free products or transactions. High quality standards make sense, but the cost required to pursue such high standards has to be balanced against the benefits gained. The Six Sigma process exposes the root causes and then focuses on the improvements needed to achieve the highest level of quality at an acceptable cost. This is essential for achieving and maintaining a competitive advantage and high levels of customer satisfaction and loyalty.
When we say that a process is at the six sigma level, such a process will normally yield two instances of non-conformance out of every billion opportunities for non-conformance, provided there is no shift in the process average. The same process will yield 3.4 instances of non-conformance out of every million opportunities with an expected shift of 1.5 sigma in the process average. This is considered best-in-class quality.
THEORY:
Six Sigma relies on normal distribution theory to predict defect rates. As we all know, variation is inevitable in any process. The variation can be due to chance causes that are inherent in the process (chance variation) or due to assignable causes that are external to the process (assignable variation). If we detect and remove all the assignable causes and bring the process under the influence of chance causes alone, the process is said to be under statistical control. The process capability (PC) is defined as six times the standard deviation (σ); PC represents the measured inherent reproducibility of the product turned out by the process.

For a six sigma process, the upper specification limit (USL) and lower specification limit (LSL) lie at ±6σ from the mean, which corresponds to a defect rate of 0.002 ppm for a centred process (refer to fig. 1).
The process capability index Cp is defined as the ratio of the specification width to the PC:
Cp = (USL - LSL)/(6σ)
Cp is 2 for a six sigma process, which means that the inherent process variation is half of the specification width.
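As a quick illustration of these formulas (not part of the original report), the short Python sketch below computes Cp and the centred-process defect rate directly from the specification limits; the numbers are chosen so that the specification width is ±6σ around the mean.

import math

def cp(usl, lsl, sigma):
    # Process capability index: specification width divided by 6*sigma.
    return (usl - lsl) / (6.0 * sigma)

def centred_defect_ppm(usl, lsl, mean, sigma):
    # Two-sided normal tail area outside the spec limits, in parts per million,
    # for a process centred at 'mean' with no shift in the average.
    upper = 0.5 * math.erfc((usl - mean) / (sigma * math.sqrt(2)))
    lower = 0.5 * math.erfc((mean - lsl) / (sigma * math.sqrt(2)))
    return (upper + lower) * 1e6

sigma, mean = 1.0, 0.0
usl, lsl = mean + 6 * sigma, mean - 6 * sigma
print(cp(usl, lsl, sigma))                        # 2.0 for a six sigma process
print(centred_defect_ppm(usl, lsl, mean, sigma))  # about 0.002 ppm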

DEFECTS PER MILLION OPPORTUNITIES (DPMO) ANALYSIS:
In practice, most delivered products or services have multiple parts and/or process steps, each of which represents an opportunity for a nonconformity or defect. For example, a watch has numerous parts and assembly steps. In such cases it is important to ask questions such as: what is the distribution of defects; how many units can be expected to have zero defects, one defect, two defects, and so on for a given ppm level; and what defect rates and sigma levels for the individual parts and process steps contribute to a total unit with a given defect rate.
If the number of observed nonconformities is d out of a total of u units produced, then
Defects Per Unit (DPU) = d/u
If each unit manufactured has got m number of opportunities for nonconformance, we can compute the Defects Per Opportunity (DPO) as
Defects Per Opportunity (DPO) = DPU/m
In the calculation of DPO, we take into consideration only the active opportunities (those which are measured) and not the passive opportunities (those which are not measured) within each unit.
From this, the DPMO can be computed as
Defects Per Million Opportunities (DPMO) = DPO x 10^6
The sigma level can be found from the DPMO value using statistical tables. If the DPMO and the number of defect opportunities are known for each contributing step, the total DPMO for the completed unit can be computed as follows:
Expected defects (ppm for each step) = DPMO x Number of opportunities (for each step)
Expected defects (ppm for completed unit) = Sum of expected defects of individual steps
DPMO for completed unit = (Expected defects)/(Total number of opportunities)
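To make the DPU/DPO/DPMO arithmetic concrete, here is a minimal Python sketch; the defect counts, unit counts and opportunity counts are invented for illustration and do not come from the paper.

def dpu(defects, units):
    # Defects Per Unit.
    return defects / units

def dpmo(defects, units, opportunities_per_unit):
    dpo = dpu(defects, units) / opportunities_per_unit  # Defects Per Opportunity
    return dpo * 1e6                                    # Defects Per Million Opportunities

# Example: 30 defects observed in 1,000 units, each with 50 active opportunities.
print(dpmo(30, 1000, 50))  # 600.0 DPMO

# Rolling several process steps up into one figure for the completed unit.
steps = [(600, 50), (1200, 20), (300, 10)]  # (DPMO of step, opportunities in step)
expected_defects_ppm = sum(step_dpmo * n for step_dpmo, n in steps)  # per million units
total_opportunities = sum(n for _, n in steps)
print(expected_defects_ppm / total_opportunities)  # DPMO for the completed unit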
PROCESS YIELD:
The process yield represents the proportion of defect-free units before testing or repair. The Poisson distribution can be used to calculate the yield for a unit if the DPU value is known:
YIELD = e^(-DPU)
If the yield is known for each part or process step, the overall yield for the process (the ROLLED THROUGHPUT YIELD, YRT) can be computed as the product of the yields of the individual process steps. This value will be less than the smallest individual yield, since the yields are all fractions. This clearly shows that to improve the YRT, the individual yields must be improved. In other words, to minimize the overall defect rate, the individual defect rates of each part or process step must be minimized. Hence, only with six sigma parts and process steps will an organization experience a high YRT for complex products with numerous parts and process steps.
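A similar sketch for the yield relationships, again with invented DPU values for three hypothetical process steps:

import math

def step_yield(dpu):
    # Poisson estimate of the proportion of defect-free units: e^(-DPU).
    return math.exp(-dpu)

step_dpus = [0.01, 0.05, 0.002]  # invented DPU values for three process steps
yields = [step_yield(d) for d in step_dpus]
rolled_throughput_yield = math.prod(yields)  # product of the individual step yields

print(yields)                   # each individual yield is close to 1
print(rolled_throughput_yield)  # smaller than the smallest individual yield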

SIX SIGMA -PROBLEM SOLVING PROCESS:
The sigma of a process, which tells us how capable the process is, can be used to compare similar or dissimilar processes. Such comparison, known as benchmarking, reveals what we do well.
MAIC, DMAIC and DMAIIC are all acronyms used by different Six Sigma service providers to identify their methodologies. The DMAIIC acronym, the most hybridized form, used by Six Sigma Innovation, is described as follows:
DEFINE the problem and the scope of the Six Sigma project in detail.
MEASURE and collect data on the problem and its potential root causes.
ANALYSE the data collected to determine the real root cause(s).
INNOVATE to identify the best solutions to the problem.
IMPROVE the process, and then pilot the proposed solution.
CONTROL the new process to ensure that the improvements are sustained.
KEY ELEMENTS:
1. Management Initiatives
• Customer focus
• Participative management
• Benchmarking
• Design for manufacture
• Statistical process control
• Supplier qualification
2. Improvement Process
• Define your product or service.
• Identify your customers (both internal and external) and their needs.
• Identify your suppliers and what you need from them to satisfy your customers.
• Define your process.
• Error-proof the process to avoid operator-controllable errors.
• Ensure continuous improvement through measurement, analysis and control.
3. Improvement Tools
• Histogram
• Process mapping
• Quality function deployment
• Design of experiments
ASSUMPTIONS:
1. The most significant assumption is that each process parameter is characterised by a normal distribution, but in the real world there can be many situations where non-normal distributions are present. In such cases, the actual defect rates might be significantly higher than the predicted defect rates. Therefore, a non-normal distribution is likely to lead to unexpectedly erroneous results.
2. The defects are assumed to be randomly distributed throughout the units, and parts and process steps are assumed to be independent of each other. This may not always be true, in which case the use of the Poisson distribution for computing defect rates and process yields might become invalid.
SIX SIGMA PRODUCES MAJOR RETURNS ON INVESTMENT:
For example:
At GENERAL ELECTRIC (GE), the Six Sigma program resulted in the following:
• In 1996, costs of $200 million and returns of $150 million
• In 1997, costs of $400 million and returns of $600 million
• In 1998, costs of $400 million and returns of $1 billion
CONCLUSION:
The term sigma is used to designate the distribution or spread about the mean of any process. Sigma measures the capability of the process to perform defect-free work. A defect is anything that results in customer dissatisfaction. For a business process, the sigma value is a metric that indicates how well that process is performing. A higher sigma level indicates a lower likelihood of producing defects and hence better performance.
Six Sigma is a performance standard for achieving operational excellence. With Six Sigma, the common measurement index is defects per unit, where a unit can be virtually anything: a component, a piece of material, an administrative form, etc. Conceptually, Six Sigma is defined as achieving a defect level of 3.4 ppm or better. Operationally, Six Sigma is defined as staying within half the expected range around the target. The approach aims at continuous improvement in all the processes within the organisation. It works on the belief that quality is free: the more we work towards zero-defect production, the greater the return on investment. The advantages of the Six Sigma approach are reductions in defects/rejections, cycle time, work in progress, etc., and increases in product quality and reliability, customer satisfaction, productivity, etc., leading ultimately to excellent business results.






#2


Six Sigma


The precise definition of Six Sigma is not important; the content of the program is
A disciplined quantitative approach for improvement of defined metrics
Can be applied to all business processes, manufacturing, finance and services

Focus of Six Sigma
Accelerating fast breakthrough performance
Significant financial results in 4-8 months
Ensuring Six Sigma is an extension of the Corporate culture, not the program of the month
Results first, then culture change!

Six Sigma: Reasons for Success
The Success at Motorola, GE and AlliedSignal has been attributed to:
Strong leadership (Jack Welch, Larry Bossidy and Bob Galvin personally involved)
Initial focus on operations
Aggressive project selection (potential savings in cost of poor quality > $50,000/year)
Training the right people




#3
TOOLS FOR ANALYSIS

Six Sigma


Six Sigma is a quality management program to achieve "six sigma" levels of quality. It was pioneered by Motorola in the mid-1980s and has spread to many other manufacturing companies. It continues to spread to service companies as well. In 2000, Fort Wayne, Indiana became the first city to implement the program in a city government.
Six Sigma aims to have the total number of failures in quality, or customer satisfaction, occur beyond the sixth sigma of likelihood in a normal distribution of customers. Here sigma stands for a step of one standard deviation; designing processes with tolerances of at least six standard deviations will, on reasonable assumptions, yield fewer than 3.4 defects in one million. (See below for those assumptions.)
Achievement of six-sigma quality is defined by Motorola in terms of the number of Defects Per Million Opportunities (DPMO).
That is, fewer than four in one million customers will have a legitimate issue with the company's products and service.
Many people believed that six-sigma quality was impossible, and settled for three to four sigma. However, market leaders have measurably reached six sigma in numerous processes.


Why six?
Anyone looking at a table of probabilities for the normal (Gaussian) distribution will wonder what six sigma has to do with 3.4 defects per million items. Only about one billionth of the normal curve lies beyond six standard deviations, or two billionths if you count both too-high and too-low values. Conversely, a mere three sigma corresponds to about 2.7 problems in a thousand, which would seem a good result in many businesses.
The answer has to do with practical considerations for manufacturing processes. (The following discussion is based loosely on the treatment by Robert V. Binder in a discussion of whether six-sigma practices can apply to software.) Suppose that the tolerance for some manufacturing step (perhaps the placement of a hole into which a pin must fit) is 300 micrometres, and the standard deviation for the process of drilling the hole is 100 micrometres. Then only about 1 part in 400 will be out of spec. But in a manufacturing process, the average value of a measurement is likely to drift over time, and the drift can be 1.5 standard deviations in either direction. When the process has drifted by 150 micrometres, about 6.7% of the output will lie more than a further 150 micrometres beyond the drifted mean in the same direction, that is, more than 300 micrometres from the nominal value, and therefore out of spec. This is a high defect rate.
If you set the tolerance to six sigma, then a drift of 1.5 sigma in the manufacturing process will still produce a defect only for parts that are more than 4.5 sigma away from the average in the same direction. By the mathematics of the normal curve, this is 3.4 defects per million.
There is another reason for six sigma: a manufactured item probably has more than one part, and some of the parts will have to fit together, which means that the total error in two or more parts must be within tolerance. If each step is done to three-sigma precision, an item with 100 parts will hardly ever be defect-free. With six-sigma, even an object with 10,000 parts can be made defect-free 96% of the time.
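The figures quoted above can be checked from the normal tail probability; the following Python sketch (an illustration, not part of the original article) reproduces the 3.4-per-million value and the multi-part yields under the stated 1.5-sigma drift assumption.

import math

def upper_tail(z):
    # P(Z > z) for a standard normal variable.
    return 0.5 * math.erfc(z / math.sqrt(2))

def defect_rate(tolerance_sigmas, shift_sigmas=1.5):
    # Fraction out of spec when the process mean has drifted by shift_sigmas.
    return (upper_tail(tolerance_sigmas - shift_sigmas)
            + upper_tail(tolerance_sigmas + shift_sigmas))

print(defect_rate(6) * 1e6)  # about 3.4 defects per million
print(defect_rate(3))        # about 6.7% out of spec after a 1.5-sigma drift

# Probability that a multi-part item is defect-free:
print((1 - defect_rate(3)) ** 100)    # roughly 0.1% for 100 three-sigma parts
print((1 - defect_rate(6)) ** 10000)  # roughly 96-97% for 10,000 six-sigma parts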
Clearly, many things on which people rely (services, software products, etc.) are not manufactured by machine tools to particular measurements. In these cases, "six sigma" has nothing to do with statistical distributions, but refers to a goal of very few defects per million, by analogy to a manufacturing process. The usefulness of the analogy is controversial among those concerned with quality in non-manufacturing processes.

DMAIC
Basic methodology to improve existing processes. Define what is out of the tolerance range. Measure the key internal processes critical to quality. Analyze why defects occur. Improve the process to stay within tolerance. Control the process to stay within goals.

DMADV
Basic methodology for introducing new processes. Define the process and where it would fail to meet customer needs. Measure and determine whether the process meets customer needs. Analyze the options for meeting customer needs. Design changes into the process to meet customer needs. Verify that the changes have met customer needs.

**********************************************


Six Sigma/TQM-Which Is Better?
"The correct answer for you is probably what works best in your company. Most all of us are semi-purists based on our training and experience. The real fanatics among us can and will argue any pure position. Most others of us try to keep the best and discard the worst of all the 'greatest since sliced bread' programs. Early TQM philosophies asked for 5-15% improvement per year. Those programs did a lot of good but in many companies they ran out of steam for any number of reasons. Today's Six Sigma programs can ask for 50-100% improvement within 3-6 months which looks great. Six Sigma became especially attractive to the 'Fad Followers' because of the $$ in savings connected to it, but Six Sigma is really not about $$. It is about a system and set of tools which has an organization and discipline to make significant improvements. I like Six Sigma and hope that we can keep it fresh for a while. I think that it can make its most important contribution in the design area by preventing problems for customers."

*************************************
PEST market analysis tool

The PEST analysis is a useful tool for understanding market growth or decline, and as such the position, potential and direction for a business. A PEST analysis is a business measurement tool. PEST is an acronym for Political, Economic, Social and Technological factors, which are used to assess the market for a business or organizational unit. The PEST analysis headings are a framework for reviewing a situation, and can also, like SWOT analysis and Porter's Five Forces model, be used to review a strategy or position, direction of a company, a marketing proposition, or idea.

Completing a PEST analysis is very simple, and is a good subject for workshop sessions. PEST analysis also works well in brainstorming meetings. Use PEST analysis for business and strategic planning, marketing planning, business and product development and research reports. You can also use PEST analysis exercises for team building games. PEST analysis is similar to SWOT analysis - it's simple, quick, and uses four key perspectives. As PEST factors are essentially external, completing a PEST analysis is helpful prior to completing a SWOT analysis (a SWOT analysis - Strengths, Weaknesses, Opportunities, Threats - is based broadly on half internal and half external factors).
A PEST analysis measures a market; a SWOT analysis measures a business unit, a proposition or idea.
N.B. The PEST model is sometimes extended (some would say unnecessarily) to seven factors, by adding Ecological (or Environmental), Legislative (or Legal), and Industry Analysis (the model is then known as PESTELI). Arguably if completed properly, the basic PEST analysis should naturally cover these 'additional' factors: Ecological factors are found under the four main PEST headings; Legislative factors would normally be covered under the Political heading; Industry Analysis is effectively covered under the Economic heading. If you prefer to keep things simple, perhaps use PESTELI only if you are worried about missing something within the three extra headings.
A SWOT analysis measures a business unit or proposition; a PEST analysis measures the market potential and situation, particularly indicating growth or decline, and thereby market attractiveness, business potential, and suitability of access - market potential and 'fit', in other words. PEST analysis uses four perspectives, which give a logical structure, in this case organized by the PEST format, that helps understanding, presentation, discussion and decision-making. The four dimensions are an extension of a basic two-heading list of pros and cons.
PEST analysis can be used for marketing and business development assessment and decision-making, and the PEST template encourages proactive thinking, rather than relying on habitual or instinctive reactions.
Here the PEST analysis template is presented as a grid, comprising four sections, one for each of the PEST headings: Political, Economic, Social and Technological. The PEST template below includes sample questions or prompts, whose answers can be inserted into the relevant section of the PEST grid. The questions are examples of discussion points, and obviously can be altered depending on the subject of the PEST analysis, and how you want to use it. Make up your own PEST questions and prompts to suit the issue being analysed and the situation (i.e., the people doing the work and the expectations of them). Like SWOT analysis, it is important to clearly identify the subject of a PEST analysis, because a PEST analysis is a four-way perspective in relation to a particular business unit or proposition - if you blur the focus you will produce a blurred picture - so be clear about the market that you use PEST to analyse.
A market is defined by what is addressing it, be it a product, company, brand, business unit, proposition, idea, etc, so be clear about how you define the market being analysed, particularly if you use PEST analysis in workshops, team exercises or as a delegated task. The PEST subject should be a clear definition of the market being addressed, which might be from any of the following standpoints:
• a company looking at its market
• a product looking at its market
• a brand in relation to its market
• a local business unit
• a strategic option, such as entering a new market or launching a new product
• a potential acquisition
• a potential partnership
• an investment opportunity
Be sure to describe the subject for the PEST analysis clearly so that people contributing to the analysis, and those seeing the finished PEST analysis, properly understand the purpose of the PEST assessment and implications.

PEST analysis template
Other than the four main headings, the questions and issues in the template below are examples and not exhaustive - add your own and amend these prompts to suit your situation, the experience and skill level of whoever is completing the analysis, and what you aim to produce from the analysis.
If Environmental is a more relevant heading than Economic, then substitute it. Ensure you consider the three additional 'PESTELI' headings: Ecological (or Environmental), Legislative (or Legal), and Industry Analysis.
The analysis can be converted into a more scientific measurement by scoring the items in each of the sections. There are no established good or bad reference points - these are for you to decide. Scoring is particularly beneficial if more than one market is being analysed, for the purpose of comparing which market or opportunity holds the most potential and/or obstacles. This is useful when considering business development and investment options, i.e., whether to develop market A or B; whether to concentrate on local distribution or export; whether to acquire company X or company Y, etc. If helpful when comparing more than one different market analysis, scoring can also be weighted according to the more or less significant factors.
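As a purely hypothetical illustration of this scoring idea (the weights, markets and scores below are invented, not part of the original guidance), a weighted comparison of two markets might be sketched in Python as:

weights = {"political": 1.0, "economic": 1.5, "social": 1.0, "technological": 1.2}

markets = {
    "Market A": {"political": 3, "economic": 4, "social": 2, "technological": 5},
    "Market B": {"political": 4, "economic": 2, "social": 4, "technological": 3},
}

for name, scores in markets.items():
    weighted_total = sum(weights[heading] * score for heading, score in scores.items())
    print(name, weighted_total)  # a higher total suggests a more attractive market under these weights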


Subject of PEST analysis: (define the standpoint and market here)

political
• ecological/environmental issues
• current legislation home market
• future legislation
• European/international legislation
• regulatory bodies and processes
• government policies
• government term and change
• trading policies
• funding, grants and initiatives
• home market lobbying/pressure groups
• international pressure groups

economic
• home economy situation
• home economy trends
• overseas economies and trends
• general taxation issues
• taxation specific to product/services
• seasonality/weather issues
• market and trade cycles
• specific industry factors
• market routes and distribution trends
• customer/end-user drivers
• interest and exchange rates

social
• lifestyle trends
• demographics
• consumer attitudes and opinions
• media views
• law changes affecting social factors
• brand, company, technology image
• consumer buying patterns
• fashion and role models
• major events and influences
• buying access and trends
• ethnic/religious factors
• advertising and publicity

technological
• competing technology development
• research funding
• associated/dependent technologies
• replacement technology/solutions
• maturity of technology
• manufacturing maturity and capacity
• information and communications
• consumer buying mechanisms/technology
• technology legislation
• innovation potential
• technology access, licencing, patents
• intellectual property issues


more on the difference and relationship between PEST and SWOT
PEST is useful before SWOT - not generally vice-versa - PEST definitely helps to identify SWOT factors. There is overlap between PEST and SWOT, in that similar factors would appear in each. That said, PEST and SWOT are certainly two different perspectives:
PEST assesses a market, including competitors, from the standpoint of a particular proposition or a business.
SWOT is an assessment of a business or a proposition, whether your own or a competitor's.
Strategic planning is not a precise science - no tool is mandatory - it's a matter of pragmatic choice as to what helps best to identify and explain the issues.
PEST becomes more useful and relevant the larger and more complex the business or proposition, but even for a very small local business a PEST analysis can still throw up one or two very significant issues that might otherwise be missed.
The four quadrants in PEST vary in significance depending on the type of business, e.g., social factors are more obviously relevant to consumer businesses or a B2B business close to the consumer end of the supply chain, whereas political factors are more obviously relevant to a global munitions supplier or aerosol propellant manufacturer.
All businesses benefit from a SWOT analysis, and all businesses benefit from completing a SWOT analysis of their main competitors, which interestingly can then provide some feedback into the economic aspects of the PEST analysis.
****************************************

Multidimensional scaling

General Purpose
Multidimensional scaling (MDS) can be considered to be an alternative to factor analysis (see Factor Analysis). In general, the goal of the analysis is to detect meaningful underlying dimensions that allow the researcher to explain observed similarities or dissimilarities (distances) between the investigated objects. In factor analysis, the similarities between objects (e.g., variables) are expressed in the correlation matrix. With MDS one may analyze any kind of similarity or dissimilarity matrix, in addition to correlation matrices.
Logic of MDS
The following simple example may demonstrate the logic of an MDS analysis. Suppose we take a matrix of distances between major US cities from a map. We then analyze this matrix, specifying that we want to reproduce the distances based on two dimensions. As a result of the MDS analysis, we would most likely obtain a two-dimensional representation of the locations of the cities, that is, we would basically obtain a two-dimensional map.
In general then, MDS attempts to arrange "objects" (major cities in this example) in a space with a particular number of dimensions (two-dimensional in this example) so as to reproduce the observed distances. As a result, we can "explain" the distances in terms of underlying dimensions; in our example, we could explain the distances in terms of the two geographical dimensions: north/south and east/west.
Orientation of axes. As in factor analysis, the actual orientation of axes in the final solution is arbitrary. To return to our example, we could rotate the map in any way we want, the distances between cities remain the same. Thus, the final orientation of axes in the plane or space is mostly the result of a subjective decision by the researcher, who will choose an orientation that can be most easily explained. To return to our example, we could have chosen an orientation of axes other than north/south and east/west; however, that orientation is most convenient because it "makes the most sense" (i.e., it is easily interpretable).


Computational Approach
MDS is not so much an exact procedure as rather a way to "rearrange" objects in an efficient manner, so as to arrive at a configuration that best approximates the observed distances. It actually moves objects around in the space defined by the requested number of dimensions, and checks how well the distances between objects can be reproduced by the new configuration. In more technical terms, it uses a function minimization algorithm that evaluates different configurations with the goal of maximizing the goodness-of-fit (or minimizing "lack of fit").
Measures of goodness-of-fit: Stress. The most common measure that is used to evaluate how well (or poorly) a particular configuration reproduces the observed distance matrix is the stress measure. The raw stress value Phi of a configuration is defined by:
Phi = Σ [d_ij - f(δ_ij)]^2

In this formula, d_ij stands for the reproduced distances, given the respective number of dimensions, and δ_ij (delta_ij) stands for the input data (i.e., observed distances). The expression f(δ_ij) indicates a nonmetric, monotone transformation of the observed input data (distances). Thus, MDS will attempt to reproduce the general rank-ordering of distances between the objects in the analysis.
There are several similar related measures that are commonly used; however, most of them amount to the computation of the sum of squared deviations of observed distances (or some monotone transformation of those distances) from the reproduced distances. Thus, the smaller the stress value, the better is the fit of the reproduced distance matrix to the observed distance matrix.
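For readers who want to experiment, here is a minimal sketch of this idea using scikit-learn's MDS implementation (assuming scikit-learn is installed; the distance matrix is invented, and the metric variant is used for simplicity). Only the relative positions it produces are meaningful, for the reasons given above.

import numpy as np
from sklearn.manifold import MDS

# Symmetric matrix of pairwise distances between four objects (invented values).
D = np.array([
    [0.0, 3.0, 4.0, 5.0],
    [3.0, 0.0, 5.0, 4.0],
    [4.0, 5.0, 0.0, 3.0],
    [5.0, 4.0, 3.0, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # one (x, y) point per object; orientation is arbitrary

print(coords)       # only the inter-point distances are meaningful
print(mds.stress_)  # residual lack of fit (0 means the distances are reproduced exactly)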

Shepard diagram. One can plot the reproduced distances for a particular number of dimensions against the observed input data (distances). This scatterplot is referred to as a Shepard diagram. This plot shows the reproduced distances plotted on the vertical (Y) axis versus the original similarities plotted on the horizontal (X) axis (hence, the generally negative slope). This plot also shows a step-function. This line represents the so-called D-hat values, that is, the result of the monotone transformation f(δ) of the input data. If all reproduced distances fall onto the step-line, then the rank-ordering of distances (or similarities) would be perfectly reproduced by the respective solution (dimensional model). Deviations from the step-line indicate lack of fit.


How Many Dimensions to Specify?
If you are familiar with factor analysis, you will be quite aware of this issue. If you are not familiar with factor analysis, you may want to read the Factor Analysis section in the manual; however, this is not necessary in order to understand the following discussion. In general, the more dimensions we use in order to reproduce the distance matrix, the better is the fit of the reproduced matrix to the observed matrix (i.e., the smaller is the stress). In fact, if we use as many dimensions as there are variables, then we can perfectly reproduce the observed distance matrix. Of course, our goal is to reduce the observed complexity of nature, that is, to explain the distance matrix in terms of fewer underlying dimensions. To return to the example of distances between cities, once we have a two-dimensional map it is much easier to visualize the location of and navigate between cities, as compared to relying on the distance matrix only.
Sources of misfit. Let us consider for a moment why fewer factors may produce a worse representation of a distance matrix than would more factors. Imagine the three cities A, B, and C, and the three cities D, E, and F; shown below are their distances from each other.
       A    B    C
A      0
B     90    0
C     90   90    0

       D    E    F
D      0
E     90    0
F    180   90    0
In the first matrix, all cities are exactly 90 miles apart from each other; in the second matrix, cities D and F are 180 miles apart. Now, can we arrange the three cities (objects) on one dimension (line)? Indeed, we can arrange cities D, E, and F on one dimension:
D---90 miles---E---90 miles---F
D is 90 miles away from E, and E is 90 miles away from F; thus, D is 90 + 90 = 180 miles away from F. If you try to do the same thing with cities A, B, and C you will see that there is no way to arrange the three cities on one line so that the distances can be reproduced. However, we can arrange those cities in two dimensions, in the shape of a triangle:

            A
   90 miles   90 miles
  B --- 90 miles --- C
Arranging the three cities in this manner, we can perfectly reproduce the distances between them. Without going into much detail, this small example illustrates how a particular distance matrix implies a particular number of dimensions. Of course, "real" data are never this "clean," and contain a lot of noise, that is, random variability that contributes to the differences between the reproduced and observed matrix.
Scree test. A common way to decide how many dimensions to use is to plot the stress value against different numbers of dimensions. This test was first proposed by Cattell (1966) in the context of the number-of-factors problem in factor analysis (see Factor Analysis); Kruskal and Wish (1978; pp. 53-60) discuss the application of this plot to MDS.
Cattell suggests finding the place where the smooth decrease of stress values (eigenvalues in factor analysis) appears to level off to the right of the plot. To the right of this point one finds, presumably, only "factorial scree" -- "scree" is the geological term referring to the debris which collects on the lower part of a rocky slope.
Interpretability of configuration. A second criterion for deciding how many dimensions to interpret is the clarity of the final configuration. Sometimes, as in our example of distances between cities, the resultant dimensions are easily interpreted. At other times, the points in the plot form a sort of "random cloud," and there is no straightforward and easy way to interpret the dimensions. In the latter case one should try to include more or fewer dimensions and examine the resultant final configurations. Often, more interpretable solutions emerge. However, if the data points in the plot do not follow any pattern, and if the stress plot does not show any clear "elbow," then the data are most likely random "noise."



Interpreting the Dimensions
The interpretation of dimensions usually represents the final step of the analysis. As mentioned earlier, the actual orientations of the axes from the MDS analysis are arbitrary, and can be rotated in any direction. A first step is to produce scatterplots of the objects in the different two-dimensional planes.

Three-dimensional solutions can also be illustrated graphically; however, their interpretation is somewhat more complex.

In addition to "meaningful dimensions," one should also look for clusters of points or particular patterns and configurations (such as circles, manifolds, etc.). For a detailed discussion of how to interpret final configurations, see Borg and Lingoes (1987), Borg and Shye (in press), or Guttman (1968).
Use of multiple regression techniques. An analytical way of interpreting dimensions (described in Kruskal & Wish, 1978) is to use multiple regression techniques to regress some meaningful variables on the coordinates for the different dimensions. Note that this can easily be done via Multiple Regression.


Applications
The "beauty" of MDS is that we can analyze any kind of distance or similarity matrix. These similarities can represent people's ratings of similarities between objects, the percent agreement between judges, the number of times a subjects fails to discriminate between stimuli, etc. For example, MDS methods used to be very popular in psychological research on person perception where similarities between trait descriptors were analyzed to uncover the underlying dimensionality of people's perceptions of traits (see, for example Rosenberg, 1977). They are also very popular in marketing research, in order to detect the number and nature of dimensions underlying the perceptions of different brands or products & Carmone, 1970).
In general, MDS methods allow the researcher to ask relatively unobtrusive questions ("how similar is brand A to brand B") and to derive from those questions underlying dimensions without the respondents ever knowing what is the researcher's real interest.


MDS and Factor Analysis
Even though there are similarities in the type of research questions to which these two procedures can be applied, MDS and factor analysis are fundamentally different methods. Factor analysis requires that the underlying data are distributed as multivariate normal, and that the relationships are linear. MDS imposes no such restrictions. As long as the rank-ordering of distances (or similarities) in the matrix is meaningful, MDS can be used. In terms of resultant differences, factor analysis tends to extract more factors (dimensions) than MDS; as a result, MDS often yields more readily interpretable solutions. Most importantly, however, MDS can be applied to any kind of distances or similarities, while factor analysis requires us to first compute a correlation matrix. MDS can be based on subjects' direct assessment of similarities between stimuli, while factor analysis requires subjects to rate those stimuli on some list of attributes (for which the factor analysis is performed).

******************************************

Chi square

Overview
Chi square is a non-parametric test of statistical significance for bivariate tabular analysis (also known as crossbreaks). Any appropriately performed test of statistical significance lets you know the degree of confidence you can have in accepting or rejecting an hypothesis. Typically, the hypothesis tested with chi square is whether or not two different samples (of people, texts, whatever) are different enough in some characteristic or aspect of their behavior that we can generalize from our samples that the populations from which our samples are drawn are also different in the behavior or characteristic.
A non-parametric test, like chi square, is a rough estimate of confidence; it accepts weaker, less accurate data as input than parametric tests (like t-tests and analysis of variance, for example) and therefore has less status in the pantheon of statistical tests. Nonetheless, its limitations are also its strengths; because chi square is more 'forgiving' in the data it will accept, it can be used in a wide variety of research contexts.
Chi square is used most frequently to test the statistical significance of results reported in bivariate tables, and interpreting bivariate tables is integral to interpreting the results of a chi square test, so we'll take a look at bivariate tabular (crossbreak) analysis.
Bivariate Tabular Analysis
Bivariate tabular (crossbreak) analysis is used when you are trying to summarize the intersections of independent and dependent variables and understand the relationship (if any) between those variables. For example, if we wanted to know if there is any relationship between the biological sex of American undergraduates at a particular university and their footwear preferences, we might select 50 males and 50 females as randomly as possible, and ask them, "On average, do you prefer to wear sandals, sneakers, leather shoes, boots, or something else?" In this example, our independent variable is biological sex. (In experimental research, the independent variable is actively manipulated by the researcher; for example, whether or not a rat gets a food pellet when it pulls on a striped bar. In most sociological research, the independent variable is not actively manipulated in this way, but controlled by sampling for, e.g., males vs. females.) Put another way, the independent variable is the quality or characteristic that you hypothesize helps to predict or explain some other quality or characteristic (the dependent variable). We control the independent variable (and as much else as possible and natural) and elicit and measure the dependent variable to test our hypothesis that there is some relationship between them. Bivariate tabular analysis is good for asking the following kinds of questions:
1. Is there a relationship between any two variables IN THE DATA?
2. How strong is the relationship IN THE DATA?
3. What is the direction and shape of the relationship IN THE DATA?
4. Is the relationship due to some intervening variable(s) IN THE DATA?
To see any patterns or systematic relationship between biological sex of undergraduates at University of X and reported footwear preferences, we could summarize our results in a table like this:
Table 1.a. Male and Female Undergraduate Footwear Preferences
         Sandals   Sneakers   Leather shoes   Boots   Other
Male
Female
Depending upon how our 50 male and 50 female subjects responded, we could make a definitive claim about the (reported) footwear preferences of those 100 people.
In constructing bivariate tables, typically values on the independent variable are arrayed on the vertical axis, while values on the dependent variable are arrayed on the horizontal axis. This allows us to read 'across' from hypothetically 'causal' values on the independent variable to their 'effects', or values on the dependent variable. How you arrange the values on each axis should be guided "iconically" by your research question/hypothesis. For example, if values on an independent variable were arranged from lowest to highest value on the variable and values on the dependent variable were arranged left to right from lowest to highest, a positive relationship would show up as a rising left to right line. (But remember, association does not equal causation; an observed relationship between two variables is not necessarily causal.)
Each intersection/cell--of a value on the independent variable and a value on the dependent variable--reports the result of how many times that combination of values was chosen/observed in the sample being analyzed. (So you can see that crosstabs are structurally most suitable for analyzing relationships between nominal and ordinal variables. Interval and ratio variables will have to first be grouped before they can "fit" into a bivariate table.) Each cell reports, essentially, how many subjects/observations produced that combination of independent and dependent variable values. So, for example, the top left cell of the table below answers the question: "How many male undergraduates at University of X prefer sandals?" (Answer: 6 out of the 50 sampled.)
Table 1.b. Male and Female Undergraduate Footwear Preferences
         Sandals   Sneakers   Leather shoes   Boots   Other
Male         6         17             13         9       5
Female      13          5              7        16       9
Reporting and interpreting crosstabs is most easily done by converting raw frequencies (in each cell) into percentages of each cell within the values/categories of the independent variable. For example, in the Footwear Preferences table above, total each row, then divide each cell by its row total, and multiply that fraction by 100.
Table 1.c. Male and Female Undergraduate Footwear Preferences (Percentages)
         Sandals   Sneakers   Leather shoes   Boots   Other     N
Male        12         34             26        18      10     50
Female      26         10             14        32      18     50
Percentages basically standardize cell frequencies as if there were 100 subjects/observations in each category of the independent variable. This is useful for comparing across values on the independent variable, but that usefulness comes at the price of a generalization--from the actual number of subjects/observations in that column in your data to a hypothetical 100 subjects/observations. If the raw row total was 93, then percentages do little violence to the raw scores; but if the raw total is 9, then the generalization (on no statistical basis, i.e., with no knowledge of sample-population representativeness) is drastic. So you should provide that total N at the end of each row/independent variable category (for replicability and to enable the reader to assess your interpretation of the table's meaning).
With this caveat in mind, you can compare the patterns of distribution of subjects/observations along the dependent variable between the values of the independent variable: e.g., compare male and female undergraduate footwear preference. (For some data, plotting the results on a line graph can also help you interpret the results: i.e., whether there is a positive (/), negative (\), or curvilinear (\/, /\) relationship between the variables.) Table 1.c shows that within our sample, roughly twice as many females preferred sandals and boots as males; and within our sample, about three times as many men preferred sneakers as women and twice as many men preferred leather shoes. We might also infer from the 'Other' category that female students within our sample had a broader range of footwear preferences than did male students.
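The conversion from Table 1.b to Table 1.c is easy to reproduce; the following sketch (plain Python, using the counts copied from the table above) performs the row-percentage calculation described earlier.

counts = {
    "Male":   [6, 17, 13, 9, 5],   # sandals, sneakers, leather shoes, boots, other
    "Female": [13, 5, 7, 16, 9],
}

for group, row in counts.items():
    n = sum(row)  # row total (the N column)
    percentages = [round(100 * cell / n) for cell in row]
    print(group, percentages, "N =", n)
# Male   [12, 34, 26, 18, 10] N = 50
# Female [26, 10, 14, 32, 18] N = 50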
Generalizing from Samples to Populations
Converting raw observed values or frequencies into percentages does allow us to see more easily patterns in the data, but that is all we can see: what is in the data. Knowing with great certainty the footwear preferences of a particular group of 100 undergraduates at University of X is of limited use to us; we usually want to measure a sample in order to know something about the larger populations from which our samples were drawn. On the basis of raw observed frequencies (or percentages) of a sample's behavior or characteristics, we can make claims about the sample itself, but we cannot generalize to make claims about the population from which we drew our sample, unless we submit our results to a test of statistical significance. A test of statistical significance tells us how confidently we can generalize to a larger (unmeasured) population from a (measured) sample of that population.
How does chi square do this? Basically, the chi square test of statistical significance is a series of mathematical formulas which compare the actual observed frequencies of some phenomenon (in our sample) with the frequencies we would expect if there were no relationship at all between the two variables in the larger (sampled) population. That is, chi square tests our actual results against the null hypothesis and assesses whether the actual results are different enough to overcome a certain probability that they are due to sampling error. In a sense, chi-square is a lot like percentages; it extrapolates a population characteristic (a parameter) from the sampling characteristic (a statistic) similarly to the way percentage standardizes a frequency to a total column N of 100. But chi-square works within the frequencies provided by the sample and does not inflate (or minimize) the column and row totals.
Chi Square Requirements
As mentioned before, chi square is a nonparametric test. It does not require the sample data to be more or less normally distributed (as parametric tests like t-tests do), although it relies on the assumption that the variable is normally distributed in the population from which the sample is drawn.
But chi square, while forgiving, does have some requirements:
1. The sample must be randomly drawn from the population.
2. Data must be reported in raw frequencies (not percentages).
3. Values/categories on independent and dependent variables must be mutually exclusive and exhaustive.
4. Measured variables must be independent.
5. Observed frequencies cannot be too small.
1) As with any test of statistical significance, your data must be from a random sample of the population to which you wish to generalize your claims.
2) You should only use chi square when your data are in the form of raw frequency counts of things in two or more mutually exclusive and exhaustive categories. As discussed above, converting raw frequencies into percentages standardizes cell frequencies as if there were 100 subjects/observations in each category of the independent variable for comparability. Part of the chi square mathematical procedure accomplishes this standardizing, so computing the chi square of percentages would amount to standardizing an already standardized measurement.
3) Any observation must fall into only one category or value on each variable. In our footwear example, our data are counts of male versus female undergraduates expressing a preference for five different categories of footwear. Each observation/subject is counted only once, as either male or female (an exhaustive typology of biological sex) and as preferring sandals, sneakers, leather shoes, boots, or other kinds of footwear. For some variables, no 'other' category may be needed, but often 'other' ensures that the variable has been exhaustively categorized. (For some kinds of analysis, you may need to include an "uncodable" category.) In any case, you must include the results for the whole sample.
4) Furthermore, you should use chi square only when observations are independent: i.e., no category or response is dependent upon or influenced by another. (In linguistics, often this rule is fudged a bit. For example, if we have one dependent variable/column for linguistic feature X and another column for number of words spoken or written (where the rows correspond to individual speakers/texts or groups of speakers/texts which are being compared), there is clearly some relation between the frequency of feature X in a text and the number of words in a text, but it is a distant, not immediate dependency.)
5) Chi-square is an approximate test of the probability of getting the frequencies you've actually observed if the null hypothesis were true. It's based on the expectation that within any category, sample frequencies are normally distributed about the expected population value. Since (logically) frequencies cannot be negative, the distribution cannot be normal when expected population values are close to zero--since the sample frequencies cannot be much below the expected frequency while they can be much above it (an asymmetric/non-normal distribution). So, when expected frequencies are large, there is no problem with the assumption of normal distribution, but the smaller the expected frequencies, the less valid are the results of the chi-square test. We'll discuss expected frequencies in greater detail later, but for now remember that expected frequencies are derived from observed frequencies. Therefore, if you have cells in your bivariate table which show very low raw observed frequencies (5 or below), your expected frequencies may also be too low for chi square to be appropriately used. In addition, because some of the mathematical formulas used in chi square use division, no cell in your table can have an observed raw frequency of 0.
The following minimum frequency thresholds should be obeyed:
• for a 1 X 2 or 2 X 2 table, expected frequencies in each cell should be at least 5;
• for a 2 X 3 table, expected frequencies should be at least 2;
• for a 2 X 4 or 3 X 3 or larger table, if all expected frequencies but one are at least 5 and if the one small cell is at least 1, chi-square is still a good approximation.
In general, the greater the degrees of freedom (i.e., the more values/categories on the independent and dependent variables), the more lenient the minimum expected frequencies threshold. (We'll discuss degrees of freedom in a moment.)
Collapsing Values
A brief word about collapsing values/categories on a variable is necessary. First, although categories on a variable--especially a dependent variable--may be collapsed, they cannot be excluded from a chi-square analysis. That is, you cannot arbitrarily exclude some subset of your data from your analysis. Second, a decision to collapse categories should be carefully motivated, with consideration for preserving the integrity of the data as it was originally collected. (For example, how could you collapse the footwear preference categories in our example and still preserve the integrity of the original question/data? You can't, since there's no way to know if combining, e.g., boots and leather shoes versus sandals and sneakers is true to your subjects' typology of footwear.) As a rule, you should perform a chi square on the data in its uncollapsed form; if the chi square value achieved is significant, then you may collapse categories to test subsequent refinements of your original hypothesis.
Computing Chi Square
Let's walk through the process by which a chi square value is computed, using Table 1.b. above (renamed 1.d., below).
The first step is to determine our threshold of tolerance for error. That is, what odds are we willing to accept that we are wrong in generalizing from the results in our sample to the population it represents? Are we willing to stake a claim on a 50 percent chance that we're wrong? A 10 percent chance? A five percent chance? 1 percent? The answer depends largely on our research question and the consequences of being wrong. If people's lives depend on our interpretation of our results, we might want to take only 1 chance in 100,000 (or 1,000,000) that we're wrong. But if the stakes are smaller, for example, whether or not two texts use the same frequencies of some linguistic feature (assuming this is not a forensic issue in a capital murder case!), we might accept a greater probability--1 in 100 or even 1 in 20--that our data do not represent the population we're generalizing about. The important thing is to explicitly motivate your threshold before you perform any test of statistical significance, to minimize any temptation for post hoc compromise of scientific standards. For our purposes, we'll set a probability of error threshold of 1 in 20, or p < .05, for our Footwear study.
The second step is to total all rows and columns:
Table 1.d. Male and Female Undergraduate Footwear Preferences: Observed Frequencies with Row and Column Totals
         Sandals   Sneakers   Leather shoes   Boots   Other   Total
Male        6         17            13          9       5       50
Female     13          5             7         16       9       50
Total      19         22            20         25      14      100
Remember that chi square operates by comparing the actual, or observed, frequencies in each cell in the table to the frequencies we would expect if there were no relationship at all between the two variables in the populations from which the sample is drawn. In other words, chi square compares what actually happened to what hypothetically would have happened if 'all other things were equal' (basically, the null hypothesis). If our actual results are sufficiently different from the predicted null hypothesis results, we can reject the null hypothesis and claim that a statistically significant relationship exists between our variables.
Chi square derives a representation of the null hypothesis--the 'all other things being equal' scenario--in the following way. The expected frequency in each cell is the product of that cell's row total multiplied by that cell's column total, divided by the sum total of all observations. So, to derive the expected frequency of the "Males who prefer Sandals" cell, we multiply the top row total (50) by the first column total (19) and divide that product by the sum total (100): ((50 X 19)/100) = 9.5. The logic of this is that we are deriving the expected frequency of each cell from the union of the total frequencies of the relevant values on each variable (in this case, Male and Sandals), as a proportion of all observed frequencies (across all values of each variable). This calculation is performed to derive the expected frequency of each cell, as shown in Table 1.e below (the computation for each cell is listed below Table 1.e.):
Table 1.e. Male and Female Undergraduate Footwear Preferences: Observed and Expected Frequencies
                  Sandals   Sneakers   Leather shoes   Boots   Other   Total
Male observed        6         17            13          9       5       50
Male expected       9.5        11            10         12.5     7
Female observed     13          5             7         16       9       50
Female expected     9.5        11            10         12.5     7
Total               19         22            20         25      14      100

Male/Sandals: ((19 X 50)/100) = 9.5
Male/Sneakers: ((22 X 50)/100) = 11
Male/Leather Shoes: ((20 X 50)/100) = 10
Male/Boots: ((25 X 50)/100) = 12.5
Male/Other: ((14 X 50)/100) = 7
Female/Sandals: ((19 X 50)/100) = 9.5
Female/Sneakers: ((22 X 50)/100) = 11
Female/Leather Shoes: ((20 X 50)/100) = 10
Female/Boots: ((25 X 50)/100) = 12.5
Female/Other: ((14 X 50)/100) = 7
(Notice that because we originally obtained a balanced male/female sample, our male and female expected scores are the same. This usually will not be the case.)
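The same derivation can be sketched in a few lines of Python (the observed frequencies are hard-coded from Table 1.d; this is simply an illustration of the row-total times column-total over grand-total rule):

# Expected frequency of each cell = (row total * column total) / grand total.
observed = [
    [6, 17, 13, 9, 5],    # Male:   Sandals, Sneakers, Leather shoes, Boots, Other
    [13, 5, 7, 16, 9],    # Female
]

row_totals = [sum(row) for row in observed]          # [50, 50]
col_totals = [sum(col) for col in zip(*observed)]    # [19, 22, 20, 25, 14]
grand_total = sum(row_totals)                        # 100

expected = [[r * c / grand_total for c in col_totals] for r in row_totals]
print(expected)
# [[9.5, 11.0, 10.0, 12.5, 7.0], [9.5, 11.0, 10.0, 12.5, 7.0]]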
We now have a comparison of the observed results versus the results we would expect if the null hypothesis were true. We can informally analyze this table, comparing observed and expected frequencies in each cell (Males prefer sandals less than expected), across values on the independent variable (Males prefer sneakers more than expected, Females less than expected), or across values on the dependent variable (Females prefer sandals and boots more than expected, but sneakers and shoes less than expected). But so far, the extra computation doesn't really add much more information than interpretation of the results in percentage form. We need some way to measure how different our observed results are from the null hypothesis. Or, to put it another way, we need some way to determine whether we can reject the null hypothesis, and if we can, with what degree of confidence that we're not making a mistake in generalizing from our sample results to the larger population.
Logically, we need to measure the size of the difference between the pair of observed and expected frequencies in each cell. More specifically, we calculate the difference between the observed and expected frequency in each cell, square that difference, and then divide that squared difference by the expected frequency. The formula for each cell can be expressed as:
(O - E)² / E
Squaring the difference ensures a positive number, so that we end up with an absolute measure of difference. If we didn't work with absolute values, the positive and negative differences across the entire table would always add up to 0. (You really understand the logic of chi square if you can figure out why this is true.) Dividing the squared difference by the expected frequency scales each difference to the size of the frequency expected in that cell, so that the resulting measures of observed/expected difference are comparable across all cells.
So, for example, the difference between observed and expected frequencies for the Male/Sandals preference is calculated as follows:
1. Observed (6) minus Expected (9.5) = Difference (-3.5)
2. Difference (-3.5) squared = 12.25
3. Difference squared (12.25) divided by Expected (9.5) = 1.289
The sum of the results of this calculation for all cells is the total chi square value for the table.
The computation of chi square for each cell is listed below Table 1.f.:
Table 1.f. Male and Female Undergraduate Footwear Preferences: Observed and Expected Frequencies Plus Chi Square
                  Sandals   Sneakers   Leather shoes   Boots   Other   Total
Male observed        6         17            13          9       5       50
Male expected       9.5        11            10         12.5     7
Female observed     13          5             7         16       9       50
Female expected     9.5        11            10         12.5     7
Total               19         22            20         25      14      100

Male/Sandals: (6 - 9.5)²/9.5 = 1.289
Male/Sneakers: (17 - 11)²/11 = 3.273
Male/Leather Shoes: (13 - 10)²/10 = 0.900
Male/Boots: (9 - 12.5)²/12.5 = 0.980
Male/Other: (5 - 7)²/7 = 0.571
Female/Sandals: (13 - 9.5)²/9.5 = 1.289
Female/Sneakers: (5 - 11)²/11 = 3.273
Female/Leather Shoes: (7 - 10)²/10 = 0.900
Female/Boots: (16 - 12.5)²/12.5 = 0.980
Female/Other: (9 - 7)²/7 = 0.571
(Again, because of our balanced male/female sample, our row totals were the same, so the male and female observed-expected frequency differences were identical. This is usually not the case.)
The total chi square value for Table 1 is 14.026.
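Continuing the sketch above (Python, reusing the observed and expected tables), the per-cell terms can be summed directly. Summing the unrounded terms gives 14.027; the 14.026 reported here reflects rounding each cell term to three decimals before adding. If SciPy happens to be available, scipy.stats.chi2_contingency returns the same statistic for this table (no continuity correction is applied to tables larger than 2 X 2):

# Chi square = sum over all cells of (observed - expected)^2 / expected.
chi_square = sum(
    (o - e) ** 2 / e
    for obs_row, exp_row in zip(observed, expected)
    for o, e in zip(obs_row, exp_row)
)
print(round(chi_square, 3))   # 14.027

# Optional cross-check (only if SciPy is installed):
# from scipy.stats import chi2_contingency
# stat, p, dof, exp = chi2_contingency(observed)
# stat ~ 14.03, dof == 4, p ~ 0.007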
Interpreting the Chi Square Value
We now need some criterion or yardstick against which to measure the table's chi square value, to tell us whether or not it is significant. What we need to know is the probability of getting a chi square value of a minimum given size even if our variables are not related at all in the larger population from which our sample was drawn. That is, we need to know how much larger than 0 (the absolute chi square value of the null hypothesis) our table's chi square value must be before we can confidently reject the null hypothesis. The probability we seek depends in part on the degrees of freedom of the table from which our chi square value is derived.
Degrees of freedom
Mechanically, a table's degrees of freedom (df) can be expressed by the following formula:
df = (r-1)(c-1)
That is, a table's degrees of freedom equals the number of rows in the table minus one multiplied by the number of columns in the table minus one. (For 1 X 2 tables: df = k - 1, where k = number of values/categories on the variable.)
Degrees of freedom is an issue because of the way in which expected values in each cell are computed from the row and column totals. All but one of the expected values in a given row or column are free to vary (within the total observed--and therefore expected--frequency of that row or column); once the free-to-vary expected cells are specified, the last one is fixed by virtue of the fact that the expected frequencies must add up to the observed row and column totals (from which they are derived).
Another way to conceive of a table's degrees of freedom is to think of one row and one column in the table as fixed, with the remaining cells free to vary. Consider the following visuals (where X = fixed):
X X
X
X
(r-1)(c-1) = (3-1)(2-1) = 2 X 1 = 2
X X X
X
X
X
X
(r-1)(c-1) = (5-1)(3-1) = 4 X 2 = 8
So, for our Table 1, df = (2-1)(5-1) = 4:
         Sandals   Sneakers   Leather shoes   Boots   Other
Male        X          X            X           X       X
Female      X
In a statistics book, the sampling distribution of chi square (also known as 'critical values of chi square') is typically listed in an appendix. You read down the column representing your previously chosen probability of error threshold (e.g., p < .05) and across the row representing the degrees of freedom in your table. If your chi square value is larger than the critical value in that cell, your data present a statistically significant relationship between the variables in your table.
Table 1's chi square value of 14.026, with 4 degrees of freedom, handily clears the related critical value of 9.49, so we can reject the null hypothesis and affirm the claim that male and female undergraduates at University of X differ in their (self-reported) footwear preferences.
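Where SciPy is available, the table lookup can also be done in code; this is merely a convenience sketch of the same decision (the critical value at p < .05 with 4 degrees of freedom, plus the exact probability of our chi square value under the null hypothesis):

from scipy.stats import chi2

df = (2 - 1) * (5 - 1)                 # degrees of freedom for a 2 X 5 table
critical = chi2.ppf(1 - 0.05, df)      # critical value of chi square at p < .05
p_value = chi2.sf(14.026, df)          # probability of a chi square this large if the null hypothesis were true

print(round(critical, 2), round(p_value, 4))   # 9.49 0.0072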
Statistical significance does not help you to interpret the nature or explanation of that relationship; that must be done by other means (including bivariate tabular analysis and qualitative analysis of the data). But a statistically significant chi square value does denote the degree of confidence you may hold that the relationship between variables described in your results is systematic in the larger population and not attributable to random error.
Statistical significance also does not ensure substantive significance. A large enough sample may demonstrate a statistically significant relationship between two variables, but that relationship may be a trivially weak one. Statistical significance means only that the pattern of distribution and relationship between variables which is found in the data from a sample can be confidently generalized to the larger population from which the sample was randomly drawn. By itself, it does not ensure that the relationship is theoretically or practically important or even very large.
Measures of Association
While the issue of theoretical or practical importance of a statistically significant result cannot be quantified, the relative magnitude of a statistically significant relationship can be measured. Chi square allows you to decide whether there is a relationship between two or more variables; if the null hypothesis is rejected, we conclude that there is a statistically significant relationship between the variables. But we frequently want a measure of the strength of that relationship--an index of degree of correlation, a measure of the degree of association between the variables represented in our table (and data). Luckily, several related measures of association can be derived from a table's chi square value.
For tables larger than 2 X 2 (like our Table 1), a measure called 'Cramer's phi' is derived by the following formula (where N = the total number of observations, and k = the smaller of the number of rows or columns):
Cramer's phi = the square root of (chi-square divided by (N times (k minus 1)))
So, for our Table 1 (a 2 X 5), we would compute Cramer's phi as follows:
1. N(k - 1) = 100 (2-1) = 100
2. chi square/100 = 14.026/100 = 0.14
3. square root of the result of step 2: sqrt(0.14) = 0.37
The result is interpreted as a Pearson r (that is, as a correlation coefficient).
(For 2 X 2 tables, a measure called 'phi' is derived by dividing the table's chi square value by N (the total number of observations) and then taking the square root of that quotient. Phi is also interpreted as a Pearson r.)
A complete account of how to interpret correlation coefficients is unnecessary for present purposes. It will suffice to say that r2 is a measure called shared variance. Shared variance is the portion of the total behavior (or distribution) of the variables measured in the sample data which is accounted for by the relationship we've already detected with our chi square. For Table 1, r2 = 0.137, so approximately 14% of the total footwear preference story is explained/predicted by biological sex.
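A minimal sketch of these two steps in Python (using the chi square value and table dimensions already established above):

import math

N = 100             # total observations
k = 2               # the smaller of (number of rows, number of columns) in the 2 X 5 table
chi_square = 14.026

cramers_phi = math.sqrt(chi_square / (N * (k - 1)))   # Cramer's phi for tables larger than 2 X 2
shared_variance = cramers_phi ** 2                    # r squared: portion of variation accounted for

print(round(cramers_phi, 2), round(shared_variance, 2))   # 0.37 0.14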
Computing a measure of association like phi or Cramer's phi is rarely done in quantitative linguistic analyses, but it is an important benchmark of just 'how much' of the phenomenon under investigation has been explained. For example, Table 1's Cramer's phi of 0.37 (r2 = 0.137) means that there are one or more variables still undetected which, cumulatively, account for and predict 86% of footwear preferences. This measure, of course, doesn't begin to address the nature of the relation(s) between these variables, which is a crucial part of any adequate explanation or theory.
*************************************

Total Quality Management


Total Quality Management (TQM) is a management strategy to embed awareness of quality in all organizational processes. Quality assurance through statistical methods is a key component. TQM aims to do things right the first time, rather than needing to fix problems after they emerge or fester. TQM may operate within quality circles, which bring together workers from different departments in order to improve production and reduce wastage.

In a manufacturing organization, TQM generally starts by sampling a random selection of the product. The sample is then tested for things that matter to the real customers. The causes of any failures are isolated, secondary measures of the production process are designed, and then the causes of the failure are corrected. The statistical distributions of important measurements are tracked. When parts' measures drift out of the error band, the process is fixed. The error band is usually tighter than the failure band. The production process is thereby fixed before failing parts can be produced.
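As a purely illustrative sketch (Python, with made-up limits and measurements, not figures from the text), checking sampled measurements against a tighter error band inside the failure band might look like this:

# Hypothetical shaft diameter: failure (spec) band 10.0 +/- 0.5 mm,
# tighter error (control) band 10.0 +/- 0.3 mm.
FAIL_LOW, FAIL_HIGH = 9.5, 10.5    # parts outside this band are defective
CTRL_LOW, CTRL_HIGH = 9.7, 10.3    # process is adjusted when measurements drift outside this band

measurements = [10.02, 10.11, 10.24, 10.31, 10.28]   # sampled diameters, in mm

for i, m in enumerate(measurements, start=1):
    if not FAIL_LOW <= m <= FAIL_HIGH:
        print(f"sample {i}: {m} mm -> defective part produced")
    elif not CTRL_LOW <= m <= CTRL_HIGH:
        print(f"sample {i}: {m} mm -> outside error band, fix the process before failures occur")
# sample 4: 10.31 mm -> outside error band, fix the process before failures occur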

It's important to record not just the measurement ranges, but which failures caused them to be chosen. In that way, cheaper fixes can be substituted later (say, when the product is redesigned) with no loss of quality. After TQM has been in use for a while, it's very common for parts to be redesigned so that critical measurements are either eliminated or their tolerances become much wider.

It took people a while to develop tests to find emergent problems. One popular test is a "life test" in which the sample product is operated until a part fails. Another popular test is called "shake and bake". The product is mounted on a vibrator in an environmental oven, and operated at progressively more extreme vibration and temperatures until something fails. The failure is then isolated and engineers design an improvement.

A commonly-discovered failure is for the product to come apart. If fasteners fail, the improvements might be to use measured-tension nutdrivers to ensure that screws don't come off, or improved adhesives to ensure that parts remain glued.

If a gearbox wears out first, a typical engineering design improvement might be to substitute a brushless stepper motor for a DC motor with a gearbox. The improvement is that a stepper motor has no brushes or gears to wear out, so it lasts ten or more times longer. The stepper motor is more expensive than a DC motor, but cheaper than a DC motor combined with a gearbox. The electronics are radically different, but roughly equally expensive. One disadvantage might be that a stepper motor can hum or whine, and usually needs noise-isolating mounts.

Often a product developed under TQM is cheaper to produce (because there's no need to repair dead-on-arrival units) and is an immensely more desirable product.

TQM can be applied to services (such as mortgage issue or insurance underwriting), or even normal business paperwork.

TQM is not a focused improvement approach: customer desires and product tests select what to fix, and theoretical constraints are not considered at all.

Reply
#4
[attachment=12986]
WHAT IS SIX SIGMA
Six Sigma is a disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance.
Six Sigma is a performance target focused on critical customer requirements.
Key concepts: (a) Critical to Quality; (b) Process Capability; (c) Stable Operations
FUNDAMENTALS OF SIX SIGMA
NEED FOR SIX SIGMA

Sigma (the lower-case Greek letter σ) is used to represent the standard deviation of a population.
A sigma quality level offers an indicator of how often defects are likely to occur: a higher sigma quality level indicates a process that is less likely to create defects.
Standard deviation measures how widely the outcomes of a process are spread around their mean, and therefore how often results fall outside the acceptable limits.
Six Sigma defines outcomes as close as possible to perfection: at a six sigma quality level, a process produces only 3.4 defects per million opportunities, i.e., it is 99.99966% defect free.
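For illustration (Python with SciPy; this sketch assumes the conventional 1.5 sigma long-term shift in the process mean and a one-sided specification limit, which is the convention behind the 3.4 figure), defect rates at different sigma levels can be derived from the normal distribution:

from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    # Defects per million opportunities, assuming the conventional 1.5-sigma shift.
    return norm.sf(sigma_level - shift) * 1_000_000

for level in (3, 4, 5, 6):
    print(level, round(dpmo(level), 1))
# 3 66807.2
# 4 6209.7
# 5 232.6
# 6 3.4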
DIFFERENT SIGMA LEVELS
OBJECTIVE OF SIX SIGMA
BENEFITS OF SIX SIGMA

Improved customer loyalty.
Reduced cycle time.
Less wastage.
Better time management.
Generates sustained gains, improvements and success.
Systematic problem solving.
Assures strategic planning.
Reductions of incidents and accidents.
Better safety performance.
Understanding of processes.
SIX SIGMA SUCCESS STORIES
MOTOROLA

Only two years after launching Six Sigma, Motorola was honored with the Malcolm Baldrige National Quality Award.
The company’s total employment has risen from 71,000 employees in 1980 to over 130,000 today.
More than a set of tools, though, Motorola applied Six Sigma as a way to transform the business, a way driven by communication, training, leadership, teamwork, measurement, and a focus on customers.
GENERAL ELECTRIC
Reply
#5
A detailed report about six sigma limits and new developments in this quality control tool.
What is SAP, and how is it used in mechanical engineering?
Reply
#6
What is SAP, and how is it used in mechanical engineering?
Reply
#7
Business applications are getting more complex day by day, while staff levels and IT budgets are shrinking at the same time. Expensive implementations and high-maintenance solutions cannot be afforded by small and medium scale enterprises. In the mechanical engineering industry, there is a need for powerful information technology which can respond quickly to changing needs: an ERP solution suited to the complexity of the midsized enterprise. An experienced consulting partner is needed at the entry level.
Combined know-how of SAP and IT is beneficial. Efficient operation of the SAP application is required, which can lead to a lasting reduction of the industry's IT costs. Some companies provide a preconfigured industry solution for the mechanical engineering industry and for the mechanical plant engineering sector. These are based on SAP best practices and industry know-how.
http://studentbank.in/report-sap-r-3-full-report
http://slideshareITSolutions/sap-me-mechanical-engineering
http://chetanasforumlofiversion/index.php/t17889.html
Reply
#9
[attachment=15497]
ABSTRACT
Never in the history of the modern world has listening to the voice of the customer been more important than it is today. Old business models no longer work. Today’s competitive environment leaves no room for error. Companies must delight their customers and relentlessly look for new ways to exceed customers’ expectations. This is where “Six Sigma” counts.
Six sigma is a powerful business strategy that employs a disciplined approach to tackle process variability using the application of statistical and non-statistical tools and techniques in a rigorous manner. This paper examines the pros and cons of six sigma in a detailed manner.
The main purpose of Six Sigma is to make the manufacturing processes BETTER IN ACCURACY, FASTER IN SERVICE, and LOWER AT COST IN PRODUCTION. It can also be used to improve every field of business, from production, to human resources, to order entry, to technical support. Six Sigma can be used for any activity that is concerned with cost, timeliness, and quality of results.
Keywords: Six Sigma, Variation, Standard Deviation, Black Belts
INTRODUCTION
Six Sigma (6σ)

• The Greek symbol σ (sigma) refers to the amount of deviation in a process around the mean value for that process
• Processes have acceptable upper and lower limits
• Six Sigma is concerned with reducing the variations to get more output within those limits
The basic assumption in six sigma is that variation is the enemy of quality. The more variation in a product, the fewer the number of items which will work as designed. To reduce variation, one must be able to measure it. There are various ways to measure it, but the usual measure is the standard deviation. The standard deviation is a measure of variability that is more convenient than percentile differences for further investigation and analysis of statistical data. The standard deviation of a set of measurements x1, x2, …, xn with mean x̄ is defined as the square root of the mean of the squared deviations from the mean; it is usually designated by the Greek letter sigma (σ). In symbols:
σ = sqrt( ((x1 − x̄)² + (x2 − x̄)² + … + (xn − x̄)²) / n )
The square of the standard deviation is the variance. If the standard deviation is small, the measurements are tightly clustered around the mean; if it is large, they are widely scattered.
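A minimal sketch of this calculation (Python, with made-up measurement values):

import math

measurements = [10.1, 9.8, 10.0, 10.3, 9.9]   # x1 ... xn (hypothetical data)

n = len(measurements)
mean = sum(measurements) / n
variance = sum((x - mean) ** 2 for x in measurements) / n   # mean of squared deviations (population variance)
sigma = math.sqrt(variance)                                  # standard deviation

print(round(mean, 2), round(variance, 4), round(sigma, 3))   # 10.02 0.0296 0.172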
Definition
“Six Sigma: A comprehensive and flexible system for achieving, sustaining and maximizing business success. Six sigma is uniquely driven by a close understanding of customer needs, disciplined use of facts, data, and statistical analysis and diligent attention to managing, improving, and reinventing business processes.”
The History of Six Sigma
• “Six Sigma” originated at Motorola in 1982
• Early adopters:
  • Allied Signal (Honeywell)
  • General Electric (1996)
Six Sigma management philosophy today:
• A well-developed, thorough approach to quality improvement.
• Uses statistics and management by fact.
• Is effective in manufacturing and services firms.
What is Six Sigma?
Six Sigma is a data-driven, disciplined approach to minimizing defects in any type of process. Popularized in the mid-1990s, Six Sigma has grown greatly in acceptance among thousands of companies and has proven to be both a time and money saver when implemented properly.
The goal of Six Sigma is to statistically represent how a process is performed, and determine where defects can be eliminated. Six Sigma strives for just 3.4 defects per million - near perfection.
Why 6σ?
Simply because Six Sigma
• Delivers business excellence;
• Improves profits;
• Delights customers;
• Increases entry barrier for competition.
6 SIGMA - HOW IT REALLY WORKS AND HOW IT'S EVOLVING
Six Sigma is a collection of over 100 concepts, techniques and sophisticated statistical tools that are woven together to create a unique problem solving methodology. 6 Sigma uses facts, data and root cause analysis to solve problems. This methodology is used to resolve process issues in manufacturing operations and business transactions. Typical problems that can be solved include quality, warranty, downtime, scrap and rework issues in manufacturing operations and flaws in business processes or customer services. Ultimate objectives of the methodology are to solve problems to improve quality, profitability and customer satisfaction. 6 Sigma is often referred to as "TQM on steroids".
Reply
#10

To get more information about the topic "six sigma full report", please refer to the page link below

http://studentbank.in/report-six-sigma-full-report

http://studentbank.in/report-six-sigma-f...3#pid56383
Reply
#11
To get information about the topic "mechanical engg six sigma" full report, ppt and related topics, refer to the page links below


http://studentbank.in/report-mechanical-...rts-titles

http://studentbank.in/report-mechanical-...nar-topics

http://studentbank.in/report-six-sigma-f...e=threaded

http://studentbank.in/report-projects-fo...g-students

http://studentbank.in/report-mechanical-...d-report-4
Reply
