When assessing the quality of information obtained from survey research, the manager must determine the accuracy of those results. This requires careful consideration of the research methodology employed in relation to the various types of errors that might result (see Exhibit 6.1).
Sampling Error
Two major types of errors may be encountered in connection with the sampling process: random error and systematic error, sometimes referred to as bias. Surveys often attempt to obtain information from a representative cross section of a target population. The goal is to make inferences about the total population based on the responses given by the respondents sampled. Even when all aspects of the sample are designed and executed properly, the results are still subject to a certain amount of random error (or random sampling error) because of chance variation, the difference between the sample value and the true value of the population mean. This error cannot be eliminated, but it can be reduced by increasing the sample size, and its range can be estimated at a particular level of confidence.
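To make this concrete, the following is a minimal sketch in Python (the proportion and sample sizes are hypothetical) showing how the range of random sampling error around a sample proportion can be estimated at the 95 percent confidence level, and how it shrinks as the sample size grows.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the confidence interval for a sample proportion p
    based on n respondents; z = 1.96 corresponds to 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey result: 40% of respondents favor a new product concept.
p = 0.40
for n in (100, 400, 1600):
    moe = margin_of_error(p, n)
    print(f"n = {n:5d}: estimate {p:.0%} +/- {moe:.1%}")

# Quadrupling the sample size roughly halves the random sampling error,
# but no sample size eliminates it entirely.
```

Note that this calculation says nothing about systematic error; it captures only the chance variation introduced by random sampling.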
Systematic Error
Systematic error, or bias, results from mistakes or problems in the research design or from flaws in the execution of the sample design. Systematic error exists in the results of a sample if those results show a consistent tendency to vary in one direction (consistently higher or consistently lower) from the true value of the population parameter. Systematic error includes all sources of error except those introduced by the random sampling process. Therefore, systematic errors are sometimes called nonsampling errors. The nonsampling errors that can systematically influence survey answers can be categorized as sample design error and measurement error.
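The distinction can be illustrated with a brief simulation. The values below are invented: a proper random sample scatters around the true mean (random error only), while a flawed procedure that can reach only part of the population misses in the same direction every time (systematic error).

```python
import random
import statistics

random.seed(3)

# Hypothetical population with a known true mean.
population = [random.gauss(100, 15) for _ in range(50_000)]
true_mean = statistics.mean(population)

# A flawed procedure that can only reach the upper 80% of the population
# (purely for illustration of a consistent, one-directional error).
reachable = sorted(population)[10_000:]

unbiased_means, biased_means = [], []
for _ in range(200):
    unbiased_means.append(statistics.mean(random.sample(population, 200)))
    biased_means.append(statistics.mean(random.sample(reachable, 200)))

print(f"True mean:                 {true_mean:6.1f}")
print(f"Average of proper samples: {statistics.mean(unbiased_means):6.1f}")  # scatters around the true mean
print(f"Average of flawed samples: {statistics.mean(biased_means):6.1f}")    # consistently too high
```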
Sample Design Error Sample design error is a systematic error that results from a problem in the sample design or sampling procedures. Types of sample design errors include frame errors, population specification errors, and selection errors.
Frame Error The sampling frame is the list of population elements or members from which units to be sampled are selected. Frame error results from using an incomplete or inaccurate sampling frame. The problem is that a sample drawn from a list that is subject to frame error may not be a true cross section of the target population. A common source of frame error in marketing research is the use of a published telephone directory as a sampling frame for a telephone survey. Many households are not listed in a current telephone book because they do not want to be listed or are not listed accurately because they have recently moved or changed their telephone number. Research has shown that those people who are listed in telephone directories are systematically different from those who are not listed in certain important ways, such as socioeconomic levels. This means that if a study purporting to represent the opinions of all households in a particular area is based on listings in the current telephone directory, it will be subject to frame error.
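A small simulation, with entirely hypothetical population figures, shows how a directory-based frame that omits unlisted households can bias an estimate even when the draw from the frame is perfectly random.

```python
import random

random.seed(1)

# Hypothetical population: 60% of households are listed in the directory.
# Assume (for illustration only) that listed households have higher average
# incomes than unlisted ones -- the kind of systematic difference research
# has found between listed and unlisted households.
population = (
    [("listed", random.gauss(65_000, 10_000)) for _ in range(6_000)]
    + [("unlisted", random.gauss(50_000, 10_000)) for _ in range(4_000)]
)

true_mean = sum(income for _, income in population) / len(population)

# Frame error: the sampling frame is the directory, so only listed
# households can ever be selected, no matter how random the draw is.
frame = [income for status, income in population if status == "listed"]
sample = random.sample(frame, 400)
frame_estimate = sum(sample) / len(sample)

print(f"True population mean income:   {true_mean:,.0f}")
print(f"Estimate from directory frame: {frame_estimate:,.0f}")

# The estimate runs consistently high -- a systematic error that drawing a
# larger sample from the same flawed frame cannot correct.
```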
Population Specification Error Population specification error results from an incorrect definition of the population or universe from which the sample is to be selected. For example, suppose a researcher defined the population or universe for a study as people over the age of 35. Later, it was determined that younger individuals should have been included and that the population should have been defined as people 20 years of age or older. If those younger people who were excluded are significantly different in regard to the variables of interest, then the sample results will be biased.
Selection Error Selection error can occur even when the analyst has a proper sampling frame and has defined the population correctly. Selection error occurs when sampling procedures are incomplete or improper or when appropriate selection procedures are not properly followed. For example, door-to-door interviewers might decide to avoid houses that do not look neat and tidy because they think the inhabitants will not be agreeable to doing a survey. If people who live in messy houses are systematically different from those who live in tidy houses, then selection error will be introduced into the results of the survey.
Measurement Error Measurement error is often a much more serious threat to survey accuracy than is random error. When the results of public opinion polls are given in the media and in professional marketing research reports, an error figure is frequently reported (say, plus or minus 5 percent). The television viewer or the user of a marketing research study is left with the impression that this figure refers to total survey error. Unfortunately, this is not the case. This figure refers only to random sampling error. It does not include sample design error and speaks in no way to the measurement error that may exist in the research results. Measurement error occurs when there is variation between the information being sought (true value) and the information actually obtained by the measurement process. Our main concern in this text is with systematic measurement error. Various types of error may be caused by numerous deficiencies in the measurement process. These errors include surrogate information error, interviewer error, measurement instrument bias, input (processing) error, nonresponse bias, and response bias.
Surrogate Information Error Surrogate information error occurs when there is a discrepancy between the information actually required to solve a problem and the information being sought by the researcher. It relates to general problems in the research design, particularly failure to properly define the problem. A few years ago, Kellogg spent millions developing a line of 17 breakfast cereals that featured ingredients that would help consumers cut down on their cholesterol. The product line was called Ensemble. It failed miserably in the marketplace. Yes, people want to lower their cholesterol, but the real question was whether they would purchase a line of breakfast cereals to accomplish this task. This question was never asked in the research. Also, the name “Ensemble” usually refers to either an orchestra or something you wear. Consumers didn’t understand either the product line or the need to consume it.
Interviewer Error Interviewer error, or interviewer bias, results from the interviewer’s influencing a respondent—consciously or unconsciously—to give untrue or inaccurate answers. The dress, age, gender, facial expressions, body language, or tone of voice of the interviewer may influence the answers given by some or all respondents. This type of error is caused by problems in the selection and training of interviewers or by the failure of interviewers to follow instructions. Interviewers must be properly trained and supervised to appear neutral at all times. Another type of interviewer error occurs when deliberate cheating takes place. This can be a particular problem in door-to-door interviewing, where interviewers may be tempted to falsify interviews and get paid for work they did not actually do.
Measurement Instrument Bias Measurement instrument bias (sometimes called questionnaire bias) results from problems with the measurement instrument or questionnaire. Examples of such problems include leading questions or elements of the questionnaire design that make recording responses difficult and prone to recording errors. Problems of this type can be avoided by paying careful attention to detail in the questionnaire design phase and by using questionnaire pretests before field interviewing begins.
Input Error Input errors may be due to mistakes that occur when information from survey documents is entered into the computer. For example, a document may be scanned incorrectly. Individuals filling out surveys on a smartphone or laptop may hit the wrong keys.
Nonresponse Bias Ideally, if a sample of 400 people is selected from a particular population, all 400 of those individuals should be interviewed. As a practical matter, this will never happen. Response rates of 5 percent or less are common in mail surveys. The question is, “Do those who responded to the survey differ systematically in some important way from those who did not respond?” Such differences lead to nonresponse bias. We recently examined the results of a study conducted among customers of a large savings and loan association. The response rate to the questionnaire, included in customer monthly statements, was slightly under 1 percent. Analysis of the occupations of those who responded revealed that the percentage of retired people among respondents was 20 times higher than in the local metropolitan area. This overrepresentation of retired individuals raised serious doubts about the accuracy of the results.
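A rough simulation, using invented proportions loosely patterned on the savings and loan example, shows how sharply different response rates across customer groups can distort who ends up in the responding sample.

```python
import random

random.seed(7)

# Hypothetical customer base: 5% retired, 95% not retired.
# Suppose retired customers are far more likely to return the questionnaire.
customers = ["retired" if random.random() < 0.05 else "working"
             for _ in range(100_000)]
response_rate = {"retired": 0.10, "working": 0.005}   # invented rates

respondents = [c for c in customers if random.random() < response_rate[c]]

pct_retired_population = customers.count("retired") / len(customers)
pct_retired_respondents = respondents.count("retired") / len(respondents)

print(f"Retired share of all customers: {pct_retired_population:.1%}")
print(f"Retired share of respondents:   {pct_retired_respondents:.1%}")

# Respondents over-represent retired customers many times over, so any
# opinion that differs between retired and working customers will be biased.
```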
Obviously, the higher the response rate, the less the possible impact of nonresponse because nonrespondents then represent a smaller subset of the overall picture. If the decrease in bias associated with improved response rates is trivial, then allocating resources to obtain higher response rates might be wasteful in studies in which resources could be used for better purposes. Nonresponse error occurs when the following happens:
▪ A person cannot be reached at a particular time.
▪ A potential respondent is reached but cannot or will not participate at that time (for example, the telephone request to participate in a survey comes just as the family is sitting down to dinner).
▪ A person is reached but refuses to participate in the survey. This is the most serious problem because it may be possible to achieve future participation in the first two circumstances.
The refusal rate is the percentage of persons contacted who refused to participate in a survey. Although response rates for mobile and Internet surveys hover around 60 percent, with panel surveys even higher, telephone and mail response rates are very low. Pew Research has found that the response rate for a typical telephone survey fell from 36 percent in 1997 to only 8 percent in 2012.1 The movement from landlines to smartphones has been a contributing factor to this decline.
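The refusal rate itself is a simple calculation. The sketch below uses made-up contact counts; note that, as a simplification, the response rate here is computed only over people actually reached, whereas formal definitions typically use the full eligible sample as the denominator.

```python
# Hypothetical telephone field report for one evening of calling.
contacted = 250          # people actually reached
refused = 230            # reached but declined to participate
completed = contacted - refused

refusal_rate = refused / contacted      # share of contacts who refused
response_rate = completed / contacted   # simplified: completes among contacts

print(f"Refusal rate:  {refusal_rate:.0%}")   # 92%
print(f"Response rate: {response_rate:.0%}")  # 8%
```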
Response Bias If there is a tendency for people to answer a particular question in a certain way, then there is response bias. Response bias can result from deliberate falsification or unconscious misrepresentation.
Deliberate falsification occurs when people purposefully give untrue answers to questions. There are many reasons why people might knowingly misrepresent information in a survey. They may wish to appear intelligent, they may not want to reveal information that they feel is embarrassing, or they may want to conceal information that they consider to be personal. For example, in a survey about fast-food buying behavior, respondents may have a fairly good idea of how many times they visited a fast-food restaurant in the past month. However, they may not remember which fast-food restaurants they visited or how many times they visited each one. Rather than answering “Don’t know” to a question about which restaurants they visited, the respondents may simply guess. Unconscious misrepresentation occurs when a respondent is legitimately trying to be truthful and accurate but gives an inaccurate response. This type of bias may occur because of question format, question content, or various other reasons.