How Do We Know
What We Know about Organizational Behavior?
By Uma Sekaran
This book has examined the skills and knowledge that managers need to be successful in their jobs. But how do you know how much faith you should have in all the information you acquire from textbooks and management journals? Are some theories and statements more applicable than others? Even when applicable, will they apply at all times and under all circumstances? You can find answers to these important questions once you know the foundation on which theories and assertions rest. This appendix provides that foundation. It first examines why managers need to know about research, and then discusses the basis for knowledge in this field. It then looks at the research process and research design and ends with a discussion of how research knowledge affects you.
Why is it necessary for you to know about research? First, this knowledge helps you determine how much of what is offered in textbooks is of practical use to you as a manager. Second, a basic understanding of how good empirical research is done can make you an effective manager by helping you to make intelligent decisions about research proposals and reports that reach your desk. Third, it enables you to become an informed and discriminating consumer of research articles published in the management journals that you need to read to keep up with new ideas and technology. For your convenience, a list of the current academic and practitioner-oriented journals that frequently publish articles on organizational behavior is provided in Table B.1.
Table B.1 Journals with Organizational Behavior Articles
Academic Journals:
Academy of Management Journal
Academy of Management Review
Administrative Science Quarterly
Advances in International Comparative Management
American Journal of Small Business
Behavioral Science
Group and Organization Studies
Human Relations
Human Resource Management
Human Resource Management Review
Human Resource Planning
Industrial and Labor Relations Review
International Journal of Management
Journal of Applied Behavioral Science
Journal of Applied Business Research
Journal of Applied Psychology
Journal of Business
Journal of Business Ethics
Journal of Business Research
Journal of Human Resource Management
Journal of International Business Studies
Journal of Management
Journal of Management Studies
Journal of Occupational Psychology
Journal of Organizational Behavior
Journal of Organizational Behavior Management
Journal of Vocational Behavior
Organizational Behavior and Human Decision Processes
Personnel Administrator
Sex Roles

Practitioner-Oriented Journals:
Academy of Management Executive
Business Horizons
California Management Review
Columbia Journal of World Business
Harvard Business Review
Human Resource Development Quarterly
Industrial Relations
Industry Week
Organizational Dynamics
Personnel Journal
SAM Advanced Management Journal
Sloan Management Review
Supervision
Training
Training and Development Journal
Understanding scientific research methods enables you to differentiate between good and appropriate research, which you can apply in your setting, and flawed or inappropriate research, which you cannot use. Moreover, knowledge of techniques such as sampling design enables you to decide whether the results of a study that used a particular type of sample in certain types of organizations are applicable to your setting.
Managers need to understand, predict, and control the research-oriented problems in their environment. Some of these problems may be relatively simple and can be solved through simple data gathering and analysis. Others may be relatively complex, needing the assistance of researchers or consultants. In either case, without some basic knowledge of scientific research, managers will be unable to solve the problems themselves or to work effectively with consultants.
Managers need to discuss their problems with consultants in a useful way. This includes informing the problem solvers right at the start of the consulting process of any constraints (such as company records that are off-limits to outsiders) or of types of recommendations that will not be considered (such as laying off or hiring more people). Such discussions not only save time but also help the managers and researchers start off on the right foot. Managers who don’t understand the important aspects of research will not be equipped to anticipate and forestall the inevitable hurdles in manager–researcher interactions. Also, paying a consultant handsomely for a research report will not help the company unless the manager is capable of determining how much scientific value can be placed on the findings. For these and other reasons, a working knowledge of the scientific research process and research design is necessary.
Observation and scientific data gathering have led to some of our knowledge about management. For instance, very early on, Frederick Winslow Taylor observed, studied, experimented, and demonstrated that shoveling operations (such as the handling of coal and iron ore at the Bethlehem Steel works) could be managed more efficiently by changing the way men shoveled—changing how the shovel was handled, how the body movements were made, and so on. The era of scientific management that Taylor’s work ushered in provided much knowledge about how management could improve efficiency. This type of knowledge is not easy to come by, however, when we are examining employees’ feelings, attitudes, and behaviors. Our knowledge of organizational behavior stems instead from armchair theories, case studies, and scientific research.
Armchair Theories
In trying to understand organizational behavior, management experts and scholars initially resorted to armchair theorizing—theorizing based on the observation of various phenomena and behaviors in the workplace. For instance, Douglas McGregor, through observation and experience, theorized that managers have two different world views of employees. Some managers (Theory X) assume that employees are by nature lazy and not very bright, that they dislike responsibility and prefer to be led rather than to lead, and that they resist change. Other managers (Theory Y) assume that employees have the opposite characteristics. McGregor’s concept of Theory X and Theory Y managers has become a classic armchair theory.
Few people either totally accept or totally dispute this theory because of the lack of hard data to either substantiate or negate this interesting notion. Armchair theories are based on natural observation with no systematic experimentation and hence are not very useful for application in organizations.
Case Studies
Case studies—studies that examine the environment and background in which events occur in specific organizations during a particular period of time—help us to understand behavior in those organizations at that time. For example, we could study a particular organization in depth to determine the contributing factors that led to its fast recovery after a prolonged recession. We might find several factors, including price reductions, the offering of good incentives to a highly motivated work force, and the taking of big risks. However, the findings from this one-time study of an organization offer only limited knowledge about fast recovery from recessions, because the findings may not hold true for other organizations or even for the same organization at another time. The replication of case studies is almost impossible, since environmental and background factors are rarely the same from organization to organization. Most of the companies whose problems you have been asked to solve come from real cases written by management scholars who studied the companies. The solutions they found may not work for other organizations experiencing similar problems because of differences in size, technology, environment, labor force, clientele, and other internal and external factors. However, through case studies we do gather information and gain insights and knowledge that might help us to develop theories and test them later.
Scientific Research
Empirical or data-based scientific research identifies a problem and solves it after a systematic gathering and analysis of the relevant data. This type of research offers in-depth understanding, confidence in the findings, and the capability of applying the knowledge gained to similar organizations. Scientific research is the main focus of this appendix.
Scientific inquiry involves a well-planned and well-organized systematic effort to identify and solve a problem. It encompasses a series of well-thought-out and carefully executed activities that help to solve the identified problem itself, as opposed to merely its symptoms.
Purposes of Scientific Research:
Applied and Basic Research
Scientific inquiry can be undertaken for two different purposes: to solve an existing problem that a particular organization faces, or to examine problems that organizations generally encounter and to generate solutions, thereby expanding the knowledge base. Research undertaken to solve an existing problem in a specific setting is applied research. In this type of research, the findings are immediately applied to solve the problem. Many professors acting as consultants to organizations do applied research.
Research undertaken to add information to our existing base of knowledge is basic research. A large number of issues are of common interest to many organizations—for example, how to increase the productivity of a diverse workforce or how to eradicate sexual harassment in the workplace. The knowledge gained from research on such general issues can become useful later for application in organizational settings, but that is not the primary goal of basic research. The goal is to generate knowledge with which to build better theories that can be tested later. Basic research is often published in academic journals.
The Two Faces of Science:
Theory and Empirical Research
Theory and empirical research are the two faces of science. Organizations benefit when good theories are developed and then substantiated through scientific research, because the results can then be confidently used for problem solving.
THEORY A theory is a postulated network of associations among various factors that a researcher is interested in investigating. For example, given what has been published thus far, you might theorize that self-confident employees perceive their work environment positively, which fosters their productivity, which in turn generates more profits for the company. In constructing this theory, you have postulated a positive relationship between (1) the self-confidence of employees and their positive attitude toward their work environment, (2) their attitude toward the work environment and their productivity, and (3) their productivity and the company’s profits.
No doubt, this theory appeals to common sense; but in order to establish whether or not it holds true, we need to actually test it in organizations. Thus, theories offer the basis for doing scientific, data-based research; and the theories and research together add to our knowledge. Conducting empirical research without the basis of sound theories does not steer us in the right direction, and building theories without empirically testing them limits their value.
The usefulness of good theories cannot be overstated. A good theory is formulated only after a careful examination of all the previous research and writings on the topic of interest, so that no factor already established as important is inadvertently omitted. Theory building offers unique opportunities to look at phenomena from different perspectives or to add new dimensions to existing ways of examining a phenomenon. New insights and creative ideas for theory building can come through personal observation, through intuition, or even through informal discussions with employees.
Testable theories are theories whose hypothesized relationships among measurable variables can be empirically tested and verified. When tested and substantiated repeatedly, such theories become the foundation on which subsequent theory building progresses. The next issue of interest is how theories are affirmed through empirical research.
EMPIRICAL RESEARCH As we have just seen, theories are of no practical use unless we have confidence that they work and can be applied to problem solving in organizational settings. Empirical research allows us to test the value of theories.
Empirical research is research that involves identifying the factors to be studied, gathering the relevant data, analyzing them, and drawing conclusions from the results of data analysis. It could involve simple qualitative analysis of the data, or it could be more complex, using a hypothetico-deductive approach. In qualitative analysis, responses to open-ended questions are obtained and meaningfully classified, and certain conclusions are drawn. In the hypothetico-deductive approach, a problem is identified, defined, and studied in depth; then, a theory is formulated; from that theory, testable hypotheses are generated; next, a research design is developed, relevant data are gathered and analyzed, results are interpreted, and conclusions (or deductions) are drawn from the results. Figure B.1 illustrates this approach.
To be called “scientific,” research should conform to certain basic principles. It should be conducted objectively (without subjective biases). It should have a good and rigorous design (which we will examine shortly). It should be testable; that is, the conjectured relationships among factors in a setting should be capable of being tested. It should be replicable; that is, the results must be similar each time similar research is conducted. Finally, the findings should be generalizable (applicable to similar settings). It goes without saying, then, that scientific research offers precision (a good degree of exactitude) and a high degree of confidence in the results (i.e., the researcher can say that 95 percent of the time the results generated by the research will hold true, with only a 5 percent chance that they will not).
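As a concrete illustration of precision and confidence, here is a minimal Python sketch (not from the original text; the scores are invented) that computes a 95 percent confidence interval around a sample mean using the conventional normal approximation.

import math
import statistics

# Invented job-satisfaction scores (on a 1-7 scale) from twenty employees.
scores = [5.1, 4.8, 6.0, 5.5, 4.2, 5.9, 5.3, 4.7, 5.6, 5.0,
          4.9, 5.4, 6.1, 4.5, 5.2, 5.8, 4.6, 5.7, 5.0, 5.3]

mean = statistics.mean(scores)
se = statistics.stdev(scores) / math.sqrt(len(scores))   # standard error

# 1.96 is the z-value for 95 percent confidence: intervals built this way
# capture the true mean about 95 percent of the time.
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")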
The research process starts with a definition of the problem. To help define the problem, the researcher may interview people and study published materials in the area of interest in order to better understand what is happening in the environment. After defining the problem in clear and precise terms, the researcher develops a theoretical framework, generates hypotheses, creates the research design, collects data, analyzes data, interprets results, and draws conclusions.
Problem Definition
The first job for the researcher is to define the problem. Often, however, it is difficult to precisely state the specific research question to be investigated. The researcher might simply know the broad area of interest—for instance, discrimination—without being clear about which aspect of discrimination to study. In order to focus on the issue to be investigated, the researcher might need to collect some preliminary information that will help to narrow down the issue.
Such information can be obtained by interviewing people in organizations and by doing a literature survey. For example, employees of different gender, race, age, physical ability, and the like may be interviewed to determine the specific aspect of discrimination on which to focus. These interviews also provide insight into what the employees (rather than the researchers) consider important. The literature survey ensures that no pertinent variable is inadvertently omitted and that there is a credible and defensible basis for the research to be done. The researcher conducts an exhaustive search for all the published work in the area of interest to determine what research has been done thus far in the particular area and with what results. The search consumes a lot of time, as one must wade through several psychological, sociological, anthropological, and other relevant journals.
With all this information in hand, the researcher is now ready to define the problem. A well-defined, precise problem statement is a must for any study. The problem definition for the broad topic of discrimination could be this: What are the important factors that contribute to employees’ beliefs that they are being discriminated against by their immediate supervisor in cross-gender or cross-racial supervisor–employee relationships?
Theoretical Framework
The next step is to develop a theoretical framework for the study. This involves focusing on the pertinent variables for the study and discussing the anticipated or theorized network of connections among the variables. For the discrimination problem, the framework might identify three factors related to employees’ beliefs that they were discriminated against by the supervisor: (1) the level of mutual trust that is perceived by the employee to exist between the supervisor and employee (high to low), (2) the manner in which the supervisor offers performance feedback to the employee (in a forthright and helpful manner rather than in a derogatory and hurtful way), and (3) the extent to which the supervisor plays the role of mentor to the employee (ranging from training the subordinate and promoting the person’s career advancement to being indifferent toward the employee’s career progress).
A network of logical connections among these four variables of interest to the study—discrimination (the dependent variable) and trust, performance feedback, and mentoring (the three independent variables)—can then be formulated. These connections with the anticipated nature and direction of the relationships among the variables are postulated in the theoretical framework.
Hypotheses
On the basis of the theoretical framework, the researcher next generates hypotheses. A hypothesis is a testable statement of the conjectured relationship between two or more variables. It is derived from the connections postulated in the theoretical framework. An example of a hypothesis is this: The more the employee perceives the supervisor as performing the mentoring role, the less the employee will feel discriminated against by the supervisor. The statement can be tested through data gathering and correlational analysis to see if it is supported.
Research Design
The next step in the research process is research design. Because this step is complex, it is covered in a separate section of this appendix, following the discussion of the research process.
Data Collection
After creating the research design, the researcher must gather the relevant data. In our example of the discrimination problem, we would collect data on the four variables of interest from employees in one or more organizations, we would obtain information about their race and gender and that of their supervisors, and we would seek such demographic data as age, educational level, and position in the organization. This information helps us describe the sample and enables us to see later whether demographic characteristics make a difference in the results. For example, we might discover during data analysis that older employees sense less discrimination than their younger counterparts. Such information could even provide a basis for further theory development.
Data Analysis
Having collected the data, the researcher must next analyze them, using statistical procedures, to test whether the hypotheses have been substantiated. In the case of the discrimination hypothesis, if a correlational analysis between the variables of mentoring and discrimination indicates a significant negative correlation, the hypothesis will have been supported; that is, we have been correct in conjecturing that the more the supervisor is perceived as a mentor, the less the employee feels discriminated against. Each of the hypotheses formulated from the theoretical framework is tested, and the results are examined.
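Purely as an illustration, a correlational test of this kind might be run as follows in Python. The scores are invented and the scipy library is assumed to be available; a real study would use a much larger sample and validated measures.

from scipy.stats import pearsonr

# Invented scores from ten employees: perceived mentoring (1-7) and
# perceived discrimination (1-7).
mentoring      = [6, 5, 7, 3, 2, 6, 4, 1, 5, 7]
discrimination = [2, 3, 1, 5, 6, 2, 4, 7, 3, 1]

r, p = pearsonr(mentoring, discrimination)
# A negative r with p < .05 would support the hypothesis that more
# mentoring goes with less felt discrimination.
print(f"r = {r:.2f}, p = {p:.3f}")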
Interpreting Results and Drawing Conclusions
The final step is to interpret the results of the data analysis and draw conclusions about them. In our example, if a significant negative relationship is indeed found between mentoring and discrimination, then one of our conclusions might be that mentoring helps fight feelings of discrimination. We might therefore recommend that if the organization wants to create a climate where employees do not feel discriminated against, supervisors should actively engage in mentoring. If the organization accepts this recommendation, it might conduct training programs to make supervisors better mentors. By testing and substantiating each of the hypotheses, we might find a multitude of solutions to overcome the perception of discrimination by employees.
Summary
We can see that every step in the research process is important. Unless the problem is well defined, the research endeavor will be fruitless. If a thorough literature survey is not done, a defensible theoretical framework cannot be developed and useful hypotheses cannot be generated—which compromises effective problem solving. Using the correct methods in data gathering and analysis and drawing relevant conclusions are all indispensable methodological steps for conducting empirical research. We next examine some of the research design issues that are integral to conducting good research.
Issues regarding research design relate particularly to how the variables are measured, how the data are collected, what sampling design is used, and how the data are analyzed. Before decisions in these areas are made, some details about the nature and purpose of the study have to be determined so there is a good match between the purpose of the study and the design choices. If the research design does not mesh with the research goals, the right solutions will not be found.
Important Concepts in Research Design
Five important concepts in research design must be understood before an adequate design can be created: nature of study, study setting, type of study, researcher interference, and time horizon. The nature of the study is its purpose—whether it is to establish correlations among variables or causation. The study setting can be either the environment in which the phenomena studied normally and naturally occur—the field—or a contrived, artificial setting—the laboratory. The type of study is either experimental (to establish causal connections) or correlational (to establish correlations). Experiments can be conducted in an artificial setting—a lab experiment—or in the organization itself where events naturally occur—a field experiment. Researcher interference is the extent to which the researcher manipulates the independent variable and controls other contaminating factors in the study setting that are likely to affect the cause–effect relationship. The time horizon is the number of data collection points in the study: the study can be either one-shot (various types of data are collected only once during the investigation) or longitudinal (the same or similar data are collected more than once from the same system during the course of the study).
Purpose of Study and Design Choices
One of the primary issues to consider before making any research design decision is the purpose of the study. Is the research to establish a causal relationship (that variable X causes variable Y), or is it to detect any correlations that might exist between two or more variables? A study to establish a cause–effect relationship differs in many areas (for example, the setting, type of study, extent of researcher interference with the ongoing processes, and time frame of the study) from a study to examine correlations among factors. Figure B.2 depicts the fit between the goal of the study and the characteristics of the study.
Causal Studies
Studies conducted to detect causal relationships call for an experimental design, considerable researcher interference, and a longitudinal time span. The design could consist of laboratory experiments, field experiments, or simulations.
LABORATORY EXPERIMENTS A rigorous causal study may call for a laboratory experiment, where participants are exposed to an artificial environment and an artificial stimulus in order to establish a cause–effect relationship. The experiment is set up with maximum researcher interference; both manipulation and controls (described later) are used, and data are collected from the subjects more than once during the experiment (longitudinally). Following is an example of how a lab experiment is conducted.
Suppose a manager wants to know which of two incentives—offering stock options or giving a bonus—would better improve employee productivity. To determine this, the manager has to experiment with each of the two types of incentives to see which offers better results. Not knowing how to proceed, the manager might hire a researcher, who is likely to recommend conducting a lab experiment first, and then a field experiment. The lab experiment firmly establishes the causal relationship, and the field experiment confirms whether or not the causal relationship established during the lab experiment holds good in the organizational setting.
To set up a lab experiment in which thirty subjects participate, the following is done:
1. An artificial setting is created. It consists of three conference rooms in which the experiment is conducted after regular work hours.
2. A simple job—assembling cardboard houses—is given to the subjects who take part in the experiment for two hours.
3. The subjects receive an imaginary bonus in the form of cardboard chips and stock options in the form of fake certificates.
4. Each subject is randomly assigned to one of the three conference rooms, so three ten-member groups are formed.
For the first hour, all three groups are assigned the task of assembling the cardboard houses. Thereafter, the researcher manipulates the incentives—giving one group stock options; another, a bonus; and a third, called the control group, no incentives at all. The researcher has already exercised tight control to ensure that all three groups have more or less the same types of members in terms of ability, experience, and the like by randomly assigning members to each of the groups. In random assignment, every member has an equal chance of being assigned to any of the groups. This control helps avoid contamination of the cause–effect relationship, since all factors that might affect the causal relationship (age, ability, and so on) are randomly distributed among the groups.
The data are collected at two different times, in the following manner. At the end of the first hour, when all three groups have worked without any incentives, the number of cardboard houses built by each group will be recorded by the researcher. The numbers are again counted and recorded at the end of the second hour, after the introduction of the incentives. Determining the difference between the number of houses assembled during the second hour and the number assembled during the first hour for the three groups clarifies the following two issues:
• Do the incentives make any difference at all to performance? Obviously, if the performance has increased during the second hour for either or both of the two groups provided with incentives, while there is no difference for the control group, then it is safe to surmise that either or both of the incentives have caused performance to rise. If there is no difference in the production between the second and first hour for all three groups, then, of course, the incentives have not caused an increase in performance.
• If the incentives do make a difference, which of the two incentives has worked better? By examining which group—the group that received the stock options or the group that received the bonus—performed better during the second hour, we know which of the two incentives worked better. The incentive that increases performance more will obviously be preferred by the company.
Because all possible contaminating factors have been controlled by the random assignment of members to the three groups, the cause–effect relationships found can be accepted with a high degree of confidence.
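To make the logic concrete, here is a minimal Python sketch of the experiment's bookkeeping: random assignment, manipulation of the incentive, and a before-and-after comparison against the control group. All counts are invented for illustration, not taken from an actual study.

import random

random.seed(42)
subjects = list(range(30))               # thirty hypothetical participants
random.shuffle(subjects)                 # random assignment: every subject has
groups = {                               # an equal chance of landing in any group
    "stock options": subjects[:10],
    "bonus": subjects[10:20],
    "control": subjects[20:],
}

# Invented counts of cardboard houses assembled per group in hour 1
# (no incentives) and hour 2 (incentives introduced for two groups).
hour1 = {"stock options": 48, "bonus": 50, "control": 49}
hour2 = {"stock options": 55, "bonus": 62, "control": 49}

for name in groups:
    print(f"{name:13s} gain = {hour2[name] - hour1[name]:+d}")
# If the control group shows no gain while an incentive group does, the
# manipulated incentive is credited with causing the difference; the larger
# gain identifies the more effective incentive.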
FIELD EXPERIMENTS What occurred in the tightly controlled artificial lab setting may or may not happen in an organizational setting, where many of the factors (such as employees’ ages and experience) cannot be controlled and the jobs to be done might be quite complex. But having established a strong causal relationship in the lab setting, the researcher is eager to see if the causal relationship found in the lab setting is generalizable to the organization, or field setting.
For the field experiment, three experimental cells (three branches or departments of the company, or whatever other units are appropriate for the organization) can be chosen. Real bonus and stock options can be offered to two groups, while the third group is treated as a control group and given no incentives. Work performance data can be collected for the three cells before the incentives are introduced and again six months after the incentives are introduced.
While it is possible to manipulate the incentive in a field experiment, it is not possible to control the contaminating factors (ability, experience, and so on). Because employees already occupy their positions, members cannot be randomly assigned to the three units. Under the circumstances, researcher interference can be only partial, since the independent variable can be manipulated but other factors cannot be controlled. Even manipulating the independent variable is not easy, because people in organizations become suspicious and anxious as word spreads that strange changes are being made at some sites. Not only does this cause apprehension among employees, but it may also produce invalid results. Because of these difficulties, very few field experiments are conducted in organizational behavior research. However, if the manipulation is successful and the results of the field experiment are similar to those of the lab experiment, the manager can confidently introduce the changes needed to obtain the desired results.
If you read journal articles describing experimental designs, you will want to see how well the manipulations were done (were the independent variables successfully manipulated, or did the subjects see through the manipulations?) and how tightly the contaminating variables were controlled. If the subjects in the various groups differed in some characteristics that are relevant to the cause-effect relationship, then it cannot be said with confidence that only the manipulated independent variable caused the dependent variable. Other factors in the setting might also have influenced the dependent variable, and they might be impossible to trace.1
SIMULATIONS Somewhere between lab and field experiments are simulations—experiments that are conducted in settings that closely resemble field settings. The specially created settings look much like actual environments in which events normally occur—for example, offices with desks, computers, and phones. Members of the experimental group are randomly selected and exposed to real-world experiences over a period of time, during which their behavior is studied. A free simulation for studying leadership styles, called “Looking Glass,” has been used in management classes. This simulation enables students to study different kinds of behavior as the researcher manipulates some of the stimuli while allowing the flow of events to be governed by the reactions of the participants.2
Correlational Studies
Researchers and managers may not be interested in establishing causal connections. Instead, they may want to understand, describe, or predict occurrences in the workplace. In general, they want to know which factors are related to desirable outcomes (such as employee loyalty to the organization) and which to undesirable outcomes (such as high turnover rates). Correlational studies are studies that are not specifically geared toward establishing cause–effect relationships. Such studies may be exploratory—trying to understand certain relationships; descriptive—trying to describe certain phenomena at the workplace; or analytical—focusing on testing hypotheses. Correlational studies are always conducted in the field setting with minimum researcher interference, and they can be either one-shot or longitudinal. The vast majority of the research articles published in organizational behavior journals are field studies examining correlations among factors.
To conduct a scientific study, whether causal or correlational, certain research design decisions must be made. As Figure B.3 shows, these decisions involve measurement issues, data collection methods, sampling design, and data analysis procedures.
Measurement Issues
We saw earlier that it is difficult to measure attitudes, feelings, and other abstract concepts. Since the measurement of variables in the organizational sciences is not as exact as in the physical sciences, management research cannot be completely scientific. It is possible, however, to minimize biases in measurement by carefully developing valid and reliable measures for even abstract concepts. The primary aspects of measurement are operational definition, the “goodness” of measures, and the measurement techniques to be used.
OPERATIONAL DEFINITION Attitudes such as job satisfaction and organizational commitment do not easily lend themselves to measurement. To measure them, we first need to translate them into observable behaviors. Operational definition is the reduction of the level of abstraction of concepts so as to identify the observable behaviors and measure them.
For example, how can we measure the motivational level of individuals? We know that highly motivated people engage in the following types of behaviors, among others: They are driven by work, and they keep persevering even if they initially fail to accomplish what they want. We can measure the abstract concept of motivation by developing an instrument that asks subjects to respond to several suitably worded questions tapping these behaviors.3 Most of the abstract concepts that are important to the study of organizational behavior have been operationally defined by scholars, who have developed “good” instruments for measuring them.4
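To illustrate how an operational definition turns an abstract concept into a number, consider the following Python sketch; the items, their wording, and the scoring scheme are invented for this example.

# Hypothetical 5-point responses (1 = strongly disagree, 5 = strongly agree)
# to items tapping motivated behaviors. The third item is reverse-worded,
# so its score is flipped before averaging.
responses = {
    "driven by work": 4,
    "perseveres after failure": 5,
    "gives up easily (reverse-worded)": 2,
}

def motivation_score(r):
    flipped = 6 - r["gives up easily (reverse-worded)"]   # 1<->5, 2<->4, 3<->3
    return (r["driven by work"] + r["perseveres after failure"] + flipped) / 3

print(f"motivation = {motivation_score(responses):.2f} on a 1-5 scale")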
“GOODNESS” OF MEASURES “Good” measurement instruments offer researchers the confidence that they do indeed measure what is desired to be measured and in a thorough and accurate manner. The goodness of instruments is established through their validity and reliability.
Validity is our confidence that the instrument used does indeed measure the concept it is supposed to measure. For instance, if a twenty-item instrument is developed to measure job satisfaction, we need to know that it does indeed measure job satisfaction, not employees’ general happiness.
Researchers usually establish various types of validity for the measures they use. Among them are content validity, criterion-related validity, predictive validity, construct validity, and convergent and discriminant validity. Journal articles often explain the types of validity established for the instrument used, especially if it is newly developed. In general, only measures that are both valid and reliable come to be used frequently by researchers.5
Reliability is the ability of an instrument to accurately and stably measure a concept over time and across situations. For example, it is not enough for an instrument to measure job satisfaction; it must do so consistently and accurately time and again in all settings. Most researchers discuss the reliability of their instruments in terms of stability and consistency. Test–retest reliability is one indicator of the stability of a measure over time. Cronbach’s alpha and split-half reliability are two indicators of the internal consistency of instruments. These are the terms you are likely to come across in published empirical research.
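Cronbach's alpha can be computed directly from its standard formula, which compares the summed item variances with the variance of respondents' total scores. Here is a sketch in plain Python with invented responses.

from statistics import variance

# Invented responses of five people to a three-item satisfaction scale;
# rows are respondents, columns are items.
data = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [4, 4, 5],
]

def cronbach_alpha(rows):
    k = len(rows[0])                              # number of items
    item_vars = [variance(col) for col in zip(*rows)]
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Values around .70 or higher are conventionally read as acceptable
# internal consistency.
print(f"alpha = {cronbach_alpha(data):.2f}")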
Authors of studies usually provide details of the measures they use, and at a minimum cite their source. Journal editors and reviewers try to ensure that studies to be published have used valid and reliable measures. Discriminating readers of journals reporting empirical studies pay attention to the “goodness” of the measures. If variables are not validly and reliably measured, how can we place any confidence in the results of the study?
MEASUREMENT TECHNIQUES Concepts are not measured solely through questionnaires or interviews. Sometimes, in order to tap certain ideas, feelings, and thoughts that are not easily verbalized, researchers use projective tests; word association, sentence completion, thematic apperception, and ink-blot tests are some familiar examples. In word association tests (e.g., work might be associated with excitement or drudgery) and sentence completion tests (“I like ______”), the respondent is believed to give answers based on deeply embedded feelings, attitudes, and orientations. Marketing researchers use these techniques to assess consumer preferences. Thematic apperception tests and ink-blot tests ask the subject to offer a story or interpret an ink blot. They can be interpreted only by trained psychologists.
Data Collection Methods
Data can be collected through questionnaires, interviews, computers, observation, unobtrusive methods, or a combination of these. The most frequently used method in organizational behavior research is questionnaires.
QUESTIONNAIRES A questionnaire is a written set of questions to which respondents record their answers, usually within a close range of alternatives given to them. Questionnaires can be mailed to respondents or administered personally.
Mail questionnaires are commonly used because of the large number of people who can be reached economically even when they are geographically dispersed. As a rule, however, they do not elicit a good response rate, even when stamped, self-addressed envelopes are enclosed for their return. (Researchers sometimes even include, as a small token of their appreciation, a $1 bill.) A 30 percent response rate for mail questionnaires is considered good. Mail responses generally fall far short of even this low percentage. Because of the low response rate, certain types of nonresponse biases can creep into research. For example, we cannot know if those who responded to the survey differ from those who did not. Thus, we cannot be sure that the data are representative of the population we are trying to study.
Personally administered questionnaires are questionnaires given to groups of subjects by the researcher, who collects the responses immediately after completion. This method ensures practically a 100 percent response rate. However, many organizations are reluctant to spare company time for the research effort unless the study is of vital importance to them.
INTERVIEWS Interviews have the potential to elicit a good deal of information. In structured interviews, specific questions are asked of all respondents, and the responses are noted down by the interviewer. In unstructured interviews, there is no predetermined format; questions are framed according to responses given to the previous question. Structured interviews are conducted when the interviewer knows precisely what sort of information is needed. They are efficient in terms of the amount of time involved in both obtaining the required information and categorizing the data obtained. Unstructured interviews are conducted when the researcher wants to explore a problem or become more knowledgeable about particular situations.
Face-to-face interviews offer the researcher the advantage of being able to observe the interviewees as they respond to questions. Nonverbal messages transmitted by the interviewees can be observed and explored further. Telephone interviews, on the other hand, help the researcher reach a vast number of geographically dispersed individuals. In both face-to-face and telephone interviews, certain types of biases can enter. The way a question is worded and asked, the inflection of a voice, the frame of mind of the interviewee at the time the interview is conducted, and other factors can all contribute to biases in the data.
COMPUTERS Computer-assisted interviewing and computer-aided surveys will become more popular in the future as more and more people become comfortable using their computers at home and responding to questions contained on diskettes or displayed on web sites. Interview and questionnaire methods of data collection are greatly facilitated through computers. However, the computer literacy of respondents is a prerequisite for using computer-assisted data collection techniques effectively.
OBSERVATIONAL SURVEYS Observational surveys are another data collection method whereby information is obtained without asking questions of subjects. In this method, the researcher observes firsthand what is going on and how people are behaving in the work setting. The data are collected by either nonparticipant observers (researchers who observe behavior as outsiders) or participant observers (integral members of the work team). An example of a nonparticipant study is one done by Henry Mintzberg, who observed the nature of managerial work over a period of time.
Like interviews, observational surveys can be either structured or unstructured. In a structured observational survey, the observer identifies the factors that are to be observed. For example, the observer might want to note the number of times a manager gives instructions to staff members and how much time this takes. In an unstructured observational survey, the observer might simply want to know how the manager spends the day at the workplace and might jot down all the activities the manager engages in and the time periods and frequencies involved.
Observational studies help prevent respondent bias, since information is not given by the subjects directly. Any bias that might creep in through the self-consciousness of subjects usually lasts only a few days. Then, subjects begin to function and behave normally, oblivious to the presence of the observer.
However, observer fatigue and observer bias cannot be totally avoided in observational studies. Moreover, when several observers are involved in a large research project, interobserver reliability could become an issue for concern; different observers might interpret and categorize the same behavior differently. This problem can be minimized by training the observers before the start of the project.
UNOBTRUSIVE METHODS Data collection by unobtrusive methods offers valid and reliable information; bias is minimized because the source of the data is tangible elements rather than people. For example, the usage of library books can be determined by the wear and tear on them, a source of information more reliable than surveys of users of the library. The number of cans or bottles of beer in the recycling bins outside houses on garbage collection days would offer a good idea of the beer consumption patterns in households. The personnel records of a company would indicate the absenteeism patterns of employees. Unobtrusive methods thus have the potential to offer the most reliable and unbiased data. They are, however, time consuming and labor intensive; also, the researcher must obtain the company’s permission to gain access to such data.
MULTIPLE METHODS Each data collection method has advantages and disadvantages. The best approach is to use multiple methods of collecting data, since this offers researchers a chance to cross-check the information obtained through the various methods. This approach, however, is expensive and hence is used infrequently in organizational behavior research.
When you read journal articles, you should assess the data collection methods used by the researchers to determine if they are adequate. Authors of published studies often discuss the limitations of their research and the biases they have attempted to minimize. The biases could relate to the types of measures used, the data collection methods adopted, the sampling design, and other research process and design issues. Sophisticated managers pay attention to all research design details in order to evaluate the quality of the research.
Sampling Design
Sampling is the process of drawing a limited number of subjects from a larger population, or universe. Since researchers cannot possibly survey the entire universe of people they are interested in studying, they usually draw a sample of subjects from the population for investigation. The sampling design used makes a difference in the generalizability of the findings and determines the usefulness and scientific nature of the study. Sample size is another important issue. There are two broad categories of sampling—probability sampling and nonprobability sampling.
PROBABILITY SAMPLING Probability sampling is sampling that ensures that the elements in the population have some known chance, or probability, of being selected for the sample. Because of this, probability sampling designs offer more generalizability than nonprobability designs. There are many probability designs. The simple random sampling design, wherein every element in the population has a known and equal chance of being chosen, lends itself to the greatest generalizability. However, other probability designs can be more efficient and offer good generalizability as well. Among them are systematic sampling, stratified random sampling, cluster sampling, and area sampling.
In systematic sampling, every nth element in the population is chosen as a subject. In stratified random sampling, the population is first divided into meaningful strata (for example, blue-collar and white-collar employees); a sample is then drawn from each stratum using either simple random sampling or systematic sampling. Cluster sampling is the random selection of chunks (clusters or groups) of elements from the population; every chunk has an equal chance of being selected, and all the members in each chosen chunk participate in the research. For example, in an attitude survey, three departments in an organization can be randomly chosen; all the members of the three departments are the subjects. Area sampling is cluster sampling confined to particular geographical areas, such as counties or city blocks. Marketing researchers use cluster and area sampling extensively for surveys.
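A brief Python sketch of three of these designs (simple random, systematic, and stratified random sampling) using only the standard library; the employee roster and the strata are invented.

import random

random.seed(7)
population = [f"emp{i:03d}" for i in range(1, 101)]   # a roster of 100 employees

# Simple random sampling: every element has an equal chance of selection.
simple = random.sample(population, 10)

# Systematic sampling: every nth element, starting from a random point.
n = len(population) // 10
systematic = population[random.randrange(n)::n]

# Stratified random sampling: random samples drawn within meaningful strata.
strata = {"blue-collar": population[:60], "white-collar": population[60:]}
stratified = {name: random.sample(group, 5) for name, group in strata.items()}

print(simple, systematic, stratified, sep="\n")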
NONPROBABILITY SAMPLING For some research projects, probability sampling may be impossible or inappropriate. In such cases, nonprobability sampling may be used, even if generalizability is impaired or lost. In nonprobability sampling, the subjects do not have a known probability of being chosen for the study. For instance, the sample of subjects in a study of sexual harassment must come from those who have experienced such harassment; there is nothing to be gained by researching all the employees of the organization. When the choice of subjects for a study involves a limited number of people who are in a position to provide the required information, a probability sampling design is infeasible. The results of such a study are not generalizable; nevertheless, this type of sampling is the best way to learn about certain problems, such as sexual harassment.
Nonprobability sampling includes convenience sampling, judgment sampling, and quota sampling. In convenience sampling, information is collected from whoever is conveniently available. In judgment sampling, subjects who are in the best position to provide the required information are chosen. In quota sampling, people from different groups—some of which are underrepresented—are sampled for comparison purposes. One example might be a study of middle-class African-Americans and whites.
As noted earlier, nonprobability sampling does not lend itself to generalizability. In reading research articles, you should determine the type of sampling design being used and how much generalizability the author claims for the research.
SAMPLE SIZE Another critical issue in sampling is sample size. Too small or too large a sample could distort the results of the research. Tables providing ideal sample sizes for desired levels of precision and confidence are available to researchers. In examining any business report or journal article, you should note the sampling design and the sample size used by the researcher to assess the generalizability of the findings.
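The sample-size tables mentioned above rest on a standard formula: to estimate a mean at 95 percent confidence within a margin of error E, roughly n = (1.96 × sd / E)² subjects are needed. A small Python sketch, with an assumed standard deviation, follows.

import math

def sample_size(sd, margin, z=1.96):
    # n such that the half-width of the 95% confidence interval is at
    # most `margin`, using the normal approximation.
    return math.ceil((z * sd / margin) ** 2)

# Assumed values, purely for illustration: a 7-point scale with an
# estimated standard deviation of 1.2, and a desired precision of
# plus or minus 0.25 scale points.
print(sample_size(sd=1.2, margin=0.25))   # prints 89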
Data Analysis Procedures
Beyond good measures, appropriate data collection methods, and an acceptable sampling design, a good research project should also have suitable data analysis procedures. Some data cannot be subjected to sophisticated statistical tests. Two examples are data collected on a nominal scale, which divides subjects into mutually exclusive groups (such as men and women, or the poor and the rich), and data collected on an ordinal scale, which rank-orders the subjects and indicates a preference (X is better than Y). Various simple ways are available to analyze such data, which are qualitative or nonparametric in nature. For instance, if we have categorized under distinct headings the verbal responses of organizational members to an open-ended question on how they perceive their work environment, a frequency count of the responses in each category would be adequate to describe how the work environment is perceived. Likewise, to detect whether the gender of the worker (male versus female) is independent of members’ commitment to the organization (less committed versus more committed), a simple χ2 (chi-square) test would suffice.
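For instance, the gender-and-commitment question above could be tested as shown below; the counts are invented, and the scipy library is assumed to be available.

from scipy.stats import chi2_contingency

# Hypothetical counts of employees cross-classified by gender and
# commitment to the organization.
#                less committed   more committed
table = [[30, 20],    # male
         [25, 25]]    # female

chi2, p, dof, expected = chi2_contingency(table)
# If p exceeds .05, gender and commitment appear to be independent.
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")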
Sophisticated statistical tests are possible when data have been gathered on interval or ratio scales. Data collected on interval scales—through individuals’ responses to questions on equal-appearing multipoint scales—allow for the computation of the arithmetic mean and standard deviation. Data collected on ratio scales also allow us to compute proportions and ratios. For example, an individual who weighs 250 pounds is twice as heavy as one who weighs 125 pounds. Pearson correlations can be calculated, and multiple regression and many multivariate analyses can be made with data obtained on interval and ratio scales. These sorts of analyses cannot be made with data obtained on nominal and ordinal scales. Illustrations of the four scales appear in Figure B.4.
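As an illustration of the kind of analysis that interval- and ratio-scaled data permit, here is a small ordinary least squares regression sketch using numpy. The variables echo the theory example given earlier in this appendix, and all numbers are invented.

import numpy as np

# Invented interval-scaled data: productivity predicted from employees'
# self-confidence and attitude-toward-work-environment scores.
confidence = np.array([3.0, 4.5, 5.0, 2.5, 4.0, 3.5])
attitude   = np.array([4.0, 5.0, 4.5, 3.0, 4.5, 3.5])
output     = np.array([52.0, 66.0, 70.0, 45.0, 60.0, 50.0])

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones_like(confidence), confidence, attitude])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)
print(f"intercept = {coef[0]:.1f}, "
      f"b(confidence) = {coef[1]:.1f}, b(attitude) = {coef[2]:.1f}")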
One decision that needs to be made before collecting the data is: What kinds of analyses are needed to find answers to the research question? This decision will determine which scales should be used in data collection. Sometimes researchers are tempted to apply more sophisticated statistical analyses to data that do not lend themselves to such analyses (this includes sample sizes below thirty). Using inappropriate methods can negatively affect the interpretation of the results and can compromise the problem solution.
Biases in Interpretation of Results
Thus far, we have examined the biases that would result from poor research process and design decisions. Another source of bias is in the interpretation of results. Objectivity plays a large part in the validity of interpretations from the results of data analysis. Objectivity may be difficult, however, if the results of the study do not substantiate the theories painstakingly developed by the researcher.
When data analysis does not substantiate one or more of the hypotheses generated, the researcher may be tempted to downplay the results or try to explain them away. For example, a researcher may say that the results were actually in the expected direction even though they were not statistically significant. If a hypothesis has not passed the appropriate statistical test, the hypothesis is just not substantiated, regardless of whether the results were in the theorized direction. When authors try to explain their results, you have to decide for yourself whether the explanations offered are valid.
It is seldom possible to do completely scientific research in the field of organizational behavior. First, adherence to good research design principles may not always be possible, since certain choices (such as obtaining the most representative sample for better generalizability or utilizing the best data collection methods) may be beyond the researcher’s control. Second, attitudes and feelings cannot be measured accurately. Hence there are likely to be several types of biases in research in this field. However, by paying careful attention to the research process and rigorously making good research design choices, we are able to minimize the biases and enhance the objectivity, testability, replicability, precision and confidence, and generalizability of our research.
Bias can enter at every stage of the process, from problem definition to problem solution. Errors can creep into experimental designs by way of poor or inadequate manipulations and controls. They can enter into measurement, data collection, sampling, data analysis, interpretation of results, and the drawing of conclusions therefrom.
Unless managers are knowledgeable about some of the methodological flaws that can adversely affect research results, they may inappropriately apply the conclusions drawn in published research to their own settings. Having been exposed to the rudiments of scientific research, you can critically examine and evaluate all published works before you assess their usefulness for your organization. For instance, you would not consider applying the results of good research done in a service organization to a manufacturing firm. Good research results in the hands of knowledgeable managers are highly useful tools. That is where research knowledge becomes invaluable. By grasping the essentials of good research, you will become a discriminating consumer of business reports and published articles and can become an effective manager. Research knowledge can often make the difference between managerial excellence and mediocrity.
References
1. Two sources for further reference on experimental design are D. T. Campbell and J. C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally, 1966); and T. D. Cook and D. T. Campbell, Quasi-Experimentation: Design and Analysis Issues for Field Settings (Boston: Houghton Mifflin, 1979).
2. M. L. Lombardo, M. McCall, and D. L. DeVries, Looking Glass (Glenview, Ill.: Scott, Foresman, 1983).
3. Elaboration of how such measures are developed is beyond the scope of this appendix but can be found in U. Sekaran, Research Methods for Business: A Skill Building Approach, 2nd ed. (New York: Wiley, 1992).
4. Several measures are available in Psychological Measurement Yearbooks; J. L. Price, Handbook of Organizational Measurement (Lexington, Mass.: D. C. Heath, 1972); and Michigan Organizational Assessment Packages (Ann Arbor, Mich.: Institute of Survey Research).
5. One such instrument is the Job Descriptive Index, which is used to measure job satisfaction. It was developed by P. C. Smith, L. Kendall, and C. Hulin. See their book The Measurement of Satisfaction in Work and Retirement (Chicago: Rand McNally, 1969), pp. 79–84.
FIGURE B.1
Steps in the Hypothetico-Deductive Approach to Research
FIGURE B.2
Fit between Goal of Study and Study Characteristics
FIGURE B.3
Research Design Decisions
FIGURE B.4
Illustrations of Four Data Analysis Scales