MB0034 set 1 & 2



Master of Business Administration (MBA) Semester 3

Name: Girish R. Bhavsar

Roll No.:

Subject: Marketing Management

Subject code: MB0034

Research Methodology

    ASSIGNMENT SET-1

Q 1. Give examples of specific situations that would call for the following types of research, explaining why: a) Exploratory research b) Descriptive research c) Diagnostic research d) Evaluation research.

Ans.: Research may be classified broadly according to its major intent or its methods. According to intent, research may be classified as follows.

Basic (also known as fundamental or pure) research is driven by a scientist's curiosity or interest in a scientific question. The main motivation is to expand man's knowledge, not to create or invent something. There is no obvious commercial value to the discoveries that result from basic research. For example, basic science investigations probe for answers to questions such as:

    How did the universe begin?

    What are protons, neutrons, and electrons composed of?

    How do slime molds reproduce?

    What is the specific genetic code of the fruit fly?

Most scientists believe that a basic, fundamental understanding of all branches of science is needed in order for progress to take place. In other words, basic research lays down the foundation for the applied science that follows. If basic work is done first, then applied spin-offs often eventually result from this research. As Dr. George Smoot of LBNL says, "People cannot


foresee the future well enough to predict what's going to develop from basic research. If we only did applied research, we would still be making better spears."

Applied research is designed to solve practical problems of the modern world, rather than to acquire knowledge for knowledge's sake. One might say that the goal of the applied scientist is to improve the human condition. For example, applied researchers may investigate ways to:

    Improve agricultural crop production

    Treat or cure a specific disease

    Improve the energy efficiency of homes, offices, or modes of transportation

Some scientists feel that the time has come for a shift in emphasis away from purely basic research and toward applied science. This trend, they feel, is necessitated by the problems resulting from global overpopulation, pollution, and the overuse of the earth's natural resources.

Exploratory research provides insights into and comprehension of an issue or situation. It should draw definitive conclusions only with extreme caution. Exploratory research is a type of research conducted because a problem has not been clearly defined. Exploratory research helps determine the best research design, data collection method and selection of subjects. Given its

fundamental nature, exploratory research often concludes that a perceived problem does not actually exist. Exploratory research often relies on secondary research such as reviewing available literature and/or data, or qualitative approaches such as informal discussions with consumers, employees, management or competitors, and more formal approaches through in-depth interviews, focus groups, projective methods, case studies or pilot studies. The Internet allows for research methods that are more interactive in nature: e.g., RSS feeds efficiently supply researchers with up-to-date information; major search engine search results may be sent by email to researchers by services such as Google Alerts; comprehensive search results are tracked over lengthy periods of time by services such as Google Trends; and Web sites may be created to attract worldwide feedback on any subject. The results of exploratory research are not usually useful for decision-making by themselves, but they can provide significant insight into a given situation. Although the results of qualitative research can give some indication as to the "why", "how" and "when" something occurs, it cannot tell us "how often" or "how many." Exploratory research is not typically generalizable to the population at large. A defining

characteristic of causal research is the random assignment of participants to the conditions of the experiment, e.g., an experimental and a control condition. Such assignment results in the groups being comparable at the beginning of the experiment. Any difference between the groups at the end of the experiment is attributable to the manipulated variable. Observational research typically looks for differences among intact, pre-defined groups. A common example compares smokers and non-smokers with regard to health problems. Causal conclusions cannot be drawn from such a study because of other possible differences between the groups; e.g., smokers may drink more alcohol than non-smokers. Other unknown differences could exist as well. Hence, we may see a relation between smoking and health, but a conclusion that smoking is a cause would not be warranted in this situation.

Descriptive research, also known as statistical research, describes data and characteristics about the population or phenomenon being studied. Descriptive research answers the questions who, what, where, when and how.

Although the data description is factual, accurate and systematic, the research cannot describe what caused a situation. Thus, descriptive research cannot be used to establish a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity. The description is used for frequencies, averages and other statistical calculations. Often the best approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative research often has the aim of description, and researchers may follow up with examinations of why the observations exist and what the implications of the findings are. In short, descriptive research deals with everything that can be counted and studied, but there are always restrictions to that: the research must have an impact on the lives of the people around


you. For example, finding the most frequent disease that affects the children of a town: the reader of the research will know what to do to prevent that disease, and thus more people will live a healthy life.

Diagnostic study: This is similar to a descriptive study but with a different focus. It is directed towards discovering what is happening and what can be done about it. It aims at identifying the causes of a problem and the possible solutions for it. It may also be concerned with discovering and testing whether certain variables are associated. This type of research requires prior knowledge of the problem, its thorough formulation, clear-cut definition of the given population, adequate methods for collecting accurate information, precise measurement of variables, statistical analysis and tests of significance.

Evaluation studies: This is a type of applied research. It is made for assessing the effectiveness of social or economic programmes implemented, or for assessing the impact of development of the project area. It is thus directed to assess or appraise the quality and quantity of an activity and its performance, and to specify its attributes and the conditions required for its success. It is concerned with causal relationships and is more actively guided by hypotheses. It is also concerned with change over time.

Action research is a reflective process of progressive problem solving led by individuals working with others in teams or as part of a "community of practice" to improve the way they address issues and solve problems. Action research can also be undertaken by larger organizations or institutions, assisted or guided by professional researchers, with the aim of improving their

strategies, practices, and knowledge of the environments within which they practice. As designers and stakeholders, researchers work with others to propose a new course of action to help their community improve its work practices (Center for Collaborative Action Research). Kurt Lewin, then a professor at MIT, first coined the term action research in about 1944, and it appears in his 1946 paper Action Research and Minority Problems. In that paper, he described action research as a comparative research on the conditions and effects of various forms of social action, and research leading to social action, that uses a spiral of steps, each of which is composed of a circle of planning, action, and fact-finding about the result of the action. Action research is an interactive inquiry process that balances problem-solving actions implemented in a collaborative context with data-driven collaborative analysis or research to understand underlying causes, enabling future predictions about personal and organizational change (Reason & Bradbury, 2001). After six decades of action research development, many methodologies have evolved that adjust the balance to focus more on the actions taken or more

on the research that results from the reflective understanding of the actions. This tension exists between:

those that are more driven by the researcher's agenda and those more driven by participants;

those that are motivated primarily by instrumental goal attainment and those motivated primarily by the aim of personal, organizational, or societal transformation; and

1st-, 2nd- and 3rd-person research, that is, my research on my own action, aimed primarily at personal change; our research on our group (family/team), aimed primarily at improving the group; and scholarly research aimed primarily at theoretical generalization and/or large-scale change.

Action research challenges traditional social science by moving beyond reflective knowledge created by outside experts sampling variables to an active moment-to-moment theorizing, data collecting, and inquiring occurring in the midst of emergent structure. Knowledge is always gained through action and for action. From this starting point, to question the validity of social knowledge is to question not how to develop a reflective science about action, but how to develop genuinely well-informed action, that is, how to conduct an action science (Tolbert 2001).


Q 2. In the context of hypothesis testing, briefly explain the difference between a) Null and alternative hypothesis b) Type 1 and type 2 error c) Two tailed and one tailed test d) Parametric and non-parametric tests.

Ans.: Some basic concepts in the context of testing of hypotheses are explained below.

1) Null Hypothesis and Alternative Hypothesis: In the context of statistical analysis, we often talk about null and alternative hypotheses. If we are to compare the superiority of method A with that of method B and we proceed on the assumption that both methods are equally good, then this assumption is termed the null hypothesis. On the other hand, if we think that method A is superior, then it is known as the alternative hypothesis. These are symbolically represented as: null hypothesis = H0 and alternative hypothesis = Ha. Suppose we want to test the hypothesis that the population mean (μ) is equal to the hypothesized mean (μH0) = 100. Then we would say that the null hypothesis is that the population mean is equal to the hypothesized mean of 100, and symbolically we can express it as H0: μ = μH0 = 100. If our sample results do not support this null hypothesis, we should conclude that something else is true. What we conclude on rejecting the null hypothesis is known as the alternative hypothesis. If we accept H0, then we are rejecting Ha, and if we reject H0, then we are accepting Ha. For H0: μ = μH0 = 100, we may consider three possible alternative hypotheses as follows:

Ha: μ ≠ μH0 (to be read as: the population mean is not equal to 100, i.e., it may be more or less than 100)

Ha: μ > μH0 (to be read as: the population mean is greater than 100)

Ha: μ < μH0 (to be read as: the population mean is less than 100)

The null hypothesis and the alternative hypothesis are chosen before the sample is drawn (the researcher must avoid the error of deriving hypotheses from the data he collects and then testing the hypotheses on the same data). In the choice of null hypothesis, the following considerations are usually kept in view:

a. The alternative hypothesis is usually the one which is to be proved, and the null hypothesis is the one which is to be disproved. Thus a null hypothesis represents the hypothesis we are trying to reject, while the alternative hypothesis represents all other possibilities.

b. If the rejection of a certain hypothesis when it is actually true involves great risk, it is taken as the null hypothesis, because then the probability of rejecting it when it is true is α (the level of significance), which is chosen to be very small.

c. The null hypothesis should always be a specific hypothesis, i.e., it should not state an approximate value.

Generally, in hypothesis testing, we proceed on the basis of the null hypothesis, keeping the alternative hypothesis in view. Why so? The answer is that on the assumption that the null

hypothesis is true, one can assign probabilities to different possible sample results, but this cannot be done if we proceed with the alternative hypothesis. Hence the use of null hypotheses (at times also known as statistical hypotheses) is quite frequent.

2) The Level of Significance: This is a very important concept in the context of hypothesis testing. It is always some percentage (usually 5%), which should be chosen with great care, thought and reason. If we take the significance level at 5%, this implies that H0 will be rejected when the sampling result (i.e., observed evidence) has a less than 0.05 probability of occurring if H0 is true. In other words, the 5% level of significance means that the researcher is willing to take as much as a 5% risk of rejecting the null hypothesis when it (H0) happens to be true.
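As a minimal sketch of the H0 / Ha logic and the 5% significance level, consider the following Python snippet. It is illustrative only: the sample values are invented, and SciPy's one-sample t-test is simply an assumed tool for testing H0: μ = 100 against Ha: μ ≠ 100.

    import numpy as np
    from scipy import stats

    # Hypothetical sample; the hypothesized population mean is 100 (H0: mu = 100).
    sample = np.array([102, 98, 101, 97, 105, 99, 103, 100, 96, 104])
    alpha = 0.05  # 5% level of significance

    # Two-sided one-sample t-test: Ha is mu != 100.
    t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

    if p_value < alpha:
        print(f"p = {p_value:.3f} < {alpha}: reject H0 in favour of Ha")
    else:
        print(f"p = {p_value:.3f} >= {alpha}: do not reject H0")

Rejecting H0 here means accepting Ha (the mean is not 100); failing to reject simply means the sample gave no strong evidence against H0.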


Thus the significance level is the maximum value of the probability of rejecting H0 when it is true, and it is usually determined in advance, before testing the hypothesis.

3) Decision Rule or Test of Hypotheses: Given a hypothesis H0 and an alternative hypothesis Ha, we make a rule, known as a decision rule, according to which we accept H0 (i.e., reject Ha) or reject H0 (i.e., accept Ha). For instance, if H0 is that a certain lot is good (there are very few defective items in it), against Ha, that the lot is not good (there are many defective items in it), then we must decide the number of items to be tested and the criterion for accepting or rejecting the hypothesis. We might test 10 items in the lot and plan our decision saying that if there are none or only 1 defective item among the 10, we will accept H0; otherwise we will reject H0 (or accept Ha). This sort of basis is known as a decision rule.

4) Type I & II Errors: In the context of testing of hypotheses, there are basically two types of errors that we can make. We may reject H0 when H0 is true, and we may accept H0 when it is not true. The former is known as a Type I error and the latter as a Type II error. In other words, a Type I error means rejection of a hypothesis which should have been accepted, and a Type II error means acceptance of a hypothesis which should have been rejected. A Type I error is denoted by α (alpha), also called the level of significance of the test; a Type II error is denoted by β (beta).

Decision:

                 Accept H0                  Reject H0
H0 (true)        Correct decision           Type I error (α error)
H0 (false)       Type II error (β error)    Correct decision

The probability of a Type I error is usually determined in advance and is understood as the level of significance of testing the hypothesis. If the Type I error is fixed at 5%, it means there are about 5 chances in 100 that we will reject H0 when H0 is true. We can control the Type I error simply by fixing it at a lower level. For instance, if we fix it at 1%, we will say that the maximum probability of committing a Type I error is only 0.01. But with a fixed sample size n, when we try to reduce the Type I error, the probability of committing a Type II error increases. Both types of errors cannot be reduced simultaneously, since there is a trade-off. In business situations, decision makers decide the appropriate level of Type I error by examining the costs or penalties attached to both types of errors. If a Type I error involves the time and trouble of reworking a batch of chemicals that should have been accepted, whereas a Type II error means taking a chance that an entire group of users of this chemical compound will be poisoned, then in such a situation one should prefer a Type I error to a Type II error; as a result, one must set a very high level for the Type I error in one's testing technique for a given hypothesis. Hence, in testing of hypotheses, one must make all possible efforts to strike an adequate balance between Type I and Type II errors.
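This trade-off can be sketched numerically. The snippet below is illustrative only; the sample size, σ, and the assumed "true" mean are invented, and the normal approximation is used simply to show that, for a fixed n, lowering α raises β.

    from scipy import stats

    # Assumed setting: testing H0: mu = 100 against Ha: mu > 100,
    # with known sigma = 10, sample size n = 25, and a true mean of 103.
    n, sigma, mu0, mu_true = 25, 10.0, 100.0, 103.0
    se = sigma / n ** 0.5

    for alpha in (0.10, 0.05, 0.01):
        # Critical sample mean above which H0 is rejected.
        crit = mu0 + stats.norm.ppf(1 - alpha) * se
        # Type II error: probability of NOT rejecting H0 when mu_true holds.
        beta = stats.norm.cdf(crit, loc=mu_true, scale=se)
        print(f"alpha = {alpha:.2f} -> beta = {beta:.2f}")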

5) Two-Tailed Test & One-Tailed Test: In the context of hypothesis testing, these two terms are quite important and must be clearly understood. A two-tailed test rejects the null hypothesis if, say, the sample mean is significantly higher or lower than the hypothesized value of the population mean. Such a test is appropriate when we have H0: μ = μH0 and Ha: μ ≠ μH0, which may mean μ > μH0 or μ < μH0.


and precise estimates. They are said to have more statistical power. However, if those assumptions are incorrect, parametric methods can be very misleading. For that reason they are often not considered robust. On the other hand, parametric formulae are often simpler to write down and faster to compute. In some, but definitely not all, cases their simplicity makes up for their non-robustness, especially if care is taken to examine diagnostic statistics. Because parametric statistics require a probability distribution, they are not distribution-free. Non-parametric models differ from parametric models in that the model structure is not specified a priori but is instead determined from data. The term non-parametric is not meant to imply that such models completely lack parameters, but that the number and nature of the parameters are flexible and not fixed in advance. Kernel density estimation provides better estimates of the density than histograms. Non-parametric regression and semiparametric regression methods have been developed based on kernels, splines, and wavelets. Data Envelopment Analysis provides efficiency coefficients similar to those obtained by Multivariate Analysis without any distributional assumption.
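As a rough illustration of the parametric versus non-parametric distinction, the sketch below runs both kinds of test on the same two groups. The data are hypothetical; SciPy's two-sample t-test (which assumes approximately normal populations) and the rank-based Mann-Whitney U test stand in for the two families of methods.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical samples from two groups; the second is shifted upward.
    group_a = rng.normal(loc=50, scale=5, size=30)
    group_b = rng.normal(loc=53, scale=5, size=30)

    # Parametric: two-sample t-test (assumes roughly normal populations).
    t_stat, p_t = stats.ttest_ind(group_a, group_b)

    # Non-parametric: Mann-Whitney U test (rank-based, distribution-free).
    u_stat, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

    print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")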

Q 3. Explain the difference between a causal relationship and correlation, with an example of each. What are the possible reasons for a correlation between two variables?

Ans.: Correlation: Correlation, in this context, is knowing what the consumer wants and providing it. Marketing research looks at trends in sales and studies all of the variables, i.e. price, color, availability and styles, and the best way to give the customer what he or she wants. If you can give customers what they want, they will buy, and they will let friends and family know where they got it. Making them happy makes the money.

Causal relationship: Relationship Marketing was first defined as a form of marketing developed from direct response marketing campaigns, which emphasizes customer retention and satisfaction rather than a dominant focus on sales transactions.

As a practice, Relationship Marketing differs from other forms of marketing in that it recognizes the long-term value of customer relationships and extends communication beyond intrusive advertising and sales promotional messages. With the growth of the internet and mobile platforms, Relationship Marketing has continued to evolve and move forward as technology opens more collaborative and social communication channels. This includes tools for managing relationships with customers that go beyond simple demographic and customer service data. Relationship Marketing extends to include Inbound Marketing efforts (a combination of search optimization and strategic content), PR, social media and application development. Just like Customer Relationship Management (CRM), Relationship Marketing is a broadly recognized, widely implemented strategy for managing and nurturing a company's interactions with clients and sales prospects. It also involves using technology to organize and synchronize business processes (principally sales and marketing activities) and, most importantly, automate those marketing and communication activities on concrete marketing sequences that could run on autopilot (also known as marketing sequences). The overall goals are to find, attract, and win new clients, nurture and retain those the company already has, entice former clients back into the fold, and reduce the costs of marketing and client service. [1] Once simply a label for a category of software tools, today it generally denotes a company-wide business strategy embracing all client-facing departments and even beyond. When an implementation is effective, people, processes, and technology work in synergy to increase profitability and reduce operational costs.

Reasons for a correlation between two variables: chance association (the relationship is due to chance) or causative association (one variable causes the other). The information given by a correlation coefficient is not enough to define the dependence structure between random


variables. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution. In the case of elliptic distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).

Distance correlation and Brownian covariance / Brownian correlation [8][9] were introduced to address the deficiency of Pearson's correlation, namely that it can be zero for dependent random variables; zero distance correlation and zero Brownian correlation imply independence. The correlation ratio is able to detect almost any functional dependency, and the entropy-based mutual information / total correlation is capable of detecting even more general dependencies. The latter are sometimes referred to as multi-moment correlation measures, in comparison to those that consider only second-moment (pairwise or quadratic) dependence. The polychoric correlation is another correlation applied to ordinal data that aims to estimate the correlation between theorised latent variables. One way to capture a more complete view of the dependence structure is to consider a copula between them.
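The deficiency mentioned above, that Pearson's correlation can be zero for dependent variables, is easy to demonstrate with a toy example (illustrative Python with invented data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.uniform(-3, 3, size=500)
    y = x ** 2  # y is completely determined by x, yet the relation is not linear

    r, p = stats.pearsonr(x, y)
    print(f"Pearson r = {r:.3f}")  # close to 0 despite perfect dependence

Because the relationship is symmetric rather than linear, the Pearson coefficient comes out near zero even though x and y are perfectly dependent; measures such as distance correlation or mutual information would flag the dependence.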

Q 4. Briefly explain any two factors that affect the choice of a sampling technique. What are the characteristics of a good sample?

Ans.: The difference between non-probability and probability sampling is that non-probability sampling does not involve random selection and probability sampling does. Does that mean that non-probability samples aren't representative of the population? Not necessarily. But it does mean that non-probability samples cannot depend upon the rationale of probability theory. At least with a probabilistic sample, we know the odds or probability that we have represented the population well, and we are able to estimate confidence intervals for the statistic. With non-probability samples, we may or may not represent the population well, and it will often be hard for us to know how well we've done so. In general, researchers prefer probabilistic or random sampling methods over non-probabilistic ones, and consider them to be more accurate and rigorous. However, in applied social research there may be circumstances where it is not feasible, practical or theoretically sensible to do random sampling. Here, we consider a wide range of non-probabilistic alternatives.
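As a small illustration of why probability samples matter for inference, the sketch below (the population, its parameters and the sample size are all invented) draws a simple random sample and computes a 95% confidence interval for the mean, which is exactly the kind of calculation that non-probability samples cannot justify:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    population = rng.normal(loc=60, scale=12, size=10_000)  # hypothetical population

    # Simple random sample of 100 units drawn without replacement.
    sample = rng.choice(population, size=100, replace=False)

    mean = sample.mean()
    sem = stats.sem(sample)  # standard error of the mean
    low, high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)
    print(f"sample mean = {mean:.1f}, 95% CI = ({low:.1f}, {high:.1f})")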

We can divide non-probability sampling methods into two broad types: accidental or purposive. Most sampling methods are purposive in nature because we usually approach the sampling problem with a specific plan in mind. The most important distinctions among these types of sampling methods are the ones between the different types of purposive sampling approaches.

    Accidental, Haphazard or Convenience Sampling

One of the most common methods of sampling goes under the various titles listed here. I would include in this category the traditional "man on the street" (of course, now it's probably the "person on the street") interviews conducted frequently by television news programs to get a quick (although non-representative) reading of public opinion. I would also argue that the typical use of college students in much psychological research is primarily a matter of convenience. (You don't really believe that psychologists use college students because they believe they're representative of the population at large, do you?) In clinical practice, we might use clients who are available to us as our sample. In many research contexts, we sample simply by asking for volunteers. Clearly, the problem with all of these types of samples is that we have no evidence that they are representative of the populations we're interested in generalizing to -- and in many cases we would clearly suspect that they are not.


    Purposive Sampling

In purposive sampling, we sample with a purpose in mind. We usually would have one or more specific predefined groups we are seeking. For instance, have you ever run into people in a mall or on the street who are carrying a clipboard and who are stopping various people and asking if they could interview them? Most likely they are conducting a purposive sample (and most likely they are engaged in market research). They might be looking for Caucasian females between 30 and 40 years old. They size up the people passing by and stop anyone who looks to be in that category to ask if they will participate. One of the first things they're likely to do is verify that the respondent does in fact meet the criteria for being in the sample. Purposive sampling can be very useful for situations where you need to reach a targeted sample quickly and where sampling for proportionality is not the primary concern. With a purposive sample, you are likely to get the opinions of your target population, but you are also likely to overweight subgroups in your population that are more readily accessible. All of the methods that follow can be considered subcategories of purposive sampling methods. We might sample for specific groups or types of people, as in modal instance, expert, or quota sampling. We might sample for diversity, as in heterogeneity sampling. Or we might capitalize on informal social networks to identify specific respondents who are hard to locate otherwise, as in snowball sampling. In all of these methods we know what we want -- we are sampling with a purpose.

    Modal Instance Sampling

In statistics, the mode is the most frequently occurring value in a distribution. In sampling, when we do a modal instance sample, we are sampling the most frequent case, or the "typical" case. In a lot of informal public opinion polls, for instance, they interview a "typical" voter. There are a number of problems with this sampling approach. First, how do we know what the "typical" or "modal" case is? We could say that the modal voter is a person who is of average age, educational level, and income in the population. But it's not clear that using the averages of these is the fairest (consider the skewed distribution of income, for instance). And how do you know that those three variables -- age, education, income -- are the only or even the most relevant ones for classifying the typical voter? What if religion or ethnicity is an important discriminator? Clearly, modal instance sampling is only sensible for informal sampling contexts.

Expert Sampling

Expert sampling involves the assembling of a sample of persons with known or demonstrable experience and expertise in some area. Often, we convene such a sample under the auspices of a "panel of experts." There are actually two reasons you might do expert sampling. First, because it would be the best way to elicit the views of persons who have specific expertise. In this case, expert sampling is essentially just a specific subcase of purposive sampling. But the other reason you might use expert sampling is to provide evidence for the validity of another sampling approach you've chosen. For instance, let's say you do modal instance sampling and are concerned that the criteria you used for defining the modal instance are subject to criticism. You might convene an expert panel consisting of persons with acknowledged experience and insight into that field or topic and ask them to examine your modal definitions and comment on their appropriateness and validity. The advantage of doing this is that you aren't out on your own trying to defend your decisions -- you have some acknowledged experts to back you. The disadvantage is that even the experts can be, and often are, wrong.

Quota Sampling

In quota sampling, you select people non-randomly according to some fixed quota. There are two types of quota sampling: proportional and non-proportional. In proportional quota sampling you want to represent the major characteristics of the population by sampling a proportional amount of each. For instance, if you know the population has 40% women and 60% men, and you want a total sample size of 100, you will continue sampling until you get those percentages and then stop. So if you've already got the 40 women for your sample but not the 60 men, you will continue to sample men, but even if legitimate women respondents come along, you will not sample them because you have already "met your quota." The problem here (as in much purposive sampling) is that you have to decide the specific characteristics on which you will base the quota. Will it be by gender, age, education, race, religion, etc.?


Non-proportional quota sampling is a bit less restrictive. In this method, you specify the minimum number of sampled units you want in each category. Here, you're not concerned with having numbers that match the proportions in the population. Instead, you simply want to have enough to assure that you will be able to talk about even small groups in the population. This method is the non-probabilistic analogue of stratified random sampling in that it is typically used to assure that smaller groups are adequately represented in your sample.
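The proportional variant described above can be sketched in a few lines of Python. This is illustrative only: the respondent stream, the 40/60 gender quota, and the field names are all invented. The rule is simply to accept walk-up respondents while their category's quota is still open and stop once every quota is filled.

    import random

    random.seed(0)
    # Hypothetical stream of walk-up respondents, roughly 50/50 by gender.
    stream = [{"id": i, "gender": random.choice(["F", "M"])} for i in range(1000)]

    quotas = {"F": 40, "M": 60}           # proportional quotas for a sample of 100
    sample, counts = [], {"F": 0, "M": 0}

    for person in stream:
        g = person["gender"]
        if counts[g] < quotas[g]:         # accept only while this quota is open
            sample.append(person)
            counts[g] += 1
        if len(sample) == sum(quotas.values()):
            break

    print(counts)  # {'F': 40, 'M': 60}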

Heterogeneity Sampling

We sample for heterogeneity when we want to include all opinions or views, and we aren't concerned about representing these views proportionately. Another term for this is sampling for diversity. In many brainstorming or nominal group processes (including concept mapping), we would use some form of heterogeneity sampling because our primary interest is in getting a broad spectrum of ideas, not identifying the "average" or "modal instance" ones. In effect, what we would like to be sampling is not people but ideas. We imagine that there is a universe of all possible ideas relevant to some topic and that we want to sample this population, not the population of people who have the ideas. Clearly, in order to get all of the ideas, and especially the "outlier" or unusual ones, we have to include a broad and diverse range of participants. Heterogeneity sampling is, in this sense, almost the opposite of modal instance sampling.

Snowball Sampling

In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study. You then ask them to recommend others whom they may know who also meet the criteria. Although this method would hardly lead to representative samples, there are times when it may be the best method available. Snowball sampling is especially useful when you are trying to reach populations that are inaccessible or hard to find. For instance, if you are studying the homeless, you are not likely to be able to find good lists of homeless people within a specific geographical area. However, if you go to that area and identify one or two, you may find that they know very well who the other homeless people in their vicinity are and how you can find them.

Characteristics of a good sample: The decision process is a complicated one. The researcher has to first identify the limiting factor or factors and must judiciously balance the conflicting factors. The various criteria governing the choice of the sampling technique are:

1. Purpose of the Survey: What does the researcher aim at? If he intends to generalize the findings based on the sample survey to the population, then an appropriate probability sampling method must be selected. The choice of a particular type of probability sampling depends on the geographical area of the survey and the size and the nature of the population under study.

2. Measurability: The application of statistical inference theory requires computation of the sampling error from the sample itself. Only probability samples allow such computation. Hence, where the research objective requires statistical inference, the sample should be drawn by applying the simple random sampling method or the stratified random sampling method, depending on whether the population is homogeneous or heterogeneous.

3. Degree of Precision: Should the results of the survey be very precise, or could even rough results serve the purpose? The desired level of precision is one of the criteria for sampling method selection. Where a high degree of precision of results is desired, probability sampling should be used. Where even crude results would serve the purpose (e.g., marketing surveys, readership surveys etc.), any convenient non-random sampling like quota sampling would be enough.

4. Information about Population: How much information is available about the population to be studied? Where no list of the population and no information about its nature are available, it is difficult to apply a probability sampling method. Then an exploratory study with non-probability sampling may be done to gain a better idea of the population. After gaining sufficient knowledge about the population through the exploratory study, an appropriate probability sampling design may be adopted.

5. The Nature of the Population: In terms of the variables to be studied, is the population homogeneous or heterogeneous? In the case of a homogeneous population, even simple random sampling will give a representative sample. If the population is heterogeneous, stratified random sampling is appropriate.

6. Geographical Area of the Study and the Size of the Population: If the area covered by a survey is very large and the size of the population is quite large, multi-stage cluster sampling


Primary data has to be gathered in cases where the available data is inappropriate, inadequate or obsolete. It includes socio-economic surveys, social-anthropological studies of rural communities and tribal communities, sociological studies of social problems and social institutions, marketing research, leadership studies, opinion polls, attitudinal surveys, radio listening and TV viewing surveys, knowledge-awareness-practice (KAP) studies, farm management studies, business management studies etc. There are various methods of primary data collection, including surveys, audits and panels, observation and experiments.

1 Survey Research

A survey is a fact-finding study. It is a method of research involving collection of data directly from a population or a sample at a particular time. A survey has certain characteristics:

1 It is always conducted in a natural setting. It is a field study.

2 It seeks responses directly from the respondents.
3 It can cover a very large population.
4 It may include an extensive study or an intensive study.
5 It covers a definite geographical area.

A survey involves the following steps:

1 Selection of a problem and its formulation
2 Preparation of the research design
3 Operationalization of concepts and construction of measuring indexes and scales
4 Sampling
5 Construction of tools for data collection
6 Field work and collection of data
7 Processing of data and tabulation
8 Analysis of data
9 Reporting

There are four basic survey methods, which include:
1 Personal interview
2 Telephone interview
3 Mail survey and
4 Fax survey

Personal Interview

Personal interviewing is one of the prominent methods of data collection. It may be defined as a two-way systematic conversation between an investigator and an informant, initiated for obtaining information relevant to a specific study. It involves not only conversation, but also learning from the respondent's gestures, facial expressions and pauses, and his environment. Interviewing may be used either as a main method or as a supplementary one in studies of persons. Interviewing is the only suitable method for gathering information from illiterate or less educated respondents. It is useful for collecting a wide range of data, from factual demographic data to highly personal and intimate information relating to a person's opinions, attitudes, values, beliefs, experiences and future intentions. Interviewing is appropriate when qualitative information is required, or probing is necessary to draw out the respondent fully. Where the area covered for the survey is compact, or when a sufficient number of qualified interviewers are available, personal interview is feasible. Interview is often superior to other data-gathering methods. People are usually more willing to talk than to write. Once rapport is established, even confidential information may be obtained. It permits probing into the context and reasons for answers to questions. Interview can add flesh to statistical information. It enables the investigator to grasp

the behavioral context of the data furnished by the respondents. It permits the investigator to seek clarifications and brings to the forefront those questions which, for some reason or other, the respondents do not want to answer. Interviewing as a method of data collection has certain characteristics. They are:

The participants (the interviewer and the respondent) are strangers; hence, the investigator has to get himself/herself introduced to the respondent in an appropriate manner.

The relationship between the participants is a transitory one. It has fixed beginning and termination points. The interview proper is a fleeting, momentary experience for them.


The interview is not a mere casual conversational exchange, but a conversation with a specific purpose, viz., obtaining information relevant to a study.

The interview is a mode of obtaining verbal answers to questions put verbally.

The interaction between the interviewer and the respondent need not necessarily be on a face-to-face basis, because the interview can also be conducted over the telephone.

Although the interview is usually a conversation between two persons, it need not be limited to a single respondent. It can also be conducted with a group of persons, such as family members, a group of children, or a group of customers, depending on the requirements of the study.

The interview is an interactive process. The interaction between the interviewer and the respondent depends upon how they perceive each other. The respondent reacts to the interviewer's appearance, behavior, gestures, facial expression and intonation, his perception of the thrust of the questions and his own personal needs. As far as possible, the interviewer should try to be close to the socio-economic level of the respondents.

The investigator records the information furnished by the respondent in the interview. This poses the problem of ensuring that recording does not interfere with the tempo of conversation.

Interviewing is not a standardized process like that of a chemical technician; it is rather a flexible, psychological process.

3 Telephone Interviewing

Telephone interviewing is a non-personal method of data collection. It may be used as a major method or as a supplementary method. It will be useful in the following situations:

When the universe is composed of those persons whose names are listed in telephone directories, e.g. business houses, business executives, doctors and other professionals.

When the study requires responses to five or six simple questions, e.g. a radio or television program survey.

When the survey must be conducted in a very short period of time, provided the units of study are listed in the telephone directory.

When the subject is interesting or important to respondents, e.g. a survey relating to trade conducted by a trade association or a chamber of commerce, or a survey relating to a profession conducted by the concerned professional association.

When the respondents are widely scattered and when there are many call-backs to make.

4 Group Interviews

A group interview may be defined as a method of collecting primary data in which a number of individuals with a common interest interact with each other. In a personal interview, the flow of information is multi-dimensional. The group may consist of about six to eight individuals with a common interest. The interviewer acts as the discussion leader. Free discussion is encouraged on some aspect of the subject under study. The discussion leader stimulates the group members to interact with each other. The desired information may be obtained through a self-administered questionnaire or interview, with the discussion serving as a guide to ensure consideration of the areas of concern. In particular, the interviewers look for evidence of common elements of attitudes, beliefs, intentions and opinions among individuals in the group. At the same time, he must be aware that a single comment by a member can provide important insight. Samples for group interviews can be obtained through schools, clubs and other organized groups.

5 Mail Survey

The mail survey is another method of collecting primary data. This method involves sending questionnaires to the respondents with a request to complete them and return them by post. This can be used in the case of educated respondents only. The mail questionnaires should be simple so that the respondents can easily understand the questions and answer them. They should preferably contain mostly closed-ended and multiple-choice questions, so that they can be completed within a few minutes. The distinctive feature of the mail survey is that the questionnaire is self-administered by the respondents themselves and the responses are recorded by them and not by the investigator, as in the case of the personal interview method. It does not involve face-to-face conversation between the investigator and the respondent. Communication is carried out only in writing, and this requires more cooperation from the respondents than verbal communication. The researcher should prepare a mailing list of the selected respondents by collecting the addresses from the telephone directory of the association or organization to which they belong. The following procedures should be followed: a covering letter should accompany a copy of the questionnaire. It must explain to the respondent the


purpose of the study and the importance of his cooperation to the success of the project. Anonymity must be assured. The sponsor's identity may be revealed; however, when such information may bias the result, it is not desirable to reveal it, and in that case a disguised organization name may be used. A self-addressed, stamped envelope should be enclosed with the covering letter.

After a few days from the date of mailing the questionnaires to the respondents, the researcher can expect the return of completed ones. The progress of returns may be watched and, at the appropriate stage, follow-up efforts can be made. The response rate in mail surveys is generally very low in developing countries like India. Certain techniques have to be adopted to increase the response rate. They are:

1. Quality printing: The questionnaire may be neatly printed on quality, light-colored paper, so as to attract the attention of the respondent.

2. Covering letter: The covering letter should be couched in a pleasant style, so as to attract and hold the interest of the respondent. It must anticipate objections and answer them briefly. It is desirable to address the respondent by name.

3. Advance information: Advance information can be provided to potential respondents by a telephone call, an advance notice in the newsletter of the concerned organization, or a letter. Such preliminary contact with potential respondents is more successful than follow-up efforts.

4. Incentives: Money, stamps for collection and other incentives are also used to induce respondents to complete and return the mail questionnaire.

5. Follow-up contacts: In the case of respondents belonging to an organization, they may be approached through someone in that organization known to the researcher.

6. Larger sample size: A larger sample may be drawn than the estimated sample size. For example, if the required sample size is 1000, a sample of 1500 may be drawn. This may help the researcher to secure an effective sample size closer to the required size.

Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading newspaper that is keen to increase its circulation in Bangalore City, in order to ascertain reader habits and interests. Develop a title for the study; define the research problem and the objectives or questions to be answered by the study.

Ans.: Title: Newspaper reading choices

Research problem: A research problem is the situation that causes the researcher to feel apprehensive, confused and ill at ease. It is the demarcation of a problem area within a certain context involving the WHO or WHAT, the WHERE, the WHEN and the WHY of the problem situation. There are many problem situations that may give rise to research. Three sources usually contribute to problem identification. Own experience or the experience of others may be a source of problem supply. A second source could be scientific literature. You may read about certain findings and notice that a certain field was not covered. This could lead to a research problem. Theories could be a third source. Shortcomings in theories could be researched. Research can thus be aimed at clarifying or substantiating an existing theory, at clarifying contradictory findings, at correcting a faulty methodology, at correcting the inadequate or unsuitable use of statistical techniques, at reconciling conflicting opinions, or at solving existing practical problems.

Types of questions to be asked: For more than 35 years, the news about newspapers and young readers has been mostly bad for the newspaper industry. Long before any competition from cable television or Nintendo, American newspaper publishers were worrying about declining readership among the young. As early as 1960, at least 20 years prior to Music Television (MTV) or the Internet, media research scholars [1] began to focus their studies on young adult readers' decreasing interest in newspaper content. The concern over a declining youth market preceded


and perhaps foreshadowed today's fretting over market penetration. Even where circulation has grown or stayed stable, there is rising concern over penetration, defined as the percentage of occupied households in a geographic market that are served by a newspaper. [2] Simply put, population growth is occurring more rapidly than newspaper readership in most communities. This study looks at trends in newspaper readership among the 18-to-34 age group and examines some of the choices young adults make when reading newspapers. One of the underlying concerns behind the decline in youth newspaper reading is the question of how young people view the newspaper. A number of studies explored how young readers evaluate and use newspaper content. Comparing reader content preferences over a 10-year period, Gerald Stone and Timothy Boudreau found differences between readers ages 18-34 and those 35-plus. [16] Younger readers showed increased interest in national news, weather, sports, and classified advertisements over the decade between 1984 and 1994, while older readers ranked weather, editorials, and food advertisements higher. Interest in international news and letters to the editor was less among younger readers, while older readers showed less interest in reports of births, obituaries, and marriages. David Atkin explored the influence of telecommunication technology on newspaper readership among students in undergraduate media courses. [17] He reported that computer-related technologies, including electronic mail and computer networks, were unrelated to newspaper readership. The study found that newspaper subscribers preferred print formats over electronic. In a study of younger, school-age children, Brian Brooks and James Kropp found that electronic newspapers could persuade children to become news consumers, but that young readers would choose an electronic newspaper over a printed one. [18] In an exploration of leisure reading among college students, Leo Jeffres and Atkin assessed dimensions of interest in newspapers, magazines, and books, [19] exploring the influence of media use, non-media leisure, and academic major on newspaper content preferences. The study discovered that overall newspaper readership was positively related to students' focus on entertainment, job/travel information, and public affairs. However, the students' preference for reading as a leisure-time activity was related only to a public affairs focus. Content preferences for newspapers and other print media were related. The researchers found no significant differences in readership among various academic majors, or by gender, though there was a slight correlation between age and the public affairs readership index, with older readers more interested in news about public affairs.

Methodology: Sample: Participants in this study (N=267) were students enrolled in 100- and 200-level English courses at a midwestern public university. Courses that comprise the framework for this sample were selected because they could fulfill basic studies requirements for all majors. A basic studies course is one that is listed within the core curriculum required for all students. The researcher obtained permission from seven professors to distribute questionnaires in the eight classes during regularly scheduled class periods. The students' participation was voluntary; two students declined. The goal of this sampling procedure was to reach a cross-section of students representing various fields of study. In all, 53 majors were represented. Of the 267 students who participated in the study, 65 (24.3 percent) were male and 177 (66.3 percent) were female. A total of 25 participants chose not to divulge their genders. Ages ranged from 17 to 56, with a mean age of 23.6 years. This mean does not include the 32 respondents who declined to give their ages. A total of 157 participants (58.8 percent) said they were of the Caucasian race, 59 (22.1 percent) African American, 10 (3.8 percent) Asian, five (1.9 percent) African/Native American, two (0.8 percent) Hispanic, two (0.8 percent) Native American, and one (0.4 percent) Arabic. Most (214) of the students were enrolled full time, whereas a few (28) were part-time students. The class rank breakdown was: freshmen, 45 (16.9 percent); sophomores, 15 (5.6 percent); juniors, 33 (12.4 percent); seniors, 133 (49.8 percent); and graduate students, 16 (6 percent).

Procedure: After two pre-tests and revisions, questionnaires were distributed and collected by the investigator. In each of the eight classes, the researcher introduced herself to the students as a journalism professor who was conducting a study on students' use of newspapers and other media. Each questionnaire included a cover letter with the researcher's name, address, and


phone number. The researcher provided pencils and was available to answer questions if anyone needed further assistance. The average time spent on the questionnaires was 20 minutes, with some individual students taking as long as an hour. Approximately six students asked to take the questionnaires home to finish. They returned the questionnaires to the researcher's mailbox within a couple of days.

ASSIGNMENT SET-2

Q 1. Discuss the relative advantages and disadvantages of the different methods of distributing questionnaires to the respondents of a study.

Ans.: There are some alternative methods of distributing questionnaires to the respondents. They are:

1) Personal delivery
2) Attaching the questionnaire to a product
3) Advertising the questionnaire in a newspaper or magazine, and
4) Newsstand inserts

Personal delivery: The researcher or his assistant may deliver the questionnaires to the potential respondents, with a request to complete them at their convenience. After a day or two, the completed questionnaires can be collected from them. Often referred to as the self-administered questionnaire method, it combines the advantages of the personal interview and the mail survey. Alternatively, the questionnaires may be delivered in person and the respondents may return the completed questionnaires through mail.

Attaching the questionnaire to a product: A firm test-marketing a product may attach a questionnaire to the product and request the buyer to complete it and mail it back to the firm. A gift or a discount coupon usually rewards the respondent.

Advertising the questionnaire: The questionnaire, with instructions for completion, may be advertised on a page of a magazine or in a section of a newspaper. The potential respondent completes it, tears it out and mails it to the advertiser. For example, the committee on Banks' Customer Services used this method for collecting information from the customers of commercial banks in India. This method may be useful for large-scale studies on topics of common interest.

Newsstand inserts: This method involves inserting the covering letter, questionnaire and self-addressed reply-paid envelope into a random sample of newsstand copies of a newspaper or magazine.

Advantages and disadvantages: The advantages of the questionnaire method are:

This method facilitates collection of more accurate data for longitudinal studies than any other method, because the event or action is reported soon after its occurrence.

This method makes it possible to have "before and after" designs for field-based studies. For example, the effect of public relations or advertising campaigns or welfare measures can be measured by collecting data before, during and after the campaign.

The panel method offers a good way of studying trends in events, behavior or attitudes. For example, a panel enables a market researcher to study how brand preferences change from month to month; it enables an economics researcher to study how employment, income and expenditure of agricultural laborers change from month to month; and a political scientist can study the shifts in inclinations of voters and the causative influential factors during an election.


It is also possible to find out how the composition of the various economic and social strata of society changes over time, and so on.

A panel study also provides evidence on the causal relationship between variables. For example, a cross-sectional study of employees may show an association between their attitude to their jobs and their positions in the organization, but it does not indicate which comes first - a favorable attitude or promotion. A panel study can provide data for finding an answer to this question.

It facilitates depth interviewing, because panel members become well acquainted with the field workers and will be willing to allow probing interviews.

The major limitations or problems of the questionnaire (panel) method are:

This method is very expensive. The selection of panel members, the payment of premiums, periodic training of investigators and supervisors, and the costs involved in replacing dropouts all add to the expenditure.

It is often difficult to set up a representative panel and to keep it representative. Many persons may be unwilling to participate in a panel study. In the course of the study, there may be frequent dropouts. Persons with similar characteristics may replace the dropouts; however, there is no guarantee that the resulting panel will be representative.

A real danger with the panel method is panel conditioning, i.e., the risk that repeated interviews may sensitize the panel members so that they become untypical as a result of being on the panel. For example, the members of a panel study of political opinions may try to appear consistent in the views they express on consecutive occasions. In such cases, the panel becomes untypical of the population it was selected to represent. One possible safeguard against panel conditioning is to give members of a panel only a limited panel life and then to replace them with persons taken randomly from a reserve list.

The quality of reporting may tend to decline, due to decreasing interest, after a panel has been in operation for some time. Cheating by panel members or investigators may also be a problem in some cases.

Q 2. In processing data, what is the difference between measures of central tendency and measures of dispersion? What is the most important measure of central tendency and dispersion?

Ans.: Measures of Central Tendency:

Arithmetic Mean
The arithmetic mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the number of numbers. The symbol m is used for the mean of a population and the symbol M is used for the mean of a sample. The formula for m is:

m = ΣX / N

where ΣX is the sum of all the numbers in the sample and N is the number of numbers in the sample. As an example, the mean of the numbers 1, 2, 3, 6, 8 is (1 + 2 + 3 + 6 + 8) / 5 = 20 / 5 = 4, regardless of whether the numbers constitute the entire population or just a sample from the population.

The table, Number of touchdown passes, shows the number of touchdown (TD) passes thrown by each of the 31 teams in the National Football League in the 2000 season. The mean number of touchdown passes thrown is 20.4516, as shown below:


m = ΣX / N = 634 / 31 = 20.4516

37 33 33 32 29 28 28 23
22 22 22 21 21 21 20 20
19 19 18 18 18 18 16 15
14 14 14 12 12 9 6

Table 1: Number of touchdown passes
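To make the computation concrete, the following short Python sketch (added for illustration; it is not part of the original assignment text) reproduces the mean for the touchdown-pass data in Table 1:

    # Touchdown passes thrown by each of the 31 NFL teams in the 2000 season (Table 1).
    td_passes = [37, 33, 33, 32, 29, 28, 28, 23,
                 22, 22, 22, 21, 21, 21, 20, 20,
                 19, 19, 18, 18, 18, 18, 16, 15,
                 14, 14, 14, 12, 12, 9, 6]

    # Arithmetic mean: m = (sum of X) / N
    mean = sum(td_passes) / len(td_passes)
    print(round(mean, 4))  # 20.4516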

Although the arithmetic mean is not the only "mean" (there is also a geometric mean), it is by far the most commonly used. Therefore, if the term "mean" is used without specifying whether it is the arithmetic mean, the geometric mean, or some other mean, it is assumed to refer to the arithmetic mean.

Median
The median is also a frequently used measure of central tendency. The median is the midpoint of a distribution: the same number of scores is above the median as below it. For the data in the table, Number of touchdown passes, there are 31 scores. The 16th highest score (which equals 20) is the median, because there are 15 scores below the 16th score and 15 scores above the 16th score. The median can also be thought of as the 50th percentile.

Let's return to the made-up example of the quiz on which you made a three, discussed previously in the module Introduction to Central Tendency and shown in Table 2.

Student       Dataset 1  Dataset 2  Dataset 3
You               3          3          3
John's            3          4          2
Maria's           3          4          2
Shareecia's       3          4          2
Luther's          3          5          1

Table 2: Three possible datasets for the 5-point make-up quiz

For Dataset 1, the median is three, the same as your score. For Dataset 2, the median is 4; therefore, your score is below the median, which means you are in the lower half of the class. Finally, for Dataset 3, the median is 2. For this dataset, your score is above the median and therefore in the upper half of the distribution.

Computation of the Median: When there is an odd number of numbers, the median is simply the middle number. For example, the median of 2, 4, and 7 is 4. When there is an even number of numbers, the median is the mean of the two middle numbers. Thus, the median of the numbers 2, 4, 7, 12 is (4 + 7) / 2 = 5.5.

Mode
The mode is the most frequently occurring value. For the data in the table, Number of touchdown passes, the mode is 18, since more teams (4) had 18 touchdown passes than any other number of touchdown passes. With continuous data, such as response time measured to many decimals, the frequency of each value is one, since no two scores will be exactly the same (see the discussion of continuous variables). Therefore, the mode of continuous data is normally computed from a grouped frequency distribution. The grouped frequency distribution table shows a grouped frequency distribution for the target response time data. Since the interval with the highest frequency is 600-700, the mode is the middle of that interval (650).

Range        Frequency
500-600          3
600-700          6
700-800          5
800-900          5
900-1000         0
1000-1100        1

Table 3: Grouped frequency distribution


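As a further illustrative sketch (again, not part of the source text), the median and mode computations described above can be reproduced with Python's standard statistics module; the grouped-data mode is simply the midpoint of the interval with the highest frequency, mirroring the 650 obtained from Table 3:

    from statistics import median, mode

    print(median([2, 4, 7]))       # 4: odd number of values, so the middle number
    print(median([2, 4, 7, 12]))   # 5.5: even number of values, so the mean of the two middle numbers
    print(mode([37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20,
                19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]))  # 18

    # Mode of grouped (continuous) data: midpoint of the interval with the highest frequency.
    grouped = {(500, 600): 3, (600, 700): 6, (700, 800): 5,
               (800, 900): 5, (900, 1000): 0, (1000, 1100): 1}
    lo, hi = max(grouped, key=grouped.get)
    print((lo + hi) / 2)           # 650.0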

Measures of Dispersion: A measure of statistical dispersion is a real number that is zero if all the data are identical, and increases as the data become more diverse. It cannot be less than zero.

Most measures of dispersion have the same scale as the quantity being measured. In other words, if the measurements have units, such as metres or seconds, the measure of dispersion has the same units. Such measures of dispersion include:

Standard deviation
Interquartile range
Range
Mean difference
Median absolute deviation
Average absolute deviation (also simply called average deviation)
Distance standard deviation

These are frequently used (together with scale factors) as estimators of scale parameters, in which capacity they are called estimates of scale. All the above measures of statistical dispersion have the useful property that they are location-invariant as well as linear in scale. So if a random variable X has a dispersion of S_X, then a linear transformation Y = aX + b for real a and b should have dispersion S_Y = |a| S_X.

Other measures of dispersion are dimensionless (scale-free). In other words, they have no units even if the variable itself has units. These include:

Coefficient of variation
Quartile coefficient of dispersion
Relative mean difference, equal to twice the Gini coefficient

There are other measures of dispersion:

Variance (the square of the standard deviation) - location-invariant but not linear in scale.
Variance-to-mean ratio - mostly used for count data, when the term coefficient of dispersion is used and when this ratio is dimensionless (as count data are themselves dimensionless); otherwise this measure is not scale-free.

Some measures of dispersion have specialized purposes, among them the Allan variance and the Hadamard variance. For categorical variables, it is less common to measure dispersion by a single number; see qualitative variation. One measure that does so is the discrete entropy.
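Among these, the arithmetic mean and the standard deviation are generally regarded as the most important and most widely used measures of central tendency and dispersion, respectively. The short Python sketch below (added for illustration, not part of the source text) computes several scale-dependent measures for the touchdown-pass data and checks the linear-transformation property S_Y = |a| S_X described above:

    import statistics

    data = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20,
            19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]

    sd = statistics.pstdev(data)                   # population standard deviation
    data_range = max(data) - min(data)             # range
    med = statistics.median(data)
    mad = statistics.median([abs(x - med) for x in data])  # median absolute deviation
    cv = sd / statistics.mean(data)                # coefficient of variation (dimensionless)
    print(round(sd, 3), data_range, mad, round(cv, 3))

    # Linear transformation Y = aX + b: the dispersion scales by |a| and ignores b.
    a, b = -2, 10
    transformed = [a * x + b for x in data]
    print(round(statistics.pstdev(transformed), 3), round(abs(a) * sd, 3))  # the two values agree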


Sources of statistical dispersion: In the physical sciences, such variability may result only from random measurement errors: instrument measurements are often not perfectly precise, i.e., not perfectly reproducible. One may assume that the quantity being measured is unchanging and stable, and that the variation between measurements is due to observational error. In the biological sciences, this assumption is generally false: the variation observed may be intrinsic to the phenomenon, since distinct members of a population differ greatly. This is also seen in the arena of manufactured products; even there, the meticulous scientist finds variation. The simple model of a stable quantity is preferred when it is tenable, but each phenomenon must be examined to see if it warrants such a simplification.

Q 3. What are the characteristics of a good research design? Explain how the research design for exploratory studies is different from the research design for descriptive and diagnostic studies.

Ans.: Good research design: Much contemporary social research is devoted to examining whether a program, treatment, or manipulation causes some outcome or result. For example, we might wish to know whether a new educational program causes subsequent achievement score gains, whether a special work release program for prisoners causes lower recidivism rates, whether a novel drug causes a reduction in symptoms, and so on. Cook and Campbell (1979) argue that three conditions must be met before we can infer that such a cause-effect relation exists:

Covariation. Changes in the presumed cause must be related to changes in the presumed effect. Thus, if we introduce, remove, or change the level of a treatment or program, we should observe some change in the outcome measures.

    Temporal Precedence. The presumed cause must occur prior to the presumed effect.

No Plausible Alternative Explanations. The presumed cause must be the only reasonable explanation for changes in the outcome measures. If there are other factors that could be responsible for changes in the outcome measures, we cannot be confident that the presumed cause-effect relationship is correct.

In most social research the third condition is the most difficult to meet. Any number of factors other than the treatment or program could cause changes in outcome measures. Campbell and Stanley (1966) and, later, Cook and Campbell (1979) list a number of common plausible alternative explanations (or threats to internal validity). For example, it may be that some historical event which occurs at the same time that the program or treatment is instituted was responsible for the change in the outcome measures; or, changes in record keeping or measurement systems which occur at the same time as the program might be falsely attributed to the program. The reader is referred to standard research methods texts for more detailed discussions of threats to validity. This paper is primarily heuristic in purpose. Standard social science methodology textbooks (Cook and Campbell, 1979; Judd and Kenny, 1981) typically present an array of research designs and the alternative explanations which these designs rule out or minimize. This tends to foster a "cookbook" approach to research design - an emphasis on the selection of an available design rather than on the construction of an appropriate research strategy. While standard designs may sometimes fit real-life situations, it will often be necessary to "tailor" a research design to minimize specific threats to validity. Furthermore, even if standard textbook designs are used, an understanding of the logic of design construction in general will improve the comprehension of these standard approaches. This paper takes a structural


approach to research design. While this is by no means the only strategy for constructing research designs, it helps to clarify some of the basic principles of design logic.

Minimizing Threats to Validity: Good research designs minimize the plausible alternative explanations for the hypothesized cause-effect relationship. But such explanations may be ruled out or minimized in a number of ways other than by design. The discussion that follows outlines five ways to minimize threats to validity, one of which is by research design:

By Argument. The most straightforward way to rule out a potential threat to validity is to simply argue that the threat in question is not a reasonable one. Such an argument may be made either a priori or a posteriori, although the former will usually be more convincing than the latter. For example, depending on the situation, one might argue that an instrumentation threat is not likely because the same test is used for pre- and post-test measurements and did not involve observers who might improve, or other such factors. In most cases, ruling out a potential threat to validity by argument alone will be weaker than the other approaches listed below. As a result, the most plausible threats in a study should not, except in unusual cases, be ruled out by argument only.

By Measurement or Observation. In some cases it will be possible to rule out a threat by measuring it and demonstrating that either it does not occur at all or occurs so minimally as not to be a strong alternative explanation for the cause-effect relationship. Consider, for example, a study of the effects of an advertising campaign on subsequent sales of a particular product. In such a study, history (i.e., the occurrence of other events which might lead to an increased desire to purchase the product) would be a plausible alternative explanation. For example, a change in the local economy, the removal of a competing product from the market, or similar events could cause an increase in product sales. One might attempt to minimize such threats by measuring local economic indicators and the availability and sales of competing products. If there is no change in these measures coincident with the onset of the advertising campaign, these threats would be considerably minimized. Similarly, if one is studying the effects of special mathematics training on math achievement scores of children, it might be useful to observe everyday classroom behavior in order to verify that students were not receiving any additional math training beyond that provided in the study.

By Design. Here, the major emphasis is on ruling out alternative explanations by adding treatment or control groups, waves of measurement, and the like. This topic will be discussed in more detail below.

By Analysis. There are a number of ways to rule out alternative explanations using statistical analysis. One interesting example is provided by Jurs and Glass (1971). They suggest that one could study the plausibility of an attrition or mortality threat by conducting a two-way analysis of variance. One factor in this study would be the original treatment group designations (i.e., program vs. comparison group), while the other factor would be attrition (i.e., dropout vs. non-dropout group). The dependent measure could be the pretest or other available pre-program measures. A main effect on the attrition factor would be indicative of a threat to external validity or generalizability, while an interaction between the group and attrition factors would point to a possible threat to internal validity. Where both effects occur, it is reasonable to infer that there is a threat to both internal and external validity. The plausibility of alternative explanations might also be minimized using covariance analysis. For example, in a study of the effects of "workfare" programs on social welfare caseloads, one plausible alternative explanation might be the status of local economic conditions. Here, it might be possible to construct a measure of economic conditions and include that measure as a covariate in the statistical analysis. One must be careful when using covariance adjustments of this type -- "perfect" covariates do not exist in most social research and the use of imperfect covariates will not completely adjust for potential alternative explanations. Nevertheless, causal assertions are likely to be strengthened by demonstrating that treatment effects occur even after adjusting on a number of good covariates.
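For readers who want to see what such an attrition check might look like in practice, here is a small hypothetical Python sketch using pandas and statsmodels (added for illustration; the file name and column names are assumptions, not from the original text). It fits a two-way ANOVA of the pretest score on group assignment, attrition status, and their interaction, in the spirit of the Jurs and Glass suggestion:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical dataset: one row per participant, with the original group assignment
    # ("program" or "comparison"), whether the participant later dropped out, and a pretest score.
    df = pd.read_csv("participants.csv")  # assumed columns: group, dropped_out, pretest

    # Two-way ANOVA: main effects of group and attrition, plus their interaction.
    model = ols("pretest ~ C(group) * C(dropped_out)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Per Jurs and Glass: a significant main effect of dropped_out suggests a threat to external
    # validity; a significant group x dropped_out interaction suggests a threat to internal validity.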


By Preventive Action. When potential threats are anticipated, some type of preventive action can often rule them out. For example, if the program is a desirable one, it is likely that the comparison group would feel jealous or demoralized. Several actions can be taken to minimize the effects of these attitudes, including offering the program to the comparison group upon completion of the study, or using program and comparison groups which have little opportunity for contact and communication. In addition, auditing methods and quality control can be used to track potential experimental dropouts or to ensure the standardization of measurement. The five categories listed above should not be considered mutually exclusive. The inclusion of measurements designed to minimize threats to validity will obviously be related to the design structure and is likely to be a factor in the analysis. A good research plan should, where possible, make use of multiple methods for reducing threats. In general, reducing a particular threat by design or preventive action will probably be stronger than using one of the other three approaches. The choice of which strategy to use for any particular threat is complex and depends at least on the cost of the strategy and on the potential seriousness of the threat.

    Design Construction

    Basic Design Elements. Most research designs can be constructed from four basic elements:

Time. A causal relationship, by its very nature, implies that some time has elapsed between the occurrence of the cause and the consequent effect. While for some phenomena the elapsed time might be measured in microseconds, and therefore might be unnoticeable to a casual observer, we normally assume that the cause and effect in social science arenas do not occur simultaneously. In design notation we indicate this temporal element horizontally - whatever symbol is used to indicate the presumed cause would be placed to the left of the symbol indicating measurement of the effect. Thus, as we read from left to right in design notation, we are reading across time. Complex designs might involve a lengthy sequence of observations and programs or treatments across time.

Program(s) or Treatment(s). The presumed cause may be a program or treatment under the explicit control of the researcher, or the occurrence of some natural event or program not explicitly controlled. In design notation we usually depict a presumed cause with the symbol "X". When multiple programs or treatments are being studied using the same design, we can keep the programs distinct by using subscripts such as "X1" or "X2". For a comparison group (i.e., one which does not receive the program under study) no "X" is used.

Observation(s) or Measure(s). Measurements are typically depicted in design notation with the symbol "O". If the same measurement or observation is taken at every point in time in a design, then this "O" will be sufficient. Similarly, if the same set of measures is given at every point in time in this study, the "O" can be used to depict the entire set of measures. However, if different measures are given at different times, it is useful to subscript the "O" to indicate which measurement is being given at which point in time.

Groups or Individuals. The final design element consists of the intact groups or the individuals who participate in the various conditions. Typically, there will be one or more program and comparison groups. In design notation, each group is indicated on a separate line. Furthermore, the manner in which groups are assigned to the conditions can be indicated by an appropriate symbol at the beginning of each line. Here, "R" will represent a group which was randomly assigned, "N" will depict a group which was nonrandomly assigned (i.e., a nonequivalent group or cohort), and "C" will indicate that the group was assigned using a cutoff score on a measurement.
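As a simple illustration (added here for clarity; it is not part of the source text), the familiar randomized pretest-posttest control group design can be written in this notation, with time running from left to right and each line representing one group:

    R   O   X   O
    R   O       O

Both groups are randomly assigned (R) and observed before and after (O), but only the first group receives the program or treatment (X).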


commonly faced issue; and concluding more generally allows the reader to understand how the solution can also address their problem.

Quantify benefits when possible. No single element in a case study is more compelling than the ability to tie quantitative benefits to the solution. For example: "Using Solution X saved Customer Y over $ZZZ,ZZZ after just 6 months of implementation"; or, "Thanks to Solution X, employees at Customer Y have realized a ZZ% increase in productivity as measured by standard performance indicators." Quantifying benefits can be challenging, but not impossible. The key is to present imaginative ideas to the customer for ways to quantify the benefits, and to remain flexible during this discussion. If benefits cannot be quantified, attempt to develop a range of qualitative benefits; the latter can be quite compelling to readers as well.

Use photos. Ask the customer if they can provide shots of personnel, ideally using the solution. The shots need not be professionally done; in fact, homegrown digital photos sometimes lead to surprisingly good results and often appear more genuine. Photos further personalize the story and help form a connection to readers.

Reward the customer. After receiving final customer approval and finalizing the case study, provide a PDF, as well as printed copies, to the customer. Another idea is to frame a copy of the completed case study and present it to the customer in appreciation for their efforts and cooperation.

Writing a case study is not easy. Even with the best plan, a case study is doomed to failure if the writer lacks the exceptional writing skills, technical savvy, and marketing experience that these documents require. In many cases, a talented writer can mean the difference between an ineffective case study and one that provides the greatest benefit. If a qualified internal writer is unavailable, consider outsourcing the task to professionals who specialize in case study writing.

Q 5. What are the differences between observation and interviewing as methods of data collection? Give two specific examples of situations where either observation or interviewing would be more appropriate.

Ans.: Observation means viewing or seeing. Observation may be defined as a systematic viewing of a specific phenomenon in its proper setting for the specific purpose of gathering data for a particular study. Observation is a classical method of scientific study.

The prerequisites of observation consist of:

Observations must be done under conditions which will permit accurate results. The observer must be at a vantage point from which to see clearly the objects to be observed. The distance and the light must be satisfactory. The mechanical devices used must be in good working condition and operated by skilled persons.

Observation must cover a sufficient number of representative samples of the cases.

Recording should be accurate and complete.

The accuracy and completeness of recorded results must be checked. A certain number of cases can be observed again by another observer or another set of mechanical devices, as the case may be. If it is feasible, two separate observers and sets of instruments may be used in all or some of the original observations. The results could then be compared to determine their accuracy and completeness.

    Advantages of observation


The main virtue of observation is its directness: it makes it possible to study behavior as it occurs. The researcher need not ask people about their behavior and interactions; he can simply watch what they do and say.

Data collected by observation may describe the observed phenomena as they occur in their natural settings. Other methods introduce elements of artificiality into the researched situation; for instance, in an interview the respondent may not behave in a natural way. There is no such artificiality in observational studies, especially when the observed persons are not aware of their being observed.

Observation is more suitable for studying subjects who are unable to articulate meaningfully, e.g. studies of children, tribals, animals, birds, etc.

Observation improves the opportunities for analyzing the contextual background of behavior. Furthermore, verbal reports can be validated and compared with behavior through observation. The validity of what men of position and authority say can be verified by observing what they actually do.

Observation makes it possible to capture the whole event as it occurs. For example, only observation can provide an insight into all the aspects of the process of negotiation between union and management representatives.

Observation is less demanding of the subjects and has a less biasing effect on their conduct than questioning.

It is easier to conduct disguised observation studies than disguised questioning.

Mechanical devices may be used for recording data in order to secure more accurate data and also for making continuous observations over longer periods.

Interviews are a crucial part of the recruitment process for all organisations. Their purpose is to give the interviewer(s) a chance to assess your suitability for the role and for you to demonstrate your abilities and personality. As this is a two-way process, it is also a good opportunity for you to ask questions and to make sure the organisation and position are right for you.

Interview format
Interviews take many different forms. It is a good idea to ask the organisation in advance what format the interview will take.

Competency/criteria-based interviews - These are structured to reflect the competencies or qualities that an employer is seeking for a particular job, which will usually have been detailed in the job specification or advert. The interviewer is looking for evidence of your skills and may ask such things as: "Give an example of a time you worked as part of a team to achieve a common goal." The organisation determines the selection criteria based on the roles they are recruiting for and then, in an interview, examines whether or not you have evidence of possessing these.
- Recruitment Manager, The Cooperative Group

Technical interviews - If you have applied for a job or course that