Quantitative marketing research

Quantitative marketing research is the application of quantitative research techniques to the field of marketing. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion.

As a social research method, it typically involves the construction of questionnaires and scales. People who respond (respondents) are asked to complete the survey. Marketers use the information so obtained to understand the needs of individuals in the marketplace, and to create strategies and marketing plans.

Scope and requirements

Typical general procedure

Simply put, there are five major steps involved in the research process:

#Defining the Problem.
#Research Design.
#Data Collection.
#Data Analysis.
#Report Writing & Presentation.

A brief discussion of each of these steps follows:

# Problem audit and problem definition - What is the problem? What are the various aspects of the problem? What information is needed?
# Conceptualization and operationalization - How exactly do we define the concepts involved? How do we translate these concepts into observable and measurable behaviours?
# Hypothesis specification - What claim(s) do we want to test?
# Research design specification - What type of methodology to use? - examples: questionnaire, survey
# Question specification - What questions to ask? In what order?
# Scale specification - How will preferences be rated?
# Sampling design specification - What is the total population? What sample size is necessary for this population? What sampling method to use? - examples: probability sampling (cluster sampling, stratified sampling, simple random sampling, multistage sampling, systematic sampling) and nonprobability sampling (convenience sampling, judgement sampling, purposive sampling, quota sampling, snowball sampling, etc.)
# Data collection - Use mail, telephone, internet, mall intercepts
# Codification and re-specification - Make adjustments to the raw data so it is compatible with statistical techniques and with the objectives of the research - examples: assigning numbers, consistency checks, substitutions, deletions, weighting, dummy variables, scale transformations, scale standardization
# Statistical analysis - Perform various descriptive and inferential techniques (see below) on the raw data. Make inferences from the sample to the whole population. Test the results for statistical significance.
# Interpret and integrate findings - What do the results mean? What conclusions can be drawn? How do these findings relate to similar research?
# Write the research report - Report usually has headings such as: 1) executive summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts and diagrams. Present the report to the client in a 10-minute presentation. Be prepared for questions.

The design step may involve a pilot study in order to discover any hidden issues. The codification and analysis steps are typically performed by computer, using software such as DAP or PSPP. The data collection steps can in some instances be automated, but often require significant manpower to undertake. Interpretation is a skill mastered only by experience.
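As a brief sketch of the codification step, the snippet below performs two of the adjustments listed above, dummy coding a nominal variable and standardizing a scale; the survey data and helper functions are hypothetical illustrations, not part of any standard package:

```python
# Illustrative codification helpers on made-up survey data.

def dummy_code(values, categories):
    """One-hot encode a nominal variable: one dummy column per category."""
    return [[1 if v == c else 0 for c in categories] for v in values]

def standardize(scores):
    """Convert raw scale scores to z-scores (mean 0, population std dev 1)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((x - mean) ** 2 for x in scores) / n) ** 0.5
    return [(x - mean) / sd for x in scores]

# Hypothetical raw responses
regions = ["north", "south", "north", "west"]
print(dummy_code(regions, ["north", "south", "west"]))
print(standardize([3, 5, 7, 9]))
```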

Descriptive techniques

The descriptive techniques that are commonly used include:
*Graphical description
**use graphs to summarize data
**examples: histograms, scattergrams, bar charts, pie charts
*Tabular description
**use tables to summarize data
**examples: frequency distribution schedule, cross tabs
*Parametric description
**estimate the values of certain parameters which summarize the data
***measures of location or central tendency
****arithmetic mean
****interquartile mean
***measures of statistical dispersion
****standard deviation
****range (statistics)
****interquartile range
****absolute deviation.
***measures of the shape of the distribution
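As a minimal sketch, the parametric measures listed above can be computed with Python's standard statistics module; the satisfaction scores below are hypothetical:

```python
import statistics

# Hypothetical sample of customer satisfaction scores (1-10 scale)
scores = [4, 6, 7, 7, 8, 5, 9, 6, 7, 8]

mean = statistics.mean(scores)                  # central tendency
sd = statistics.stdev(scores)                   # dispersion (sample std dev)
rng = max(scores) - min(scores)                 # range
q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
iqr = q3 - q1                                   # interquartile range

print(f"mean={mean}, sd={sd:.2f}, range={rng}, IQR={iqr}")
```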

Inferential techniques

Inferential techniques involve generalizing from a sample to the whole population; they also involve testing hypotheses. A hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples assuming the hypothesis is correct. Then a test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. A null hypothesis is a hypothesis that is presumed true until a hypothesis test indicates otherwise. Typically it is a statement about a parameter that is a property of a population. The parameter is often a mean or a standard deviation.

Often, such a hypothesis states that the parameters, or mathematical characteristics, of two or more populations are identical. For example, if we want to compare the test scores of two random samples of men and women, the null hypothesis would be that the mean score in the male population from which the first sample was drawn was the same as the mean score in the female population from which the second sample was drawn:

:H0: μ1 = μ2

where:
:H0 = the null hypothesis,
:μ1 = the mean of population 1, and
:μ2 = the mean of population 2.

The equality operator makes this a two-tailed test. The alternative hypothesis can be either greater than or less than the null hypothesis. In a one-tailed test, the operator is an inequality, and the alternative hypothesis has directionality:

:H0: μ1 ≤ μ2

These are sometimes called hypotheses of significant difference because you are testing the difference between two groups with respect to one variable.

Alternatively, the null hypothesis can postulate that the two samples are drawn from the same population:

:H0: μ1 − μ2 = 0

A hypothesis of association is where there is one population, but two traits being measured. It is a test of association of two traits within one group.

The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or union of intervals). Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. That is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the hypothesis is correct is called the alpha value of the test. Once the data are available, the test statistic is calculated and we determine whether it falls inside the critical region. If it does, then our conclusion is that either the hypothesis is incorrect, or an event of probability less than or equal to alpha has occurred. If the test statistic is outside the critical region, the conclusion is that there is not enough evidence to reject the hypothesis.

The significance level of a test is the maximum probability of accidentally rejecting a true null hypothesis (a decision known as a Type I error). For example, one may choose a significance level of, say, 5%, and calculate a critical value of a statistic (such as the mean) so that the probability of it exceeding that value, given the truth of the null hypothesis, would be 5%. If the actual, calculated statistic value exceeds the critical value, then it is significant "at the 5% level".
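The two-tailed test of H0: μ1 = μ2 described above can be illustrated in Python. The test scores below are hypothetical, and for simplicity the sketch uses the large-sample z approximation in place of the exact t distribution:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical test scores for the male and female samples in the example
men   = [72, 68, 75, 70, 74, 69, 71, 73, 70, 76]
women = [74, 71, 77, 73, 75, 72, 78, 74, 76, 73]

n1, n2 = len(men), len(women)
se = (stdev(men) ** 2 / n1 + stdev(women) ** 2 / n2) ** 0.5
z = (mean(men) - mean(women)) / se       # test statistic
p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed p-value

print(f"z = {z:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Reject H0 at the 5% level")
else:
    print("Fail to reject H0")
```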

Types of hypothesis tests

* Parametric tests of a single sample:
** t test
** z test
* Parametric tests of two independent samples:
** two-group t test
** z test
* Parametric tests of paired samples:
** paired t test
* Nominal/ordinal level test of a single sample:
** chi-square
** Kolmogorov-Smirnov one sample test
** runs test
** binomial test
* Nominal/ordinal level test of two independent samples:
** chi-square
** Mann-Whitney U
** Median
** Kolmogorov-Smirnov two sample test
* Nominal/ordinal level test for paired samples:
** Wilcoxon test
** McNemar test

Points to remember:
** If a variable (e.g. respondents' "preference" for the colour of a product) is interval- or ratio-scaled and meets the relevant statistical assumptions (e.g. normality), then it is eligible for a parametric test.
** If a variable (e.g. "gender", or the rank order of a few products on certain attributes) is nominal- or ordinal-scaled, and/or does not meet the relevant statistical assumptions (e.g. normality), then it is not eligible for a parametric test. In this situation a non-parametric test must be used.

A non-parametric test should be used only if the sample or variable is not eligible for a parametric test. Note that non-parametric tests are among the most widely used, and most widely misused, statistical techniques.
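As an illustration of the non-parametric case, the following is a bare-bones sketch of the Mann-Whitney U statistic for two independent samples of hypothetical rank-order data (ties correction omitted; a production analysis would use an established statistics package):

```python
# Minimal Mann-Whitney U statistic for two independent ordinal samples.

def mann_whitney_u(a, b):
    """Return the smaller of the two U statistics (no ties correction)."""
    combined = sorted(a + b)
    # Mid-ranks: tied values share the average of their positions
    ranks = {}
    for v in set(combined):
        positions = [i + 1 for i, x in enumerate(combined) if x == v]
        ranks[v] = sum(positions) / len(positions)
    r1 = sum(ranks[v] for v in a)        # rank sum of sample a
    n1, n2 = len(a), len(b)
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = n1 * n2 - u1
    return min(u1, u2)

# Hypothetical rank-order data for two product groups
group_a = [1, 2, 3, 5]
group_b = [4, 6, 7, 8]
print(mann_whitney_u(group_a, group_b))  # small U suggests the groups differ
```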

Reliability and validity

Research should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population.

Reliability is the extent to which a measure will produce consistent results. Test-retest reliability checks how similar the results are if the research is repeated under similar circumstances. Stability over repeated measures is assessed with the Pearson coefficient. Alternative forms reliability checks how similar the results are if the research is repeated using different forms. Internal consistency reliability checks how well the individual measures included in the research are converted into a composite measure. Internal consistency may be assessed by correlating performance on two halves of a test (split-half reliability). The value of the Pearson product-moment correlation coefficient is adjusted with the Spearman-Brown prediction formula to correspond to the correlation between two full-length tests. A commonly used measure is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. Reliability may be improved by increasing the sample size.
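Cronbach's α can be sketched directly from its definition, α = k/(k−1) · (1 − Σ item variances / total-score variance); the item scores below are hypothetical, with one row per respondent:

```python
from statistics import pvariance

# Hypothetical responses: each inner list is one respondent's answers
# to a 4-item attitude scale.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]

k = len(responses[0])                    # number of items
items = list(zip(*responses))            # one tuple per item
item_vars = sum(pvariance(item) for item in items)
total_var = pvariance([sum(r) for r in responses])
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```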

Validity asks whether the research measured what it intended to. Content validation (also called face validity) checks how well the content of the research is related to the variables to be studied. Are the research questions representative of the variables being researched? It is a demonstration that the items of a test are drawn from the domain being measured. Criterion validation checks how meaningful the research criteria are relative to other possible criteria. When the criterion is collected later, the goal is to establish predictive validity. Construct validation checks what underlying construct is being measured. There are three variants of construct validity: convergent validity (how well the research relates to other measures of the same construct), discriminant validity (how poorly the research relates to measures of opposing constructs), and nomological validity (how well the research relates to other variables as required by theory).

Internal validation, used primarily in experimental research designs, checks the relation between the dependent and independent variables. Did the experimental manipulation of the independent variable actually cause the observed results? External validation checks whether the experimental results can be generalized.

Validity implies reliability: a valid measure must be reliable. But reliability does not necessarily imply validity: a reliable measure need not be valid.

Types of errors

Random sampling errors:
*sample too small
*sample not representative
*inappropriate sampling method used
*random errors

Research design errors:
*bias introduced
*measurement error
*data analysis error
*sampling frame error
*population definition error
*scaling error
*question construction error

Interviewer errors:
*recording errors
*cheating errors
*questioning errors
*respondent selection error

Respondent errors:
*non-response error
*inability error
*falsification error

Hypothesis errors:
*type I error (also called alpha error)
**the study results lead to the rejection of the null hypothesis even though it is actually true
*type II error (also called beta error)
**the study results lead to the acceptance (non-rejection) of the null hypothesis even though it is actually false
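The meaning of the Type I error rate can be illustrated by simulation: draw two samples from the same population, so the null hypothesis is true by construction, and count how often a 5%-level test nonetheless rejects it. The population parameters below are arbitrary, and the z approximation from the earlier example is reused:

```python
import random
from statistics import NormalDist, mean, stdev

# Simulate the Type I error rate of a two-tailed 5%-level test when
# H0: mu1 = mu2 is true (both samples come from the SAME population).
random.seed(1)
crit = NormalDist().inv_cdf(0.975)       # two-tailed 5% critical value

trials = 1000
rejections = 0
for _ in range(trials):
    a = [random.gauss(100, 15) for _ in range(30)]
    b = [random.gauss(100, 15) for _ in range(30)]
    se = (stdev(a) ** 2 / 30 + stdev(b) ** 2 / 30) ** 0.5
    z = (mean(a) - mean(b)) / se
    if abs(z) > crit:                    # wrongly reject a true H0
        rejections += 1

print(f"Empirical Type I error rate: {rejections / trials:.3f}")
```

The observed rejection rate should come out close to the chosen alpha of 0.05.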



See also

* Choice Modelling
* Quantitative research
* Qualitative research
* Enterprise Feedback Management
* Marketing research
* mTAB
* Qualtrics
* Online panel
* Statistical survey
* Rating scale
* Master of Marketing Research
* Maximum Difference Preference Scaling

List of related topics

* List of marketing topics
* List of management topics
* List of economics topics
* List of finance topics
* List of accounting topics
