Experimental Methods in Management Accounting Research: Critiquing Your Own and Others’ Experiments


Experimental Methods in Management Accounting Research
Critiquing Your Own and Others’ Experiments

2

What is the purpose of an experiment?

• To test theory under highly controlled conditions, and thus . . .

• . . . to decide among alternative explanations for observed phenomena.
  – “Real world” data often does not support decisive tests of plausible competing theories.

3

Critiques of experiments should be based on . . .

• Quality of theory on which experiment is based

• Quality of operationalization of theory

• Quality of experimental control and analysis

4

Critiques of experiments often are based on . . .

• Observations of differences between experimental setting and real world.

• Such criticism may be
  – (a) devastating, or
  – (b) vacuous.

5

Re-creating the real world

• Someone asked an American poet: “Should literature try to re-create the real world?”

• Answer:

– “No. One of the damned things is enough.”

6

What features of the real management-decision environment must be captured in experiments?

7

Shoes don’t matter . . .

• How do we know what does matter?

• What factors should be present, manipulated, measured, controlled for . . . .?

• Answer must be theory-based.

8

Management accounting research draws on two experimental traditions

• Experimental economics

• Experimental psychology

• Different traditions of “theory” and “control” – but both stress the value of artificiality for control purposes.

9

Common errors to avoid . . .

• Don’t “work with underspecified, vague, or nonexistent theories and try to generalize anyway by applying findings directly.” – Swieringa and Weick, JAR 1982

10

Example

• The purpose of an experiment is to test theory. Statements like this are not theories:

– “More accurate information is better.”

– “Teamwork is better.”

– “Fair reward systems are better.”

11

By contrast . . . . Two theory-based experimental studies on cost information quality

12

1. Callahan and Gabriel, CAR 1998
  – Accuracy: less noise in reports of marginal cost (expected marginal cost is known)
  – Subjects choose production quantity (price) in Cournot (Bertrand) duopolies.
  – Greater accuracy increased profits in Cournot markets but decreased profits in Bertrand markets.
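
• For intuition about the two market structures, a minimal sketch (made-up linear demand and identical, known marginal costs; not Callahan and Gabriel's actual design or parameters):

# Minimal sketch (not Callahan & Gabriel's parameterization): symmetric duopoly
# equilibria with linear inverse demand P = a - b*(q1 + q2) and a known,
# identical marginal cost c, to illustrate the two settings named above.

def cournot_equilibrium(a, b, c):
    """Firms choose quantities; symmetric best responses give q_i = (a - c) / (3b)."""
    q = (a - c) / (3 * b)        # per-firm equilibrium quantity
    price = a - b * (2 * q)      # market-clearing price
    profit = (price - c) * q     # per-firm profit (positive)
    return q, price, profit

def bertrand_equilibrium(a, b, c):
    """Firms choose prices; with identical marginal costs, price is bid down to c."""
    price = c                    # equilibrium price equals marginal cost
    total_q = (a - price) / b    # market demand at that price, split between firms
    profit = 0.0                 # zero economic profit per firm
    return total_q / 2, price, profit

print(cournot_equilibrium(a=100, b=1, c=10))   # (30.0, 40.0, 900.0): positive profits
print(bertrand_equilibrium(a=100, b=1, c=10))  # (45.0, 10, 0.0): profits competed away

• The sketch only shows the textbook contrast between the two market structures; it does not reproduce the study's noise manipulation.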

13

2. Drake 1999
  • Cost information = breakdown of overhead costs by activity/feature of product (vs. lump-sum overhead for the product)
  • Subjects negotiate a contract for sale of components (price and component features).
  • More detailed cost info ==> increase in buyer/seller common surplus when data are shared -- but also less information sharing.

14

Callahan & Gabriel study is unrealistic . . . .

• Nineteen-year-olds interacting via computer networks are not representative of important interactions in the economy.
• Small sums of money involved.
• Individual, not institutional, decisions.
• Expected marginal costs known and identical across firms.
• One-period world.

15

Does the lack of realism matter?

• Does the experiment misrepresent the underlying theory by (e.g.) using nineteen-year-olds and small sums of money?

• Does the theory misrepresent important real-world phenomena?

16

Theories
• Economics
  – A theory of the domain
  – Other social sciences don't tell us much about production functions, cost functions, market structures . . .
• Psychology
  – A theory of the people in the domain
  – Economics does not have very plausible theories of cognition or preferences.

17

Control

• Experimental Economics: control through specification
  – If it's not in the theory, find ways of keeping it out of the lab!
• Experimental Psychology: control through comparisons
  – If it's not in the theory, find ways of holding it constant!

18

Experimental Economics

• Start with a model that specifies (e.g.)
  – Actions
  – Outcomes & probability distributions of outcomes conditional on actions
  – Payoffs
  – Utility functions
  – Information & communication structure
  – Resulting equilibrium
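
• A minimal sketch of what "fully specified" can mean in practice (hypothetical action labels and payoff numbers, not taken from any study cited here): actions and payoffs are written down explicitly, and the resulting pure-strategy equilibrium follows from them.

# Hypothetical two-player game: explicit action sets and payoffs, with the
# resulting pure-strategy Nash equilibria found by brute force.
from itertools import product

actions = ["low_effort", "high_effort"]   # each player's action set

# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2); illustrative numbers
payoffs = {
    ("low_effort",  "low_effort"):  (2, 2),
    ("low_effort",  "high_effort"): (4, 1),
    ("high_effort", "low_effort"):  (1, 4),
    ("high_effort", "high_effort"): (3, 3),
}

def pure_nash_equilibria(actions, payoffs):
    """Profiles from which neither player gains by deviating unilaterally."""
    equilibria = []
    for a1, a2 in product(actions, actions):
        u1, u2 = payoffs[(a1, a2)]
        best1 = all(u1 >= payoffs[(d, a2)][0] for d in actions)  # player 1 cannot improve
        best2 = all(u2 >= payoffs[(a1, d)][1] for d in actions)  # player 2 cannot improve
        if best1 and best2:
            equilibria.append((a1, a2))
    return equilibria

print(pure_nash_equilibria(actions, payoffs))   # [('low_effort', 'low_effort')]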

19

Control through specification

• If it’s in the model, operationalize it in the lab!

– Induce utility functions in model

• If it's not in the model, keep it out of the lab!

– Keep task and information abstract

20

Example: concrete vs. abstract

• U = U(w, a), where w = wealth and a = effort
• ∂U/∂w > 0, ∂U/∂a < 0

• Some experiments use monetary payoffs to represent both w and a. Is this a problem?

– Operationalize the math, not the words.
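
• One common way to operationalize the math, sketched with assumed conversion rates (the scheme and parameter values are illustrative, not from any cited study): pay cash that rises with the model's w and falls with a, so the induced signs match ∂U/∂w > 0 and ∂U/∂a < 0 whatever the on-screen labels say.

# Hypothetical induced-payoff scheme: the subject's cash payment increases in
# "wealth" w and decreases in "effort" a, mirroring the signs in the model.
# The conversion rates are illustrative assumptions.

def induced_payoff(w, a, rate_per_wealth_unit=0.25, cost_per_effort_unit=0.125):
    """Cash paid to a subject: increasing in w, decreasing in a."""
    return rate_per_wealth_unit * w - cost_per_effort_unit * a

print(induced_payoff(w=100, a=20))  # 22.5
print(induced_payoff(w=100, a=40))  # 20.0 -> more "effort" lowers the payment
print(induced_payoff(w=120, a=40))  # 25.0 -> more "wealth" raises the payment

• Whether such payments actually dominate subjects' other preferences is the question raised on the following slides.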

21

A few problems . . .

• Most models have simple utility functions, U = U(w,a). People bring other preferences to the lab with them.

• Experimental econ solution: be sure monetary payoffs dominate other considerations.

22

How easy (possible) is it to achieve dominance in the lab?

– Baiman and Lewis 1989: Easy
– Evans et al. 1999: Not easy
– Kachelmeier and Towry 1999: Depends on context
– What game do people think they're playing? This probably matters in both lab and 'real world.'

23

A few problems, cont. . . .

• We want to test interesting models

• Interesting models often have

– Surprising (unintuitive) solutions, or

– Solutions that are too hard to work out intuitively

24

This means . . .

• People in the lab will not (at first) do what interesting models say they will do.

• They may do so eventually, with appropriate incentives and learning opportunities, but . . .

25

Lab tests must allow for

• Difficulty of problem, time & practice to figure it out, “costs of thinking . . .”

– Multiple trials used to test one-period models.

• But these are auxiliary assumptions tacked onto the model, with little theoretical basis.

26

Experimental Psychology: Control through Comparisons

• Economics-based experiments can in principle have just one experimental condition and test for equilibrium in the given model.

• Psychology doesn't do one-condition experiments. Structured designs:
  – 1 x 4, 3 x 3, 2 x 2 x 2 x 2 . . .

27

Assumption: we can’t convincingly specify everything . . .

• We won’t succeed in restricting people’s utility functions to one argument and clearing everything out of their brains except the conditional probability distribution of payoff exactly as represented with the bingo cage.

• So . . . .

28

Control through comparisons

• Rather than try to eliminate nuisance factors, hold them constant. Create multiple experimental conditions that differ only on the variables of interest.

• Test for differences between conditions--and differences in differences (interactions) to deal with factors that cannot be held constant.

29

Example: Vera-Muñoz, TAR 1998

• Is “thinking accounting” different from “thinking management”? Is a financial-statement-based model of the firm a poor basis for management decision-making?

• Too much accounting training ingrains in people mental models of business based on financial statements, leading them to think (e.g.) in historical-cost not opportunity-cost terms.

30

Experimental Design

• Task. Make a recommendation about when to relocate a store that will lose its lease next year.

• Difference: Subjects who have taken more accounting courses omit more opportunity costs.

31

• Problem!
  – People who have taken more accounting courses (M.S. students vs. MBAs) might not only be more financial-statement-minded.
  – They might also be stupider, less motivated, or have less understanding of business . . .
• Solution
  – Differences in differences: an interaction test.

32

Interaction: 2 x 2 design

• Two identically structured tasks given to subjects with high or low accounting training
  – Recommend when a store that will lose its lease should move (business context)
  – Recommend when an individual who will lose his data processing job should move (personal context)

33

Differences in differences

• Subjects pick up most of the opportunity costs in the personal context, regardless of how many accounting courses they’ve taken, but--

• Subjects with many accounting courses omit opportunity costs in the business context.
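
• The arithmetic of the interaction test, with hypothetical cell means (not Vera-Muñoz's data): a confound such as lower ability or motivation predicts a training-group gap in both contexts, while the financial-statement mental-model explanation predicts a gap concentrated in the business context.

# Hypothetical cell means for "number of opportunity costs omitted" in the
# 2 x 2 design (accounting training x task context). The numbers are made up
# for illustration; they are not Vera-Munoz's (TAR 1998) results.

cell_means = {
    ("high_training", "business"): 4.0,
    ("high_training", "personal"): 1.0,
    ("low_training",  "business"): 1.5,
    ("low_training",  "personal"): 1.0,
}

# Training-group gap within each context.
gap_business = cell_means[("high_training", "business")] - cell_means[("low_training", "business")]
gap_personal = cell_means[("high_training", "personal")] - cell_means[("low_training", "personal")]

# Difference in differences (the interaction contrast): is the gap specific to
# the business context, as the mental-model explanation predicts?
interaction = gap_business - gap_personal

print(gap_business)  # 2.5 -- by itself, consistent with either explanation
print(interaction)   # 2.5 -- large only if the gap is concentrated in the business context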

34

Control through comparisons . . .
• . . . Often requires the inclusion of conditions that are not 'realistic' or interesting in themselves.
• Criticisms like this are inappropriate:
  – “18,000-a-year data clerks don't go to professional accountants or consultants for advice about when to leave a job. This condition is unrealistic and shouldn't be in the experiment.”

35

The value of artificiality (again) . . . .

• “Situations which are rare in the natural world are often ideal to test theoretical derivations.” – Swieringa and Weick 1982

36

Another interaction example . . .
• Luft & Shields 1999
  – Cognitive value of nonfinancial reporting of quality measures (vs. cost of quality)
  – Field-based literature sometimes argues that relations of NF measures are more transparent, easier for ordinary employees to understand.

37

Experiment

– Subjects examine data on % defects (rework & spoilage expense) and profits.

– r(defects, profit) = r(rework $$, profit)

– Ss detect the relation between past quality and future profits more accurately with the nonfinancial measure.

– Profit prediction task used to measure detection of relation.
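
• One way the two conditions can carry exactly the same statistical relation (an illustrative construction, not necessarily how the study's materials were built): suppose the financial measure is a positive linear restatement of the nonfinancial one, say rework expense R = c·D for defect rate D and cost-per-defect c > 0. Then

  r(R, profit) = Cov(c·D, profit) / sqrt(Var(c·D) · Var(profit))
               = c·Cov(D, profit) / (c · sqrt(Var(D) · Var(profit)))
               = r(D, profit)

• So the two displays differ only in framing, not in the relation subjects are asked to detect.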

38

But . . . .

• Maybe people bring different priors about % defects and rework $$ to the task, and so they make different profit predictions even though they see the same relations in the sample data provided.

• Maybe the problem isn’t what people can or can’t ‘see’ in eyeballing the data. Other factors are affecting their judgments.

39

So there’s an additional experimental condition . . .

• . . . In which Ss are given a statistical analysis of the defects (NF) - profit, or rework (F) - profit relation in the sample data.

• If people make worse profit predictions in the rework condition because they can’t see the correlation in the raw data as well in this condition, then providing the stats should solve the problem. If people have other reasons for predicting differently in this condition, providing the stats shouldn’t solve the problem.

40

Experiments, models, and the ‘real world’

• Empirical research outside the lab defines important problems, documents prevailing practices, and provides limited evidence for or against theory.

• Analytical modeling develops theories about how key variables affect each other.

• Experimental research tests (competing) theories under highly controlled conditions.