Hayek, Gödel, and the case for methodological dualism

Ludwig M.P. van den Hauwe*

Avenue Van Volxem, 326 Bus 3, 1190 Brussels, Belgium

On a few occasions F.A. Hayek made reference to the famous Gödel theorems in mathematical logic in the context of expounding his cognitive and social theory. The exact meaning of the supposed relationship between Gödel's theorems and the essential proposition of Hayek's theory of mind remains subject to interpretation, however. The author of this article argues that the relationship between Hayek's thesis that the human brain can never fully explain itself and the essential insight provided by Gödel's theorems in mathematical logic has the character of an analogy, or a metaphor. Furthermore the anti-mechanistic interpretation of Hayek's theory of mind is revealed as highly questionable. Implications for the Socialist Calculation Debate are highlighted. It is in particular concluded that Hayek's arguments for methodological dualism, when compared with those of Ludwig von Mises, actually amount to a strengthening of the case for methodological dualism.

Keywords: Hayek; theory of mind; Austrian methodology; Gödel; incompleteness theorems; methodological dualism; Socialist Calculation Debate

JEL Codes: B0; B4; B53

F.A. Hayek was not only a Nobel-prize-winning economist who made important contributions to monetary, capital, and business cycle theory. Pursuing an interest he had cultivated since his student days, he also made important contributions to neural science and to the theory of mind. These can be found in his book The Sensory Order, which was published in 1952, and the essentials of which were already contained in a manuscript entitled 'Beiträge zur Theorie der Entwicklung des Bewusstseins' which Hayek wrote as a young man at the age of 21. It has been acknowledged, however, that The Sensory Order should not be considered as a mere aside, isolated from Hayek's main preoccupations (Aimar 2008, p. 25). His work in the Austrian tradition in economics, his defense of political liberalism, and his work in theoretical psychology constitute a unified and integrated theoretical perspective (Horwitz 2000).1

The work of the mathematical logician Kurt Gödel, in particular his famous incompleteness theorems, will appear to some as far removed from Hayek's main concerns in social and political theory and in the theory of mind. According to one author, however, 'Hayek may have anticipated by a decade Gödel's own proof' (Tuerck 1995, p. 287). Since claims like these are somewhat remarkable, the relationship between Hayek's theory of mind, and to some extent also his social theory and his methodology, on the one hand, and Gödel's theorems, on the other, will be examined more closely in this article.

Journal of Economic Methodology, Vol. 18, No. 4, December 2011, 387–407
ISSN 1350-178X print/ISSN 1469-9427 online
© 2011 Taylor & Francis
http://dx.doi.org/10.1080/1350178X.2011.628045
http://www.tandfonline.com
*Email: [email protected]; [email protected]; [email protected]



1 Introduction: Tacit knowledge and mechanism

A recurring theme in writings within the Austrian School of economics relates to the role and function of tacit knowledge. Practical knowledge of the kind that is relevant to the exercise of entrepreneurship is mainly tacit, inarticulable knowledge, so this argument goes. This means that the actor knows how to perform certain actions ('know how'), but cannot identify the elements or parts of what is being done, nor whether they are true or false ('know that') (Huerta de Soto 2008, p. 20).

Much of what Hayek has to say about the role and function of tacit knowledge was already implicitly contained in his The Sensory Order (Hayek [1952] 1976). The main conclusion of The Sensory Order was that in discussing mental processes we will never be able to dispense with the use of mental terms, and that we shall have permanently to be content with a 'practical dualism' since '(i)n the study of human action (...) our starting point will always have to be our direct knowledge of the different kinds of mental events, which to us must remain irreducible entities' (Hayek [1952] 1976, p. 191). This conclusion was based on the fact that we shall never be able to achieve more than an explanation of the principle by which the order of mental events is determined, or, stated differently, on the demonstrable limitations of the powers of our own mind fully to comprehend itself. Hayek's conclusion thus was that 'to us mind must remain forever a realm of its own which we can know only through directly experiencing it, but which we shall never be able fully to explain or to reduce to something else' (ibid., p. 194). Despite a certain parallelism of language, Hayek's conclusions were thus markedly different from those of Ludwig von Mises, who seems to have believed that at least the conceptual possibility of such an ultimate reduction of the mental to the physical could not be excluded. In his subsequent papers Hayek also referred on a few occasions to the contribution of Michael Polanyi, in particular his Personal Knowledge2 (Polanyi 1958). Polanyi goes so far as to assert that tacit knowledge is in fact the dominant principle of all knowledge (Polanyi 1959, pp. 24–25). Even the most highly formalized and scientific knowledge invariably follows from an intuition or an act of creation, which are simply manifestations of tacit knowledge.

Both Polanyi and Hayek refer to particular limitative meta-mathematical results, in particular Gödel's theorems, in developing the tacit knowledge thesis.3 As will be argued in this article, however, their positions are subtly although not insignificantly different. Polanyi generally concluded that '(t)he proliferation of axioms discovered by Gödel (...) proves that the powers of the mind exceed those of a logical inference machine (...)' (1958, p. 261) and seems to have rejected Turing's thesis in concluding that neither a machine, nor a neurological model, nor an equivalent robot, 'can be said to think, feel, imagine, desire, mean, believe or judge something (...)' (ibid., p. 263). As will be illustrated further in this article, Hayek's position actually departs from this view and is consistent with the thesis that it is possible to build a machine that passes the Turing test.4

In recent times the debate over the wider philosophical implications of Gödel's theorems has sometimes been framed in terms of mechanism versus anti-mechanism.5 While Polanyi clearly seems to belong to the anti-mechanist camp, we should certainly guard ourselves against characterizing Hayek's position simply as mechanist or mechanistic, however. The term 'mechanism' seems to have no uniform or fixed meaning, although it has often been regarded as a term of abuse. The role of mechanism in human cognition was much discussed in the seventeenth century, in particular by Descartes, Hobbes, and La Mettrie (Davis 2004, p. 208).6


In the terms that are familiar from these classical mechanist/vitalist debates, however, Hayek's position cannot be characterized as either mechanist or vitalist. Hayek's approach is actually more akin to that of an author like Ludwig von Bertalanffy, whose contributions are cited approvingly in The Sensory Order.7 Von Bertalanffy contends that neither classical mechanism nor vitalism provides an adequate model for understanding organic phenomena, and his work in the interdisciplinary field called General System Theory can actually be seen as an attempt to transcend the classical dichotomy between mechanism and vitalism (von Bertalanffy [1969] 2009).8 The question has been the subject of renewed interest in the context of the possibility of machine intelligence. There is every reason to believe that one of the things our brains do is to execute algorithms, although it is unknown and actually subject to controversy whether that is all that they do (Davis 2004, p. 208). Mechanism in the philosophy of nature was originally associated with determinism but eventually parted company with it upon the introduction of probability laws. Its essential feature was then seen to be, in general, a finitistic approach to the description of nature (Webb 1980, p. 30).

The foundation for modern mechanism is Turing's thesis: a procedure (function) is effective just in case it can be simulated (computed) by a Turing machine. This is a very strong thesis, for it says that any effective procedure whatever, using whatever higher cognitive processes one can imagine, is after all finitely mechanizable (Webb 1980, pp. 9, 30). In this context it is worth recalling that Gödel's theorems have been cited in support of mechanism, contrary to what is sometimes supposed. Actually this seems to have been the viewpoint of professional logicians generally (Webb ibid., passim).
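Turing's thesis is a claim about effective procedures in general, but the underlying formal object is simple enough to sketch. The following minimal simulator is purely illustrative (the machine and its transition table are invented for this sketch, not drawn from the article): it runs a two-state machine that appends a 1 to a block of 1s, showing what 'finitely mechanizable' means in practice, namely that a finite table of state transitions exhausts the procedure.

```python
# Minimal Turing machine simulator (illustrative sketch).
# A machine is a finite transition table:
#   (state, symbol) -> (new_symbol, move, new_state)
# Everything the machine "does" is given by this finite table, which is
# the sense in which an effective procedure is finitely mechanizable.

def run_turing_machine(table, tape, state="q0", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as 0
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, 0)
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)]

# A toy "successor" machine: scan right past the 1s, write one more 1, halt.
successor = {
    ("q0", 1): (1, "R", "q0"),    # move right over the block of 1s
    ("q0", 0): (1, "R", "halt"),  # write a trailing 1 and halt
}

print(run_turing_machine(successor, [1, 1, 1]))  # three 1s in -> four 1s out
```

The table is the whole machine: no step appeals to anything beyond the current state and the scanned symbol.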

While Hayek does not refer explicitly to the work of Turing, it remains a remarkable and somewhat paradoxical fact that although Hayek's theoretical psychology culminates in an argument for the indispensability of verstehende psychology (thus strengthening previous arguments for methodological dualism, in particular those of Ludwig von Mises), this conclusion is arrived at without any appeal to an anti-mechanistic or nonmechanistic line of argumentation.

2 Hayek's biologically based trans-mechanistic conception of mind

    2.1 The sensory order

Clearly the few explicit references to Gödel's theorems which can be found in Hayek's writings (see further Section 3.2.2) lead us back to the significant contribution provided in his The Sensory Order (Hayek [1952] 1976). This important work in theoretical psychology has evoked some illuminating comments and elaborations but also some misinterpretations.9

Credit for the resuscitation of The Sensory Order should probably be given to Weimer (1982). Hayek's contribution to cognitive theory has also received explicit recognition from professional neuroscientists such as Gerald Edelman and Joaquin Fuster (Edelman 1982, 1987, 1989; Fuster 2005). An illuminating and recent review of Hayek's theory of mind from an economic perspective is contained in Butos and Koppl (2006). According to these authors, The Sensory Order provides an account of a particular adaptive classifier system (the central nervous system) that produces a classification over a field of sensory inputs. The specific form and character of this classification depends in Hayek's theory on the configuration of the pathways and sorting mechanisms by which the brain organizes itself. But this classificatory structure enjoys a certain plasticity or mutability that reflects the capacity for adaptive responses by the individual in the face of the perceived external environment. Positive and negative feedback helps to maintain a rough consistency between behavior and the actual environment. The way an individual responds to external conditions is fully dependent upon the particular classifications he generates, which is to say that for Hayek individual knowledge is the adaptive response of an individual based on the classification the brain has generated (ibid., p. 31).10
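The Butos and Koppl reading of Hayek, classification over sensory inputs reshaped by environmental feedback, can be caricatured in a few lines of code. The sketch below is purely illustrative and invented for this purpose (the two-class setup, the signals, and the update rule come from neither Hayek nor Butos and Koppl): a set of connection weights classifies incoming signals, and feedback nudges the weights, giving the classificatory structure the 'plasticity' the passage describes.

```python
# Toy adaptive classifier: connection weights sort signals into two classes,
# and feedback from the environment adjusts the weights (plasticity).
# The data and the update rule are invented for this sketch.

def classify(weights, signal):
    # A signal is classified by how strongly it excites the connections.
    score = sum(w * s for w, s in zip(weights, signal))
    return 1 if score > 0 else 0

def adapt(weights, signal, feedback, rate=0.1):
    # Feedback on the mismatch between response and environment
    # reshapes the connection strengths.
    error = feedback - classify(weights, signal)
    return [w + rate * error * s for w, s in zip(weights, signal)]

weights = [0.0, 0.0, 0.0]
# (signal, feedback) pairs; feedback 1 means the response should be class 1.
experience = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 0], 1)] * 20
for signal, feedback in experience:
    weights = adapt(weights, signal, feedback)

print([classify(weights, s) for s in ([1, 0, 1], [0, 1, 0])])
```

The point of the toy is only structural: the 'knowledge' of the system is nothing but the configuration of its connections, exactly as in the passage above.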

From a more strictly neuroscientific perspective, The Sensory Order has been cited as an important scholarly contribution to the understanding of the cerebral foundation of perception and memory (Edelman 1982, p. 24ff., 1987, passim; 1989, p. 281; Fuster 2005, p. 8).

The essence of Hayek's theory from this perspective is the proposition that all of an organism's experience is stored in network-like systems ('maps') of connections between the neurons of its cerebral cortex. Those connections have been formed by the temporal coincidence of inputs from various sources (including the organism itself). In their strength, those connections record the frequency and probability with which those inputs have occurred together in the history of the organism or of the species. A key point, in terms of the representational properties of Hayek's model, is that there is no basic core of elementary sensation. Each sensation derives from experience and from other sensations with which it has been temporally associated in the past, including the past of the species. To postulate in the human cortex representational networks as broad as those envisioned by Hayek presupposes extensive and intricate systems of connections between distant cortical neurons. It was an insightful supposition that he made long before such systems were anatomically demonstrated in the brain of the primate (Fuster ibid., p. 8). Hayek's contribution has sometimes been cited in conjunction with that of Hebb (1949), of which it was independent, however.11

    2.2 The mind as machine

Since the publication of the important papers of Gödel and Turing,12 various positions in the philosophy of mind have been categorized according to whether it is believed that machines can (at least potentially) think, that is, whether artificial intelligence is possible, or conversely, whether the human mind can plausibly be conceived of as some sort of machine.13

It is beyond dispute that Hayek's conception of mind has mechanistic traits. Hayek repeatedly uses machine examples in order to illustrate his theory of the human mind. Thus similar to the classification mechanism of the mind are certain statistical machines for sorting cards on which punched holes represent statistical data (Hayek [1952] 1976, p. 49). When Hayek discusses the differences and analogies between mechanical and purposeful behavior (Hayek [1952] 1976, p. 122ff.) we can still read that similar to the models by which the mind reproduces, and experimentally tries out, the possibilities offered by a given situation are machines like antiaircraft guns and the automatic pilots in airplanes, which show all the characteristics of purposive behavior. Although such machines cannot yet be described as brains, with regard to purposiveness they differ from a brain 'merely in degree and not in kind' (ibid., p. 126). The provisional conclusion that thus suggests itself is that in the grand debate for and against the possibility of AI, it seems more likely that Hayek is to be put in the 'for' camp.

Hayek's theory of mind has been characterized as connectionist (Smith 1997). The older and more orthodox symbol-processing paradigm sees intelligence as a matter of the sequential manipulation of meaningful units (terms, concepts, ideas) of roughly the sort with which we are familiar in reasoned introspection. In contrast to this, the common feature of the many species of information processing systems covered by the general term 'connectionism' is that they are conceptualized as massively parallel processing devices, made up of many simple units. A unit's activity is regulated by the activity of neighboring units, connected to it by inhibitory or excitatory links whose strength can vary according to design and/or learning (Boden 1990b, p. 14).14
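Boden's characterization (many simple units, each regulated by weighted excitatory or inhibitory links from its neighbors) can be sketched directly. The three-unit network below is a made-up example, not taken from any of the sources cited: each update recomputes every unit's activity in parallel from the neighbors' previous activities.

```python
# Connectionist update sketch: each unit's new activity is a thresholded
# weighted sum of its neighbors' activities. Positive weights are
# excitatory links, negative weights inhibitory. The three-unit network
# is invented purely for illustration.

def step(weights, activities):
    # All units update in parallel from the *previous* activity vector.
    new = []
    for row in weights:
        total = sum(w * a for w, a in zip(row, activities))
        new.append(1 if total >= 1 else 0)
    return new

# weights[i][j] = strength of the link from unit j to unit i
weights = [
    [0,  2,  0],   # unit 0 is excited by unit 1
    [0,  0,  2],   # unit 1 is excited by unit 2
    [-2, 0,  0],   # unit 2 is inhibited by unit 0
]

state = [0, 0, 1]
for _ in range(3):
    state = step(weights, state)
    print(state)
```

Nothing here is sequential symbol manipulation: the behavior of the whole is just the simultaneous local interaction of the units.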

A strongly oppositional history of the two branches of AI was provided by Dreyfus and Dreyfus (1988), explaining how, in particular, traditional AI failed to capture holistic perception, context sensitivity, and the recognition of family resemblances and relevance, each better handled by connectionism. In defending their position, Dreyfus and Dreyfus (1988) related AI work to a wide range of philosophical literature, contrasting the Western rationalist tradition with Continental phenomenology and the later Wittgenstein. According to this skeptical view about AI, people do not use a theory about the everyday world, because there is no set of context-free primitives of understanding. Our knowledge is skilled 'know-how', as contrasted with procedural rules, representations, and so forth, or 'knowledge that'.

This issue is also related to the so-called frame problem of AI (Dennett [1984] 1990) which, philosophically speaking, is reminiscent of the eminently Hayekian themes of the tacit domain and the contextual nature of knowledge (Boettke and Subrick 2002, p. 56). The frame problem relates to the question of unconscious information appreciation that we all engage in when making choices. For instance, everyday thinking about the material world does not employ theoretical physics, but rather 'naïve physics' (Hayes 1979), our untutored and largely unconscious knowledge of the environment, which is involved in sensorimotor skills and linguistic understanding. Likewise, our practical and linguistic grasp of social life depends on 'naïve psychology', consisting not of empirical generalizations about how people behave but of the fundamental concepts and inference-patterns defining everyday psychological competence.15

Also mentioned in Dreyfus' paper was D.O. Hebb's inspiring (1949) contribution, which suggested that a mass of neurones could learn if, when neurone A and neurone B were simultaneously excited, that excitation increased the strength of the connection between them (ibid., pp. 311–312).
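Hebb's rule as paraphrased here (simultaneous excitation of two neurones strengthens the connection between them) is easy to state in code. The sketch below is the generic textbook Hebbian update, not an implementation from Hebb (1949); the network size, activity patterns, and learning rate are invented.

```python
# Hebbian learning sketch: the weight between two units grows whenever
# both are active at the same time ("fire together, wire together").
# Sizes, patterns, and the learning rate are invented for illustration.

def hebbian_update(weights, activities, rate=0.5):
    n = len(activities)
    for i in range(n):
        for j in range(n):
            if i != j:
                # Strengthen the i-j connection only on co-activation.
                weights[i][j] += rate * activities[i] * activities[j]
    return weights

n = 3
weights = [[0.0] * n for _ in range(n)]
# Units 0 and 1 repeatedly fire together; unit 2 stays silent.
for _ in range(4):
    weights = hebbian_update(weights, [1, 1, 0])

print(weights[0][1], weights[0][2])  # the 0-1 link has grown; 0-2 has not
```

Note that the rule is purely local: each connection is adjusted from the activities of the two units it joins, with no global supervisor.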

Adepts of the physical symbol system hypothesis have remained unconvinced, however, while the debate has gone on. As Simon explains, '[u]ntil connectionism has demonstrated, which it has not yet done, that complex thinking and problem-solving processes can be modeled as well with parallel connectionist architectures as they have been with serial architectures, and that the experimentally observed limits on concurrent cognitive activity can be represented in the connectionist models, the case for massive parallelism outside the sensory functions remains dubious' (Simon 1996, p. 82).

One might also object that connectionism, too, studies computational systems, whose units compute by rigorously specified processes or rules (Boden ibid., p. 17).

Thus Edelman considers with respect to connectionist systems that '[u]nlike classical work in artificial intelligence (...) these models use distributed processes in networks, and changes in connections occur in part without strict programming. Nonetheless, connectionist systems need a programmer or operator to specify their inputs and their outputs, and they use algorithms to achieve such specification. While the systems allow for alterations as a result of experience, the mechanism of this learning is instructional not selectional (...). The architectures of neural networks are removed from biological reality, and the networks function in a manner quite unlike the nervous system' (Edelman 1992, p. 227). This author intends to develop a biologically based epistemology and to dispel the notion that the mind can be understood in the absence of biology (ibid., p. 211).


However, Hayek's theory also has a solid grounding in evolutionary theory. Hayek summarizes the central thesis by saying that 'we do not first have sensations which are then preserved by memory, but it is as a result of physiological memory that the physiological impulses are converted into sensations. The connexions between the physiological elements are thus the primary phenomenon which creates the mental phenomena (...)' (Hayek [1952] 1976, p. 53), a phrase already contained in the early German draft of what would later become The Sensory Order.

Evolution establishes certain connections. Many properties of the set of connections (and perhaps many specific connections) are determined by the history of the organism's species. The history of the individual then operates on these connections at, as it were, a higher level to form higher order classes of connections among nerve fibers. Evolution may also establish a set of possible patterns of connection, implementing one rather than the others on the basis of the organism's personal history (Butos and Koppl ibid., p. 13). The theory is thus also consistent with the hypothesis of a certain variability and plasticity of the classificatory apparatus at the individual level.

Hayek sees a distinction between the phenomenal order of the mind and the physical order of the external world. Hayek is no vitalist, however, and he clearly sees both phenomenal and physical orders in quasi-mechanistic terms. Nevertheless, as Butos and Koppl explain, '[f]or human cognitive functioning, (...) this process (...) has the capacity for self-conscious and reflective activity, thus providing substantial scope for critical, argumentative, and self-reordering properties'. Thus individuals are not mere processors of information, passively responding to stimuli. Human cognitive activity, despite being constrained by rules and its own physiology, should be understood as an active, input-transforming, knowledge-generating adaptive system (ibid., p. 31).

3 The computational legacy: Hayek and Gödel's incompleteness theorems

    3.1 Hayek as a precursor of modern complex systems theory

The recent literature on the role and functioning of markets as complex adaptive systems (CAS) has acknowledged the significance of certain developments in the foundations of mathematics permitting definitive formulations of CAS in terms of what cannot be formally and practically computed (NP-hard problems)16 and hence needs to emerge or self-organize. In particular the Gödel–Turing results on incompleteness and algorithmically unsolvable problems are perceived as having established, for the first time, the logical impossibility limits to formalistic calculation or deductive methods. 'In the absence of these limits on computation there is in principle no reason why all observed patterns and new forms cannot be achieved by central command' (Markose 2005, p. F160). Hayek's early contributions are invariably cited in this context. Thus according to Markose (2005, p. F165), Hayek '(...) was one of the first economists who explicitly espoused the Gödelian formalist incompleteness and impossibility limits on calculation which he referred to as the limits of constructivist reason'. This led Hayek to the necessity of experientially driven adaptive solutions, with the abiding premise of his large oeuvre of work being that market institutions which co-evolved with human reason enable us to solve problems which it would be impossible to solve by direct rational calculation. The precursory role of the classical eighteenth-century political economy of the Scottish Enlightenment is also frequently cited in this context. As Markose (ibid., pp. F159–F160) explains, '[i]t has been held that order in market systems is spontaneous or emergent: it is the result of human action and not the execution of human design'. This early observation, well known also from the Adam Smith metaphor of the invisible hand, premises a disjunction between system-wide outcomes and the design capabilities of individuals at a micro level and the distinct absence of an external organizing force.17

Another towering figure of the twentieth century to be mentioned here is John von Neumann. According to Markose (ibid., p. F162), it is the work of John von Neumann in the 1940s on self-reproducing machines as models for biological systems and self-organized complexity which provides a landmark transformation of dynamic systems theory, based on motion, force, and energy, to the capabilities and constraints of information processors modeled as computing machines. Indeed the von Neumann models based on cellular automata have laid the ground rules of modern complex systems theory regarding:

(a) the use of large ensembles of micro-level computational entities or automata following simple rules of local interaction and connectivity,

(b) the capacity of these computational entities to self-reproduce and also to produce automata of greater complexity than themselves, and

(c) use of the principles of computing machines to explain diverse system-wide or global dynamics.
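Point (a), simple local rules generating system-wide dynamics, can be illustrated with a far simpler relative of von Neumann's construction: a one-dimensional cellular automaton. The specific rule used below (elementary rule 110) is my choice for illustration and does not appear in the article; each cell updates from just its own state and its two neighbors', yet the global pattern that unfolds is famously complex.

```python
# One-dimensional cellular automaton sketch (elementary rule 110).
# Each cell's next state depends only on its left neighbor, itself, and
# its right neighbor: purely local rules, globally complex dynamics.
# Rule 110 is chosen for illustration; von Neumann's self-reproducing
# automaton is two-dimensional and vastly more elaborate.

RULE = 110  # the 8 bits of 110 give the next state for each neighborhood

def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        # Pack the 3-cell neighborhood into a number from 0 to 7.
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((RULE >> neighborhood) & 1)
    return nxt

cells = [0] * 31
cells[15] = 1  # a single live cell in the middle
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

No cell "knows" anything about the global pattern; the system-wide structure is entirely emergent from the local table, which is the structural point the CAS literature takes from von Neumann.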

Although at least one author has attempted to make the case that '[s]ociety is for Hayek a complex automaton in the sense of Von Neumann (...)' (Dupuy 1992, p. 39) and that Hayek's critique of constructivist rationalism in social science can therefore be seen as having anticipated von Neumann's critique of McCulloch's 'artificialist' philosophy (Dupuy 2009, p. 140), the relationship between Hayek's contribution and that of von Neumann is perhaps somewhat less clear.18

3.2 Hayek on the significance of Gödel's theorems

3.2.1 The meaning of Gödel's theorems

Hayek indeed refers to Gödel's theorems on a few occasions (see the next section), but let us first recall the meaning and content of Gödel's theorems.19 Considered the greatest mathematical logician of the twentieth century, Gödel is renowned for his proofs of several fundamental results, one of which established the syntactic incompleteness of formal number theory.

Gödel's idea was to construct a statement S that, in effect, asserts 'There is no proof of me', or 'I am not provable'. Gödel's remarkable achievement was to manage to encode such a statement in the language of number theory.

The first theorem states the following:

Any consistent formal system S within which a certain amount of elementary arithmetic can be carried out is incomplete with regard to statements of elementary arithmetic: there are such statements which can neither be proved, nor disproved in S (Franzén 2005, p. 16).

Gödel's proof is based on two simple ideas. The first is Gödel numbering, which is a means of encoding each formula or sequence of formulas as a natural number in a systematic and mechanical way. The second idea of the proof is self-reference: the proof is based on the construction of a formula which is carefully devised so that it asserts its own unprovability.
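Gödel numbering is concrete enough to sketch. One standard scheme, shown below with an invented toy symbol alphabet (Gödel's own encoding used a different symbol table), assigns each symbol a code and encodes a formula as a product of prime powers, so that the encoding is mechanical and, by unique prime factorization, uniquely reversible:

```python
# Gödel numbering sketch: a formula (sequence of symbols) is encoded as
# 2^c1 * 3^c2 * 5^c3 * ..., one prime per position, exponent = symbol code.
# Unique prime factorization makes the encoding mechanically reversible.
# The symbol codes are an invented toy alphabet, not Gödel's original.

CODES = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
SYMBOLS = {v: k for k, v in CODES.items()}

def primes(n):
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    number = 1
    for prime, symbol in zip(primes(len(formula)), formula):
        number *= prime ** CODES[symbol]
    return number

def decode(number):
    formula = ""
    for prime in primes(len(str(number)) * 4):  # more primes than needed
        if number == 1:
            break
        exponent = 0
        while number % prime == 0:
            number //= prime
            exponent += 1
        formula += SYMBOLS[exponent]
    return formula

n = godel_number("S0=S0")  # "the successor of 0 equals the successor of 0"
print(n, decode(n))
```

Because every syntactic operation on formulas corresponds to an arithmetical operation on their code numbers, statements *about* the formal system can themselves be expressed *inside* the system, which is what makes the self-referential sentence possible.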

Gödel's second theorem can be stated as follows:

For any consistent formal system S within which a certain amount of elementary arithmetic can be carried out, the consistency of S cannot be proved in S itself.


The proofs of these theorems apply to any (consistent) system of mechanically recognizable axioms that is powerful enough to describe the natural numbers. Thus, completeness cannot be restored simply by adding a true but unprovable statement as a new axiom, for the resulting system is still strong enough for Gödel's theorem to apply to it.

It was generally believed that these two remarkable incompleteness theorems proved by Gödel destroyed the hope that it would be possible, at least in principle, to fulfill the program set out by Hilbert.

Hilbert's program had two main goals concerning the foundations of mathematics. The first was descriptive, the second justificatory. The descriptive goal was to be achieved by means of the complete formalization of mathematics. The justificatory goal was to be achieved by means of a finitary (and hence epistemologically acceptable) proof of the reliability of those essential but nonfinitary (and hence epistemologically more suspect) parts of mathematics. Work by both formalists and logicists during the first two decades of the last century had effectively accomplished the former of these two goals. Ideally a finitary consistency proof would accomplish the latter (Irvine 1996, p. 27).

    It is precisely the satisfaction of this requirement of finitarily demonstrable

consistency that is thought to have been called into question in particular by Gödel's

    second incompleteness theorem. One of the philosophically significant corollaries of

    the second incompleteness theorem is that any consistency proof for a theory of which the

    second incompleteness theorem holds will have to rely upon methods logically more

    powerful than those of that theory itself.

3.2.2 Hayek on Gödel

On at least two occasions Hayek explicitly mentions and comments upon Gödel's

    incompleteness theorem. In his Rules, Perception and Intelligibility (Hayek 1967a),

    Hayek considers the possibility of an inherent limitation of our possible explicit

    knowledge and, in particular, the impossibility of ever fully explaining a mind of the

    complexity of our own (ibid., p. 60). He states that there will always be some rules

    governing a mind which that mind in its then prevailing state cannot communicate, and

    that, if it ever were to acquire the capacity of communicating these rules, this would

    presuppose that it had acquired further higher rules which make the communication of the

    former possible but which themselves will still be incommunicable (ibid., p. 62).

He then continues:

To those familiar with the celebrated theorem due to Kurt Gödel it will probably be obvious that these conclusions are closely related to those Gödel has shown to prevail in formalized arithmetical systems. It would thus appear that Gödel's theorem is but a special case of a more general principle applying to all conscious and particularly all rational processes, namely the principle that among their determinants there must always be some rules which cannot be stated or even be conscious. At least all we can talk about and probably all we can consciously think about presupposes the existence of a framework which determines its meaning, i.e., a system of rules which operate us but which we can neither state nor form an image of and which we can merely evoke in others in so far as they already possess them (ibid., p. 62).

    The second occasion occurred during the Symposium that was held in 1968 in Alpbach in

    the Austrian Tyrol, and was a major event in the history of systems theory. Participating

    were, besides Hayek, among others, Paul Weiss, Ludwig von Bertalanffy, and C.H.

    Waddington.

    The notion of self-organization was the real theme of the conference, even if it was not

    called by this name (Dupuy 2009, p. 76).

    L.M.P. van den Hauwe394

  • The conference proceedings (containing the papers presented during this symposium

    as well as most of the content of the discussions) were edited and published by Arthur

    Koestler and J.R. Smythies under the title Beyond Reductionism New Perspectives in the

    Life Sciences (Koestler and Smythies 1969). It was during this symposium that Hayek

    presented a paper entitled The primacy of the abstract (ibid., pp. 309333) in which he

    expounded and defended a thesis already implicitly contained in his earlier book The

    Sensory Order, namely that all the conscious experience that we regard as relatively

    concrete and primary, in particular all sensations, perceptions and images, are the product

    of a super-imposition of many classifications of the events perceived according to their

    significance in many respects, and that [t]hese classifications are to us difficult or

    impossible to disentangle because they happen simultaneously, but are nevertheless the

    constituents of the richer experiences which are built up from these abstract elements

    (ibid., p. 310).

    During the lively discussion that followed, this theme of the mind eluding full self-

    awareness or self-consciousness was further elaborated upon, and Hayek further

    explained:

The example is the thesis that on no adding machine with an upper limit to the sum it can show is it possible to compute the number of different operations this machine can perform (if any combination of different figures to be added is regarded as a different operation). ( . . . ) It seems to me that this can be extended to show that any apparatus for mechanical classification of objects will be able to sort out such objects only with regard to a number of properties which must be smaller than the relevant properties which it must itself possess; or, expressed differently, that such an apparatus for classifying according to mechanical complexity must always be of greater complexity than the object it classifies. If, as I believe it to be the case, the mind can be interpreted as a classifying machine, this would imply that the mind can never classify (and therefore never explain) another mind of the same degree of complexity. It seems to me that if one follows up this idea it turns out to be a special case of the famous Goedel theorem about the impossibility of stating, within a formalized mathematical system, all the rules which determine that system (ibid., p. 332).20
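Hayek's adding-machine example can be checked by direct counting. A minimal sketch in Python (the display limit of 10 is an arbitrary illustrative choice): a machine that can display sums only up to some limit can show fewer distinct results than the number of distinct addition operations it can perform, so it cannot display the count of its own operations.

```python
LIMIT = 10  # largest sum the machine can display (illustrative)

# Distinct results the machine can show: 0, 1, ..., LIMIT
displayable = LIMIT + 1

# Distinct operations: unordered pairs (a, b) with a + b <= LIMIT,
# each combination counted as one "different operation" in Hayek's sense
operations = sum(1 for a in range(LIMIT + 1)
                 for b in range(a, LIMIT + 1 - a))

# The machine would have to display `operations` to report on itself,
# but that number exceeds the largest sum it can show
print(displayable, operations)  # 11 distinct results, 36 operations
```

With a display limit of 10 the machine can show only 11 distinct results, yet it can perform 36 distinct additions, a number it cannot itself display.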

It is nevertheless important not to be mistaken about the strength of the relationship between Hayek's thesis that any explanation, which always rests on classification, is limited by the fact that 'any apparatus of classification must be of a higher degree of complexity than what it classifies', and that therefore 'the human brain can never fully explain itself' (Weimer and Palermo ibid., p. 292), on the one hand, and Gödel's theorem in mathematical logic, on the other. This relationship is of the nature of an analogy, or even merely metaphorical; it is not inferential, nor of a strictly logical character.21

4 Hayek's theory of mind and the philosophy of AI

As highlighted already, Hayek saw Gödel's proof as a special case of the more general argument offered in The Sensory Order about the inability of the brain to explain itself. Just as there are statements about the brain that are true but that cannot be explained in terms of the logic of the brain, there are statements about a formal system that are true but that cannot be explained in terms of the logic of that system. In reflections such as these, it is commonly also the second incompleteness theorem that is implicitly or explicitly referred to. The inability of a formal system, say S, to prove its own consistency is interpreted as an inability of S to sufficiently analyze and justify itself, or as a kind of blind spot. The system does not understand itself (also Franzén 2005, p. 125).

As mentioned previously, one author concludes that, '[w]ith this insight, Hayek may have anticipated by a decade Gödel's own proof' (Tuerck 1995, p. 287). However, as also pointed


out already, one should not be mistaken about the strength of the relationship between Gödel's proof and Hayek's insight. While Gödel's proof may be thought of as providing a metaphor for or an analogy with a true statement about the brain, the proposition that 'according to Gödel's Incompleteness Theorem, understanding our own minds is impossible' is, when taken literally, clearly mistaken if 'according to' means 'as stated or implied by', since of course Gödel's Incompleteness Theorem neither states nor implies that understanding our own minds is impossible.

    That the relationship is a matter of inspiration rather than implication is also brought

    out by Hofstadter who writes (Hofstadter 1980, p. 697):

The other metaphorical analogue to Gödel's theorem which I find provocative suggests that ultimately, we cannot understand our own minds/brains. This is such a loaded, many-leveled idea that one must be extremely cautious in proposing it. ( . . . ) All the limitative Theorems of meta-mathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally.

Gödel's theorems have stimulated many philosophical speculations outside the philosophy of mathematics. In particular, Gödel's theorem has been a battleground for philosophers arguing about whether the human brain is a machine. Repeated attempts have been made to apply Gödel's theorems in order to demonstrate that the powers of the human mind outrun any mechanism or formal system. Such a Gödelian argument against mechanism was considered, if only in order to refute it, already by Turing ([1950] 1990, p. 52) who, proposing the imitation game, concluded that '[w]e too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines'.22

J.R. Lucas (1961, p. 43) famously proclaimed that Gödel's incompleteness theorem proves 'that Mechanism is false, that is, that minds cannot be explained as machines'. He stated that 'a machine cannot be a complete and adequate model of the mind. It cannot do everything that a mind can do, since however much it can do, there is always something which it cannot do, and a mind can' (ibid., p. 47). More recently, very similar claims have been put forward by Roger Penrose (1995, 1999).23

All these arguments insist that Gödel's theorems imply, without qualifications, that the human mind infinitely surpasses the power of any finite machine. It is now mostly accepted, however, that these Gödelian anti-mechanist arguments are generally flawed (Raatikainen 2005, p. 522ff.): Gödel's theorem cannot be used to show that mathematical insight cannot be algorithmic (Davis 1993).

The basic error of such arguments can actually be pointed out rather simply, based on an objection going back to Putnam (1960). The argument assumes that for any formalized system, or finite machine, there exists the Gödel sentence (saying that it is not provable in that system) which is unprovable in that system, but which the human mind can see to be true. Yet Gödel's theorem in reality has a conditional form, and the alleged truth of the Gödel sentence of a system depends on the assumption of the consistency of the system.

Putnam's argument is still quite illuminating and worth quoting in full:

It has sometimes been contended ( . . . ) that the theorem (i.e., Gödel's theorem) does indicate that the structure and power of the human mind are far more complex and subtle than any nonliving machine yet envisaged ( . . . ), and hence that a Turing machine cannot serve as a model for the human mind, but this is simply a mistake.

    Putnam further explains why this is a mistake:


Let T be a Turing machine which represents me in the sense that T can prove the mathematical statements I can prove. Then the argument ( . . . ) is that by using Gödel's technique I can discover a proposition that T cannot prove, and moreover I can prove this proposition. This refutes the assumption that T represents me, hence I am not a Turing machine. The fallacy is a misapplication of Gödel's theorem, pure and simple. Given an arbitrary machine T, all I can do is find a proposition U such that I can prove:

    (a) If T is consistent, U is true,

where U is undecidable by T if T is in fact consistent. However, T can perfectly well prove (a) too! And the statement U, which T cannot prove (assuming consistency), I cannot prove either (unless I can prove that T is consistent, which is unlikely if T is very complicated)! (Putnam 1960, p. 77).

    The anti-mechanist argument thus requires that the human mind can always see

    whether the formalized theory in question is consistent, which is highly implausible.
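Putnam's point admits a compact symbolic sketch (notation ours, with G_T the Gödel sentence of T): what both the machine and the human prover can establish is only the conditional, not its consequent:

```latex
T \vdash \bigl(\mathrm{Con}(T) \rightarrow G_T\bigr),
\qquad\text{yet}\qquad
T \nvdash G_T \ \text{ if } T \text{ is consistent.}
```

Detaching G_T requires a proof of Con(T), which neither T nor, plausibly, the human mathematician possesses for a sufficiently complicated T.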

In his Shadows of the Mind (Penrose 1995, Chapter 3), Penrose in particular also considers the logical possibilities that mathematical belief might be the result of an unknown unconscious algorithm, or possibly of a knowable algorithm that cannot be known to be, or firmly believed to be, one that underlies mathematical belief, and rejects these as not at all plausible (ibid., p. 127 ff.). As Putnam points out, however, Penrose, who limits his discussion to rules 'which are simple enough ( . . . ) to appreciate in a perfectly conscious way' (ibid., p. 132), completely misses 'the possibility ( . . . ) that each of the rules that a human mathematician explicitly relies on, or can be rationally persuaded to rely on, can be known to be sound and that the program generates all and only these rules but that the program itself cannot be rendered sufficiently perspicuous for us to know that that is what it does' (Putnam 1995, p. 371).

    While Lucas, Penrose, and others have thus certainly attempted to reply to such

    criticism, and have made some further moves, the fact remains that they have never really

    managed to get over the fundamental problem stated above. According to the view that

    now prevails among professional logicians, they have at best changed the subject

    (Raatikainen ibid., p. 523).

5 Hayek's position: From mechanistic to trans-mechanistic thinking

Lucas's critique was only one of several lines of attack on Turing's position, arguing that, contrary to his assumption, it is not actually possible, in principle or in practice, to get computers to perform in a way that matches the depth, range, and flexibility of human minds. According to this view, technological AI is not outlawed (useful AI systems have already been produced), but the Holy Grail of AI and computational psychology, a detailed computer model of human mental processes, is impossible and/or infeasible (Boden 1990b, p. 6).

We were unable to locate any evidence, however, that Hayek actually embraces the anti-mechanistic thesis. This fact has not invariably been acknowledged (e.g. Boettke and Subrick 2002, p. 54; see further below).

    Whereas arguments against the possibility of AI in the style of Lucas and Penrose tend

    to suggest that our self-knowledge proves that we are better than machines, one could, as

    the Hayekian recognizes, equally well use the fact that formal systems cannot know

    themselves to claim that human self-knowledge is not possible (also Tuerck ibid., p. 288).

Recently one author has taken the fact of individual self-ignorance as the starting point for a proposed extension of the Austrian paradigm, explaining that '[t]he existence of a zone of tacit knowledge within the mind of actors gives rise to a phenomenon of internal ignorance which is itself at the origin of a problem of infra-individual dis-coordination. It is in reaction to this situation that a process of awareness is implemented via an intropreneurial activity, ( . . . ) but this very endeavor is faced with boundaries beyond which the mind cannot go' (Aimar 2008, p. 41).

Still, any dichotomy between 'mechanist(ic)' and 'anti-mechanist(ic)' needs to be treated with great delicacy, and it would be misleading to characterize Hayek's position simply as mechanistic or mechanist. As pointed out already, Hayek's approach is actually more akin to that of Ludwig von Bertalanffy. Von Bertalanffy proposed an organismic model on which a well-grounded explanatory theory of life can be built. His model represents organisms as wholes or systems that have unique system properties and conform to irreducible system laws. From this viewpoint, which is better characterized as trans-mechanistic, the traditional dichotomy of mechanistic versus anti-mechanistic must be rejected as hopelessly simplistic.

    6 From the mind to the market

Some authors within the Austrian School go beyond the mere recognition of the existence of certain functional similarities between the brain and markets (in particular the fact that both exhibit a spontaneous, polycentric order) by arguing that there also exist certain analogies and similarities between certain arguments against the possibility or viability of strong versions of AI, in particular Searle's, and Hayek's critique of socialist planning.

Thus Boettke and Subrick (2002, p. 54) argue that 'the sort of criticisms that John Searle ( . . . ) raises against hard AI concerning the distinction between syntax and semantics is analogous to the criticism that one finds in Hayek ( . . . ) about the knowledge problem that socialist modes of economic organization would have to confront. Moreover, Hayek's ( . . . ) own work on theoretical psychology raises the same Gödelesque critique of the study of the mind that Roger Penrose (1999) offers against overzealous claims of the computational theory of the mind'.24

    The implications of this interpretation seem to be: first, that Hayek belongs to the anti-

    mechanist camp in the philosophy of AI, together with Penrose and other similarly minded

    thinkers, and second, that in the grand debate for and against the possibility of AI, Hayek

belongs to the 'against' camp, or at least to the group of those who have skeptical doubts

    about the possibility of AI.

On a closer reading, and as pointed out already, both these theses are highly questionable. As David Tuerck concludes, '[m]arkets, like the brain (and potentially computers), exhibit a spontaneous, polycentric order. Gödel's proof, correctly understood, reminds us that there are limits to knowledge, human and mechanical, while permitting us to consider the possibility that knowledge can, indeed, be mechanized. The supposed distinction between mechanistic and economic thinking stems from a failure to understand that thinking can be both mechanistic and spontaneous' (Tuerck 1995, p. 290).

    There can be no doubt that in The Sensory Order Hayek explicitly envisions the

    possibility of a simulation of the human brain by a machine or computer, a thesis which at

    first sight may certainly seem paradoxical. Indeed, according to Hayek, the fact that the

    human brain cannot provide an explanation of itself does not exclude the logical

    possibility that the knowledge of the principle on which the brain operates might enable us

    to build a machine fully reproducing the action of the brain and capable of predicting how

    the brain will act in different circumstances (Hayek [1952] 1976, p. 189).

Hayek continues: '[s]uch a machine, designed by the human mind yet capable of explaining what the human mind is incapable of explaining without its help, is not a self-contradictory conception in the sense in which the idea of the mind directly explaining its own operations involves a contradiction. The achievement of constructing such a machine would not differ in principle from that of constructing a calculating machine which enables us to solve problems which have not been solved before, and the results of whose operations we cannot, strictly speaking, predict beyond saying that they will be in accord with the principles built into the machine. In both instances our knowledge merely of the principle on which the machine operates will enable us to bring about results of which, before the machine produces them, we know only that they will satisfy certain conditions' (ibid., p. 189).

It is thus fairly clear that Hayek's account implies the possibility of building a machine

    that passes the Turing test.

Apparently the explanation lies in the distinction between what the apparatus of classification (mind or machine) produces in the form of results and the principles according to which it generates those results. According to one author, a machine whose principles of operation we understand but that nevertheless generates results that reproduce the actions of the human brain would be somehow functionally equivalent to such a brain (Tuerck ibid., p. 284). But actually a machine that could predict our thoughts would not be equivalent to a human mind, but superior to it. The machine Hayek describes in The Sensory Order has super-human capabilities.25

    This conclusion will perhaps seem to be at odds with a widespread perception among

    Austrian economists that there is in economics no room for any thinking of a mechanistic

    kind. Ludwig von Mises ([1949] 1996, p. 25) distinguishes between two principles

    available for a mental grasp of reality, namely, those of teleology and causality. The study

    of economics is aimed at those aspects of mind and of human action that are purposeful.

    Mechanism is the bailiwick of other sciences such as physics. Buchanan carries the

    dichotomy between thinking of a mechanistic kind and economic thinking even further,

rejecting as sterile the whole means-end characterization of the economic problem, and arguing that '[t]he market or market organization is not a means toward the accomplishment of anything. It is, instead, the institutional embodiment of the voluntary exchange processes that are entered into by individuals in their several capacities' (Buchanan 1964, pp. 30-31).

Nevertheless the market organization to which Buchanan refers, when it is conceived of as a spontaneous or polycentric social order (that is, as a decentralized price system rather than as planned or monocentric), presents a clear analogy with the human brain.26

    An explanation is provided by Hayek in the following terms:

In both cases we have complex phenomena in which there is a need for a method of utilizing widely dispersed knowledge. The essential point is that each member (neuron, or buyer, or seller) is induced to do what in the total circumstances benefits the system. Each member can be used to serve needs of which it doesn't know anything at all. Now that means that in the larger (say, economic) order, knowledge is utilized that is not planned or centralized or even conscious. The essential knowledge is possessed by literally millions of people, largely utilizing their capacity of acquiring knowledge that, in their temporary position, seems to be useful. Now the possibility of conveying to any kind of central authority all the knowledge an authority must use, including what people know they can find out if it becomes useful to them, is beyond human capacity ( . . . ). In our whole system of actions, we are individually steered by local information, information about more particular facts than any other person or authority can possibly possess. And the price and market system is in that sense a system of communication, which passes on (in the form of prices, determined only on the competitive market) the available information that each individual needs to act, and to act rationally (Weimer and Hayek 1982, pp. 325-326).


Hayek acknowledges the fact that the similarity is only a partial one: whereas for the individual brain, decisions are made by first modeling the question of what action to take and then sending the results to a central authority for execution, for a decentralized price system, decisions take place directly, without the necessity of communicating them first to any central authority, thus making the use of more information possible than could be transmitted to, and digested by, a centre (Hayek 1967b, pp. 73-74).27

A decentralized price system is superior to central planning, despite the fact that the

    actions of every organism that makes up a spontaneous or polycentric social order are

themselves centrally directed by a brain. This superiority rests precisely on the brain's

    ability to model alternative courses of action before selecting the course of action to be

    taken by the organism that it directs. Purposive behavior takes place when the organism

    selects from these alternative courses of action the one that it identifies as having the most

    desired consequences.

    The inferiority of central planning, by contrast, rests on the fact that it wastes

    information, not only because of the limited capacity of the planners to receive and digest

    the information communicated to them, but also because of the inability of individual

    economic agents to communicate all of the information that they have. This inability stems

    from their more general inability to state or communicate all the various rules that govern

their actions and perceptions, which brings us back to the Gödelian metaphor.

    7 The socialist calculation debate

    The foregoing remarks, in particular concerning the absence of any anti-mechanistic line

    of argumentation in Hayeks reasoning, could thus never throw into doubt the ongoing

    relevance of the Austrian critique of socialist economic planning. In particular it seems

    that economists who thought that the model of market socialism effectively answered the

    critique of socialism proposed by Ludwig von Mises and F.A. Hayek were misled by

    the general equilibrium model. General equilibrium theory captures in abstract form the

    interconnectedness of all markets in an economic system, but it does so at the cost of

    assuming away the processes through which the division of knowledge in society is

    coordinated so that the interconnectedness can be realized.

Interconnectedness of economic behavior was coordinated through conscious design by hypothesis, but the question of how in actuality agents would indeed acquire and utilize the information needed to realize efficient solutions was left unexamined (Boettke and Subrick ibid., pp. 55-56).

    A particular dimension has been added to this ongoing debate since the more recent

    development of the research program of computable economics.28

    Computational economics can be seen as a discipline that encompasses three different

    ways of looking at economic and social systems: (1) Can we computationally predict the

    behavior of some (economic, social) phenomenon? (2) Can we formulate in a constructive

    way the main results from mathematical economics? (3) Finally, can we look at economic

(and social) processes as computing devices? (Bartholo, Cosenza, Doria, and de Lesa 2009, pp. 72-73).

    With respect to the last question, several of the main contributions have been made by

    A.A. Lewis and K.V. Velupillai. One of the most striking results in the meta-mathematics

    of economic models was the proof by Lewis that the (formal) theory of Walrasian models

with a computable presentation is an undecidable theory. Lewis's chief result on the undecidability of game theory was that recursively presented games have a nonrecursive arithmetic degree (Lewis 1992a, b).


The central point emphasized by these authors is well summarized by Bartholo et al. (2009, p. 73), who state that 'once you build your argument within a framework that includes enough arithmetic to develop recursive function theory in it, then you get Gödel incompleteness everywhere, you get undecidable sentences of all sorts, including those that deal with interesting or pressing theoretical questions'. One of these questions relates to the presumed possibility of planned economies.

Reference in this context can be made to the pioneering attempts of Newton da Costa and Francisco Doria, who obtained some quite general undecidability and incompleteness results of consequence to the sciences that use mathematics as their main language and predictive tool. These results show some of the limits that come to the forefront when one tries to look at some of the central questions of every science from the perspective of the mathematician, such as: What can we know about the future? What can we know about the world through a formal language? Which are the limitations imposed on our empirical, everyday knowledge when we try to describe the world around us with the help of a formalized language? (da Costa and Doria 1994, p. 1; 2005, pp. 16-17).

One implication of their results is that the main argument by Lange in favor of a planned economy clearly breaks down. As these authors conclude, Lange thought that 'given the (possibly many) equations defining an economy, a huge and immensely powerful computer would always be able to figure out the equilibrium prices, therefore allowing (at least theoretically) the existence of an efficient global policy maker' (da Costa and Doria 1994, p. 13).

Generally the results obtained within the field of computational economics disprove the once-believed conjecture that, given the equations defining an economy, some gigantic supercomputer would always be able to calculate the equilibrium prices, therefore allowing (at least theoretically) the existence of an efficient global policy maker. For general mathematical models, the matter is algorithmically unsolvable (Bartholo et al. 2009, p. 78). Clearly undecidability and incompleteness not only matter for mathematical economics, but also have important practical consequences.29

Velupillai's book Computable Economics sketches the main research lines in the field. A full chapter (chap. 3, pp. 28-43) is devoted to the approach that identifies the rational behavior of an economic agent with the behavior of a Turing machine. A central result, proved as a direct consequence of the unsolvability of the halting problem for Turing machines, is that, if rational agents are identified with Turing machines, preference orderings are undecidable: there is no effective procedure to generate preference orderings (ibid., p. 38).
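The unsolvability of the halting problem that underlies this result rests on a diagonal construction, which can be sketched in Python (an illustrative diagonalization, not Velupillai's formal proof): any candidate 'decider' for halting is defeated by a program built to do the opposite of whatever the decider predicts about it.

```python
def diagonal(halts):
    """Given a claimed halting decider `halts` for zero-argument
    callables, build the program it must misclassify."""
    def d():
        if halts(d):
            while True:   # loop forever exactly when `halts` says we halt
                pass
        # otherwise return immediately, i.e. halt
    return d

# Any concrete candidate is refuted. E.g. a decider that always answers
# "does not halt" is contradicted by its diagonal program, which halts:
never = lambda f: False
d = diagonal(never)
d()  # returns at once, although `never` claimed d would loop forever
```

Since the construction works uniformly for every candidate decider, no effective procedure can decide halting, and hence none can decide the preference orderings of Turing-machine agents.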

    Later chapters in the same book introduce some intriguing new explorations in

    computable economics. In this context the author also reconsiders the status of the

    socialist calculation debate in the light of algorithmic and computational complexity

    theories.

As this author aptly concludes, '[i]t is as if the ghosts of Lange and Taylor have never been properly exorcised, in spite of the powerful empirical reasons put forward by von Mises, Hayek, and Robbins. ( . . . ) I conjecture that the tools of algorithmic and computational complexity theory can be used to indicate the computational infeasibility of the institutions of so-called socialist market economies based on Lange-Taylor type arguments' (Velupillai 2000, p. 164).


8 The case for methodological dualism and concluding remarks

Hayek's theoretical psychology culminates in an argument for methodological dualism and the inevitability of a verstehende psychology. He concluded that '[w]hile our theory leads us to deny any ultimate dualism of the forces governing the realms of mind and that of the physical world respectively, it forces us at the same time to recognize that for practical purposes we shall always have to adopt a dualistic view' (Hayek [1952] 1976, p. 179).30

While from a practical methodological perspective the methodological theses and conclusions of Hayek and Mises may be considered largely congruent, Hayek's arguments in favor of methodological dualism are clearly distinct from those of Mises. In particular, Hayek's argument is of a more principled nature than that of Mises.

Hayek rejects the possibility of a reduction, in the sense of a complete and detailed explanation of mental processes in physical terms, as a matter of principle. The fact that any apparatus of classification must possess a structure of a higher degree of complexity than is possessed by the objects which it classifies puts 'an absolute limit to what the human brain can ever accomplish by way of explanation, a limit which is determined by the nature of the instrument of explanation itself ( . . . ). The capacity of any explaining agent must be limited to objects with a structure possessing a degree of complexity lower than its own' (Hayek [1952] 1976, p. 185).

Mises recognizes that the mind-body problem has not been solved satisfactorily, in the sense that '[w]e do not know why identical external events result sometimes in different human responses, and different external events produce sometimes the same human response', but he consistently adds qualifications such as 'as far as we can see today' or 'at least under present conditions', and so forth, which means that he does not exclude a reduction of the mental to the physical as a matter of absolute impossibility. In fact he seems to adopt an agnostic stance with respect to this issue, since he argues that '[w]e may or may not believe that the natural sciences will succeed one day in explaining the production of definite ideas, judgments of value, and actions in the same way in which they explain the production of a chemical compound as the necessary and unavoidable outcome of a certain combination of elements. In the meantime we are bound to acquiesce in a methodological dualism' (Mises [1949] 1996, p. 18).

Let us summarize and conclude. On a few occasions Hayek referred to the famous Gödel theorems in mathematical logic while expounding his cognitive theory. The exact meaning of the supposed relationship between Gödel's theorems, on the one hand, and the essential proposition of Hayek's theory of mind, on the other, remains subject to interpretation, however. In this article I have argued that the relationship between Hayek's thesis that the human brain can never fully explain itself, on the one hand, and the essential insight provided by Gödel's incompleteness theorems in mathematical logic, on the other, has the character of an apt analogy or an illuminating metaphor. Thus, the anti-mechanistic interpretation of Hayek's theory of mind has been revealed as highly questionable and in fact incorrect. It has also been concluded that Hayek's arguments for methodological dualism, when compared with those of Ludwig von Mises, amount to a strengthening of the case for methodological dualism.

    Acknowledgements

The author would like to thank K. Vela Velupillai as well as an anonymous reviewer for useful advice and comments on a previous version of this paper.


Notes

1. On the place of cognitive psychology in the work of F.A. Hayek, see also Birner (1995).

2. See Hayek (1967a, p. 44; 1978, p. 38).

3. For Hayek's references to Gödel, see Hayek (1967a, p. 62; 1969, p. 332); for Polanyi's references to Gödel, see Polanyi (1958, passim).

4. Alan Turing ([1950] 1990) considered the question 'Can machines think?' The strategy he adopted is eminently practical. Turing introduced his now-famous Imitation Game, in which a machine is deemed to be intelligent if an observer is unable to distinguish its behavior from that of an agent (in this case, a human being) who is assumed a priori to behave intelligently. Polanyi clearly rejects this type of Turing test since he writes: 'I dissent therefore from the speculations of A.M. Turing (...), who equates the problem: "Can machines think?" with the experimental question, whether a computing machine could be constructed to deceive us as to its own nature as successfully as a human being could deceive us in the same respect' (Polanyi 1958, p. 263). For a collection of papers exploring Turing's various contributions, see in particular also Teuscher (2004).

5. See further under Section 4.

6. On the classical mechanist/vitalist debates, see in particular also Shanker (1996).

7. See in particular Hayek ([1952] 1976, pp. 47, 83).

8. In view of these considerations Hayek's position can probably be better characterized as trans-mechanistic. Hayek shows that the dichotomy of mechanistic versus anti-mechanistic, as traditionally understood, is deceptively simplistic. I thank an anonymous referee for having made this suggestion to me.

9. For a survey of some of these misinterpretations, see Butos and Koppl (2006).

10. Other critical summaries of Hayek's theory of mind can be found in, among others, the already mentioned papers of Weimer (1982), Birner (1995), Tuerck (1995), and Horwitz (2000).

11. As Hayek clarified in the Preface of The Sensory Order: 'It seems as if the problems discussed here were coming back into favour and some recent contributions have come to my knowledge too late to make full use of them. This applies particularly to Professor D.O. Hebb's Organization of Behaviour which appeared when the final version of the present book was practically finished' (ibid., p. viii).

12. See now Petzold (2008); also Turing ([1950] 1990) and Gödel ([1931] 1992).

13. Philosophers use the term 'weak AI' for the hypothesis that machines could possibly behave intelligently, and 'strong AI' for the hypothesis that such machines would count as having actual minds (as opposed to simulated minds). See Russell and Norvig (2010, Chapter 26).

14. For some time the definitive formulation of the connectionist paradigm was contained in Smolensky (1988).

15. The frame problem did not announce the end of AI, however, nor did it lead to a complete loss of faith in the formalizability and/or axiomatizability of our basic common-sense knowledge. See also Hayes (1979) and McDermott (1987), reprinted in Boden (1990a). For a contemporary textbook treatment of the frame problem, see Russell and Norvig (2010, p. 266 ff.).

16. An NP-hard problem is a mathematical problem for which no shortcut or smart algorithm leading to a simple or rapid solution is known, and none is believed to exist (unless P = NP). In practice the only known way to find an optimal solution is a computationally intensive, exhaustive analysis in which all possible outcomes are tested.
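The exhaustive analysis described in note 16 can be illustrated with a short sketch (the instance numbers below are hypothetical, chosen only for illustration): a brute-force solver for the 0/1 knapsack problem, whose decision version is NP-hard, simply tests all 2^n subsets of items.

```python
from itertools import combinations

def knapsack_exhaustive(values, weights, capacity):
    """Test every subset of items (2**n possibilities) and keep the best
    feasible one -- the exhaustive analysis described in note 16."""
    n = len(values)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(weights[i] for i in subset)
            value = sum(values[i] for i in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Tiny illustrative instance (hypothetical numbers).
values = [60, 100, 120]
weights = [10, 20, 30]
print(knapsack_exhaustive(values, weights, capacity=50))  # (220, (1, 2))
```

Doubling the number of items doubles the number of subsets to be tested, which is precisely why such exhaustive procedures become computationally infeasible as problem size grows.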

17. See in particular also Hayek (1967c).

18. As Hayek himself explains, he was not aware of Von Neumann's work at the time he was writing The Sensory Order: 'No, I wasn't aware of his work, which stemmed from his involvement with the first computers. (...) At the time his research on automata came out, it was too abstract for me to relate it to psychology, so I really couldn't profit from it; but I did see that we had been thinking on very similar lines.' See Weimer and Hayek (1982, p. 322). On the origins of cognitive science, see also Dupuy (2009).

19. The economic methodologist interested in more complete accounts and/or discussions of these theorems can take a look at the following literature. Gödel's original contribution is contained in Gödel ([1931] 1992). Other excellent discussions of these fundamental results can be found in, among others, Hintikka (2000), Smith (2007), and, more formally, Smullyan (1992). The book by Peter Smith builds up the proofs of the theorems in a gradual and systematic manner.


A very short account of Gödel's theorems is provided by Cameron (2008). Franzén (2005) is a particularly illuminating and thoughtful guide to uses and abuses of Gödel's theorems.
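For reference, the first of the theorems surveyed in this literature can be stated as follows (a standard modern paraphrase in the spirit of Smith 2007, not a quotation from any of the works cited):

```latex
\textbf{First incompleteness theorem} (G\"odel 1931, with Rosser's
strengthening). \emph{Let $T$ be a consistent, effectively axiomatized
formal theory containing a modest amount of arithmetic (e.g.\ Robinson
arithmetic $Q$). Then there is an arithmetical sentence $G_T$ such that}
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T,
\]
\emph{i.e.\ $T$ is incomplete: the sentence $G_T$ is undecidable in $T$.}
```

Note how every hypothesis (consistency, effective axiomatization, arithmetical strength) is essential; as Franzén (2005) stresses, the theorem says nothing about systems, or 'statements', outside this setting.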

20. For a similar example, see also Hayek ([1952] 1976, p. 188).

21. For a critique of the idea itself by a logician, see Franzén (2005, p. 126), where this author concludes that '[i]nspired by this impressive ability of PA to understand itself, we conclude, in the spirit of metaphorical applications of the incompleteness theorem, that if the human mind has anything like the powers of profound self-analysis of PA or ZFC, we can expect to be able to understand ourselves perfectly'. Still this does of course not refute Hayek's conclusion about 'the absolute limit to our powers of explanation which is of considerable importance for some of the basic problems of what used to be called the mind-body relationship and the tasks of the mental and moral sciences generally' (Weimer and Palermo, ibid.). But rigorously speaking, Gödel's theorem relates only to formal systems. In particular the incompleteness theorem pinpoints a specific incompleteness in any formal system that encompasses some basic arithmetic: it does not decide every arithmetical statement. 'Unfortunately for the applicability of the incompleteness theorem outside mathematics, this also means that we learn nothing from the incompleteness theorem about the completeness or incompleteness of formal systems with regard to nonarithmetical or nonmathematical statements' (Franzén 2005, p. 27).

22. The theoretical groundwork of both traditional and connectionist approaches to AI was provided by Alan Turing's (1936) paper on computable numbers, which defined computation as the formal manipulation of (uninterpreted) symbols by the application of formal rules. The general notion of an effective procedure, a strictly definable computational process, was illustrated by examples of mathematical calculation. It implied, however, that if intelligence is in general explicable in terms of effective procedures implemented in the brain, then it could be simulated by a universal Turing machine or by some actual machine approximating it. In his 'Computing Machinery and Intelligence' (Turing [1950] 1990) he specifically asked whether such machines can think and argued that this should be decided, not on the basis of a prior (and possibly question-begging) definition of thinking, but by enquiring whether some conceivable computer could play the imitation game. Could a computer reply to an interrogator in a way indistinguishable from the way a human being might reply, whether adding numbers or scanning sonnets? (see also Boden 1990b, p. 4).
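The notion of computation as rule-governed manipulation of uninterpreted symbols can be made concrete with a minimal sketch (the machine and its transition table are my own toy example, not drawn from Turing's paper): a one-tape machine that complements each bit of its input and halts at the first blank.

```python
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Minimal single-tape Turing machine: symbols are rewritten and the
    head moved purely by lookup in a fixed rule table, with no reference
    to what the symbols 'mean'."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        if head == len(tape):
            tape.append(blank)  # tape is unbounded to the right
        if head < 0:
            tape.insert(0, blank)  # and to the left
            head = 0
    raise RuntimeError("no halt within step limit")

# Illustrative rule table: complement every bit, halt on the blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine("10110", rules))  # 01001
```

A different rule table, fed to the same interpreter, yields a different machine; a universal machine is, in essence, this interpreter with the rule table itself written on the tape.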

23. For a detailed criticism of Penrose's thesis, see also Shapiro (1998, 2003).

24. See Searle's argument (Searle 1980) involving the imaginary Chinese room, which assumes that AI programs and computer models are purely formal-syntactic (as is a Turing machine) and claims, on this basis, that no system could understand purely in virtue of carrying out computations.

25. I thank an anonymous referee for having drawn my attention to this point.

26. Di Iorio provides a schematic summary of the analogies which exist between mind and market; see Di Iorio (2010, pp. 197–199).

27. Hayek writes: 'The unique attribute of the brain is that it can produce a representative model on which the alternative actions and their consequences can be tried out beforehand. (...) In so far as the self-organizing forces of a structure as a whole lead at once to the right kind of action (...), such a single-stage order need not be inferior to a hierarchic one in which the whole merely carries out what has first been tried out in a part' (Hayek 1967b, pp. 73–74).

28. Computable economics, according to one notable contributor, 'is about basing economic formalisms on recursion-theoretic fundamentals. This means that economic entities, economic actions, and economic institutions have to be viewed as computable objects or algorithms' (Velupillai 2000, p. 2). This development is mentioned here because of its links both with the Hayekian theme of the impossibility of socialist central economic planning (the requirement that computation processes be decentralized) and with the implications of Gödelian and/or related meta-mathematical limitative results.

29. See, among others, Velupillai (2000, 2005a, 2007).

30. As is pointed out in Koppl (2008), Hayek essentially showed that scientific and humanistic approaches to social science can and should be compatible and complementary. As Koppl summarizes this thesis: 'Hayek was a methodological dualist and a hermeneut, but not anti-science. He was a scientific hermeneut' (ibid., p. 117). However, in Koppl (2010, p. 22) it is also argued that Hayek's diagonal argument is 'a direct consequence of the celebrated Cantor's Diagonal theorem', but a more moderate and correct interpretation seems to be that the relation is at most one of analogy rather than one of strict logical inference or consequence.


References

Aimar, T. (2008), Self-Ignorance: Towards an Extension of the Austrian Paradigm, The Review of Austrian Economics, 21, 23–43.

Anderson, A.R. (1964), Minds and Machines, Englewood Cliffs, NJ: Prentice-Hall.

Bartholo, R.S., Cosenza, C.A.N., Doria, F.A., and de Lesa, C.T.R. (2009), Can Economic Systems be seen as Computing Devices?, Journal of Economic Behavior & Organization, 70, 72–80.

Birner, J. (1995), The Surprising Place of Cognitive Psychology in the Work of F.A. Hayek, Maastricht: METEOR, University of Limburg.

Boden, M.A. (1990a), The Philosophy of Artificial Intelligence, Oxford: Oxford University Press.

Boden, M.A. (1990b), Introduction, in The Philosophy of Artificial Intelligence, ed. M.A. Boden, Oxford: Oxford University Press, pp. 1–21.

Boettke, P.J., and Subrick, J.R. (2002), From the Philosophy of Mind to the Philosophy of the Market, Journal of Economic Methodology, 9(1), 53–64.

Buchanan, J.M. (1964), What Should Economists Do?, Southern Economic Journal, 30(January 1964), 213–222.

Butos, W.N., and Koppl, R.G. (2006), Does the Sensory Order have a Useful Economic Future? Downloaded version; also in Cognition and Economics (Advances in Austrian Economics, Vol. 9), eds. R. Koppl and S. Horwitz, Bingley, UK: Emerald Group Publishing Limited, pp. 19–50.

Cameron, P.J. (2008), Gödel's Theorem, in The Princeton Companion to Mathematics, eds. T. Gowers, J. Barrow-Green and I. Leader, Princeton, NJ: Princeton University Press, pp. 700–702.

da Costa, N.C.A., and Doria, F.A. (1994), Gödel Incompleteness in Analysis with an Application to the Forecasting Problem in the Social Sciences, Philosophia Naturalis, 31, 1–24.

da Costa, N.C.A., and Doria, F.A. (2005), Computing the Future, in Computability, Complexity and Constructivity in Economic Analysis, ed. K.V. Velupillai, Oxford: Blackwell Publishing, pp. 15–50.

Davis, M. (1993), How Subtle is Gödel's Theorem? More on Roger Penrose, Behavioral and Brain Sciences, 16, 611–612.

Davis, M. (2004), The Myth of Hypercomputation, in Alan Turing: Life and Legacy of a Great Thinker, ed. C. Teuscher, Berlin & Heidelberg: Springer-Verlag, pp. 195–211.

Dennett, D.C. ([1984] 1990), Cognitive Wheels: The Frame Problem of AI, reprinted in Boden 1990a, pp. 147–170.

Di Iorio, F. (2010), The Sensory Order and the Neurophysiological Basics of Methodological Individualism, in The Social Science of Hayek's The Sensory Order (Advances in Austrian Economics, Vol. 13), ed. W.N. Butos, Bingley, UK: Emerald Group Publishing Limited, pp. 179–209.

Dreyfus, H.L., and Dreyfus, S.E. (1988), Making a Mind Versus Modelling the Brain: Artificial Intelligence Back at a Branch-Point, reprinted in Boden 1990a, pp. 309–333.

Dupuy, J.-P. (1992), Introduction aux Sciences Sociales: Logique des Phénomènes Collectifs, Paris: Ellipses.

Dupuy, J.-P. (2009), On the Origins of Cognitive Science: The Mechanization of the Mind, London: The MIT Press.

Edelman, G.M. (1982), Through a Computer Darkly: Group Selection and Higher Brain Function, Bulletin of the American Academy of Arts and Sciences, 36(1), 18–49.

Edelman, G.M. (1987), Neural Darwinism: The Theory of Neuronal Group Selection, New York: Basic Books.

Edelman, G.M. (1989), The Remembered Present: A Biological Theory of Consciousness, New York: Basic Books.

Edelman, G.M. (1992), Bright Air, Brilliant Fire: On the Matter of the Mind, New York: Basic Books.

Franzén, T. (2005), Gödel's Theorem: An Incomplete Guide to its Use and Abuse, Wellesley, MA: A K Peters, Ltd.

Fuster, J.M. (2005), Cortex and Mind: Unifying Cognition, Oxford: Oxford University Press.

Gödel, K. ([1931] 1992), On Formally Undecidable Propositions of Principia Mathematica and Related Systems, New York: Dover Publications.

Hayek, F.A. ([1952] 1976), The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology, Chicago: The University of Chicago Press.

Hayek, F.A. (1967a), Rules, Perception and Intelligibility, in Studies in Philosophy, Politics and Economics, ed. F.A. Hayek, London: Routledge & Kegan Paul, pp. 43–65.


Hayek, F.A. (1967b), Notes on the Evolution of Systems of Rules of Conduct, in Studies in Philosophy, Politics and Economics, ed. F.A. Hayek, London: Routledge & Kegan Paul, pp. 66–81.

Hayek, F.A. (1967c), The Results of Human Action but not of Human Design, in Studies in Philosophy, Politics and Economics, ed. F.A. Hayek, London: Routledge & Kegan Paul, pp. 96–105.

Hayek, F.A. (1969), The Primacy of the Abstract, followed by Discussion, in Beyond Reductionism: New Perspectives in the Life Sciences, eds. A. Koestler and J.R. Smythies, London: Hutchinson, pp. 309–333.

Hayek, F.A. (1978), The Primacy of the Abstract, in New Studies in Philosophy, Politics, Economics and the History of Ideas, ed. F.A. Hayek, London: Routledge & Kegan Paul, pp. 35–49.

Hayes, P.J. (1979), The Naïve Physics Manifesto, reprinted in Boden 1990a, pp. 171–205.

Hebb, D.O. (1949), The Organization of Behavior: A Neuropsychological Theory, New York: Wiley.

Hintikka, J. (2000), On Gödel, Belmont, CA: Wadsworth.

Hofstadter, D.R. (1980), Gödel, Escher, Bach: An Eternal Golden Braid, New York: Vintage Books.

Horwitz, S. (2000), From the Sensory Order to the Liberal Order: Hayek's Non-rationalist Liberalism, Review of Austrian Economics, 13, 23–40.

Huerta de Soto, J. (2008), The Austrian School: Market Order and Entrepreneurial Creativity, Cheltenham: Edward Elgar.

Irvine, A.D. (1996), Philosophy of Logic, Chapter I, in Philosophy of Science, Logic and Mathematics in the 20th Century (Routledge History of Philosophy, Vol. IX), London: Routledge, pp. 9–49.

Koestler, A., and Smythies, J.R. (1969), Beyond Reductionism: New Perspectives in the Life Sciences, London: Hutchinson.

Koppl, R. (2008), Scientific Hermeneutics: A Tale of Two Hayeks, in Explorations in Austrian Economics (Advances in Austrian Economics, Vol. 11), ed. R. Koppl, Emerald Group Publishing Limited, pp. 99–122.

Koppl, R. (2010), Some Epistemological Implications of Economic Complexity, Journal of Economic Behavior and Organization, doi:10.1016/j.jebo.2010.09.012.

Lewis, A.A. (1992a), On Turing Degrees of Walrasian Models and a General Impossibility Result in the Theory of Decision-Making, Mathematical Social Sciences, 24, 141–171.

Lewis, A.A. (1992b), Some Aspects of Effectively Constructive Mathematics that are Relevant to the Foundations of Neoclassical Mathematical Economics and the Theory of Games, Mathematical Social Sciences, 24, 209–235.

Lucas, J.R. (1961), Minds, Machines and Gödel, reprinted in Anderson 1964, pp. 43–59.

Markose, S.M. (2005), Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems (CAS), The Economic Journal, 115(June), F159–F192.

McDermott, D. (1987), A Critique of Pure Reason, reprinted in Boden 1990a, pp. 206–230.

von Mises, L. ([1949] 1996), Human Action: A Treatise on Economics, New York: Foundation for Economic Education.

Penrose, R. (1995), Shadows of the Mind, London: Vintage.

Penrose, R. (1999), The Emperor's New Mind, Oxford: Oxford University Press.

Petzold, C. (2008), The Annotated Turing, Indianapolis: Wiley.

Polanyi, M. (1958), Personal Knowledge: Towards a Post-Critical Philosophy, London: Routledge.

Polanyi, M. (1959), The Study of Man, Chicago: University of Chicago Press.

Putnam, H. (1960), Minds and Machines, reprinted in Anderson 1964, pp. 72–97.

Putnam, H. (1995), Review of Shadows of the Mind by Roger Penrose, Bulletin (New Series) of the American Mathematical Society, 32(3), 370–373.

Raatikainen, P. (2005), On the Philosophical Relevance of Gödel's Incompleteness Theorems, Revue Internationale de Philosophie, 59(234), 513–534.

Russell, S.J., and Norvig, P. (2010), Artificial Intelligence: A Modern Approach, New York: Pearson.

Searle, J.R. (1980), Minds, Brains, and Programs, reprinted in Boden 1990a, pp. 67–88.

Shanker, S.G. (1996), Descartes' Legacy: The Mechanist/Vitalist Debates, Chapter 10, in Philosophy of Science, Logic and Mathematics in the 20th Century (Routledge History of Philosophy, Vol. IX), London: Routledge, pp. 315–375.

Shapiro, S. (1998), Incompleteness, Mechanism, and Optimism, The Bulletin of Symbolic Logic, 4(3), 273–302.


Shapiro, S. (2003), Mechanism, Truth, and Penrose's New Argument, Journal of Philosophical Logic, 32, 19–42.

Simon, H.A. (1996), Sciences of the Artificial (3rd ed.), London: The MIT Press.

Smith, B. (1997), The Connectionist Mind: A Study of Hayekian Psychology, in Hayek: Economist and Social Philosopher, A Critical Retrospect, ed. S.F. Frowen, London: Macmillan Press Ltd, pp. 9–29.

Smith, P. (2007), An Introduction to Gödel's Theorems, New York: Cambridge University Press.

Smolensky, P. (1988), On the Proper Treatment of Connectionism, Behavioral and Brain Sciences, 11, 1–74.

Smullyan, R.M. (1992), Gödel's Incompleteness Theorems, New York: Oxford University Press.

Teuscher, C. (ed.) (2004), Alan Turing: Life and Legacy of a Great Thinker, Berlin & Heidelberg: Springer-Verlag.

Tuerck, D.G. (1995), Economics as Mechanism: The Mind as Machine in Hayek's Sensory Order, Constitutional Political Economy, 6, 281–292.

Turing, A. ([1950] 1990), Computing Machinery and Intelligence, reprinted in Boden 1990a, pp. 40–66.

Velupillai, K.V. (2000), Computable Economics, Oxford: Oxford University Press.

Velupillai, K.V. (2005a), The Unreasonable Ineffectiveness of Mathematics in Economics, Cambridge Journal of Economics, 29, 849–872.

Velupillai, K.V. (2007), The Impossibility of an Effective Theory of Policy in a Complex Economy, in Complexity Hints for Economic Policy, eds. M. Salzano and D. Colander, Part VI, Milan: Springer, pp. 273–290.

von Bertalanffy, L. ([1969] 2009), General System Theory: Foundations, Development, Applications (Revised Edition), New York: G. Braziller.

Webb, J.C. (1980), Mechanism, Mentalism, and Metamathematics: An Essay on Finitism, Dordrecht: Reidel Publishing Company.

Weimer, W.B. (1982), Hayek's Approach to the Problems of Complex Phenomena: An Introduction to the Theoretical Psychology of The Sensory Order, Chapter 12, in Cognition and the Symbolic Processes, eds. W.B. Weimer and D.S. Palermo, Vol. 2, Hillsdale, NJ: Erlbaum, pp. 241–285.

Weimer, W.B., and Hayek, F.A. (1982), Weimer-Hayek Discussion, Chapter 15, in Cognition and the Symbolic Processes, eds. W.B. Weimer and D.S. Palermo, Vol. 2, Hillsdale, NJ: Erlbaum, pp. 321–329.

