Why there will always be an anthropology

In the Wall Street Journal today, the book review opens this way.

Consider Linda, a 31-year-old woman, single and bright.  As a student, she was deeply concerned with discrimination and social justice and also participated in antinuclear protests.  Which is more probable?  (a) Linda is today a bank teller; (b) Linda is a bank teller and active in the feminist movement.

[Psychologists Daniel Kahneman and Amos Tversky determined] that most respondents picked "b," even though this was the narrower choice and hence the less likely one. 

Shaywitz, the reviewer, says that Kahneman and colleagues have

reshap[ed] the study of economics by challenging the assumption that a person, when faced with a choice, can be counted on to make a rational decision.

I would argue that "b" is the rational decision.  It shows us the respondent working with what he knows.  We have given him a little information and he is working this information into an intelligent choice. 

Except of course the economist will not accept a choice as intelligent unless it meets his narrow definition of the rational.  For the economist, the rational choice is the broader choice. "A" is more likely because it is less constrained.  From a bettor’s point of view, this is the right choice.  But it is not, I submit, the more rational one, because it forces the respondent to forget what he knows, to forgo the opportunity we have given him to make an "informed" choice.
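The economist's side of the argument rests on a set-theoretic fact: feminist bank tellers are a subset of bank tellers, so the conjunction can never be the more probable option. A minimal simulation makes this concrete (the base rates below are invented purely for illustration, not taken from Kahneman and Tversky's study):

```python
import random

random.seed(42)

# Hypothetical base rates, invented for illustration only.
P_TELLER = 0.05    # chance a woman fitting the description is a bank teller
P_FEMINIST = 0.80  # chance she is active in the feminist movement

N = 100_000
tellers = 0
feminist_tellers = 0

for _ in range(N):
    is_teller = random.random() < P_TELLER
    is_feminist = random.random() < P_FEMINIST
    if is_teller:
        tellers += 1
        if is_feminist:
            feminist_tellers += 1

# However high P_FEMINIST is set, the conjunction "teller AND feminist"
# can never occur more often than "teller" alone.
assert feminist_tellers <= tellers
print(tellers, feminist_tellers)
```

Raising `P_FEMINIST` toward 1.0 narrows the gap but never reverses it; that is the whole of the economist's case. The question raised in this post is whether "more probable" in that narrow sense is what respondents take the question to be asking.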

We could do the ethnography here.  If we asked the respondent how he thought this problem through, he would give us an account of his "rationality."  He would demonstrate that he satisfies the definition of the term according to Princeton WordNet.  It would be easy enough to show that he occupied "the state of having good sense and sound judgment." 

Economics continues to insist on its notion of rationality when we know that this rationality is always embedded in a social context and a cultural one.  Rationality is only sometimes about calculating odds.  It’s also about working with a set of parameters and bodies of knowledge.  Rationality is almost always a profoundly social and cultural event. 

In the experiment reported in the WSJ, Kahneman was effectively asking the respondent to "forget what he knew" to make the rational choice.  Funny how often economics seems to ask us to do the same. 

References

Shaywitz, David A. 2008. Free to choose but often wrong. Wall Street Journal, June 24, 2008.

16 thoughts on “Why there will always be an anthropology”

  1. I have to take the bait here. Your definition of rationality–“working with what you know”–should include the fact that there are more bank tellers than there are bank tellers with red hair or than bank tellers active in the feminist movement. Prior information–common sense–is exactly what people are not displaying when they commit this error in probabilistic reasoning.

    No one has to forget anything he or she knows to answer this question (or bet) correctly. To the contrary, individuals have to remember the cultural context in order to get it right (activist bank tellers are a subset of all bank tellers). A process that ignores this context, is also logically inconsistent, and would lead to money-losing betting behavior can be called many things, but “rational” is a poor adjective. “Intelligible” or “coherently heuristic” might work.

    As a mild skeptic about the relevance of the sorts of experiments Kahneman and company have pursued, I am a poor defender of them. For example, I would worry that the respondents here were assuming that the experimenter knew about a specific individual named Linda and was trying to give them a hint (in which case the prior distribution in the population would not be relevant and the conclusion would be correct). But the notion of rationality employed in this critique is the same as the one used by Kahneman and any economist–correctly drawing conclusions from what you already believe.

  2. Go get him Steve! 🙂 For what it’s worth, I don’t think the respondents are working with what they “know.” Instead, they are working with what they “feel” is correct, based on how the question was framed.

    It reminds me of the two valley girls chatting on the beach one night. One girl asks, “Which do you think is farther away, the moon or Italy?” To which her friend replies, matter-of-factly, “Duh? Italy. You can see the moon.”

  3. Grant, it’s an odd definition of rationality that allows its practitioners to consistently and predictably make obviously wrong decisions; but if you really want to play definitional hide and seek here, I’d pick irrationally right over rationally wrong any day.

  4. I intend to comment shortly on the substance of your fine post, Grant, but let me just add this now:

    The word rational had a perfectly good meaning in philosophy for about 2300 years before it was stolen by economists in the mid-20th century. Its perfectly good meaning, which is still perfectly good, and which is still in current use in philosophy and now in parts of computer science, is as follows: endorsement of some proposition is rational if it can be justified with a reason. Economic rationality may be seen as a special case of this meaning, if economic actors are able to explain their behaviours.

    Interested a few years ago to see precisely who stole it and when, I managed to trace its use in the economics literature back to a paper by economist Oskar Lange, published in 1945-46 (see reference below). Lange defined rationality as follows: “A unit of economic decision is said to act rationally when its objective is the maximization of a magnitude.”

    Words fail me at both the utter narrowness of human culture this definition displays, and at the egregiousness of the theft involved. I think “theft” is not too strong a word for the re-alignment of the semantic moorings of the word which economists have undertaken, as if a ship had been taken out of a harbour and tied up somewhere else, a ship, let me add, that did not belong to economists.

    Perhaps more important for a concerned citizen to ask is: Who benefits from this theft? By labeling some human behaviours as “rational” and others as “irrational”, status is given to the first group and taken away from the second. Economists cannot legitimately say this word is neutral in its effects, for if it was, we are entitled to ask why they choose “rationality” for their concept of maximization rather than inventing some new word. No, the theft of this word, and its continued use by economists, was part of a political project to label activities and people in such a way as to empower some activities and people, and to disempower others. That many economists lack sufficient awareness of social forces to even realize these can be the social consequences of their use of words merely adds to the crime here, IMHO.

    Let me add, since economists I meet often seem to think this is some sort of joke, that I am completely serious here.

    Reference:
    O. Lange [1945-46]: “The scope and method of economics”. The Review of Economic Studies, 13 (1): 19-32.

  5. Wow, the opportunity to have this kind of conversation on an anthropological blog—as opposed to the thousands of economic ones out there—is just too good to pass up. I’ve got two things to note, call them classic lessons from anthropology that add to the conversation, I hope.

    One is the “formalist vs. substantivist” argument in economic anthropology. I believe it was Polanyi who first noted the difference between a formalist definition of “economic” which means to rationalize economic choice, while the substantivist definition of economic simply looks at all of the various behaviors that happen in the realm/area of economic activity, whether they be “rational” maximization or not. These are separate ways of looking at economics, but far too often, the economists only consider the formalist route (which leads them to criticize people who are not “rational.”)

    The second issue that Grant raises concerns whether or not people are or need to be or even can be “rational.” Tom Asacker brings up a point: “I don’t think the respondents are working with what they “know.” Instead, they are working with what they “feel” is correct, based on how the question was framed.” But are we really so sure that humans can “know” in the way that we want to imagine them to? If you read Sahlins (or know him really well), then you might agree that this notion/idea of knowing is more or less a metaphor for how we think.

    Slight tangent, but it’s funny that for so long we have been wondering if scientists can create computers that can think…but the real question for anthropologists continues to be: can humans think?

  6. I would say that a simpler way to resolve the irrationality is simply that when you present both (a) and (b) as choices, people interpret (a) as “Linda is a bank teller and *not* active” and (b) as “Linda is a bank teller and active”. That’s not unreasonable, and in that case, it’s perfectly rational to say that (b) is more likely. If they had simply phrased (a) as “Linda is a bank teller and might or might not be active in the movement”, they’ll probably get their “rational” answer. It’s not a question of “rational”, it’s just assumptions and interpretations.

  7. Certainly gek’s interpretation is also reasonable and subject to experimental test. The point, however, is that that interpretation uses the same notion of rationality-as-logical-consistency as that to which I appealed above.

    Peter: The notion of rationality employed by Kahneman here is weaker than Lange’s definition (which really isn’t that strong itself if you think about it). There is no suggestion of choice in this problem, just probability assessment. (I suppose you could say that the agent is picking beliefs to maximize his accuracy, but that just shows how weak, malleable, and nonrestrictive the maximization definition is.) All we ask of the agent is that he know the difference in size between a set and a proper subset of that set. It’s hard to see how any definition of rationality (or common sense) worth the name wouldn’t include such a principle. Otherwise we’re in the realm of Dostoyevsky’s Underground Man who rebelled against the proposition that twice two makes four–hardly a paragon of rationality by his own lights.

  8. I don’t really understand why you’re having a crack at Kahneman for this – this is the orthodox way to use “rational” in economics. If anything, his point is the same as yours: that economists who assume that people act “rationally” are daft. He’s only using the term in order to undermine it.

    I guess that over the next few years this will get sorted out. Either people like DK will force other economists to change the way they use the term or we will all get used to the idea that economists don’t mean the same thing that we do.

  9. James,

    that could be a problem! If economists don’t mean the same thing we do, how are we going to understand them?

    How can they be sure they understand each other?

    Public understanding can’t ever be served by disciplines using their own jargon. The situation is surely made worse when the jargon involves hijacking words we often use and giving them substantially different meanings.

  10. Yeah, but if that is the point why tie it to DK (who, if anything, is one of the good guys challenging the economist establishment)? Grant could just write a post saying that he discovered that economists don’t mean the same thing that we do when they say rational. He could go on to point out that defining what anyone might mean by “rational” is front and centre in a lot of philosophical debate of the last hundred years.

    Personally, I try not to use the word. I’m not at all convinced by an account of the world that separates something “rational” called “thought” from something called “emotion” – that doesn’t sound anything like my experience of the world.

  11. I can assure you that Daniel Kahneman agrees with the economists’ definition of rationality–I’ve heard him say it. He simply believes that people aren’t rational or are subject to the equivalent of “optical illusions” that interfere with their rationality. As he defines himself as a psychologist studying economics rather than an economist, this is not a matter of guild behavior, obscurantist jargon, or any other claim that the anti-economists want to advance.

    I note that none of the anti-economists here has proposed any definition of rationality that encompasses logical inconsistency–probably because it would sound odd and would be inconsistent with how ordinary people understand the term. Maybe anthropologists have their own peculiar jargon where “rational” people don’t know that the set of all bank tellers is bigger than the set of bank tellers with red hair, or where people ignore relevant facts in coming to conclusions, but it’s peculiar to imagine these terms as corresponding to the ordinary meaning of the word.

  12. I have some specific comments on this example, but I feel it necessary to make some contextual statements first. I apologize for hogging the microphone so much at your party, Grant.

    The first point — which should be obvious to anyone who deals professionally with probability, but often seems not — is that the answer to a problem involving uncertainty depends very crucially on its mathematical formulation. We are given a situation expressed in ordinary English words and asked to use it to make a judgement. The probability theorists have arrived at a way of translating such situations from natural human language into a formal mathematical language, and using this formalism, to arrive at an answer to the situation which they deem correct. However, natural language may be imprecise (as in the example, as gek notes). Imprecision of natural language is a key reason for attempting a translation into a formal language, since doing so can clarify what is vague or ambiguous. But imprecision also means that there may be more than one reasonable translation of the same problem situation, even if we all agreed on what formal language to use and on how to do the translation. There may in fact be more than one correct answer.

    There is much of background relevance here that may not be known to everyone. First, note that it took nearly 270 years from the first mathematical formulations of uncertainty using probability (in the 1660s) to reach a sort-of consensus on a set of mathematical axioms for probability theory (the standard axioms, due to Andrei Kolmogorov, in 1933). By contrast, the differential calculus, invented about the same time, was already rigorously formalized by the mid-19th century. Dealing formally with uncertainty is hard, and intuitions differ greatly, even for the mathematically adept.

    Second, even now, the Kolmogorov axioms are not uncontested. Although it often comes as a surprise to statisticians and mathematicians, there is a whole community of intelligent, mathematically-adept people in Artificial Intelligence who prefer to use alternative formalisms to probability theory, at least for some problem domains. These alternatives (such as Dempster-Shafer theory and possibility theory) are preferred to probability theory because they are more expressive (more situations can be adequately represented) and because they are easier to manipulate for some types of problems than probability theory. Let no one believe, then, that probability theory is accepted by every mathematically-adept expert who works with uncertainty.

    Historical aside: In fact, ever since the 1660s, there has been a consistent minority of people dissenting from the standard view of probability theory, a minority which has mostly been erased from the textbooks. Typically, these dissidents have tried unsuccessfully to apply probability theory to real-world problems, such as those encountered by judges and juries (eg, Leibniz in the 17th century), doctors (eg, von Kries in the 19th), business investors (eg, Shackle in the 20th), and now intelligent computer systems (since the 1970s). One can have an entire university education in mathematical statistics, as I did, and never hear mention of this dissenting stream. A science that was confident of its own foundations would surely not need to suppress alternative views.

    Third, intelligent, expert, mathematically-adept people who work with uncertainty do not even yet agree on what the notion of “probability” means, or to what it may validly apply. Donald Gillies, a professor of philosophy at the University of London, wrote a nice book called, “Philosophical Theories of Probability” (Routledge, London, 2000), which outlines the main alternative interpretations. A key difference of opinion concerns the scope of probability expressions (eg, over which types of natural language statements may one validly apply the translation mechanism). Note that Gillies wrote his book 70-some years after Kolmogorov’s axioms.

    In addition, there are other social or cultural factors, usually ignored by mathematically-adept experts, which may inform one’s interpretations of uncertainty and probability. A view that the universe is deterministic, or that one’s spiritual fate is pre-determined before birth, may be inconsistent with any of these interpretations of uncertainty, for instance. I have yet to see a Taoist theory of uncertainty, but I am sure it would differ from anything developed so far.

    I write this comment to give some context to our discussion. Mainstream economists and statisticians are fond of castigating ordinary people for being confused or for acting irrationally when faced with situations involving uncertainty, merely because the judgements of ordinary people do not always conform to the Kolmogorov axioms and the deductive consequences of these axioms. It is surely unreasonable to cast such aspersions when experts themselves disagree on what probability is, to what statements probabilities may be validly applied, and on how uncertainty should be formally represented.

  13. (also blogged at: http://sensemaya.org/maya/2008/06/29/careless-pursuit-trivial-things)

    That’s a completely ridiculous conclusion to draw from the experiment – even statistically. You could replace ‘bank teller’ with ‘bartender’ or ‘lawyer’ and the result would be the same. Having primed somebody with the suggestion that Linda is socially conscious, it is actually more likely for them to think that she’s active in the feminist movement than that she’s a bank teller or school bus driver, because, statistically, someone likely to be concerned with discrimination is also likely to be concerned with feminism. (This is the ‘prior information’ that steve postrel talks about).

    Consider this re-wording of the question:

    “Li Shen is 19 and earns passing grades in college. As a child, he used to draw with crayons, and played with his classmates. Which is more probable? (a) Li is studying law; (b) Li is studying law and has friends”

    This is a question that is disconnected from the premise supplied (okay, that is debatable, but it is still a tenuous connection). Given that the intent of the experiment is to determine whether people will pick the more probable (read larger) set of two, it must be demonstrable that (a) the conclusion does not follow from the premise; and (b) that one of the sets is larger than the other (presumably there are law students somewhere without any friends – but this requires interpretation). Any other experimental design induces interpretation on the part of the subject. Let’s try again:

    “Which is more probable? (a) Li is studying law; (b) Li is studying law and has brown hair”

    Better. But completely pointless. In fact, the experiment could have been written with the choices like this:

    “Which is more probable? (a) Linda is active in the feminist movement and a bank teller; (b) Linda is a bank teller”

    or even:

    “Which is more probable? (a) Linda is active in the feminist movement; (b) Linda is active in the feminist movement and has red hair”

    I’m willing to bet that with the last formulation, the subjects would have picked option (a) – the correct, ‘rational’ answer. One does not have to resort to a discussion of the context of definitions to see that this is an experiment that proves absolutely nothing.

    Do Kahneman & Tversky have any excuse for having wasted perfectly good grant money on such a badly designed experiment?

  14. And now I must apologize for going on as well.

    You have to be pretty careful when looking at popular accounts of scientific work. The authors usually employ more controls than journalists have the space (or audiences have the patience) to process. While I would love to be able to say that all the behavioral econ experiments are bunk, they aren’t. Recurring patterns of bounded rationality (cognitive biases, framing effects, and computational limits) at the individual level have been established over a vast number of experiments using a wide range of techniques. Even professional decision analysts, trained in probability and decision theory and primed to be careful, are subject to some of these patterns. So the doubters about this particular experiment ought to keep their powder dry–framing effects are pretty robust and can be shown to occur even when there is nothing remotely ambiguous about the question. Whether and when these limits on rationality have real impacts on economic phenomena is a harder question and one that is just beginning to be addressed by empiricists.

    Peter and Grant are asking a different question: How do we know that the described behavior is not “rational?” I am well aware of the problems associated with applying Bayesian rationality (also very popular with artificial intelligence types, I should note) to situations with “large worlds” where the set of alternatives is too big to describe and map or to situations where persistent disagreement across agents seems natural. Neither of those problems is at issue in the example given, however.

    I can find philosophers who will tell you that measurements are problematic, that common sense is problematic, and that not being a solipsist is problematic. David Hume made a very compelling argument against belief in causality. Making things problematic is what philosophers do, but this enterprise often has little to do with the progress of understanding. When you can reverse people’s choices about lotteries by stating the exact same proposition in terms of losses rather than gains (as has been shown many times), we are well beyond the niceties of philosophical definitions–we are faced with the equivalent of optical illusions. And when we can see how someone could exploit these inconsistencies to take money from the agent in question, no philosopher can rescue these illusions from the category of “mistakes” or “errors.”

    As for the problems of going from natural language to formal language, I heartily agree–that’s why we pay people who know how to structure problems formally (it is a learned skill for most). To the extent that the process of formalization reveals ambiguities in the natural language presentation of a problem, that is usually an important benefit of formalization. Even a statement like “you can fool some of the people all of the time” admits at least two valid interpretations–either there is a fixed set of gullible people who are always fooled or there is a different set of people fooled with each instance of deception–and a formal analysis may help tease such things out. Such ambiguities in experimental questions posed to test subjects are errors by the experimenter, and while this sort of bad technique does occur and even slips past peer review sometimes, the literature as a whole is not shot through with it.

    In short, if the experiment cited by Grant were performed correctly with due concern for eliminating ambiguities and included appropriate controls and checks to make sure that subjects’ errors were not due to such ambiguities, then there is nothing a philosopher could say to credibly rescue the responses from a claim of irrationality. Maybe an anthropologist has a different sort of life preserver, however, and I await a description of that with some interest.

Comments are closed.