
We Need a Shared Understanding

What anchors our understanding, as researchers, of action, belief, desire and the rest? We examine two potential sources for such an understanding, folk psychology and philosophy, and find that neither can anchor a shared understanding. This is where decision theory may be useful: it promises a formal framework which can elucidate notions of belief and desire, thereby providing a shared understanding.


Notes

What anchors our understanding, as researchers, of action, belief, knowledge and the rest?

The Argument

  1. We as researchers need a shared understanding of belief and desire.

  2. There are three potential sources of shared understanding: folk psychology, philosophy and decision theory.

  3. Folk psychology does not provide a shared understanding.

  4. Nor does philosophy.

    Therefore:

  5. We need decision theory to provide a shared understanding.

To evaluate this argument we need to check premises 3 and 4.

Premise 3: Folk Psychology

Could our everyday expertise as mindreaders anchor our understanding? Consider Lewis (1972). He postulates that there is a set of platitudes concerning mental states which are common knowledge among us all. He also claims that if we assembled these platitudes, we could use them to define mental state terms like ‘intention’ and ‘knowledge’.

If this were true it would mean that we can rely on our everyday expertise as mindreaders to anchor our understanding, as researchers, of knowledge, intention and the rest. But is it true?

To illustrate how his view works, Lewis imagines that some important platitudes have this form:

‘When someone is in so-and-so combination of mental states and receives sensory stimuli of so-and-so kind, he tends with so-and-so probability to be caused thereby to go into so-and-so mental states and produce so-and-so motor responses.’ (Lewis, 1972, p. 256)

But what are these platitudes that are supposed to be common knowledge? Heider (1958, p. 12) offered what is probably still, more than half a century later, the most sustained, carefully developed attempt to ‘make explicit the system of concepts that underlies interpersonal behavior’.[1] There isn’t much in Heider’s work that looks useful for defining ‘intention’ or ‘knowledge’.

It is also striking that not very much of Heider’s construction could plausibly be regarded as common knowledge among ordinary mindreaders. Heider relies on a mix of informal observation, imagination and guesswork, as well as philosophers’ ideas (Ryle and Sartre, for example). My guess is that we should regard the principles he identifies not as articulating an understanding that we all share, but rather as an imaginative take on possible strategies for everyday mindreading. In fact, Heider’s approach is not that different from that of philosophers like Bratman or Alvarez.

But if Lewis were right about common knowledge of platitudes anchoring mental state terms, either Heider’s work should have turned out very differently or else there should be a lot less disagreement among the philosophers. This is why I think Lewis must be wrong.

(We might be able to use theories to specify models that help us characterise the expertise of ordinary mindreaders. But of course this presupposes a shared understanding of belief, desire and the rest in the first place.)

Premise 4: Philosophy

Could philosophical accounts provide a shared understanding of notions like belief and desire?

One obstacle is widespread apparent disagreement among philosophers.

Take intention, for example. It is not just that philosophers disagree on whether intentions are beliefs about the future (Velleman, 1989), or belief-desire pairs (Sinhababu, 2013), or something entirely distinct from both beliefs and desires (Bratman, 1987). Nor is it just that some think of intentions as essentially components of plans (Bratman, 1987 again) whereas others do not connect intentions with plans at all (Searle, 1983). Nor is it even that there is much disagreement about how intentions relate to action, to knowledge and to belief. Philosophers even disagree on whether intentions are mental states at all.[2]

There is similar radical disagreement concerning knowledge, and concerning emotions.

This deep and widespread disagreement is itself an obstacle to using philosophical accounts to anchor our understanding: there is no one account of these notions on which the philosophers converge.

But there is another, deeper reason for thinking that we cannot use philosophical accounts to anchor our understanding, as researchers, of knowledge, intention and the rest.

Philosophers have different, mostly unarticulated aims. Some philosophers seem to be proposing new ways of thinking in the hope that we adopt them. Others appear to be attempting to make explicit principles that are implicit in a particular tradition of law or in the activities of a particular historical culture. And of course some are trying to systematise things that seem so obviously true that we can accept them without having any reason to do so (e.g. Lewis, 1969).

A further problem is that philosophical analyses often appear to rely on researchers’ own folk psychology (returning us to the problems of the previous section). Or so Nagel argues:

‘Unless there is a special reason to think that knowledge attributions work quite differently when we are reading philosophy papers—and [there is] evidence against that sort of exceptionalism—we should expect to find that epistemic case intuitions [which are among the things that inform philosophers’ views about what knowledge is] are generated by the natural capacity responsible for our everyday attributions of states of knowledge, belief and desire. This capacity has been given various labels, including 'folk psychology', 'mindreading', and 'theory of mind’’ (Nagel, 2012, p. 510).

If Nagel is right, appealing to philosophy to ground a shared understanding also inherits the problems of relying on folk psychology; so it is not even a genuine alternative.

Limit

To be clear, let me distinguish two claims:

  1. We could (mis)use philosophical accounts of minds and actions to characterise various models of mind.

  2. Philosophical accounts of minds and actions anchor a shared understanding of what knowledge, belief, joy and the rest are.

I am rejecting the second claim only. (The first claim has been very good to me, and I hope to keep misusing philosophical accounts of minds and actions.)

Conclusion

The argument works: we lack a shared understanding of belief and desire. This is where decision theory may be useful: it promises a formal framework which can elucidate notions of belief and desire, thereby providing a shared understanding (Jeffrey, 1983).
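To give a flavour of what such a formal framework offers (this is only an illustrative sketch of one principle from Jeffrey’s framework, not part of the argument above), Jeffrey characterises an agent’s desires by a desirability function des and her beliefs by a probability function prob, linked by the averaging axiom: for incompatible propositions X and Y, each with nonzero probability,

\[
\mathrm{des}(X \vee Y) \;=\; \frac{\mathrm{prob}(X)\,\mathrm{des}(X) + \mathrm{prob}(Y)\,\mathrm{des}(Y)}{\mathrm{prob}(X) + \mathrm{prob}(Y)}.
\]

On this picture, how desirable a proposition is depends on how desirable the ways of its being true are, weighted by how probable the agent takes each to be; belief and desire are thus characterised together within a single formal structure.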

Optional Extra: Operationalization

A third potential source of shared understanding (in addition to folk psychology and philosophy) is operationalisation. A variety of scientists have offered operationalisations of abilities to attribute mental states. Perhaps we could use these to anchor a shared understanding?

This is a tempting idea, but it faces two problems. First, we do not really know much about the structure of Theory of Mind, and different researchers use different taxonomies (Happé, Cook, & Bird, 2017; Beaudoin, Leblanc, Gagner, & Beauchamp, 2020).[3] Second, while there is some evidence that a wide range of false belief tasks test for a single underlying competence (Flynn, 2006, p. 650; Wellman, Cross, & Watson, 2001), theory of mind tasks more broadly appear to test for different things, in the sense that an exploratory factor analysis fails to find that they load on a single factor (Warnell & Redcay, 2019).[4]

This means that when faced with the question of what anchors our understanding, as researchers, of action, belief, knowledge and the rest, it is not enough simply to point to an operationalisation.

This does not mean, of course, that operationalisations are irrelevant, only that they cannot currently provide a shared understanding of notions like belief and desire.
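Purely as an illustration of the statistical point behind the second problem (the tasks, the data and the one-factor model below are hypothetical; this is a sketch, not Warnell and Redcay’s analysis), an internal-coherence check of the relevant kind might look roughly like this: if several theory of mind tasks operationalise a single competence, scores on them should all load substantially on one common factor.

```python
# Hypothetical sketch: do scores on several theory of mind tasks load on one factor?
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_children = 200

# Simulated data: three tasks driven by one shared competence plus noise,
# and a fourth task that mostly measures something else entirely.
competence = rng.normal(size=n_children)
scores = np.column_stack([
    competence + rng.normal(scale=0.5, size=n_children),  # e.g. a false belief task
    competence + rng.normal(scale=0.5, size=n_children),  # e.g. a change-of-location task
    competence + rng.normal(scale=0.5, size=n_children),  # e.g. an unexpected-contents task
    rng.normal(size=n_children),                          # an unrelated task
])

# Fit a one-factor model and inspect how strongly each task loads on the factor.
fa = FactorAnalysis(n_components=1).fit(scores)
for task, loading in zip(
    ["task 1", "task 2", "task 3", "task 4 (unrelated)"],
    fa.components_[0],
):
    print(f"{task:>20s}: loading = {loading:+.2f}")
```

In this simulated case the first three tasks load strongly on the common factor and the fourth does not. The worry reported above is that, for real batteries of varied theory of mind tasks, even this much single-factor structure appears to be missing.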

References

Beaudoin, C., Leblanc, É., Gagner, C., & Beauchamp, M. H. (2020). Systematic Review and Inventory of Theory of Mind Measures for Young Children. Frontiers in Psychology, 10.
Bratman, M. E. (1987). Intention, plans, and practical reason. Cambridge, MA: Harvard University Press.
Burge, T. (2009). Primitive Agency and Natural Norms. Philosophy and Phenomenological Research, 79(2), 251–278. https://doi.org/10.1111/j.1933-1592.2009.00278.x
Davidson, D. (1974). Belief and the basis of meaning. In Inquiries into truth and interpretation (pp. 141–154). Oxford: Oxford University Press.
Davidson, D. (1987). Problems in the explanation of action. In P. Pettit, R. Sylvan, & J. Norman (Eds.), Metaphysics and morality: Essays in honour of J. J. C. Smart (pp. 35–49). Oxford: Blackwell.
Dixson, H. G. W., Komugabe-Dixson, A. F., Dixson, B. J., & Low, J. (2018). Scaling Theory of Mind in a Small-Scale Society: A Case Study From Vanuatu. Child Development, 89(6), 2157–2175. https://doi.org/10.1111/cdev.12919
Flynn, E. (2006). A microgenetic investigation of stability and continuity in theory of mind development. British Journal of Developmental Psychology, 24(3), 631–654. https://doi.org/10.1348/026151005X57422
Happé, F., Cook, J. L., & Bird, G. (2017). The Structure of Social Cognition: In(ter)dependence of Sociocognitive Processes. Annual Review of Psychology, 68(1), 243–267. https://doi.org/10.1146/annurev-psych-010416-044046
Heider, F. (1958). The psychology of interpersonal relations. Hillsdale, N.J: Lawrence Erlbaum Associates.
Hyman, J. (1999). How knowledge works. Philosophical Quarterly, 49(197), 433–451.
Jeffrey, R. C. (1983). The logic of decision, second edition. Chicago: University of Chicago Press.
Kaplan, M. (1985). It’s Not What You Know That Counts. Journal of Philosophy, 82(7), 350–363. https://doi.org/jphil198582748
Keramati, M., Smittenaar, P., Dolan, R. J., & Dayan, P. (2016). Adaptive integration of habits into depth-limited planning defines a habitual-goal-directed spectrum. Proceedings of the National Academy of Sciences, 113(45), 12868–12873. https://doi.org/10.1073/pnas.1609094113
Lewis, D. K. (1969). Convention: A philosophical study. Cambridge, MA: Harvard University Press.
Lewis, D. K. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50(3), 249–258. https://doi.org/10.1080/00048407212341301
Lewis, D. K. (1996). Elusive Knowledge. Australasian Journal of Philosophy, 74(4), 549–567. https://doi.org/10.1080/00048409612347521
Nagel, J. (2012). Intuitions and Experiments: A Defense of the Case Method in Epistemology. Philosophy and Phenomenological Research, 85(3), 495–527.
O’Brien, L., & Soteriou, M. (Eds.). (2009). Mental actions. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199225989.001.0001
Phillips, J., Buckwalter, W., Cushman, F., Friedman, O., Martin, A., Turri, J., … Knobe, J. (2020). Knowledge before Belief. Behavioral and Brain Sciences, X, 1–37. https://doi.org/10.1017/S0140525X20000618
Radford, C. (1966). Knowledge: By Examples. Analysis, 27(1), 1–11. https://doi.org/10.2307/3326979
Rose, D., & Schaffer, J. (2013). Knowledge entails dispositional belief. Philosophical Studies, 166(1), 19–50. https://doi.org/10.1007/s11098-012-0052-z
Searle, J. R. (1983). Intentionality: An essay in the philosophy of mind. Cambridge: Cambridge University Press.
Setiya, K. (2014). Intention. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2014). Metaphysics Research Lab, Stanford University. Retrieved from http://plato.stanford.edu/archives/spr2014/entries/intention/
Sinhababu, N. (2013). The Desire-Belief Account of Intention Explains Everything. Noûs, 47(4), 680–696. https://doi.org/10.1111/j.1468-0068.2012.00864.x
Sosa, E. (2007). A virtue epistemology: Apt belief and reflective knowledge, volume I. Oxford: Oxford University Press.
Steward, H. (2009). Animal Agency. Inquiry, 52(3), 217–231. https://doi.org/10.1080/00201740902917119
Unger, P. K. (1975). Ignorance: A case for scepticism. Oxford: Clarendon.
Velleman, D. (1989). Practical reflection. Princeton: Princeton University Press.
Velleman, D. (2000). The possibility of practical reason. Oxford: Oxford University Press.
Warnell, K. R., & Redcay, E. (2019). Minimal coherence among varied theory of mind measures in childhood and adulthood. Cognition, 191, 103997. https://doi.org/10.1016/j.cognition.2019.06.009
Wellman, H., Cross, D., & Watson, J. (2001). Meta-analysis of theory of mind development: The truth about false-belief. Child Development, 72(3), 655–684.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

Endnotes

  1. Heider did not share Lewis’ assumption about being able to rely on common knowledge of platitudes alone. On Heider’s view, ‘If people were asked about these conditions they probably would not be able to make a complete list of them. Nevertheless, these assumptions are a necessary part of interpersonal relations; if we probe the events of everyday behavior, they can be brought to the surface and formulated in more precise terms’ (Heider, 1958, p. 60). ↩︎

  2. ‘There is a deep opposition here between accounts that take intention to be a mental state in terms of which we can explain intentional action, and those that do not’ (Setiya, 2014). ↩︎

  3. See Beaudoin et al. (2020, p. 15): ‘The lack of theoretical structure and shared taxonomy in ToM definitions and its underlying composition impedes our ability to fully integrate ToM in a coherent and comprehensive framework linking it to various socio-cognitive abilities, a pervasive issue observed across the domain of social cognition.’ ↩︎

  4. It is important to be clear about why this is a problem. It is not a problem that Theory of Mind may involve a variety of different processes and models, so that no single factor will explain performance across a sufficiently diverse set of tasks. But if you want to say, independently of answering the question about models, that we have a solid operationalization of Theory of Mind, then you need statistics to show that your operationalization has some kind of internal coherence. And that is what appears to be missing. ↩︎