This lecture introduces two (of many) interface problems: problems which arise when actions are controlled by two or more representations that are not inferentially integrated. How is it possible that the outcomes these representations represent non-accidentally match?
The representations’ influence on a single action indicates that the outcomes they represent must at least sometimes non-accidentally match; otherwise there would be nothing at all to coordinate the representations’ influence on that action.
The lack of inferential integration rules out the most straightforward way of explaining how non-accidental matches occur—namely, through processes of inference.
We must therefore ask how non-accidental matches are possible. This question is an interface problem.
In this lecture we will identify two interface problems. These involve:
- motor representations and intentions (see The Interface Problem: Motor Representation vs Intention)
- primary motivational states and preferences (see Preference vs Aversion: A Dissociation)
In addition to being of interest in their own right, interface problems matter for both philosophical and psychological theories of action.
Prerequisites and What to Skip
This lecture depends on your having studied some sections from a previous lecture.
None of this lecture is required for the minimum course of study.
To illustrate, one way of matching is for the B-outcomes to be the A-outcomes. Another way of matching is for the B-outcomes to stand to the A-outcomes as elements of a more detailed plan stand to those of a less detailed one.
[of plan-like structures] In the simplest case, plan-like hierarchies of motor representations match if they are identical. More generally, plan-like hierarchies match if the differences between them do not matter in the following sense.

For a plan-like hierarchy in an agent, let the self part be those motor representations concerning the agent's own actions, and let the other part be the remaining motor representations. First consider what would happen if, for a particular agent, the other part of her plan-like hierarchy were as nearly identical to the self part (or parts) of the other's plan-like hierarchy (or others' plan-like hierarchies) as psychologically possible. Would the agent's self part be different? If not, let us say that any differences between her plan-like hierarchy and the other's (or others') are not relevant for her.

Finally, if for some agents' plan-like hierarchies of motor representations the differences between them are not relevant for any of the agents, then let us say that the differences do not matter.