
## Pure Goal Ascription: the Teleological Stance

These two questions are closely related:
1. What model of action underpins six- or twelve-month-old infants’ abilities to track the goals of actions?
2. How could infants identify goals without ascribing intentions?
The first concerns the model of action, the second the process of ascription. As we'll see, we can find an answer to the first question by thinking about the second. (Compare the physical case: the question of which model of the physical you are adopting, e.g. impetus vs Newtonian mechanics, is closely related to the question of how you make predictions about objects' movements and interactions.)
Although the first is our current question, we'll approach it by asking the second.

How could pure goal ascription work?

\newcommand{\dfGoalAscription}{\emph{Goal ascription} is the process of identifying outcomes to which purposive actions are directed as outcomes to which those actions are directed.}
\dfGoalAscription{}
Pure goal ascription is goal ascription which occurs independently of any knowledge of mental states.
In looking for an alternative model of action we are also asking, how could pure goal ascription work? Could you simply think about the goal rather than the intention, and would that be enough? No, because a goal is just an outcome. Let me try to explain with an example ...
Earlier I said that \dfGoalAscription{} Given this definition, goal ascription involves three things: \begin{enumerate} \item representing an action \item representing an outcome \end{enumerate} and \begin{enumerate}[resume] \item capturing the directedness of the action to the outcome. \end{enumerate}
It is important to see that the third item---capturing directedness---is necessary. This is quite simple but very important, so let me slowly explain why goal ascription requires representing the directedness of an action to an outcome. Imagine two people, Ayesha and Beatrice, who each intend to break an egg. Acting on her intention, Ayesha breaks her egg. But Beatrice accidentally drops her egg while carrying it to the kitchen.
So Ayesha and Beatrice perform visually similar actions which result in the same type of outcome, the breaking of an egg; but Beatrice's action is not directed to the outcome of her action whereas Ayesha's is.
Goal ascription requires the ability to distinguish between Ayesha's action and Beatrice's action. This requires representing not only actions and outcomes but also the directedness of actions to outcomes.
This is why I say that goal ascription requires capturing the directedness of an action to an outcome, and not just representing the action and the outcome.
So how could pure goal ascription work? How could we represent the directedness of an action to an outcome without representing an intention?

How could pure goal ascription work?

To explain the possibility of pure goal ascription we need to find a relation, $R$, such that: \begin{enumerate} \item reliably, $R(a,G)$ when and only when $a$ is directed\footnotemark to $G$; \item $R(a,G)$ is readily detectable; and \item $R(a,G)$ is readily detectable independently of any knowledge of mental states. \end{enumerate}
\footnotetext{
We want this to be true whether $a$’s being directed to $G$ involves intention, function or motor representation. }
We can make progress in explaining how pure goal ascription could work by identifying one or more values of $R$. What could $R$ be?

Three requirements ...

1. reliably, R(a,G) when and only when a is directed to G
2. R(a,G) is readily detectable ...
3. ... without any knowledge of mental states
What could this relation be?
R(a,G) =df a causes G?
R(a,G) =df G is a teleological function of a?
R(a,G) =df a is the most justifiable action towards G available within the constraints of reality?
How about taking $R$ to be causation? That is, how about defining $R(a,G)$ as $a$ causes $G$?
This proposal does meet the second requirement: causal relations are readily detectable, even by infants, as we saw.
This proposal also meets the third requirement: detecting causal relations does not generally require identifying mental states.
But this proposal does not meet the first requirement, (1), above. (This is the requirement that reliably, $R(a,G)$ when and only when $a$ is directed to $G$.) We can see this by noting two problems.
First problem: actions typically have side-effects which are not goals. For example, suppose that I walk over here with the goal of being next to you. This action has lots of side-effects: \begin{itemize} \item I will be at this location. \item I will expend some energy. \item I will be further away from the front. \end{itemize} These are all causal consequences of my action. But they are not goals to which my action is directed. So this version of $R$ will massively over-generate goals.
Second problem: actions can fail. [...] So this version of $R$ will under-generate goals.
Why not define $R$ in terms of teleological function?

aside: what is a teleological function?

What do we mean by teleological function?
Here is an example: \begin{quote}

Atta ants cut leaves in order to fertilize their fungus crops (not to thatch the entrances to their homes) \citep{Schultz:1999ps}

\end{quote}
What does it mean to say that the ants’ grass cutting has this goal rather than some other? According to Wright: \begin{quote}

‘S does B for the sake of G iff: (i) B tends to bring about G; (ii) B occurs because (i.e. is brought about by the fact that) it tends to bring about G.’ (Wright 1976: 39)

\end{quote}
For instance: \begin{quote}

The Atta ant cuts leaves in order to fertilize iff: (i) cutting leaves tends to bring about fertilizing; (ii) cutting leaves occurs because it tends to bring about fertilizing.

\end{quote}
So, to return to the idea, why not define $R$ in terms of teleological function?
I do not think this idea will enable us to meet the second requirement. How could we tell whether an action happens \emph{because} it brought about a particular outcome in the past? This might be done with insects. But it can't so easily be done with primates, who have a much broader repertoire of actions and a wider range of motivations.
There is a related problem with the first requirement. The problem is that we primates can perform old actions in order to achieve novel goals. In such cases there will be a mismatch between the goals of our actions and their teleological functions. Maybe we should allow that this idea will sometimes enable goal ascription to succeed, but it will probably not allow for a very wide range of goals to be correctly ascribed.
So what could R be? I think we can get a good idea by considering Csibra and Gergely's ideas about the teleological stance. They are making (in effect) a promising proposal about R.

‘an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality’

(Csibra & Gergely 1998: 255)

This idea needs further scrutiny ...
So here we are discussing what Gergely & Csibra call the teleological stance.

the teleological stance (Gergely & Csibra)

Csibra and Gergely offer a 'principle of rationality' according to which ...
Csibra & Gergely's principle of rational action: 'an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality' \citep{Csibra:1998cx,Csibra:2003jv}.
(Contrast a principle of efficiency: 'goal attribution requires that agents expend the least possible amount of energy within their motor constraints to achieve a certain end' \citep[p.\ 1061]{Southgate:2008el}.)

This principle plays two distinct roles.
One role is mechanistic: this principle forms part of an account of how infants (and others) actually ascribe goals.
Another role is normative: this principle also identifies grounds on which it would be rational to ascribe a goal.

As Csibra and Gergely formulate it, the principle might seem simple.
But actually their eloquence is hiding some complexity.
How are we to understand 'justifiable action towards that goal state?'
It is perhaps worth spelling out what might be involved in applying this principle.
Let me try to spell it out as an inference with premises and a conclusion ...
[*these notes are a bit jumbled ... I'm trying to fix some problems with their view in order to focus on a key objection.]
What do we mean by 'better means'? A problem with defining $R$ in terms of rationality is the requirement of core knowledge / modularity. So what about efficiency instead of rationality? One problem with defining $R$ in terms of minimising energy is that in acting we often face a trade-off between how much energy to put into an action and how likely the action is to result in success.
Suppose I can save some energy by throwing the cup at the sink instead of walking over and carefully placing it in the sink, and suppose that I choose to walk over and place the cup in the sink. In this situation the principle of efficiency fails to identify $G$, placing the cup in the sink, as the goal of my action.
One way to address this problem might be to think of efficiency in terms of achieving a good trade-off between several factors: not just energy but also the probability that a particular action will in fact result in the goal being achieved. This is the idea I am trying to get at here ...
An action of type $a'$ is a better means of realising outcome $G$ in a given situation than an action of type $a$ if, for instance, actions of type $a'$ normally involve less effort than actions of type $a$
in situations with the salient features of this situation
and everything else is equal;
or if, for example, actions of type $a'$ are normally more likely to realise outcome $G$ than actions of type $a$
in situations with the salient features of this situation
and everything else is equal.
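Schematically, and only as my own shorthand (the symbols $\mathrm{effort}$ and $\mathrm{pr}$, and the compressed ceteris paribus clauses, are mine, not Csibra and Gergely's):

```latex
\begin{align*}
\mathrm{better}(a', a, G)\ \text{if either:}\quad
  & \mathrm{effort}(a') < \mathrm{effort}(a)
    \ \text{in situations like this one, all else equal; or}\\
  & \mathrm{pr}(G \mid a') > \mathrm{pr}(G \mid a)
    \ \text{in situations like this one, all else equal.}
\end{align*}
```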

A problem with what we have so far is side-effects, which can be highly reliable.
Actions typically have side-effects which are not goals. For example,
suppose that I walk over here with the goal of being next to you.
This action has lots of side-effects:
\begin{itemize}
\item I will be at this location.
\item I will expend some energy.
\item I will be this much further away from the front
\end{itemize}
These are not goals to which my action is directed.
But they are things which my action would be a rational and efficient way of bringing about.
So there is a risk that these optimising versions of $R$ will over-generate goals.
I think this first problem can be solved by adding a clause about desire.
[*] We can substantially mitigate the problem of side-effects by requiring that $R(a,G)$ hold only where $G$ is the type of outcome which is typically desirable for agents like $a$.
Now so far we've been considering this as an account of how someone could identify to which goal an action is directed without thinking about mental states.
That is, this inference is the core component in an account of pure goal ascription.
This gives us, in effect, a specification of R.
I've spent some time formulating this idea because I think it's a good candidate. We are not yet ready to accept the idea, though. Let's consider whether it meets the three requirements.
I take it the third requirement is obviously met.
But what about the first and second requirements? It's just here that I think there is a problem we need to solve.
This is the problem. How good is the agent at optimising the rationality, or the efficiency, of her actions? And how good is the observer at identifying the optimality of actions in relation to outcomes? \textbf{For the relation to be readily detectable, we want there to be a match between (i) how well the agent can optimise her actions and (ii) how well the observer can detect optimality.} Failing such a match, the relation $R$ will either not be detectable or not be reliable.
Csibra and Gergely seem both aware of this issue and dismissive of it.

Such calculations require detailed knowledge of biomechanical factors that determine the motion capabilities and energy expenditure of agents. However, in the absence of such knowledge, one can appeal to heuristics that approximate the results of these calculations on the basis of knowledge in other domains that is certainly available to young infants. For example, the length of pathways can be assessed by geometrical calculations, taking also into account some physical factors (like the impenetrability of solid objects). Similarly, the fewer steps an action sequence takes, the less effort it might require, and so infants’ numerical competence can also contribute to efficiency evaluation.’

Csibra & Gergely (forthcoming ms p. 8)

In short, their solution to the problem--the problem of matching optimisation in planning actions with optimisation in predicting them--appears to be to insist that infants just do really complex detection. But this threatens the ready detectability of the relation.
Let me offer a quick interim summary.

summary so far

How could pure goal ascription work?

R(a,G) = ???

teleological stance

problem: detecting optimality

a solution?

goal ascription is acting in reverse

The idea is that we could solve the problem--the problem of matching optimisation in planning actions with optimisation in predicting them--by supposing that a single set of mechanisms is used twice: once in planning actions, and once again in observing them.
What does this require?

-- in action observation, possible outcomes of observed actions are represented

-- these representations trigger planning as if performing actions directed to the outcomes

-- such planning generates predictions

predictions about joint displacements and their sensory consequences

-- a triggering representation is weakened if its predictions fail

The proposal is not specific to the idea of motor representations and processes, although there is good evidence for it (which I won't cover here because we're in Milan!)
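The loop just listed can be put as toy code. This is only an illustration under my own assumptions: the names (`ascribe_goals`, `toy_plan`), the halving of a weight when a prediction fails, and the threshold are all invented for the example, not part of the proposal.

```python
def ascribe_goals(observed_movements, candidate_outcomes, plan, threshold=0.5):
    """Weight each candidate outcome by how well planning an action
    directed to it predicts the observed movements (toy illustration)."""
    weights = {g: 1.0 for g in candidate_outcomes}
    for movement in observed_movements:
        for goal in candidate_outcomes:
            predicted = plan(goal)           # plan as if acting for this goal
            if movement not in predicted:    # a prediction failed, so ...
                weights[goal] *= 0.5         # ... weaken this representation
    return [g for g, w in weights.items() if w >= threshold]

# Toy planning mechanism: maps a goal to the movements it would produce.
def toy_plan(goal):
    return {"place cup in sink": ["walk", "place"],
            "throw cup at sink": ["throw"]}[goal]

goals = ascribe_goals(["walk", "place"],
                      ["place cup in sink", "throw cup at sink"],
                      toy_plan)
```

Observing walking-and-placing, only the sink-placing outcome survives planning-based prediction, so only it remains as the ascribed goal.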
So here is my proposal about how pure goal ascription could work.
1. reliably, R(a,G) when and only when a is directed to G
2. R(a,G) is readily detectable ...
3. ... without any knowledge of mental states

R(a,G) =df a is the most justifiable action towards G available within the constraints of reality

R_M(a,G) =df if M were tasked with producing G, it would plan action a
So here's the idea. The relation $R(a,G)$ should be defined relative to a planning mechanism. For planning mechanism $M$, $R_M(a,G)$ holds just if, were $M$ tasked with producing $G$, it would plan action $a$.
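As a minimal sketch of this definition (the `planner` function standing in for mechanism $M$, and the toy goals and actions, are hypothetical):

```python
def R_M(planner, a, G):
    # R_M(a, G): were the planning mechanism tasked with producing G,
    # it would plan action a.
    return planner(G) == a

# Toy planning mechanism, purely for illustration.
def toy_planner(goal):
    return {"egg broken": "crack egg", "cup in sink": "place cup"}[goal]

holds = R_M(toy_planner, "crack egg", "egg broken")
```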
With respect to the problem for the teleological stance, which was about matching observer and agent: we ensure a match insofar as observer and agent have similar planning mechanisms; this means, of course, that they must have similar expertise.
We also ensure we get good trade-offs---we get the right principle by deferring to the kinds of planning mechanism responsible for producing the action.
So I'm rejecting this claim.
Pure goal ascription need not involve reasoning at all.

‘when taking the teleological stance one-year-olds apply the same inferential principle of rational action that drives everyday mentalistic reasoning about intentional actions in adults’

(György Gergely and Csibra 2003; cf. Csibra, Bíró, et al. 2003; Csibra and Gergely 1998: 259)

Let me return to the two questions I started this section with:
1. What model of action underpins six- or twelve-month-old infants’ abilities to track the goals of actions?
2. How could infants identify goals without ascribing intentions?

(I.e., How could pure goal ascription work?)

Answer: goal ascription is acting in reverse

So far we've been discussing this question.
Answer: goal ascription is acting in reverse
But what about the first question? To describe how they identify goals (that is, how they distinguish which among the actual and possible outcomes of an action are its goals) is not yet quite to have explained how they model actions. Fortunately not much more is needed ...
Recall that a model of action has to explain in virtue of what an action is directed to a goal.
This, as I mentioned, is standardly done by invoking intentions. But there is another way.
Recall this:
R_M(a,G) =df if M were tasked with producing G, it would plan action a

Now do you remember Dennett's ingenious twist in The Intentional Stance? We're going to make the same move here.

‘What it is to be a true believer is to be … a system whose behavior is reliably and voluminously predictable via the intentional strategy.’

\citep[p.\ 15]{Dennett:1987sf}


Dennett was interested in beliefs, but we can make essentially the same move for goals

An outcome, $G$, is among the goals of an action, $a$, exactly if $R_M(a,G)$

The idea is to turn a heuristic into a constitutive claim.
So the infants' model of action is one on which goals are goals in virtue of relations between planning mechanisms and outcomes.
Note that this may not be a fully accurate model of action. Just like impetus mechanics, it is useful even if only approximate.