
\title {Origins of Mind: Lecture Notes \\ Lecture 08}
 
\maketitle
 
\def \ititle {Origins of Mind}
\def \isubtitle {Lecture 08}
 
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
 
\section{Action: The Basics}
Our first question is, When do human infants first track goal-directed actions and not just movements?
In examining nonlinguistic communication, we've assumed that infants from around 11 months of age can produce and comprehend informative pointing.
This commits us to saying that they have understood action.

When do human infants first track goal-directed actions rather than mere movements?

\#source 'research/teleological stance -- csibra and gergely.doc'
\#source 'lectures/mindreading and joint action - philosophical tools (ceu budapest 2012-autumn fall)/lecture05 actions intentions goals'
\#source 'lectures/mindreading and joint action - philosophical tools (ceu budapest 2012-autumn fall)/lecture06 goal ascription teleological motor'
When do human infants first track goal-directed actions and not just movements?
Here's a classic experiment from way back in 1995.
The subjects were 12-month-old infants.
They were habituated to this sequence of events.

Gergely et al 1995, figure 1

There was also a control group who were habituated to a display like this one but with the central barrier moved to the right, so that the action of the ball is 'non-rational'.
For the test condition, infants were divided into two groups. One saw a new action, ...
... the other saw an old action.
Now if infants were considering the movements only and ignoring information about the goal, the 'new action' (movement in a straight line) should be more interesting because it is most different.
But if infants are taking goal-related information into account, the 'old action' might be unexpected and so might generate greater dishabituation.

Gergely et al 1995, figure 3

Gergely et al 1995, figure 5

‘by the end of the first year infants are indeed capable of taking the intentional stance (Dennett, 1987) in interpreting the goal-directed behavior of rational agents.’
\citep[p.\ 184]{Gergely:1995sq}
‘12-month-old babies could identify the agent’s goal and analyze its actions causally in relation to it’
\citep[p.\ 190]{Gergely:1995sq}
You might say it's bizarre to have used balls in this study; surely that can't show us anything about infants' understanding of action.
But adult humans naturally interpret the movements of even very simple shapes in terms of goals.
So using even very simple stimuli doesn't undermine the interpretation of these results.

Heider and Simmel, figure 1

Consider a related study by Woodward and colleagues.
(It's good that there is converging evidence from different labs, using quite different stimuli.)

Woodward et al 2001, figure 1

'Six-month-olds and 9-month-olds showed a stronger novelty response (i.e., looked longer) on new-goal trials than on new-path trials (Woodward 1998). That is, like toddlers, young infants selectively attended to and remembered the features of the event that were relevant to the actor’s goal.'
\citep[p.\ 153]{woodward:2001_making}
Consider a further experiment by \citet{Csibra:2003jv}.
This is just like the first ball-jumping experiment except that here infants see the action but not the circumstances in which it occurs.
Do they expect there to be an object in the way behind that barrier?

Csibra et al 2003, figure 6

human adults

Why think about adults when you want to know about the origins of knowledge?
Because sometimes it's possible to identify adult abilities which are plausibly identical to infant abilities.
We saw an example of this in thinking about knowledge of minds, and in thinking about knowledge of colour.
In both cases, linking infants' competence to adults' competence allowed us to better understand the competence.
Can we also make such a link in the case of action?
I already mentioned this classic study by Heider and Simmel.
This kind of study has been done with human adults quite a bit since then.
It is quite tempting to suppose that what we see here is automatic ascription of goals in adults.

Heider and Simmel, figure 1

automatic? perceptual?

But is it automatic? Is it perceptual? Is there any evidence?
\citet{Premack:1990jl} and \citet{Scholl:2000eq} both draw a parallel between causation and action.
(Premack's hypothesis is that infants' understanding of intention is comparable to the understanding of causal interactions shown by Michotte's launching stimuli (\citep{Premack:1990jl}; see also \citep{Premack:1997ek}).)
We saw (back in lecture 4) that there is quite convincing evidence that causal interactions are represented in perceptual processes.
So if there is really a parallel, we could infer that relations between actions and goals are represented in perceptual processes too.

‘just as the visual system works to recover the physical structure of the world by inferring properties such as 3-D shape, so too does it work to recover the causal and social structure of the world by inferring properties such as causality’

Scholl & Tremoulet 2000, p. 299

\citep[p.\ 299]{Scholl:2000eq}
[Here I'm not trying to provide evidence, only to explain what the claim commits us to.]
Recall this habituation experiment.
Scholl and Tremoulet suggest that both causal and social structure are recovered by perceptual processing.
The new Gergely et al experiment about action is based on the same habituation technique.
If I had more time, I'd tell you a long story about speech and action.
Maybe some of us will get together another time and I can tell you that story then.
(*nb: \citep{zwickel:2010_interference} also argue for perceptual categories in perceiving actions, and do so by comparison with pop-out effects for orientations.)

evidence?

Subjects have to judge whether the dot is to the left or the right of the triangle (from their perspective).
If you think this is just a triangle, then it doesn't have a left or right so there's no congruent or incongruent.
But if you think the triangle performs goal directed actions, then in the figure on the top right, the dot is left of the triangle from your point of view but right of the triangle from its point of view.
Could there be altercentric interference?

Zwickel et al 2011, figure 1

Here are the results.
When the triangle makes random movements: there's no difference in RT between congruent and incongruent conditions. (As you'd expect---this just shows that there's nothing wrong with the setup.)
(The experiment involved comparing neurotypical (ordinary) subjects with AS subjects; that's interesting but too complex for us, so we'll just focus on the neurotypical subjects' performance.)
Could there be altercentric interference?

Zwickel et al 2011, figure 2

What can we conclude?
For adults, tracking information about goals makes one susceptible to interference from another's perspective---it makes one susceptible to evaluating left and right from another's point of view.
This altercentric interference effect is a long way short of showing that goal ascription is automatic or perceptual.
After all, such interference might be a consequence of thinking reflectively about the goals of the objects.
But it is at least a step in that direction: it shows that, in adults, goal ascription can at least have effects on automatic processes.
(If automatic processes are informationally encapsulated, then this is evidence that goal ascription is automatic.)
(*Other evidence: \citep{Gao:2010,Teufel:2010})

Zwickel et al 2011, figure 1

altercentric interference

Would be nice to talk about this but we don't have time (could use some Brian Hare experiments; also Gomez?).

nonhuman primates

So far ...

  1. Infants track the goals to which actions are directed from around three months of age.
  2. Adults’ abilities to track goal-directed action may resemble their abilities to track causal interactions in being (i) automatic and perhaps even (ii) perceptual.
(The three months of age figure comes from \citet{Sommerville:2005te}.)
So it's maybe plausible there is core knowledge of action.
What we haven't done here (and can't do, as far as I know) is to show that infants lack knowledge knowledge of action.
So it's open to someone to deny that infants have core knowledge of action on the grounds that they have knowledge knowledge of it.

core knowledge of

  1. (colour)
  2. physical objects
  3. mental states
  4. (syntax)
  5. action
  6. number
 

\section{How Do Infants Model Actions?}
(In this unit I am merely raising a question.)

What model of action underpins six- or twelve-month-old infants’ abilities to track the goals of actions?

Or, to put the question another way, What do six- or twelve-month-old infants understand of actions and their goals?
It's essential to see that there's a question here. We've shown that infants from around six months of age can track differences that are in fact differences in the actions. But we haven't shown how infants understand these differences; we haven't said how actions appear from the point of view of the infant.
[***Say something about which model here by analogy with the physical case. Also relate it to the mental case if already done the mindreading lecture.]

What is a model of action?

To answer this question we need to step back and ask a more basic one: What is a model of action?
The key to answering this question (what is a model of action?) is to understand what a model of action needs to achieve. Part or all of what it needs to achieve is a specification of how purposive actions are related to their goals. That is, the model has to answer this question: What is the relation between an action and the outcome or outcomes to which it is directed?
One feature of actions is that, among all their actual and possible outcomes, some are goals to which they are directed. I seize little Isabel by the wrists and swing her around, thereby making her laugh and breaking a vase. You might wonder what the goal of my action was. Did I act in order to break the vase or to make Isabel laugh? Or was my action perhaps directed to some other goal, one that was not realised so that my action failed?
A model of action has to specify the relation between actions and the goal or goals to which they are directed. That is, it has to answer this question: Among all of the actual and possible outcomes of an action, which are goals of the action?
The standard answer to this question involves intention. An intention (1) represents an outcome, (2) causes an event, and (3) causes an event whose occurrence would normally lead to the outcome’s occurrence. What singles out an actual or possible outcome as one to which the action is directed? It is the fact that this outcome is represented by the intention.
Note, by the way, that goals are not intentions. Goals are actual or possible outcomes. They are states of affairs. Intentions, by contrast, are or involve mental states that represent a goal; they are a variety of goal-states. It would be a terrible mistake to confuse a goal with a goal-state. That would be like confusing a person with a photograph.
So, on one model of action, the intention is what links an action to the outcomes to which it is directed. Is this the model of action that infants actually use?

Does infants’ model involve intentions?

Our question is, How do infants model actions? One possibility is that they use an adult commonsense model involving intentions. According to this model, actions are events appropriately related to intentions and whether something is a goal of an action is determined by the contents of those intentions.
\citet{Premack:1990jl} endorses this possibility. He writes:

Yes: ‘in perceiving one object as having the intention of affecting another, the infant attributes to the object [...] intentions’

Premack 1990: 14

\citep[p.\ 14]{Premack:1990jl}
By contrast, Gergely et al reject this possibility ...

No: ‘by taking the intentional stance the infant can come to represent the agent’s action as intentional without actually attributing a mental representation of the future goal state’

Gergely et al 1995, p. 188

\citep[p.\ 188]{Gergely:1995sq}
Btw, it isn't clear that this proposal can work (as introduced by Dennett, the intentional stance involves ascribing mental states), as these authors probably realised later, but the point about not representing mental states is good.
Finally, \citet{woodward:2001_making} offer a mixed view: infants do think about intentions, but don't have the same model of intention that adults do.

Sort of: ‘to the extent that young infants are limited [...], their understanding of intentions would be quite different from the mature concept of intentions’

Woodward et al 2001, p. 168

\citep[p.\ 168]{woodward:2001_making}
This isn't a very useful view for our purposes because it doesn't involve specifying a model. It merely says that the model, whatever it is, isn't the one we already have some understanding of.
Since that doesn't help us, I'm going to discount this view. In what follows I will first consider Premack's view and then the alternative. This is worthwhile because each group takes a different view of how infants model actions and there don't seem to be arguments anywhere.
 

\section{Does Infants’ Model of Action Involve Intentions?}
So we have this model of action, where the relation between an action and a goal exists in virtue of an intention which represents the goal, coordinates the action, and coordinates the action in such a way that, normally, the existence of the action increases the probability that the outcome will occur.
Our question is whether this is infants' model of action. To answer this question we need first to understand more about the model. In particular we need to know ...
What is an intention?
[*new route: for the purposes of understanding these agents, we want the simplest possible notion of intention; so we'll take intention as an action-causing belief-desire pair.]
The idea I want to consider is, surprisingly, that there are no such things as intentions.
\begin{quote}
`The expression `the intention with which James went to church' has the outward form of a description, but in fact it
...\ % is syncategorematic and
cannot be taken to refer to an entity, state, disposition, or event. Its function in context is to generate new descriptions of actions in terms of their reasons; thus `James went to church with the intention of pleasing his mother' yields a new, and fuller, description of the action described in `James went to church'.'
\citep[p.\ 690]{davidson:1963_orig}
\end{quote}
What motivates this view?
We already have beliefs and desires in our model of action explanation.
Introducing intentions as additional mental states would make the model more complicated.
So if we can do without intentions, we should do so in the interests of simplicity.
But how can we do without intentions?
Haven't we just seen that we need intentions in order to explain the relation between an action and the goal or goals to which it is directed?
Here's how Davidson's view works.
James desired to please his mother.
James believed that going to church would please his mother.
And this belief and desire caused his going to church.
So the belief--desire pair can play the role of an intention.
It (1) represents an outcome (in this case, the pleasing of James' mother), (2) causes an event (James' going to church), and (3) causes an event whose occurrence would normally lead to the outcome’s occurrence.
It appears, then, that we can explain the relation between an action and the goal or goals to which it is directed just in terms of belief and desire.
We don't need to introduce intentions as further mental states.
If we like we can say that an intention just is a suitable, action-causing belief-desire pair.
This view of intention is parallel to a view about knowledge.
Some suppose that knowledge is justified true belief.
Or belief meeting some condition like being true and justified.
On this sort of view, knowledge is not a mental state over and above belief.
Rather, there are just beliefs and some of these beliefs have a special status.
Similarly, Davidson's idea might be put by saying that intentions are just action-causing belief--desire pairs.
(This is not to say that \emph{any} belief--desire pair is an intention, of course.)
Let me give you one more example.
Let's assume that we know what beliefs and desires are.
It's a familiar idea that they combine to cause actions like this:

Belief: that if I don’t comb my hair I will get rabies.

Desire: that I don’t get rabies.

Baldric combed his hair with the intention of avoiding rabies.

To say that he had the intention to avoid rabies is, on this view, just to say that his action was caused by a belief and a desire with these contents.
Now this isn't the right way to model adults' understanding of intention, but it's an approximation that works in the cases we're concerned with.
(An interesting question is why we can't stop with intention = action-causing belief-desire pair, and what intentions are for; but this is a topic for the course about reasons and actions.)
Tomasello and Call say that this *is* the model of action that underpins primate cognition.
And we saw earlier that Premack says the same about infants, although not everyone agrees.

‘in perceiving one object as having the intention of affecting another, the infant attributes to the object [...] intentions’

Premack 1990: 14

So should we just stop here? There are at least three reasons not to stop but rather to look for alternative models of action ...
Reason (a): understanding intention is not something potentially less sophisticated than understanding belief; on the contrary, even the simplest way of understanding intention presupposes understanding belief (and desire). So unless we think infants have a sophisticated understanding of mental states, we shouldn't suppose that their model of action includes intention.
Reason (b): Minimal theory of mind ... presupposes an understanding of goal-directed action. If we explain goal-directed action in terms of intention, we are thereby sneaking beliefs and desires in through the back door. So the whole minimal theory of mind project would fail. (You might say that on minimal theory of mind, instead of beliefs and desires we could combine registration and preference; but that would affect the construction, since registration was explained in terms of goal-directed action.)
Reason (c): Mere curiosity. It would be good to see if there is any other, perhaps simpler way of understanding this relation, perhaps one that doesn't involve insight into mental states.
So, is there a simpler model of the relation, one that infants might understand?
 

\section{Pure Goal Ascription: the Teleological Stance}
These two questions are closely related:
  1. What model of action underpins six- or twelve-month-old infants’ abilities to track the goals of actions?
  2. How could infants identify goals without ascribing intentions?
The first concerns the model of action, the second the process of ascription. As we'll see, we can find an answer to the first question by thinking about the second. (Compare the physical case: the question of which model of the physical you are adopting, e.g. impetus vs Newtonian mechanics, is closely related to the question of how you make predictions about objects' movements and interactions.)
Although the first is our current question, we'll approach it by asking the second.

How could pure goal ascription work?

\newcommand{\dfGoalAscription}{\emph{Goal ascription} is the process of identifying outcomes to which purposive actions are directed as outcomes to which those actions are directed.}
\dfGoalAscription{}
Pure goal ascription is goal ascription which occurs independently of any knowledge of mental states.
In looking for an alternative model of action we are also asking, How could pure goal ascription work? Could you maybe just think about the goal rather than the intention, and would that be enough? Well, no, because a goal is just an outcome. Let me try to explain with an example ...
Earlier I said that \dfGoalAscription{} Given this definition, goal ascription involves three things: \begin{enumerate} \item representing an action \item representing an outcome \end{enumerate} and \begin{enumerate}[resume] \item capturing the directedness of the action to the outcome. \end{enumerate}  
It is important to see that the third item---capturing directedness---is necessary. This is quite simple but very important, so let me slowly explain why goal ascription requires representing the directedness of an action to an outcome. Imagine two people, Ayesha and Beatrice, who each intend to break an egg. Acting on her intention, Ayesha breaks her egg. But Beatrice accidentally drops her egg while carrying it to the kitchen.
So Ayesha and Beatrice perform visually similar actions which result in the same type of outcome, the breaking of an egg; but Beatrice's action is not directed to the outcome of her action whereas Ayesha's is.
Goal ascription requires the ability to distinguish between Ayesha's action and Beatrice's action. This requires representing not only actions and outcomes but also the directedness of actions to outcomes.
This is why I say that goal ascription requires capturing the directedness of an action to an outcome, and not just representing the action and the outcome.
So how could pure goal ascription work? How could we represent the directedness of an action to an outcome without representing an intention?

How could pure goal ascription work?

To explain the possibility of pure goal ascription we need to find a relation, $R$, such that: \begin{enumerate} \item reliably, $R(a,G)$ when and only when $a$ is directed\footnotemark to $G$; \item $R(a,G)$ is readily detectable; and \item $R(a,G)$ is readily detectable independently of any knowledge of mental states. \end{enumerate}
\footnotetext{
We want this to be true whether $a$’s being directed to $G$ involves intention, function or motor representation. }
We can make progress in explaining how pure goal ascription could work by identifying one or more values of $R$. What could $R$ be?

Three requirements ...

  1. reliably, R(a,G) when and only when a is directed to G
  2. R(a,G) is readily detectable ...
  3. ... without any knowledge of mental states
What could this relation be?
R(a,G) =df a causes G ?
R(a,G) =df G is a teleological function of a ?
R(a,G) =df a is the most justifiable action towards G available within the constraints of reality ?
How about taking $R$ to be causation? That is, how about defining $R(a,G)$ as $a$ causes $G$?
This proposal does meet the second requirement: causal relations are readily detectable, even by infants, as we saw.
This proposal also meets the third requirement: detecting causal relations does not generally require identifying mental states.
But this proposal does not meet the first criterion, (1), above. (This is the requirement that reliably, R(a,G) when and only when a is directed to G.) We can see this by mentioning two problems.
First problem: actions typically have side-effects which are not goals. For example, suppose that I walk over here with the goal of being next to you. This action has lots of side-effects: \begin{itemize} \item I will be at this location. \item I will expend some energy. \item I will be further away from the front. \end{itemize} These are all causal consequences of my action. But they are not goals to which my action is directed. So this version of $R$ will massively over-generate goals.
Second problem: actions can fail. [...] So this version of $R$ will under-generate goals.
Why not define $R$ in terms of teleological function?

aside: what is a teleological function?

What do we mean by teleological function?
Here is an example: % \begin{quote}

Atta ants cut leaves in order to fertilize their fungus crops (not to thatch the entrances to their homes) \citep{Schultz:1999ps}

\end{quote}
What does it mean to say that the ants’ grass cutting has this goal rather than some other? According to Wright: \begin{quote}

‘S does B for the sake of G iff: (i) B tends to bring about G; (ii) B occurs because (i.e. is brought about by the fact that) it tends to bring about G.’ (Wright 1976: 39)

\end{quote}
For instance: % \begin{quote}

The Atta ant cuts leaves in order to fertilize iff: (i) cutting leaves tends to bring about fertilizing; (ii) cutting leaves occurs because it tends to bring about fertilizing.

\end{quote}
So, to return to the idea, why not define $R$ in terms of teleological function?
I do not think this idea will enable us to meet the second condition. How could we tell whether an action happens \emph{because} it brought about a particular outcome in the past? This might be done with insects. But it can't so easily be done with primates, who have a much broader repertoire of actions and a wider range of motivations.
There is a related problem with the first requirement. The problem is that we primates can perform old actions in order to achieve novel goals. In such cases there will be a mismatch between the goals of our actions and their teleological functions. Maybe we should allow that this idea will sometimes enable goal ascription to succeed, but it will probably not allow for a very wide range of goals to be correctly ascribed.
So what could R be? I think we can get a good idea by considering Csibra and Gergely's ideas about the teleological stance. They are making (in effect) a promising proposal about R.

‘an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality’

(Csibra & Gergely 1998: 255)

This idea needs further scrutiny ...
So here we are discussing what Gergely & Csibra call the teleological stance.

the teleological stance (Gergely & Csibra)

Csibra and Gergely offer a 'principle of rationality' according to which ...
Csibra & Gergely's principle of rational action: `an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality.'\citep{Csibra:1998cx,Csibra:2003jv}
(Contrast a principle of efficiency: `goal attribution requires that agents expend the least possible amount of energy within their motor constraints to achieve a certain end' \citep[p.\ 1061]{Southgate:2008el}).
 
This principle plays two distinct roles.
One role is mechanistic: this principle forms part of an account of how infants (and others) actually ascribe goals.
Another role is normative: this principle also identifies grounds on which it would be rational to ascribe a goal.
 
As Csibra and Gergely formulate it, the principle might seem simple.
But actually their eloquence is hiding some complexity.
How are we to understand 'justifiable action towards that goal state?'
It is perhaps worth spelling out what might be involved in applying this principle.
Let me try to spell it out as an inference with premises and a conclusion. Roughly: (1) action a has occurred; (2) within the constraints of reality, a is the most justifiable action available towards goal state G; therefore (3) a can be explained by G, that is, G is a goal of a.
[*these notes are a bit jumbled ... I'm trying to fix some problems with their view in order to focus on a key objection.]
What do we mean by `better means'? One problem with defining $R$ in terms of rationality is the requirement of core knowledge / modularity. So what about efficiency instead of rationality? One problem with defining $R$ in terms of minimising energy is that in acting we often face a trade-off between how much energy to put into an action and how likely the action is to result in success.
Suppose I can save some energy by throwing the cup at the sink instead of walking over and carefully placing it in the sink, and suppose that I choose to walk over and place the cup in the sink. In this situation the principle of efficiency fails to identify $G$, placing the cup in the sink, as the goal of my action.
One way to address this problem might be to think of efficiency in terms of achieving a good trade-off between several factors: not just energy but also the probability that a particular action will in fact result in the goal being achieved. This is the idea I am trying to get at here ...
An action of type $a'$ is a better means of realising outcome $G$ in a given situation than an action of type $a$ if, for instance, actions of type $a'$ normally involve less effort than actions of type $a$ in situations with the salient features of this situation and everything else is equal; or if, for example, actions of type $a'$ are normally more likely to realise outcome $G$ than actions of type $a$ in situations with the salient features of this situation and everything else is equal.
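To make the trade-off idea concrete, here is a minimal sketch in Python (the action names, numbers, and the dominance rule are hypothetical illustrations of the definition above, not anything from Csibra and Gergely): one action type counts as a better means than another only if it wins on effort or on reliability without losing on the other dimension.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class ActionType:
    """A type of action, summarised by its normal effort and its normal
    probability of realising outcome G in situations like this one."""
    name: str
    effort: float      # normal energetic cost (arbitrary units)
    p_success: float   # normal probability of realising G

def better_means(a_prime: ActionType, a: ActionType) -> bool:
    """a' is a better means to G than a if it normally involves less
    effort, everything else being equal, or is normally more likely to
    realise G, everything else being equal."""
    less_effort = a_prime.effort < a.effort and a_prime.p_success >= a.p_success
    more_reliable = a_prime.p_success > a.p_success and a_prime.effort <= a.effort
    return less_effort or more_reliable

# The cup example: throwing saves energy but is less likely to succeed,
# so neither action dominates the other on this definition.
throw = ActionType("throw cup at sink", effort=1.0, p_success=0.4)
walk = ActionType("walk over and place cup", effort=3.0, p_success=0.99)
print(better_means(throw, walk))  # False: throwing loses on reliability
print(better_means(walk, throw))  # False: walking loses on effort
\end{verbatim}
Note that a pure energy-minimisation principle would wrongly rank throwing above walking here; the trade-off version leaves neither action better than the other, which is the point of the cup example.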
 
A problem with what we have so far is side-effects, which can be highly reliable.
Actions typically have side-effects which are not goals. For example,
suppose that I walk over here with the goal of being next to you.
This action has lots of side-effects:
\begin{itemize}
\item I will be at this location.
\item I will expend some energy.
\item I will be this much further away from the front
\end{itemize}
These are not goals to which my action is directed.
But they are things which my action would be a rational and efficient way of bringing about.
So there is a risk that these optimising versions of $R$ will over-generate goals.
I think this first problem can be solved by adding a clause about desire.
[*] We can substantially mitigate the problem of side-effects by requiring that $R(a,G)$ hold only where $G$ is the type of outcome which is typically desirable for agents like $a$.
Now so far we've been considering this as an account of how someone could identify to which goal an action is directed without thinking about mental states.
That is, this inference is the core component in an account of pure goal ascription.
This gives us, in effect, a specification of R.
I've spent some time formulating this idea because I think it's a good candidate. We are not yet ready to accept the idea, though. Let's consider whether it meets the three requirements.
I take it the third requirement is obviously met.
But what about the first and second requirements? It's just here that I think there is a problem we need to solve.
This is the problem. How good is the agent at optimising the rationality, or the efficiency, of her actions? And how good is the observer at identifying the optimality of actions in relation to outcomes? \textbf{For the relation to be readily detectable, we want there to be a match between (i) how well the agent can optimise her actions and (ii) how well the observer can detect optimality.} Failing such a match, the relation R will either not be detectable or not be reliable.
Csibra and Gergely seem both aware of this issue and dismissive of it.

`Such calculations require detailed knowledge of biomechanical factors that determine the motion capabilities and energy expenditure of agents. However, in the absence of such knowledge, one can appeal to heuristics that approximate the results of these calculations on the basis of knowledge in other domains that is certainly available to young infants. For example, the length of pathways can be assessed by geometrical calculations, taking also into account some physical factors (like the impenetrability of solid objects). Similarly, the fewer steps an action sequence takes, the less effort it might require, and so infants’ numerical competence can also contribute to efficiency evaluation.’

Csibra & Gergely (forthcoming ms p. 8)

In short, their solution to the problem--the problem of matching optimisation in planning actions with optimisation in predicting them--appears to be to insist that infants just do really complex detection. But this threatens the ready detectability of the relation.
Let me offer a quick interim summary.

summary so far

How could pure goal ascription work?

R(a,G) = ???

teleological stance

problem: detecting optimality

a solution?

goal ascription is acting in reverse

The idea is that we could solve the problem--the problem of matching optimisation in planning actions with optimisation in predicting them--by supposing that a single set of mechanisms is used twice, once in planning actions and once again in observing them.
What does this require?

-- in action observation, possible outcomes of observed actions are represented

-- these representations trigger planning as if performing actions directed to the outcomes

-- such planning generates predictions

predictions about joint displacements and their sensory consequences

-- a triggering representation is weakened if its predictions fail

The proposal is not specific to the idea of motor representations and processes, although there is good evidence for it (which I won't cover here because we're in Milan!)
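Here is one way the proposal could work, offered as a minimal sketch rather than a claim about actual motor processes (the planner, the trajectories, and the weighting rule are all hypothetical stand-ins): candidate outcomes trigger planning as if acting, the plans yield predicted movements, and candidates whose predictions fail are weakened.
\begin{verbatim}
from typing import Callable, Dict, List

# A planner maps a candidate goal to a predicted trajectory, exactly as
# it would if it were planning an action directed to that goal.
Planner = Callable[[str], List[float]]

def ascribe_goal(observed: List[float],
                 candidate_goals: List[str],
                 plan: Planner,
                 rate: float = 0.5) -> Dict[str, float]:
    """Goal ascription as acting in reverse: each candidate outcome
    triggers planning as if performing an action directed to it; the
    plan's predictions are compared with the observed movement, and a
    candidate's weight is reduced when its predictions fail."""
    weights = {g: 1.0 for g in candidate_goals}
    for goal in candidate_goals:
        predicted = plan(goal)
        # Mean prediction error between predicted and observed movement.
        error = sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)
        # Weaken the triggering representation in proportion to its failures.
        weights[goal] *= 1.0 - rate * min(error, 1.0)
    return weights

# Hypothetical usage: two candidate goals for one observed reach.
def toy_planner(goal: str) -> List[float]:
    return {"reach-left": [0.0, 0.2, 0.4],
            "reach-right": [0.0, -0.2, -0.4]}[goal]

observed = [0.0, 0.19, 0.42]
print(ascribe_goal(observed, ["reach-left", "reach-right"], toy_planner))
# "reach-left" keeps a high weight; "reach-right" is weakened.
\end{verbatim}
The design point is that one mechanism does double duty: the same planner that could produce the action is re-used to predict it, and this is what guarantees a match between agent and observer.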
So here is my proposal about how pure goal ascription could work.
  1. reliably, R(a,G) when and only when a is directed to G
  2. R(a,G) is readily detectable ...
  3. ... without any knowledge of mental states

 

R(a,G) =df a is the most justifiable action towards G available within the constraints of reality
RM(a,G) =df if M were tasked with producing G it would plan action a
So here's the idea. The relation $R(a,G)$ should be defined relative to a planning mechanism. For planning mechanism $M$, $R{_M}(a,G)$ holds just if, were $M$ tasked with producing $G$, it would plan action $a$.
With respect to the problem for the teleological stance, which was about matching observer and agent, we ensure a match insofar as observer and agent have similar planning mechanisms; this means, of course, that they must have similar expertise.
We also ensure we get good trade-offs---we get the right principle by deferring to the kinds of planning mechanism responsible for producing the action.
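Continuing the toy sketch above (again with hypothetical names), checking $R{_M}(a,G)$ is then just a matter of re-running the planner and asking whether the plan it produces for $G$ matches the observed action closely enough:
\begin{verbatim}
def R_M(action: List[float], goal: str, plan: Planner,
        tol: float = 0.05) -> bool:
    """R_M(a, G) holds just if, were planning mechanism M (here: plan)
    tasked with producing G, it would plan (near enough) action a."""
    predicted = plan(goal)
    error = sum(abs(p - o) for p, o in zip(predicted, action)) / len(action)
    return error <= tol

print(R_M(observed, "reach-left", toy_planner))   # True
print(R_M(observed, "reach-right", toy_planner))  # False
\end{verbatim}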
So I'm rejecting this claim.
Pure goal ascription need not involve reasoning at all.

 

‘when taking the teleological stance one-year-olds apply the same inferential principle of rational action that drives everyday mentalistic reasoning about intentional actions in adults’

(György Gergely and Csibra 2003; cf. Csibra, Bíró, et al. 2003; Csibra and Gergely 1998: 259)

Let me return to the two questions I started this section with:
  1. What model of action underpins six- or twelve-month-old infants’ abilities to track the goals of actions?
  2. How could infants identify goals without ascribing intentions?

    (I.e., How could pure goal ascription work?)

    Answer: goal ascription is acting in reverse

So far we've been discussing this question.
My answer is simple
Answer: goal ascription is acting in reverse
But what about the first question? To describe how they identify goals (that is, distinguish among the actual and possible outcomes of an action which its goals are) is not yet quite to have explained how they model actions. Fortunately not much more is needed ...
Recall that a model of action has to explain in virtue of what an action is directed to a goal.
This, as I mentioned, is standardly done by invoking intentions. But there is another way.
Recall this:
RM(a,G) =df if M were tasked with producing G it would plan action a

Now do you remember Dennett's ingenious twist in The Intentional Stance? We're going to make the same move here.

‘What it is to be a true believer is to be … a system whose behavior is reliably and voluminously predictable via the intentional strategy.’

\citep[p.\ 15]{Dennett:1987sf}

Dennett 1987, p. 15

Dennett was interested in beliefs, but we can make essentially the same move for goals

An outcome, G, is among the goals of an action, a, exactly if RM(a,G)

The idea is to turn a heuristic into a constitutive claim.
So the infants' model of action is one on which goals are goals in virtue of relations between planning mechanisms and outcomes.
Note that this may not be a fully accurate model of action. Just like impetus mechanics, it is useful even if only approximate.
 

\section{From Action to Communication and Joint Action}

Summary: How do humans first come to know truths about actions?

  1. Infants can track the goals of actions from six months or earlier
  2. How? (How is pure goal ascription possible?)

    -- goal ascription is acting in reverse

  3. How do infants model actions?
    Note that Csibra & Gergely offer a continuity hypothesis.

    --RM(a,G) =df if M were tasked with producing G it would plan action a

  4. Infants have something like core knowledge of action
  5. How to get from that to knowledge of actions?
This is always the sticking point. I don't have a suggestion about how the transition might be made. I think that experience probably plays a role analogous to the role it plays in knowledge of colour. But there's something else. I want to suggest we can use these discoveries about action to fill out our understanding of communication, and of cooperative interactions.
action

communication

communication by language
recall from last lecture ...
Now contrast Grice and Davidson on the pointing action from the Hare et al study, where you're supposed to take one of two containers.

Grice

Goal: get Ayesha to select the left container

Means: get Ayesha to recognise that I intend Ayesha to select the left container

Intention: to get Ayesha to select the left container by means of getting Ayesha to recognise that I am pointing to the left container with the intention that she select the left container.

Davidson

Goal: get Ayesha to select the left container

Semantic Intention: that Ayesha take this pointing gesture to refer to the left container

Ulterior Intention: that Ayesha select the left container

Strictly speaking, that Ben should come over might not be the first meaning of the wave (so there are other options here).
As before, there's a contrast in what must be intended and so what we're committing ourselves to in saying that infants can produce and comprehend informative pointing.

understanding action

R(a,G) defined with intention

R(a,G) defined non-psychologically

communication

Refers(gesture,object) defined with communicative intention

Refers(gesture,object) defined non-psychologically

cooperative interaction

R(a1, a2, ..., G) defined with shared intention

R(a1, a2, ..., G) defined with expectations about common goals