
\title {Origins of Mind: Lecture Notes \\ Appendix: Theoretical Background}
 
\maketitle
 


\def \ititle {Origins of Mind}
\def \isubtitle {Appendix: Theoretical Background}
 
 
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
 
\section{Core Knowledge and Modularity}
 
There are principles of object perception that explain abilities to segment objects, to represent them while temporarily unperceived and to track their interactions.
These principles are not known. What is their status?

the question

I noted earlier that Hood suggests that answering this question requires that we recognize separable systems of mental representation and different kinds of knowledge.
Now I'm going to try to make good on this idea.

‘there are many separable systems of mental representations ... and thus many different kinds of knowledge. ... the task ... is to contribute to the enterprise of finding the distinct systems of mental representation and to understand their development and integration’\citep[p.\ 1522]{Hood:2000bf}.


Core knowledge is a label for what we need.
I'm going to adopt the label.
But this only amounts to labelling the problem, not to solving it.

core knowledge

‘Just as humans are endowed with multiple, specialized perceptual systems, so we are endowed with multiple systems for representing and reasoning about entities of different kinds.’

\citep[p.\ 517]{Carey:1996hl}


So talk of core knowledge is somehow supposed to latch onto the idea of a system.
What do these authors mean by talking about 'specialized perceptual systems'?
They talk about things like perceiving colour, depth or melodies.
Now, as we saw when talking about categorical perception of colour, we can think of the 'system' underlying categorical perception as largely separate from other cognitive systems--- we saw that they could be knocked out by verbal interference, for example.
So the idea is that core knowledge somehow involves a system that is separable from other cognitive mechanisms.
As Carey rather grandly puts it, understanding core knowledge will involve understanding something about 'the architecture of the mind'.

‘core systems are largely innate, encapsulated, and unchanging, arising from phylogenetically old systems built upon the output of innate perceptual analyzers.’

\citep[p.\ 520]{Carey:1996hl}


For something to be informationally encapsulated is for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules (Fodor 1998b: 127)

representational format: iconic (Carey 2009)

To say that a representation is iconic means, roughly, that parts of the representation represent parts of the thing represented.
Pictures are paradigm examples of representations with iconic formats.
For example, you might have a picture of a flower where some parts of the picture represent the petals and others the stem.
Suppose we accept that there are core systems with this combination of features.
Then we can use the term 'core knowledge' to mean the representations in these core systems.
A piece of \emph{core knowledge} is a representation in a core system.
The hope is that this will help us with the second question.
Consider this hypothesis.
The principles of object perception, and maybe also the expectations they give rise to, are not knowledge.
But they are core knowledge.
The \emph{revised view}: the principles of object perception are not knowledge, but they are core knowledge.
But look at those features again --- innate, encapsulated, unchanging and the rest.
None of these straightforwardly enable us to predict that core knowledge of objects will guide looking but not reaching.
So the \emph{first problem} is that (at this stage) it's unclear what we gain by shifting from knowledge to core knowledge.

knowledge core knowledge

There is also a \emph{second problem}.
This problem concerns the way we have introduced the notion of core knowledge.
We have introduced it by providing a list of features.
But why suppose that this particular list of features constitutes a natural kind?
This worry has been brought into sharp focus by criticisms of 'two systems' approaches.
(These criticisms are not directed specifically at claims about core knowledge, but the criticisms apply.)
\subsection{Objection}

‘there is a paucity of … data to suggest that they are the only or the best way of carving up the processing,

‘and it seems doubtful that the often long lists of correlated attributes should come as a package’

\citep[p.\ 759]{adolphs_conceptual_2010}


‘we wonder whether the dichotomous characteristics used to define the two-system models are … perfectly correlated …

[and] whether a hybrid system that combines characteristics from both systems could not be … viable’

\citep[p.\ 537]{keren_two_2009}


This is weak.
Remember that criticism is easy, especially if you don't have to prove someone is wrong.
Construction is hard, and worth more.
Even so, there is a problem here.

‘the process architecture of social cognition is still very much in need of a detailed theory’

\citep[p.\ 759]{adolphs_conceptual_2010}


So what am I saying?
Our question is, Given that the simple view is wrong, what is the relation between the principles of object perception and infants’ competence in segmenting objects, object permanence and tracking causal interactions?
We are considering this (partial) answer: the principles are not knowledge but core knowledge.

We have core knowledge of the principles of object perception.

Let me remind you how we defined core knowledge.
First, we explained core knowledge in terms of core systems: a piece of core knowledge is a representation in a core system.
Second, we characterised core systems by appeal to a list of characteristics: they are innate, encapsulated, unchanging, etc.
There are two problems for this answer as it stands.

two problems

\begin{itemize}
\item How does this explain the looking/searching discrepancy?
\item Can appeal to core knowledge explain anything?
\end{itemize}
This looks like the sort of problem a philosopher might be able to help with.
Jerry Fodor has written a book called 'The Modularity of Mind' about what he calls modules.
And modules look a bit like core systems, as I'll explain.
Further, Spelke herself has at one point made a connection.
So let's have a look at the notion of modularity and see if that will help us.

core system = module?

‘In Fodor’s (1983) terms, visual tracking and preferential looking each may depend on modular mechanisms.’

\citep[p.\ 137]{spelke:1995_spatiotemporal}


Modules are widely held to play a central role in explaining mental development and in accounts of the mind generally.
Jerry Fodor makes three claims about modules:
\subsection{Modularity}
Fodor’s three claims about modules:
\begin{enumerate}
\item they are ‘the psychological systems whose operations present the world to thought’;
\item they ‘constitute a natural kind’; and
\item there is ‘a cluster of properties that they have in common’ \citep[p.\ 101]{Fodor:1983dg}.
\end{enumerate}
What are these properties?
These properties include:
\begin{itemize}
\item domain specificity (modules deal with ‘eccentric’ bodies of knowledge)
\item limited accessibility (representations in modules are not usually inferentially integrated with knowledge)
\item information encapsulation (modules are unaffected by general knowledge or representations in other modules)
\item innateness (roughly, the information and operations of a module not straightforwardly consequences of learning; but see \citet{Samuels:2004ho}).
\end{itemize}
Not all researchers agree about the properties of modules. That they are informationally encapsulated is denied by Dan Sperber and Deirdre Wilson (2002: 9), Simon Baron-Cohen (1995) and some evolutionary psychologists (Buller and Hardcastle 2000: 309), whereas Scholl and Leslie claim that information encapsulation is the essence of modularity and that any other properties modules have follow from this one (1999b: 133; this also seems to fit what David Marr had in mind, e.g. Marr 1982: 100-1). According to Max Coltheart, the key to modularity is not information encapsulation but domain specificity; he suggests Fodor should have defined a module simply as 'a cognitive system whose application is domain specific' (1999: 118). Peter Carruthers, on the other hand, denies that domain specificity is a feature of all modules (2006: 6). Fodor stipulated that modules are 'innately specified' (1983: 37, 119), and some theorists assume that modules, if they exist, must be innate in the sense of being implemented by neural regions whose structures are genetically specified (e.g. de Haan, Humphreys and Johnson 2002: 207; Tanaka and Gauthier 1997: 85); others hold that innateness is 'orthogonal' to modularity (Karmiloff-Smith 2006: 568). There is also debate over how to understand individual properties modules might have (e.g. Hirschfeld and Gelman 1994 on the meanings of domain specificity; Samuels 2004 on innateness).
In short, then, theorists invoke many different notions of modularity, each barely different from others. You might think this is just a terminological issue. I want to argue that there is a substantial problem: we currently lack any theoretically viable account of what modules are. The problem is not that 'module' is used to mean different things --- after all, there might be different kinds of module. The problem is that none of its various meanings have been characterised rigorously enough. All of the theorists mentioned above except Fodor characterise notions of modularity by stipulating one or more properties their kind of module is supposed to have. This way of explicating notions of modularity fails to support principled ways of resolving controversy.
 
No key explanatory notion can be adequately characterised by listing properties because the explanatory power of any notion depends in part on there being something which unifies its properties and merely listing properties says nothing about why they cluster together.
 
Interestingly, Fodor doesn't define modules by specifying a cluster of properties (pace Sperber 2001: 51); he mentions the properties only as a way of gesturing towards the phenomenon (Fodor 1983: 37) and he also says that modules constitute a natural kind (see Fodor 1983: 101 quoted above).
It is tempting to appeal to spatial metaphors in thinking about modularity. Just as academics tend to work at high-speed on domain-specific problems when they can cut themselves off from administrative centres, so we might attempt to explain the special properties of modules by saying that they are cut off from the central system. But it isn't clear how to turn this metaphor into an explanation. The spatial metaphor only gives us the illusion that we understand modularity.
So what am I suggesting?
First that we treat core knowledge and modularity as terms for a single thing, whatever it is.
This has the advantage that we can draw on Fodor's more detailed theorising about modularity.

core knowledge = modularity

So the view we are considering is that

We have core knowledge (= modular representations) of the principles of object perception.

Does this help us with the two problems I mentioned earlier?

two problems

\begin{itemize}
\item How does this explain the looking/searching discrepancy?
\item Can appeal to core knowledge (/ modularity) explain anything?
\end{itemize}
Here we have the same problem as before. If anything, invoking modularity makes it worse.
But here our situation is better. To see why, recall the properties of modules.
\begin{itemize}
\item domain specificity (modules deal with ‘eccentric’ bodies of knowledge)
\item limited accessibility (representations in modules are not usually inferentially integrated with knowledge)
\item information encapsulation (modules are unaffected by general knowledge or representations in other modules; for something to be informationally encapsulated is for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules, Fodor 1998b: 127)
\item innateness (roughly, the information and operations of a module are not straightforwardly consequences of learning)
\end{itemize}

Limited accessibility explains why the representations might drive looking but not reaching.
But doesn't the bare appeal to limited accessibility leave open why core knowledge should drive the looking and not the searching (rather than the converse)?
I think not, given the assumption that searching is purposive in a way that looking is not. (Searching depends on practical reasoning.)
We'll come back to this later (if core knowledge of objects involves object files, it's easier to see why it affects looking but not actions like reaching.)
 
Except, of course, calling this an explanation is too much.
After all, limited accessibility is more or less what we're trying to explain.
But this is the second problem --- the problem with the standard way of characterising modularity and core systems merely by listing features.

summary so far

core knowledge = modularity

We have core knowledge (= modular representations) of the principles of object perception.

two problems

\begin{itemize}
\item How does this explain the looking/searching discrepancy?
\item Can appeal to core knowledge (/ modularity) explain anything?
\end{itemize}
 
\section{Core Knowledge}
 
In the last lecture, we considered communication with language.
My overall plan now is to work backwards: the next item to consider is communicative action generally, and then action generally.
I suggested that acquiring an ability to communicate with language typically involves (a) social interaction and (b) creating words.
I think looking at communication generally will help bring this into clearer focus.
But before I get into this, I want to take a huge detour.
The huge detour will allow me (i) to connect up the different things we've done; and (ii) pick up on some themes from your assessed essays.
(The detour means that this lecture, unlike the others, doesn't have a single unifying theme.)
I wanted to try to start drawing things together before the end of the last lecture.
This will mean a bit of repetition, but it will also help us in thinking through the issues together.

The Beginning of the End

This isn't a new question, but I think it's worth spending more time on it, partly because someone said in a seminar that we don't know what it is and partly because thinking about this is a way of organising much of what we've been learning.

What is core knowledge? What are core systems (≈ modules)?

The first, very minor thing is to realise that there are two closely related notions, core knowledge and core system.
The notion of a core system and that of a module are barely different; it's safe to treat these as interchangeable until you have a reason to distinguish them.
These are related as follows: roughly, core knowledge states are the states of core systems. More carefully:
For someone to have \textit{core knowledge of a particular principle or fact} is for her to have a core system where either the core system includes a representation of that principle or else the principle plays a special role in describing the core system.
So we can define core knowledge in terms of core systems.
The next step is to realise that there is a good reason why you don't know what core systems are.

Why don’t I know the answers?

Core systems are usually introduced implicitly, in explaining an idea.
\subsection{The idea}

‘We hypothesize that uniquely human cognitive achievements build on systems that humans share with other animals: core systems that evolved before the emergence of our species.

The internal functioning of these systems depends on principles and processes that are distinctly non-intuitive.

Nevertheless, human intuitions about space, number, morality and other abstract concepts emerge from the use of symbols, especially language, to combine productively the representations that core systems deliver’


\citep[pp.\ 2784-5]{spelke:2012_core}.
So to properly understand what they are we would need (i) to have a deep understanding of the picture; (ii) and of the hypotheses it inspires; (iii) and of the evidence for these hypotheses, and then we would work back from this to say what core systems are.
Now you might say that this is terrible: how can scientists use terms without defining them?
But (a) it's not obvious that definitions are necessary for good science, or even useful; and (b) compare the notion of knowledge: philosophers have made some informative observations about knowledge, but they've had no success at all in defining it.
(That said, I do think there's a particular problem with core knowledge.)
In answering the question, What is core knowledge? I think we should be inspired by the notion of radical interpretation.

‘All understanding of the speech of another involves radical interpretation’

Davidson 1973, p. 125

\citep[p.\ 125]{Davidson:1973jx}
(It's not just core knowledge; I think we have to approach science as radical interpreters ...)
How does radical interpretation work?
Interpretation is hard because there are two factors: truth and meaning.
The proposal Davidson makes is that we assume truth and infer meaning.
I'm recommending a similar strategy for core knowledge.
Very roughly, we take for granted that the evidence establishes various hypotheses. We then ask what core knowledge could be, given that these hypotheses are true.
But more carefully, we first have to ask what motivates talk about core knowledge at all.
Fine, but this doesn't help us in practical terms. How are we to get a handle on the notion without doing lots of research?
The simple approach is to find out what people who use the term say it is.

But what is core knowledge?

What do people say core knowledge is?
\subsection{Two-part definition}
There are two parts to a good definition. The first is an analogy that helps us get a fix on what is meant by 'system' generally. (The second part tells us which systems are core systems by listing their characteristic features.)

‘Just as humans are endowed with multiple, specialized perceptual systems, so we are endowed with multiple systems for representing and reasoning about entities of different kinds.’

\citep[p.\ 517]{Carey:1996hl}


So talk of core knowledge is somehow supposed to latch onto the idea of a system.
What do these authors mean by talking about 'specialized perceptual systems'?
They talk about things like perceiving colour, depth or melodies.
Now, as we saw when talking about categorical perception of colour, we can think of the 'system' underlying categorical perception as largely separate from other cognitive systems--- we saw that they could be knocked out by verbal interference, for example.
So the idea is that core knowledge somehow involves a system that is separable from other cognitive mechanisms.
As Carey rather grandly puts it, understanding core knowledge will involve understanding something about 'the architecture of the mind'.
Illustration: edge detection.

‘core systems are

  1. largely innate,
  2. encapsulated, and
  3. unchanging,
  4. arising from phylogenetically old systems
  5. built upon the output of innate perceptual analyzers’

\citep[p.\ 520]{Carey:1996hl}.


\textit{Note} There are other, slightly different statements \citep[e.g.][]{carey:2009_origin}.
This is helpful for getting started.
But we quickly run into the problem that different researchers say different things, and it isn't obvious which differences matter.
We also run into the problem that the definitions on offer aren't obviously correct: they list features that maybe aren't all necessary.

core system vs module

Aside: compare the notion of a core system with the notion of a module
The two definitions are different, but the differences are subtle enough that we don't want both.
My recommendation: if you want a better definition of core system, adopt core system = module as a working assumption and then look to research on modularity because there's more of it.
An example contrasting Grice and Davidson on the wave.
\subsection{Compare modularity}
Modules are ‘the psychological systems whose operations present the world to thought’; they ‘constitute a natural kind’; and there is ‘a cluster of properties that they have in common’ \citep[p.\ 101]{Fodor:1983dg}.
These properties include:
\begin{itemize}
\item domain specificity (modules deal with ‘eccentric’ bodies of knowledge)
\item limited accessibility (representations in modules are not usually inferentially integrated with knowledge)
\item information encapsulation (modules are unaffected by general knowledge or representations in other modules)
\item innateness (roughly, the information and operations of a module not straightforwardly consequences of learning; but see \citet{Samuels:2004ho}).
\end{itemize}

‘core systems are

  1. largely innate,
  2. encapsulated, and
  3. unchanging,
  4. arising from phylogenetically old systems
  5. built upon the output of innate perceptual analyzers’

(Carey and Spelke 1996: 520)

Modules are ‘the psychological systems whose operations present the world to thought’; they ‘constitute a natural kind’; and there is ‘a cluster of properties that they have in common’

  1. innateness
  2. information encapsulation
  3. domain specificity
  4. limited accessibility
  5. ...
So now that we have a rough, starting fix on the notion, we can ask a deeper question.

Why do we need a notion like core knowledge?

So why do we need a notion like core knowledge?
Think about these domains.
In each case, we're pushed towards postulating that infants know things, but also pushed against this.
Resolving the apparent contradiction is what core knowledge is for.
\begin{tabular}{lll}
domain & evidence for knowledge in infancy & evidence against knowledge \\
colour & categories used in learning labels \& functions & failure to use colour as a dimension in ‘same as’ judgements \\
physical objects & patterns of dishabituation and anticipatory looking & unreflected in planned action (may influence online control) \\
minds & reflected in anticipatory looking, communication, \&c & not reflected in judgements about action, desire, ... \\
syntax & [to follow] & [to follow] \\
number & [to follow] & [to follow] \\
\end{tabular}
Key question: What features do we have to assign to core knowledge if it's to describe these discrepancies?
I think the fundamental feature is inaccessibility.
If this is what core knowledge is for, what features must core knowledge have?


limited accessibility to knowledge

To say that a system or module exhibits limited accessibility is to say that the representations in the system are not usually inferentially integrated with knowledge.
I think this is the key feature we need to assign to core knowledge in order to explain the apparent discrepancies in the findings about when knowledge emerges in development.
Limited accessibility is a familiar feature of many cognitive systems.
When you grasp an object with a precision grip, it turns out that there is a very reliable pattern.
At a certain point in moving towards it your fingers will reach a maximum grip aperture which is normally a certain amount wider than the object to be grasped, and then start to close.
Now there's no physiological reason why grasping should work like this, rather than the hand closing only once you contact the object.
Maximum grip aperture shows anticipation of the object: the mechanism responsible for guiding your action does so by representing various things including some features of the object.
But we ordinarily have no idea about this.
The discovery of how grasping is controlled depended on high speed photography.
This is an illustration of limited accessibility.
(This can also illustrate information encapsulation and domain specificity.)

maximum grip aperture

(source: Jeannerod 2009, figure 10.1)

This picture is significantly different from some competitors (but not Carey on number):
(1) because it shows we aren't done when we've explained the acquisition of core knowledge (contra e.g. Leslie, Baillargeon), and
(2) because it shows we can't hope to explain the acquisition of knowledge if we ignore core knowledge (contra e.g. Tomasello)
***todo*** say something about what we've learnt from each case.
Syntax is important because it pushes us away from the idea of 'systems that humans share with animals' \citep[p.\ 2784]{spelke:2012_core}
Or maybe identify themes and point out which cases illustrate them.
e.g. colour shows (i) that perceptual mechanisms are important and (ii) that infants' core knowledge persists into adulthood
\begin{tabular}{lll}
domain & evidence for knowledge in infancy & evidence against knowledge \\
colour & categories used in learning labels \& functions & failure to use colour as a dimension in ‘same as’ judgements \\
physical objects & patterns of dishabituation and anticipatory looking & unreflected in planned action (may influence online control) \\
minds & reflected in anticipatory looking, communication, \&c & not reflected in judgements about action, desire, ... \\
syntax & [to follow] & [to follow] \\
number & [to follow] & [to follow] \\
\end{tabular}
Let me pause to evaluate the picture I offered in lecture 1 in the light of what we've learnt so far.
(I hesitate to do this because it shows the picture I offered you isn't very good.)
Take each case in turn.
For colour it works quite well, providing, as I suggested last week, that acquiring words is a creative process of social interaction.
What about physical objects? Here there's no indication that using labels for objects drives a developmental change, and it's hard to see why it would.
(It's more plausible that tool use rather than word use matters; but even this is hugely speculative.)
So no marks for that case at all.
What about minds, and in particular beliefs?
Superficially things look better here. There is both evidence that rich forms of social interaction facilitate development \citep{Hughes:2006fu}
and also evidence that language matters in various ways \citep{Astington2005ot}.
But these probably don't connect in the simple way I envisage.
Social interaction might matter because it provides experiences of perspective differences, or because it motivates children to think about others' minds.
And language might matter because having sentences around enables them to keep track of beliefs, or because using relative clauses might clue them in to a relation between beliefs and what utterances of sentences express.
So here the picture isn't right, but it might not be a million miles off either ...
It's wrong to think that labelling beliefs matters; but it may be that being able to talk about beliefs (implicitly or otherwise) does matter for coming to have knowledge of them.

So far we've ignored what is usually regarded as a paradigm case for core knowledge ...

next: syntax

 


 
\section{Syntax / Innateness}
 
So far we have considered examples of core knowledge. But we have ignored a paradigm case, one which has inspired much work on this topic (although it is not a case Spelke or Carey would recognize! *todo: stress throughout) ...

core knowledge of

  • physical objects
  • [colour]
  • mental states
  • action
  • number
  • ...
Human adults have extensive knowledge of the syntax of their languages, as illustrated by, for example, their abilities to detect grammatical and ungrammatical sentences which they have never heard before, independently of their meanings. To adapt a famous example from Chomsky, ...
  1. The turnip of shapely knowing isn't yet buttressed by death.
  2. *The buttressed turnip shapely knowing yet isn't of by death.
We need to ask two questions.

core knowledge of syntax is innate (?)

First, what is this thing, syntax, which is known?
This thing they know, the syntax, isn't plausibly just a list of which sentences are grammatical.
Because people can make judgements about arbitrarily long, entirely novel sentences.
Rather, the thing known must be something that enables people to make judgements about sentences.
We might think of it roughly as a theory of syntax.
It's like a theory in this sense: knowledge of it enables you to make judgements about the grammaticality of arbitrary sentences.
The second question is, Is it *knowledge* of syntax that we have, or something else?
There's something interesting.
The knowledge can be revealed indirectly, by asking people about whether particular sentences are grammatical.
But people can't say anything about how they know the sentence is grammatical.
It's like perceiving the shape of something: there isn't much to say about how you know.
So the theory of syntax isn't something we can discover by introspection:
we have to *rediscover* it from scratch by investigating people's linguistic abilities.
Knowledge of syntax therefore seems to have some of the features associated with core knowledge.
First, it is domain-specific.
Second, it is inaccessible. That is, it can't guide arbitrary actions.
In what follows I want to suggest that syntax provides a paradigm case for thinking about core knowledge.
In addition, I want to use the case of syntax for thinking about the question, What is innate in humans?
I was astonished how many people considered this question in the unassessed essay; some people seem really fascinated by it.
But almost no one discussed the case of syntax in depth. If you're going to talk about innateness, you really need to know a little bit about syntax.
So I'm also going to provide you with that understanding.
Consider a phrase like 'the red ball'.
What is the syntactic structure of this noun phrase?
In principle there are two possibilities.

the red ball

‘I’ll play with this red ball and you can play with that one.’

Lidz et al (2003)

How can we decide between these?
Is the syntactic structure of ‘the red ball’ (a) flat or (b) hierarchical?
\begin{center}
\includegraphics[scale=0.25]{../www.slides/src/raw/img/lidz_2003_fig0.neg.png}
\end{center}
\begin{center} from \citealp{lidz:2003_what} \end{center}
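The two candidate structures can also be sketched in bracket notation (a rough sketch; the category labels NP and N$'$ are illustrative conventions, not Lidz et al's own notation):

\begin{itemize}
\item (a) flat: [$_{\textrm{NP}}$ the red ball]
\item (b) hierarchical: [$_{\textrm{NP}}$ the [$_{\textrm{N}'}$ red ball]]
\end{itemize}

On (b), ‘red ball’ is enclosed by its own bracket and so forms a constituent; on (a) there is no bracket containing just ‘red ball’.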
\begin{enumerate}
\item ‘red ball’ is a constituent on (b) but not on (a);
\item anaphoric pronouns can only refer to constituents;
\item in the sentence ‘I’ll play with this red ball and you can play with that one’, the word ‘one’ is an anaphoric pronoun that refers to ‘red ball’ (not just ‘ball’) \citep{lidz:2003_what,lidz:2004_reaffirming}.
\end{enumerate}
What I've just shown you is, in effect, how we can decide which way an adult human understands a phrase like 'the red ball'.
We can discover this by finding out how they understand a sentence like 'I’ll play with this red ball and you can play with that one.'.
But how could we do this with infants who are incapable of discussing sentences with us?

infants?

Here's how the experiment works (see \citealp{lidz:2003_what}) ...
The experiment starts with a background assumption:
‘The assumption in the preferential looking task is that infants prefer to look at an image that matches the linguistic stimulus, if one is available’ \citep{lidz:2003_what}.
So the key question was whether infants would look more at the yellow bottle (which is familiar) or the blue bottle (which is novel).
If they think 'one' refers to 'bottle', we'd expect them to look longer at the blue bottle;
and conversely, if they think ‘one’ refers to ‘yellow bottle’, then they're being asked whether they see another yellow bottle.
And, as always, we need a control condition to check that infants aren't looking in the ways predicted irrespective of the manipulation.

Lidz et al (2003)

And here's what they found ...

Lidz et al (2003, figure 1)

Look, a yellow bottle!
control: ‘What do you see now?’
test: ‘Do you see another one?’
 
[yellow bottle] [yellow bottle] [blue bottle]
What can we conclude so far?

From 18 months of age or earlier, infants represent the syntax of noun phrases in much the way adults do.

So there is core knowledge of syntax ... or is there?
Core knowledge is often characterised as innate.
I think this is a mistake (more about this later), but many of you do not.

But are these representations innate?

How could we tell whether these representations are innate?
What do we mean by innate here?
The easy answer is: not learned.
But I think there's a more interesting way to approach understanding what 'innate' means.
Quite a few people pointed out that there isn't agreement on what innateness is.
But this by itself is not very interesting: there's disagreement about most things, and the potential causes of disagreement include ignorance and stupidity.
It's also important that the mere fact that a single term is used with multiple meanings isn't an objection to anyone.
As philosophers, some of you are tempted to catalogue different possible notions of innateness.
I encourage you to resist this temptation; if you want to collect something, pick something useful like banknotes.
There's a much better way to approach things.
Let's see what kind of findings are, or would be, taken to show that something is innate.
We can use these to constrain our thinking about innateness.
We will say: assuming that this is a valid argument that X is innate, what could innateness be?
Aside: we have to approach science as radical interpreters ...
How does radical interpretation work?
Interpretation is hard because there are two factors: truth and meaning.
The proposal Davidson makes is that we assume truth and infer meaning.
I'm recommending a similar strategy.
We take for granted that this argument establishes that X is innate; we then ask what innateness could be given that this is so.

‘All understanding of the speech of another involves radical interpretation’

Davidson 1973, p. 125

\citep[p.\ 125]{Davidson:1973jx}
\subsection{Poverty of stimulus arguments}
The best argument for innateness is the poverty of stimulus argument.
We need to step back and understand how poverty of stimulus arguments work.
How do poverty of stimulus arguments work? Here I'm following \citet{pullum:2002_empirical}, though I'm simplifying their presentation.
First think of them in schematic terms ...

Poverty of stimulus argument

    \begin{enumerate}
  \item Human infants acquire X.
  \item To acquire X by data-driven learning you'd need this Crucial Evidence.
  \item But infants lack this Crucial Evidence for X.
  \item So human infants do not acquire X by data-driven learning.
  \item But all acquisition is either data-driven or innately-primed learning.
  \item So human infants acquire X by innately-primed learning.
  \end{enumerate}

compare Pullum & Scholz 2002, p. 18

This is a good structure; you can use it in all sorts of cases, including the one about chicks' object permanence.
Now fill in the details ...
In our case, X is knowledge of the syntactic structure of noun phrases. (Caution: this is a simplification; see \citealp[p.\ 158]{lidz:2004_reaffirming}.)
This is what the Lidz et al experiment showed.
Note that no one takes this to be evidence for innateness by itself.
What is the crucial evidence infants would need to learn the syntactic structure of noun phrases?
This is actually really hard to determine, and an on-going source of debate I think.
But roughly speaking it's utterances where the structure matters for the meaning, utterances like 'You play with this red ball and I'll play with that one'.
\citet{lidz:2003_what} establish this by analysing a large corpus (collection) of conversation involving infants.
What can we infer about innateness from this argument?
First, think about what is innate. The fact that knowledge of X is acquired other than by data-driven learning doesn't show that knowledge of X is itself innate; it just shows that something which enables you to acquire it is.
Second, think about the function assigned to innateness. That which is innate is supposed to stand in for having the crucial evidence.
This, I think, is the key to thinking about what we *ought* to mean by innateness.
So attributes like being genetically specified are extraneous---they may be typical features of innate things, but they aren't central to the notion.
By contrast, that what is innate is not learned must be constitutive (otherwise what is innate couldn't stand in for having the crucial evidence).
Contrary to what many philosophers (including Stich and Fodor) will tell you ...

‘the APS [argument from the poverty of stimulus] still awaits even a single good supporting example’

Pullum & Scholz 2002, p. 47

\citep[p.\ 47]{pullum:2002_empirical}
But they wrote this before \citet{lidz:2003_what} came out.

What is innate in humans?

I asked you this question, but what do I think?
I'd approach it by distinguishing two sub-questions (the second of which has two sub-sub-questions)
TODO: Stress that other conceptions and arguments are good; start with a project from \citet{spelke:2012_core} or from \citet{haun:2010_origins} and you reach a different point!
  1. What evidence is there?
  2. What does the evidence show is innate?
    1. Type: knowledge, core knowledge, modules, concepts, abilities, dispositions ...
    2. Content: e.g. universal grammar, principles of object perception, minimal theory of mind ...
    Arguments from the poverty of stimulus are the best way to establish innateness.
    The argument concerning syntax we've just been discussing is quite convincing, although if you follow up on the references given in the handout you'll see it's not decisive (as always).
    For things other than knowledge of syntax, the evidence concerning humans is far less clear.
    There are, however, quite good cases in nonhuman animals, as many of you know.
    So it's not unreasonable to conjecture that learning in the several domains where infants appear to know things early in their first year is innately-primed rather than entirely data-driven.
    But, one or two cases aside, there isn't enough evidence to rule out the converse conjecture.
    I don't think what is innate is knowledge, nor do I think it's concepts.
    But I think there's a good chance that modules are innate (and therefore core knowledge if I'm right to suppose that 'core knowledge' is a term for the fundamental principles describing the operation of a module).
    On content: I think quite a lot is known about the modules thanks to detailed tests that have little to do directly with the controversy about innateness.
Why care about whether something is innate? (This isn't supposed to be dismissive.)

so what?

Here are two reasons why I think we shouldn't worry too much about innateness in trying to understand the origins of mind.
(1) The question about innateness concerns the first transition, whereas I think the second should be our focus (for pragmatic reasons: there's more research).
(2) Discoveries about innately-primed learning make only a relatively modest contribution to understanding the emergence of core knowledge in development. So even when we consider the first transition, it's not obvious that discoveries about innateness are very illuminating, for all their pop-science appeal.
Metaphor: we find a cake in the ruins of Pompeii, preserved for a couple of thousand years, and we're trying to reconstruct its manufacture.
It's good if someone obsesses about where the eggs came from. Did the baker have her own chickens, or did she get the eggs from a friend?
But knowing where the eggs came from is unlikely to be critical to understanding how the cake was manufactured.
We're not finished when we know where the eggs came from, and we're not doomed to fail if we don't know.
So let me put the innateness issue aside and get back to what I think matters most ...

Conclusion

  1. Adults have inaccessible, domain-specific representations concerning the syntax of natural languages.
  2. So do infants (from 18 months of age or earlier, well before they can use the syntax in production).
  3. These representations plausibly enable understanding and play a key role in the development of abilities to communicate with language.
  4. These representations are a paradigm case of core knowledge.
  5. This paradigm allows me to highlight something about core knowledge.
    It would be a mistake to suppose that there is some core knowledge which later becomes knowledge proper --- e.g.\ that the fact that barriers stop solid objects is first core knowledge and later knowledge.
    The content of the core knowledge is a theory of syntax (let's say).
    Or, in another case, the content of core knowledge is some principles of object perception.
    These are things that human adults do not typically know at all, at least not in the sense that they could state the principles.
    So core knowledge enables us to do things, like anticipate where unseen objects will re-appear or communicate with words.
    It doesn't seem to be linked directly to the acquisition of concepts.
 

Syntax: Knowledge or Core Knowledge?

 
\section{Syntax: Knowledge or Core Knowledge?}
 
\section{Syntax: Knowledge or Core Knowledge?}
Given that humans, infant and adult, do represent facts concerning the syntax of languages, what grounds are there to deny that these representations are knowledge?

?

Are humans’ representations of syntax knowledge?

In the previous lecture I answered this question too quickly. Afterwards someone pointed out that the considerations I had offered are not conclusive. So I wanted to return to this point briefly.
Why bother? I think it matters because (i) the case of syntax is well-understood; (ii) if representations of syntax are not knowledge, the case of syntax provides one possible model for theorising about core knowledge; and (iii) whereas in cases like objects, knowledge knowledge comes relatively early in development (maybe a year or two after core knowledge), in the case of syntax knowledge knowledge typically requires a course in linguistics.
This last point, (iii), makes it a paradigm case of development by rediscovery.

Humans can’t usually report any relevant facts about syntax.

Reply: maybe they know but don’t know they know.

‘It is of the essence of a belief [or knowledge] state that it be at the service of many distinct projects, and that its influence on any project be mediated by other beliefs.’

\citep[p.\ 337]{Evans:1981pc}

Evans 1981, p. 337

Humans’ representations concerning syntax are tied to a single project.

(One requirement for this is that they exhibit limited accessibility.)

Earlier I explained limited accessibility in terms of inferential integration with knowledge. So the two ideas are barely different.

aside

Aside: While we're thinking about Evans, it's worth mentioning parallels between his notion of tacit knowledge and the notion of core knowledge.
Evans was one of the pioneers here, and his ideas have been pursued by other philosophers, so we can treat research on tacit knowledge as a resource for understanding the notion of core knowledge.
Evans' characterisation of tacit knowledge involves two suggestions.
The first suggestion concerns similarities:
Tacit knowledge is analogous to belief at the level of input and output.

‘At the level of output, one who possesses the tacit knowledge that p is disposed to do and think some of the things which one who had the ordinary belief that p would be inclined to do and think (given the same desires).

At the level of input, one who possesses the state of tacit knowledge that p will very probably have acquired that state as the result of exposure to usage which supports or confirms … the proposition that p, and hence in circumstances which might well induce in a rational person the ordinary belief that p.’

(Evans 1981, p. 336)

\citep[p.\ 336]{Evans:1981pc}
Evans’ second suggestion concerns what distinguishes tacit knowledge from knowledge knowledge.
Evans’ idea was ‘that while the applicability of the generality constraint is a necessary feature of propositional attitudes, states that are intuitively subdoxastic - particularly states of input systems - are not subject to that constraint.’ \citep[p.\ 146]{Davies:1986qv}

The generality constraint applies to knowledge knowledge but not to tacit knowledge.

‘(It is one of the fundamental differences between human thought and the information-processing that takes place in our brains that the Generality Constraint applies to the former but not to the latter. When we attribute to the brain computations whereby it localizes the sounds we hear, we ipso facto ascribe it to representations of the speed of sound and of the distance between the ears, without any commitment to the idea that it should be able to represent the speed of light or the distance between anything else.)’
\citep[p.\ 104, footnote 22]{Evans:1982je}
It would take too long to explain the generality constraint here. If core knowledge matters to your project, I encourage you to read the chapter from Evans' book The Varieties of Reference in which the quote on your handout appears.
Back from the aside, here's our conclusion.

Humans’ representations of syntax aren’t knowledge because they exhibit limited accessibility and are tied to a single project.

This is maybe a good time to consider a question running through these lectures.

?

What is the relation between core knowledge and knowledge knowledge?

The Wrong View

Modules ‘provide an automatic starting engine for encyclopaedic knowledge’

Leslie 1988: 194

\citep[p.\ 194]{Leslie:1988ct}
For instance, a module that detects causal relations contributes to development by providing us with knowledge that there are certain causal relations in our environment.
This knowledge can then be used for making inferences and guiding action, just as any other knowledge can:

‘The module … automatically provides a conceptual identification of its input for central thought … in exactly the right format for inferential processes’

Leslie 1988: 193-4

\citep[p.\ 193--4]{Leslie:1988ct}
Why is this wrong? Because it ignores inaccessibility.

But: inaccessibility

I'll re-explain development by rediscovery with syntax as the illustration.
I don't think you should mention this in your essays, I'm still trying to work it out myself.

development as rediscovery

core knowledge of syntax
→ ability to communicate with language
→ core knowledge of mind
→ ...
→ experience, emotion, sensation
→ reflection on this
→ knowledge knowledge of syntax
 
\section{Computation is the Real Essence of Core Knowledge}
 
\section{Computation is the Real Essence of Core Knowledge}
Spelke and Carey characterise core knowledge by giving a list of features.
This seems dubious.
We then equated core knowledge with modular representation, following a suggestion Spelke made at one point.
This equation of core knowledge and modularity is useful in one respect.
It is useful because Fodor has written a subtle philosophical book about modularity, so we can be confident that our notion is theoretically grounded.
However, the problem remains that Fodor, like Spelke and Carey, introduces modularity merely by listing features.
The key features for us are information encapsulation and limited accessibility.
But in saying that infants' representations of objects have these features, we are really only saying what they are not.
We haven't got very far past the problem I highlighted with the parable of the wrock.
The question, then, is whether we can come up with a better way of characterising core knowledge (or modularity).

the question

I want to approach this question indirectly, by appeal to Fodor's ideas about thinking generally.
It will seem at first that I am going off topic.

indirect approach

‘modern philosophers … have no theory of thought to speak of. I do think this is appalling; how can you seriously hope for a good account of belief if you have no account of belief fixation?’

(Fodor 1987: 147)

\citep[p.\ 147]{Fodor:1987rt}

‘Thinking is computation’

(Fodor 1998: 9)

\citep[p.\ 9]{Fodor:1998ap}
The Computational Theory of Mind:
\begin{enumerate}
\item ‘Thoughts have their causal roles in virtue of, inter alia, their logical form.
\item ‘The logical form of a thought supervenes on the syntactic form of the corresponding mental representation.
\item ‘Mental processes (including, paradigmatically, thinking) are computations, that is, they are operations defined on the syntax of mental representations, and they are reliably truth preserving in indefinitely many cases’
\citep[pp.\ 18--19]{Fodor:2000cj}
\end{enumerate}

three points of comparison

  • performance (patterns of success and failure)
  • hardware
  • program (symbols and operations vs. knowledge states and inferences)

thinking isn’t computation … Fodor’s own argument

1. Computational processes are not sensitive to context-dependent relations among representations.

2. Thinking sometimes involves being sensitive to context-dependent relations among representations as such.

In Fodor's terminology, a relation between representations is context dependent if whether it holds between two of your representations may depend, in arbitrarily complex ways, on which other mental representations you have. For our purposes, what matters is that the relation … is adequate evidence for me to accept that … is a context dependent relation. This is because almost anything you know might be relevant to determining what counts as adequate evidence for accepting the truth of a conclusion. Knowing that Sarah missed the conference is (let's suppose) adequate evidence for you to conclude that she is ill … until you discover that she couldn't resist visiting a cheese factory, or that she urgently needs to finish writing a paper. So the adequate evidence relation is context dependent. But since thinking requires sensitivity to whether evidence is adequate, some of the processes involved in thinking must be sensitive to context dependent relations. So not all of the processes involved in thinking could be computational processes of the kind Fodor envisages. This is why the Computational Theory fails as an account of how we think.

(e.g. the relation … is adequate evidence for me to accept that … )

3. Therefore, thinking isn’t computation.

‘the Computational Theory is probably true at most of only the mind’s modular parts. … a cognitive science that provides some insight into the part of the mind that isn’t modular may well have to be different, root and branch’

(Fodor 2000: 99)

\citep[p.\ 99]{Fodor:2000cj}

1. Computational processes are not sensitive to context-dependent relations among representations.

2. Thinking sometimes involves being sensitive to context-dependent relations among representations as such.

3. Therefore, thinking isn’t computation.

Thinking isn't computation because:
\begin{enumerate}
\item Computational processes are not sensitive to context-dependent relations among representations.
\item Thinking sometimes involves being sensitive to context-dependent relations among representations as such.
\item Therefore, thinking isn’t computation \citep{Fodor:2000cj}.
\end{enumerate}

If a process is not sensitive to context-dependent relations, it will exhibit:

  • information encapsulation;
  • limited accessibility; and
  • domain specificity.

(Butterfill 2007)

\citep{Butterfill:2007pe}
Why accept this?
Consider information encapsulation
Approximating evidential and relevance relations with relations that are not context dependent will require restricting the type of input the module is able to process. (Contrast the question, What in general counts as evidence that this is the same face as that? with the question, Which featural information counts as evidence that this is the same face as that?) This contributes to explaining why a Computational process is likely to be informationally encapsulated (to some extent): insensitivity to context dependent relations limits the range of inputs it can usefully accept.
... but maybe not other properties

computation is the real essence of core knowledge (/modularity)

This answers some of the objections we considered earlier.

‘there is a paucity of … data to suggest that they are the only or the best way of carving up the processing,

‘and it seems doubtful that the often long lists of correlated attributes should come as a package’

\citep[p.\ 759]{adolphs_conceptual_2010}

Adolphs (2010 p. 759)

‘we wonder whether the dichotomous characteristics used to define the two-system models are … perfectly correlated …

[and] whether a hybrid system that combines characteristics from both systems could not be … viable’

\citep[p.\ 537]{keren_two_2009}

Keren and Schul (2009, p. 537)

Even so, there is a problem here.

‘the process architecture of social cognition is still very much in need of a detailed theory’

\citep[p.\ 759]{adolphs_conceptual_2010}

Adolphs (2010 p. 759)

This proposal departs from Fodor's overall strategy. Fodor starts by asking what thinking is, and answers that it's a special kind of Computational process. He then runs into the awkward problem that such Computation only happens in modules, if at all. Instead of taking this line, we started by asking what modularity is. The answer I'm suggesting is that modular cognition is a Computational process. On this way of looking at things, that such Computation only happens in modules is a useful result because it enables us to identify what is distinctive of modular cognition.

Fodor

Q: What is thinking?

A: Computation

Awkward Problem: Fodor’s Computational Theory only works for modules

This proposal

Q: What is modularity?

A: Computation

Useful Consequence: Fodor’s Computational Theory describes a process like thinking

Here's where we were at the end of the previous section.
The question was, Can appeal to core knowledge (/ modularity) explain anything?
Have we made any progress?

core knowledge = modularity

We have core knowledge (= modular representations) of the principles of object perception.

two problems

  • How does this explain the looking/searching discrepancy?
  • Can appeal to core knowledge (/ modularity) explain anything?
So how far have we got with respect to the three questions?

questions

1. How do humans come to meet the three requirements on knowledge of objects?

2a. Given that the simple view is wrong, what is the relation between the principles of object perception and infants’ competence in segmenting objects, object permanence and tracking causal interactions?

2b. The principles of object perception result in ‘expectations’ in infants. What is the nature of these expectations?

3. What is the relation between adults’ and infants’ abilities concerning physical objects and their causal interactions?

With respect to the third question, we have made no progress unless we assume that modules are continuous throughout development. But our little theory of modularity doesn't tell us this.
With respect to question 2a, our claim is that the principles are not knowledge but core knowledge, or modular representations; or else that they describe the operations of a module.
Note that we have yet to say which module they describe.
At this point, we suppose they are part of a sui generis module that is concerned with physical objects and their causal interactions.
With respect to question 2b, again the idea is that the expectations are modular representations.
And with respect to question 1, our current answer is that humans meet the three requirements (abilities to segment, &c) by virtue of a module or core knowledge system that is in place from around six months of age or earlier.