Core Knowledge and Modularity

There are principles of object perception that explain infants' abilities to segment objects, to represent them while temporarily unperceived, and to track their interactions.
These principles are not knowledge. What is their status?

the question

I noted earlier that Hood suggests that answering this question requires that we recognize separable systems of mental representation and different kinds of knowledge.
Now I'm going to try to make good on this idea.

‘there are many separable systems of mental representations ... and thus many different kinds of knowledge. ... the task ... is to contribute to the enterprise of finding the distinct systems of mental representation and to understand their development and integration’\citep[p.\ 1522]{Hood:2000bf}.


Core knowledge is a label for what we need.
I'm going to adopt the label.
But this only amounts to labelling the problem, not to solving it.

core knowledge

‘Just as humans are endowed with multiple, specialized perceptual systems, so we are endowed with multiple systems for representing and reasoning about entities of different kinds.’

\citep[p.\ 517]{Carey:1996hl}


So talk of core knowledge is somehow supposed to latch onto the idea of a system.
What do these authors mean by talking about 'specialized perceptual systems'?
They talk about things like perceiving colour, depth or melodies.
Now, as we saw when discussing categorical perception of colour, we can think of the 'system' underlying categorical perception as largely separate from other cognitive systems --- we saw that it could be knocked out by verbal interference, for example.
So the idea is that core knowledge somehow involves a system that is separable from other cognitive mechanisms.
As Carey rather grandly puts it, understanding core knowledge will involve understanding something about 'the architecture of the mind'.

‘core systems are largely innate, encapsulated, and unchanging, arising from phylogenetically old systems built upon the output of innate perceptual analyzers.’

\citep[p.\ 520]{Carey:1996hl}


For something to be informationally encapsulated is for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules (Fodor 1998b: 127)

representational format: iconic (Carey 2009)

To say that a representation is iconic means, roughly, that parts of the representation represent parts of the thing represented.
Pictures are paradigm examples of representations with iconic formats.
For example, you might have a picture of a flower where some parts of the picture represent the petals and others the stem.
Suppose we accept that there are core systems with this combination of features.
Then we can use the term 'core knowledge' to mean the representations in these core systems.
A piece of \emph{core knowledge} is a representation in a core system.
The hope is that this will help us with the second question.
Consider this hypothesis.
The principles of object perception, and maybe also the expectations they give rise to, are not knowledge.
But they are core knowledge.
The \emph{revised view}: the principles of object perception are not knowledge, but they are core knowledge.
But look at those features again --- innate, encapsulated, unchanging and the rest.
None of these straightforwardly enable us to predict that core knowledge of objects will guide looking but not reaching.
So the \emph{first problem} is that (at this stage) it's unclear what we gain by shifting from knowledge to core knowledge.

knowledge core knowledge

There is also a \emph{second problem}.
This problem concerns the way we have introduced the notion of core knowledge.
We have introduced it by providing a list of features.
But why suppose that this particular list of features constitutes a natural kind?
This worry has been brought into sharp focus by criticisms of 'two systems' approaches.
(These criticisms are not directed specifically at claims about core knowledge, but the criticisms apply.)
\subsection{Objection}

‘there is a paucity of … data to suggest that they are the only or the best way of carving up the processing,

‘and it seems doubtful that the often long lists of correlated attributes should come as a package’

\citep[p.\ 759]{adolphs_conceptual_2010}


‘we wonder whether the dichotomous characteristics used to define the two-system models are … perfectly correlated …

[and] whether a hybrid system that combines characteristics from both systems could not be … viable’

\citep[p.\ 537]{keren_two_2009}


This is weak.
Remember that criticism is easy, especially if you don't have to prove someone is wrong.
Construction is hard, and worth more.
Even so, there is a problem here.

‘the process architecture of social cognition is still very much in need of a detailed theory’

\citep[p.\ 759]{adolphs_conceptual_2010}


So what am I saying?
Our question is, Given that the simple view is wrong, what is the relation between the principles of object perception and infants’ competence in segmenting objects, object permanence and tracking causal interactions?
We are considering this (partial) answer: the principles are not knowledge but core knowledge.

We have core knowledge of the principles of object perception.

Let me remind you how we defined core knowledge.
First, we explained core knowledge in terms of core systems: a piece of core knowledge is a representation in a core system.
Second, we characterised core systems by appeal to a list of characteristics: they are innate, encapsulated, unchanging, etc.
There are two problems for this answer as it stands.

two problems

\begin{itemize}
\item How does this explain the looking/searching discrepancy?
\item Can appeal to core knowledge explain anything?
\end{itemize}
This looks like the sort of problem a philosopher might be able to help with.
Jerry Fodor has written a book called 'Modularity of Mind' about what he calls modules.
And modules look a bit like core systems, as I'll explain.
Further, Spelke herself has at one point made a connection.
So let's have a look at the notion of modularity and see if that will help us.

core system = module?

‘In Fodor’s (1983) terms, visual tracking and preferential looking each may depend on modular mechanisms.’

\citep[p.\ 137]{spelke:1995_spatiotemporal}


Modules are widely held to play a central role in explaining mental development and in accounts of the mind generally.
Jerry Fodor makes three claims about modules:
\subsection{Modularity}
Fodor’s three claims about modules:
\begin{enumerate}
\item they are ‘the psychological systems whose operations present the world to thought’;
\item they ‘constitute a natural kind’; and
\item there is ‘a cluster of properties that they have in common’ \citep[p.\ 101]{Fodor:1983dg}.
\end{enumerate}
What are these properties? They include:
\begin{itemize}
\item domain specificity (modules deal with ‘eccentric’ bodies of knowledge)
\item limited accessibility (representations in modules are not usually inferentially integrated with knowledge)
\item information encapsulation (modules are unaffected by general knowledge or representations in other modules)
\item innateness (roughly, the information and operations of a module not straightforwardly consequences of learning; but see \citet{Samuels:2004ho}).
\end{itemize}
  • domain specificity

    modules deal with 'eccentric' bodies of knowledge

  • limited accessibility

    representations in modules are not usually inferentially integrated with knowledge

  • information encapsulation

    modules are unaffected by general knowledge or representations in other modules

    For something to be informationally encapsulated is for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules (Fodor 1998b: 127)
  • innateness

    roughly, the information and operations of a module not straightforwardly consequences of learning

Not all researchers agree about the properties of modules. That they are informationally encapsulated is denied by Dan Sperber and Deirdre Wilson (2002: 9), Simon Baron-Cohen (1995) and some evolutionary psychologists (Buller and Hardcastle 2000: 309), whereas Scholl and Leslie claim that information encapsulation is the essence of modularity and that any other properties modules have follow from this one (1999b: 133; this also seems to fit what David Marr had in mind, e.g. Marr 1982: 100-1). According to Max Coltheart, the key to modularity is not information encapsulation but domain specificity; he suggests Fodor should have defined a module simply as 'a cognitive system whose application is domain specific' (1999: 118). Peter Carruthers, on the other hand, denies that domain specificity is a feature of all modules (2006: 6). Fodor stipulated that modules are 'innately specified' (1983: 37, 119), and some theorists assume that modules, if they exist, must be innate in the sense of being implemented by neural regions whose structures are genetically specified (e.g. de Haan, Humphreys and Johnson 2002: 207; Tanaka and Gauthier 1997: 85); others hold that innateness is 'orthogonal' to modularity (Karmiloff-Smith 2006: 568). There is also debate over how to understand individual properties modules might have (e.g. Hirschfeld and Gelman 1994 on the meanings of domain specificity; Samuels 2004 on innateness).
In short, then, theorists invoke many different notions of modularity, each barely different from the others. You might think this is just a terminological issue. I want to argue that there is a substantial problem: we currently lack any theoretically viable account of what modules are. The problem is not that 'module' is used to mean different things --- after all, there might be different kinds of module. The problem is that none of its various meanings has been characterised rigorously enough. All of the theorists mentioned above except Fodor characterise notions of modularity by stipulating one or more properties their kind of module is supposed to have. This way of explicating notions of modularity fails to support principled ways of resolving controversy.
 
No key explanatory notion can be adequately characterised by listing properties, because the explanatory power of any notion depends in part on there being something which unifies its properties, and merely listing properties says nothing about why they cluster together.
 
Interestingly, Fodor doesn't define modules by specifying a cluster of properties (pace Sperber 2001: 51); he mentions the properties only as a way of gesturing towards the phenomenon (Fodor 1983: 37) and he also says that modules constitute a natural kind (see Fodor 1983: 101 quoted above).
It is tempting to appeal to spatial metaphors in thinking about modularity. Just as academics tend to work at high-speed on domain-specific problems when they can cut themselves off from administrative centres, so we might attempt to explain the special properties of modules by saying that they are cut off from the central system. But it isn't clear how to turn this metaphor into an explanation. The spatial metaphor only gives us the illusion that we understand modularity.
So what am I suggesting?
First, that we treat core knowledge and modularity as terms for a single thing, whatever that thing is.
This has the advantage that we can draw on Fodor's more detailed theorising about modularity.

core knowledge = modularity

So the view we are considering is that

We have core knowledge (= modular representations) of the principles of object perception.

Does this help us with the two problems I mentioned earlier?

two problems

\begin{itemize}
\item How does this explain the looking/searching discrepancy?
\item Can appeal to core knowledge (/ modularity) explain anything?
\end{itemize}
On the second problem, we have the same difficulty as before; if anything, invoking modularity makes it worse.
But on the first problem our situation is better. To see why, recall the properties of modules.
  • domain specificity

    modules deal with 'eccentric' bodies of knowledge

  • limited accessibility

    representations in modules are not usually inferentially integrated with knowledge

  • information encapsulation

    modules are unaffected by general knowledge or representations in other modules

    For something to be informationally encapsulated is for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules (Fodor 1998b: 127)
  • innateness

    roughly, the information and operations of a module not straightforwardly consequences of learning

Limited accessibility explains why the representations might drive looking but not reaching.
But doesn't the bare appeal to limited accessibility leave open why the representations drive looking rather than searching (and not the other way around)?
I think not, given the assumption that searching is purposive in a way that looking is not. (Searching depends on practical reasoning.)
We'll come back to this later (if core knowledge of objects involves object files, it's easier to see why it affects looking but not actions like reaching.)
 
Except, of course, calling this an explanation is too much.
After all, limited accessibility is more or less what we're trying to explain.
But this is the second problem --- the problem with the standard way of characterising modularity and core systems merely by listing features.

summary so far

core knowledge = modularity

We have core knowledge (= modular representations) of the principles of object perception.

two problems

\begin{itemize}
\item How does this explain the looking/searching discrepancy?
\item Can appeal to core knowledge (/ modularity) explain anything?
\end{itemize}