
Computation is the Real Essence of Core Knowledge

Spelke and Carey characterise core knowledge by giving a list of features.
This seems dubious.
We then equated core knowledge with modular representation, following a suggestion Spelke made at one point.
This equation of core knowledge and modularity is useful in one respect.
It is useful because Fodor has written a subtle philosophical book about modularity, so we can be confident that our notion is theoretically grounded.
However, the problem remains that Fodor, like Spelke and Carey, introduces modularity merely by listing features.
The key features for us are information encapsulation and limited accessibility.
But in saying that infants' representations of objects have these features, we are really only saying what they are not.
We haven't got very far past the problem I highlighted with the parable of the wrock.
The question, then, is whether we can come up with a better way of characterising core knowledge (or modularity).

the question

I want to approach this question indirectly, by appeal to Fodor's ideas about thinking generally.
It will seem at first that I am going off topic.

indirect approach

‘modern philosophers … have no theory of thought to speak of. I do think this is appalling; how can you seriously hope for a good account of belief if you have no account of belief fixation?’

(Fodor 1987: 147)

‘Thinking is computation’

(Fodor 1998: 9)

The Computational Theory of Mind:

1. ‘Thoughts have their causal roles in virtue of, inter alia, their logical form.

2. ‘The logical form of a thought supervenes on the syntactic form of the corresponding mental representation.

3. ‘Mental processes (including, paradigmatically, thinking) are computations, that is, they are operations defined on the syntax of mental representations, and they are reliably truth preserving in indefinitely many cases’

(Fodor 2000: 18–19)
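
To make clause 3 a little more concrete, here is a minimal sketch in Python (my illustration, not Fodor's own formalism) of an operation ‘defined on the syntax’ of representations: a modus ponens rule that matches only the shape of two formulas, never consults what the symbols mean, and is nevertheless truth preserving whenever its premises are true.

# Hypothetical illustration: formulas are nested tuples; the rule inspects
# only their form, not the meanings of 'rains' or 'street_wet'.

def modus_ponens(premise1, premise2):
    """If premise1 has the form ('if', p, q) and premise2 is p, return q."""
    if isinstance(premise1, tuple) and len(premise1) == 3 and premise1[0] == 'if':
        _, antecedent, consequent = premise1
        if premise2 == antecedent:
            return consequent
    return None  # the rule does not apply to these premises

# From ('if', 'rains', 'street_wet') and 'rains', derive 'street_wet'
# purely by matching syntactic form.
print(modus_ponens(('if', 'rains', 'street_wet'), 'rains'))  # -> street_wet

Note that whether the rule applies is settled locally, by the form of the two premises alone; none of the thinker's other representations matter. That is exactly the feature which causes trouble in the argument below.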

three points of comparison

  • performance (patterns of success and failure)
  • hardware
  • program (symbols and operations vs. knowledge states and inferences)

thinking isn’t computation … Fodor’s own argument

1. Computational processes are not sensitive to context-dependent relations among representations.

2. Thinking sometimes involves being sensitive to context-dependent relations among representations as such.

In Fodor's terminology, a relation between representations is context dependent if whether it holds between two of your representations may depend, in arbitrarily complex ways, on which other mental representations you have. For our purposes, what matters is that the relation … is adequate evidence for me to accept that … is a context dependent relation. This is because almost anything you know might be relevant to determining what counts as adequate evidence for accepting the truth of a conclusion. Knowing that Sarah missed the conference is (let's suppose) adequate evidence for you to conclude that she is ill … until you discover that she couldn't resist visiting a cheese factory, or that she urgently needs to finish writing a paper. So the adequate evidence relation is context dependent. But since thinking requires sensitivity to whether evidence is adequate, some of the processes involved in thinking must be sensitive to context dependent relations. So not all of the processes involved in thinking could be computational processes of the kind Fodor envisages. This is why the Computational Theory fails as an account of how we think.

(e.g. the relation … is adequate evidence for me to accept that … )

3. Therefore, thinking isn’t computation.
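
To illustrate premise 2, here is a toy sketch in Python (my gloss on the note above, not anything from Fodor; the names are made up, and the two defeaters come from the Sarah example). Whether one representation is adequate evidence for another can flip depending on which other representations you happen to have.

# Toy check: missing the conference supports 'Sarah is ill' unless some
# defeating belief is present in the background set of beliefs.

def adequate_evidence(evidence, conclusion, background_beliefs):
    defeaters = {'sarah_visited_cheese_factory', 'sarah_must_finish_paper'}
    if evidence == 'sarah_missed_conference' and conclusion == 'sarah_is_ill':
        return not (defeaters & background_beliefs)
    return False

beliefs = {'conference_was_on_objects'}
print(adequate_evidence('sarah_missed_conference', 'sarah_is_ill', beliefs))  # True

beliefs.add('sarah_visited_cheese_factory')
print(adequate_evidence('sarah_missed_conference', 'sarah_is_ill', beliefs))  # False

The sketch only runs because it hard-codes a fixed list of defeaters. But the point of the note above is that almost anything you know might be relevant: the real adequate-evidence relation has no such fixed list, so no operation defined purely on the local form of the two representations will capture it.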

‘the Computational Theory is probably true at most of only the mind’s modular parts. … a cognitive science that provides some insight into the part of the mind that isn’t modular may well have to be different, root and branch’

(Fodor 2000: 99)

1. Computational processes are not sensitive to context-dependent relations among representations.

2. Thinking sometimes involves being sensitive to context-dependent relations among representations as such.

3. Therefore, thinking isn’t computation.

If a process is not sensitive to context-dependent relations, it will exhibit:

  • information encapsulation;
  • limited accessibility; and
  • domain specificity.

(Butterfill 2007)

Why accept this?
Consider information encapsulation
Approximating evidential and relevance relations with relations that are not context dependent will require restricting the type of input the module is able to process. (Contrast the question, What in general counts as evidence that this is the same face as that? with the question, Which featural information counts as evidence that this is the same face as that?) This contributes to explaining why a Computational process is likely to be informationally encapsulated (to some extent): insensitivity to context dependent relations limits the range of inputs it can usefully accept.
... but maybe not other properties
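
As a rough illustration of the point about restricting inputs (again a sketch of my own, not taken from any of the cited works), consider a face-matching routine whose only admissible input is a fixed-length vector of featural measurements.

# Hypothetical featural face-matcher: it compares two fixed-length feature
# vectors against a threshold; no other information can enter the computation.

from math import dist

def same_face(features_a, features_b, threshold=0.2):
    return dist(features_a, features_b) < threshold

print(same_face((0.11, 0.52, 0.33), (0.12, 0.50, 0.35)))  # True
print(same_face((0.11, 0.52, 0.33), (0.60, 0.10, 0.90)))  # False

Whatever else you know (say, that the person in front of you has an identical twin) cannot affect the verdict unless it is somehow reflected in those featural measurements. That restriction on admissible inputs is informational encapsulation, to some extent.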

computation is the real essence of core knowledge (/modularity)

This answers some of the objections we considered earlier.

‘there is a paucity of … data to suggest that they are the only or the best way of carving up the processing,

‘and it seems doubtful that the often long lists of correlated attributes should come as a package’

Adolphs (2010, p. 759)

‘we wonder whether the dichotomous characteristics used to define the two-system models are … perfectly correlated …

[and] whether a hybrid system that combines characteristics from both systems could not be … viable’

Keren and Schul (2009, p. 537)

Even so, there is a problem here.

‘the process architecture of social cognition is still very much in need of a detailed theory’

Adolphs (2010, p. 759)

This proposal departs from Fodor's overall strategy. Fodor starts by asking what thinking is, and answers that it's a special kind of Computational process. He then runs into the awkward problem that such Computation only happens in modules, if at all. Instead of taking this line, we started by asking what modularity is. The answer I'm suggesting is that modular cognition is a Computational process. On this way of looking at things, the fact that such Computation only happens in modules is a useful result, because it enables us to identify what is distinctive of modular cognition.

Fodor

Q: What is thinking?

A: Computation

Awkward Problem: Fodor’s Computational Theory only works for modules

us

Q: What is modularity?

A: Computation

Useful Consequence: Fodor’s Computational Theory describes a process like thinking

Here's where we were at the end of the previous section.
The question was, Can appeal to core knowledge (/ modularity) explain anything?
Have we made any progress?

core knowledge = modularity

We have core knowledge (= modular representations) of the principles of object perception.

two problems

  • How does this explain the looking/searching discrepancy?
  • Can appeal to core knowledge (/ modularity) explain anything?
So how far have we got with respect to the three questions?

questions

1. How do humans come to meet the three requirements on knowledge of objects?

2a. Given that the simple view is wrong, what is the relation between the principles of object perception and infants’ competence in segmenting objects, object permanence and tracking causal interactions?

2b. The principles of object perception result in ‘expectations’ in infants. What is the nature of these expectations?

3. What is the relation between adults’ and infants’ abilities concerning physical objects and their causal interactions?

With respect to the third question, we have made no progress unless we assume that modules are continuous throughout development. But our little theory of modularity doesn't tell us this.
With respect to question 2a, our claim is that the principles are not knowledge but core knowledge, or modular representations; or else that they describe the operations of a module.
Note that we have yet to say which module they describe.
At this point, we suppose they are part of a sui generis module that is concerned with physical objects and their causal interactions.
With respect to question 2b, again the idea is that the expectations are modular representations.
And with respect to question 1, our current answer is that humans meet the three requirements (abilities to segment, &c) by virtue of a module or core knowledge system that is in place from around six months of age or earlier.