If computers are made up of hardware and software, transistors and resistors, what are minds made of? Minds clearly are not made up of transistors and resistors, but there is a case to be made that one of the most basic elements of computation is shared by man and machine, the ability to represent information in terms of an abstract, algebra-like code. In a computer, this means that software is made up of hundreds, thousands, even millions of lines that say things like IF X IS GREATER THAN Y, DO Z, or CALCULATE THE VALUE OF Q BY ADDING A, B, AND C. As a graduate student, I wondered whether the mind might use similar abstract algebra-like representations.
For most people, the answer was obvious. Obviously "yes", or obviously "no", depending on whom you talked to. I came into the field right at the time that something called connectionism, or "parallel distributed processing", was becoming popular. The idea was to replace the mind-as-computer metaphor with theories in which the mind would be described in terms of a network of interconnected, neuronlike units.
My early research looked at whether toddlers used algebra-like rules in their efforts to acquire language. In particular, I focused on some of the errors that English-speaking children make as they try to acquire the confusing system of the English past tense. Mainly made up of regular verbs like walk-walked and talk-talked, the English past tense system is littered with exceptions like sing-sang and ring-rang, keep-kept, and even odd historical relics like go-went. Children routinely make errors as they learn this system, saying things like breaked instead of broke. Based on an exhaustive analysis of 11,500 past tense utterances made by young children, I argued that these errors were a consequence of an overextension of an algebraic rule that says TO FORM THE PAST TENSE OF VERB X, CONCATENATE THE STEM OF VERB X WITH THE ED MORPHEME. These data have become perhaps the most widely modeled data set in the field of language acquisition, with some 25 attempts since then to capture them.
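The rule-plus-memory account sketched above can be illustrated in a few lines of code. This is a hypothetical toy sketch, not the actual model from the research: the function names and the small exception list are mine, and real speakers store far more irregulars than this.

```python
# A few memorized irregular forms; real speakers store many more.
IRREGULARS = {"sing": "sang", "ring": "rang", "keep": "kept",
              "go": "went", "break": "broke"}

def past_tense(verb, lexicon=IRREGULARS):
    """Use a stored irregular form if one is retrieved; otherwise fall
    back on the algebraic rule: STEM + ED."""
    if verb in lexicon:
        return lexicon[verb]
    return verb + "ed"  # the default rule, applied to any stem

# Overregularization errors like "breaked" arise when retrieval of the
# stored form fails and the default rule applies anyway:
print(past_tense("walk"))               # walked
print(past_tense("sing"))               # sang
print(past_tense("break", lexicon={}))  # breaked (retrieval failure)
```

The point of the sketch is that a single abstract rule covers every regular stem, including novel ones, while the irregulars must be listed item by item.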
Having convinced myself (if not quite all of my colleagues) that by the age of three children had acquired at least a few basic linguistic rules, I turned to the question of whether the ability to acquire algebraic rules might start earlier, prior to language. I have been conducting a series of studies in which I present babies with sentences from a made-up language, and then ask what the babies can learn about that made-up language. For example, in one set of studies, I asked whether seven-month-old babies could acquire simple quasi-linguistic rules. For two minutes, babies heard sentences like la ta la and ga na ga; I then asked whether those babies could tell the difference between new sentences like wo fe wo (same structure as those they had already heard) and wo fe fe (different structure). It turned out that the seven-month-old babies could in fact tell the difference, suggesting that the ability to learn and generalize abstract algebraic rules is present quite early in life.
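The kind of generalization the babies displayed can be made concrete with a small sketch. This is an illustrative toy, not the experimental procedure or any actual model: the idea is simply that a learner which encodes sentences as patterns of abstract variables (ABA versus ABB) will treat wo fe wo as familiar and wo fe fe as novel, even though every syllable in both test items is new.

```python
def pattern_of(sentence):
    """Map each syllable to an abstract variable (A, B, C, ...) in order
    of first appearance, e.g. "la ta la" -> "ABA"."""
    labels = {}
    out = []
    for syllable in sentence.split():
        if syllable not in labels:
            labels[syllable] = chr(ord("A") + len(labels))
        out.append(labels[syllable])
    return "".join(out)

# Familiarization: the learner extracts one abstract structure.
training = ["la ta la", "ga na ga"]
learned = {pattern_of(s) for s in training}   # {"ABA"}

# Test items built entirely from new syllables:
print(pattern_of("wo fe wo") in learned)  # True  (consistent structure)
print(pattern_of("wo fe fe") in learned)  # False (inconsistent structure)
```

A learner that stored only the concrete syllables it heard would find both test items equally unfamiliar; it is the variable-based encoding that distinguishes them.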
In my 2001 book, The Algebraic Mind: Integrating Connectionism and Cognitive Science (MIT Press), I tried to move beyond algebraic rules, into questions about other basic building blocks of mind, and questions about how they might be implemented in the hardware of the brain. The book is at once a critique of current research in neural networks and a suggestion for a better way to build such networks.
Most recently I have been developing a new line of research, aimed at bringing biology to bear on questions about innateness. Although I have long been convinced that children are born with sophisticated tools for learning language, I was never clear about what it meant for something to be innate; I am now trying to use computational modeling to bring insights from genetics and the Human Genome Project to bear on classic questions about cognitive and linguistic development.