Pomona College Magazine
Volume 41, No. 2
Pomona College Magazine is published three times a year by Pomona College
550 N. College Ave, Claremont, CA 91711

Online Editor: Mark Kendall

For editorial matters:
Editor: Mark Wood
Phone: (909) 621-8158
Fax: (909) 621-8203

PCM Editorial Guidelines

Contact Alumni Records for changes of address, class notes, or notice of births or deaths.
Phone: (909) 621-8635
Fax: (909) 621-8535
Email: alumni@pomona.edu


Of Metacats and Robodogs
Professor Jim Marshall's research is helping shape the emerging field of developmental robotics.

As a 12-year-old boy in Dallas, Jim Marshall came under the spell of author Arthur C. Clarke. After reading Clarke’s novel 2001: A Space Odyssey, then seeing Stanley Kubrick’s eerie 1968 film adaptation, Marshall was hooked on HAL, the mastermind computer. “The story blew my mind,” he says. “The book and movie combined astronomy, space flight and the idea of artificial intelligence into one wonderful science fiction story that was plausible.”

It was the first time Marshall had come face-to-face with the idea of an intelligent computer, “a conscious computer with a human name. It was a very profound idea for someone in seventh grade and it piqued my interest.”

Thirty years later, as an assistant professor of computer science at Pomona, Marshall is still thinking about computers that can think for themselves.

At Cornell University, Marshall majored in computer science, with a minor in astronomy. Eventually, he says, he was attracted to the field of artificial intelligence by its broad questions—questions like “whether we can create machines that think and whether we should.”

He began tackling those questions in graduate school at Indiana University, Bloomington, where he developed a computer program with “the ability to watch its own behavior and compare its answers.” His 1999 Ph.D. thesis in computer science and cognitive science, titled Metacat: A Self-Watching Cognitive Architecture for Analogy-Making and High-Level Perception, attracted international attention.

According to Marshall, “Metacat is a computer program that uses ‘introspection’ to help it solve and understand analogy problems involving letters. For example, if you change abc into abd, how would you do the same thing to mrrjjj? Or if you change eqe into qeq, how would you change abbba in an analogous way? What about abbbc?” As Metacat works on problems like these, it watches and remembers its own actions at a higher level,
as if observing its own “train of thought.” “This way, when it comes up with a new answer,” says Marshall, “it knows something about the reasoning process that led it to that answer. This makes it possible for the program to analyze and compare different analogies in insightful ways. People think about their own thinking all the time, but getting a computer to do this is a great unsolved challenge for artificial intelligence. Metacat represents a small step toward the still distant goal of understanding what self-awareness really is, and capturing it in a computational model.”
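To get a feel for why these puzzles are subtle, consider that even a single problem supports several competing rules. The short Python sketch below (a toy illustration only, not Metacat’s actual architecture, which weighs many interpretations at once) contrasts two hypothetical rules a naive solver might infer from abc → abd:

```python
import itertools

def successor(ch):
    # next letter in the alphabet: 'c' -> 'd'
    return chr(ord(ch) + 1)

def naive_rule(s):
    # literal reading of abc -> abd:
    # "replace the last letter with its successor"
    return s[:-1] + successor(s[-1])

def group_rule(s):
    # more abstract reading: "replace the last *group* of
    # repeated letters with its successor letter"
    groups = [''.join(g) for _, g in itertools.groupby(s)]
    groups[-1] = successor(groups[-1][0]) * len(groups[-1])
    return ''.join(groups)

print(naive_rule("mrrjjj"))   # mrrjjk
print(group_rule("mrrjjj"))   # mrrkkk
```

Both rules reproduce abc → abd, yet they disagree on mrrjjj, giving mrrjjk and mrrkkk respectively. Deciding which answer is the more insightful one is exactly the kind of judgment Metacat’s self-watching is meant to support.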

During a year’s leave at Bryn Mawr College in Pennsylvania, Marshall revisited his original Metacat program, which relied in part on a proprietary system developed by Motorola. This meant that his program was not freely available for people to download and run. So he returned to his program code—all 20,000 lines of it—and recreated it using an open, non-proprietary system. The new version can be downloaded from Marshall’s Web site (www.cs.pomona.edu/~marshall/metacat).

“The AI field was born in the 1950s,” says Marshall, “and a lot of work was done in the 1960s on neural network or brain-inspired approaches. And, then, interest waned throughout the 1970s while researchers focused on a different approach referred to as symbolic AI.”

In the mid-1980s, interest was rekindled in the neural network approach, which by then had managed to solve some of the problems that had stymied earlier researchers in the field.

“I entered graduate school a few years after the neural network renaissance,” notes Marshall. Fortunately, Indiana University also has a very good cognitive science program, “which is really at the intersection of AI, cognitive psychology, neuroscience, philosophy of mind, anthropology to a certain extent, and certainly linguistics. And, remarkably, all these fields are at Pomona.”

“The big question for me is how can the three-pound lump of matter in our heads exhibit such incredibly flexible behavior. And, how can this chunk of matter be conscious? It’s just a mind-blowing idea when you think about it.”

Cognitive psychologists test human subjects in the lab to determine how the mind must work at an abstract level. Neuroscientists actually look at brain circuits. And philosophers of mind are struggling with the idea of consciousness. Marshall wants to know why all this activity doesn’t simply go on in the brain in the dark, without any subjective experience going along with it.

“I try to understand the mind from the perspective of computer science, mathematics, and from logic, attempting to write computer programs that will exhibit the same kind of intelligence that we see in a human,” says Marshall. “All these scientists are grappling with this very hard question that no one has yet solved.”

The current focus of Marshall’s own research is on developmental robotics, a newly emerging subfield of artificial intelligence that studies the ways in which autonomous robots can acquire their behavior and knowledge strictly through their sensory experiences and interactions with the surrounding environment.

According to Marshall, “Developmental robotics is a move away from task-specific methodologies, in which a robot is designed to solve a particular predefined task—such as planning a path to a goal location. The approach takes its inspiration from developmental psychology and developmental neuroscience.

“The goal of developmental robotics,” explains Marshall, “is to let the behavior of a robot develop over time in an open-ended, self-motivated way, with the robot itself deciding which aspects of its environment to focus on. In a developmental system, a robot would be designed with some ‘innate’ knowledge or capacity, but would then, as it interacts with whatever environment it encounters, learn increasingly complex behaviors and representations of knowledge.” Unlike many other types of learning systems, a developmental system uses training feedback that comes from within the system itself, in the form of internal motivation or reinforcement signals. “The advantage of this approach,” says Marshall, “is that the internal representations a robot creates to model its environment are tied to its actual sensory perceptions and motor actions, instead of being designed by the programmer. This avoids the subtle problem of human perceptual biases being built into the system from the start.”
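A minimal sketch of that idea, under assumptions of my own rather than from Marshall’s systems: the agent below has no external task reward at all. Its internal reinforcement signal is its own prediction error about its sensors, so it is drawn toward whichever action it currently understands least, a simple form of “curiosity”:

```python
import random

class CuriousAgent:
    """Learner driven only by internal motivation (hypothetical example)."""

    def __init__(self, actions):
        self.actions = actions
        self.model = {a: 0.0 for a in actions}   # predicted sensor value per action
        self.error = {a: 1.0 for a in actions}   # estimated surprise (optimistic start)

    def choose(self):
        # internal motivation: pick the action whose outcome is least predictable
        return max(self.actions, key=lambda a: self.error[a])

    def observe(self, action, sensed, lr=0.5):
        # the "reward" is generated inside the agent: prediction error
        surprise = abs(sensed - self.model[action])
        self.error[action] = (1 - lr) * self.error[action] + lr * surprise
        self.model[action] += lr * (sensed - self.model[action])
        return surprise

def noisy_world(action):
    # stand-in environment: each action yields a fixed reading plus sensor noise
    readings = {"left": 0.2, "right": 0.8}
    return readings[action] + random.gauss(0, 0.01)

random.seed(0)
agent = CuriousAgent(["left", "right"])
for _ in range(50):
    a = agent.choose()
    agent.observe(a, noisy_world(a))
# by now the agent's sensor predictions closely track the world
```

Because the error estimate for a neglected action stops shrinking while the chosen action’s error decays toward the noise floor, the agent keeps revisiting both actions until each is well modeled, with no programmer-supplied goal in sight.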

Marshall also has developed a course in artificial intelligence in which students learn to write programs that enable robots to behave intelligently.

His newest acquisition is the Sony AIBO, a robot dog with moveable joints. With sensors on its paws and back, a camera in its nose and microphones in its ears, the robot dog is functional in a realistic way, able to fetch a bone and retrieve a ball. Other robots that Marshall uses in his research include two Khepera II mini-robots, which he likes to refer to as “hockey pucks.” The mini-robots have infrared and light sensors around the sides and two motorized, independently controlled wheels; one of them also has a camera. The largest of his robots, the Pioneer 3, is a popular research-level platform with sonar sensors around the sides and two motorized wheels.

Marshall hopes such programming will lead to some unexpected behaviors. “I haven’t seen anything too surprising yet coming from my robots, because my colleagues and I are still in the early stages of trying to understand how to program them to learn in very flexible and open-ended ways,” he says.

“This is actually somewhat analogous to the liberal-arts-college philosophy,” he adds. “In other words, we try to equip them for lifelong learning.”
—Don Pattison

©Copyright 2004
by Pomona College