On a random weeknight at a comedy club in Burbank, Pomona College Professor Ori Amir bounds onto the stage.

“Hello, party people!”

By day, the bearded redhead with perpetually tousled hair is a visiting professor of psychology who has taught at Pomona since 2017. By night? An amateur standup comic. 

“As you can tell by my accent, I am a neuroscientist,” the native Israeli says, drawing titters from an audience that doesn’t quite know what to believe. “Sorry, I forgot I’m in Hollywood: I’m a neuroscientist-slash-model,” he says.

“I did get a new haircut. I went to Floyd’s and I told them I work at a college, so could you just give me the haircut of whatever celebrity is most popular among college students these days? So they gave me the Bernie Sanders.”

This time, the laughter comes in full.

To Amir, standup comedy is like a scientific experiment that provides immediate results. You test the hypothesis that your joke is funny: They either laugh or they don’t. There are variables such as word choice, delivery or audience demographics, but the feedback is instant—sometimes painfully so.

His academic research is a far more sophisticated inquiry. Other researchers have used functional magnetic resonance imaging, or fMRI, to study the brain’s responses to humor. Amir’s work with fMRI and eye-tracking technology is groundbreaking: He studies the workings of the brain during the actual creation of humor.

Comedy, it turns out, is a near perfect subject for exploring the creative process.

“It’s a cognitive process that under the right setting could take 15 seconds and you can replicate it many times. Anybody can at least try to do it,” Amir says. “It’s hard to ask a novelist to come up with a novel while you’re watching.”

Using Technology to Study Humor

The scientific journal Nature featured Amir’s work last fall in an article on how neuroscience is breaking out of the lab, citing his doctoral research at the University of Southern California with Irving Biederman on the neural correlates of humor creativity.

Amir recruited professional comedians—including some from the Groundlings, the famed Los Angeles improv troupe that helped spark the careers of Melissa McCarthy and Will Ferrell—along with amateur comedians and a control group made up of students and faculty. He then presented examples of classically quirky cartoons from The New Yorker with the original captions removed and asked the subjects to come up with their own captions—some humorous, some mundane and sometimes no caption at all—as he recorded which areas of the brain were activated.

What Amir found was somewhat unexpected: The regions of the brain lit up by the creation of the funniest jokes by the most experienced comedians weren’t so much in the medial prefrontal cortex, the area of the brain associated with cognitive control, but in the temporal lobes, the regions of the brain connected to more spontaneous association. The findings fit perfectly, he says, with the classic but decidedly unscientific advice by improv comedy coaches to “get out of your head.”

Amir has expanded his work at Pomona, where he teaches such courses as Psychology of Humor, Data Mining for Psychologists and fMRI Explorations Into Cognition. His current work uses eye-tracking technology to examine the relationship between visual attention and the creation of humor.

That study has given undergraduate students who are headed toward entirely different careers an opportunity to contribute to research that Amir expects to publish in a scientific journal next year.

Recent cognitive science graduates Konrad Utterback ’19, who is beginning his career as a financial analyst, and Justin Lee ’19, who plans to go to law school, will be among the paper’s coauthors. Other collaborators include Alexandra Papoutsaki—a professor of computer science whose expertise in the emerging uses and potential of eye tracking has been featured in Fortune and Fast Company—and students Sue Hyun Kwon ’18 and Kevin Lee ’20, who wrote computer code for the project.

Once again using uncaptioned New Yorker cartoons as prompts, Utterback and Justin Lee conducted experiments using a similar assortment of professional comedians that included comics from the Groundlings and Second City, along with amateur comedians and students. 

The eye-tracking device—a low-end model by Tobii that costs about $170 and looks like a narrow black bar attached to the bottom of a standard computer monitor—allowed the researchers to chart the movement of the subjects’ eyes on an X-Y coordinate plane over the 30 seconds they were given to look at each cartoon.
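Gaze data of this kind is simply a time series of screen coordinates. A minimal sketch of one trial’s worth of data (the sample rate and the synthetic drift pattern below are illustrative assumptions, not Tobii’s actual output format):

```python
import numpy as np

RATE_HZ = 60          # assumed sample rate for a low-end tracker
TRIAL_SECONDS = 30    # each cartoon was shown for 30 seconds

# One trial: rows of (t, x, y) — time in seconds, gaze position in pixels.
t = np.arange(RATE_HZ * TRIAL_SECONDS) / RATE_HZ
x = 800 + 50 * np.sin(t)        # synthetic horizontal gaze drift
y = 450 + 30 * np.cos(t / 2)    # synthetic vertical gaze drift
trial = np.column_stack([t, x, y])

# 30 seconds at 60 Hz yields 1,800 gaze samples per cartoon,
# each of which can be mapped onto a region of the image.
```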

The results were then compared to something called a saliency map of the cartoon image. 

“It’s this algorithm that basically determines which part of the cartoon is the most visually salient; it defines visual saliency in terms of things like edges and contrast and light—factors which are likely to attract low-level, primitive visual attention,” Utterback explains. “It basically is a computational way of ranking, at a pixel-level resolution, which parts of the cartoon are the most visually interesting.”
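In code, a saliency map of the kind Utterback describes might be sketched like this—a toy simplification built from just two of the low-level features he mentions, edges and contrast, not the actual algorithm the researchers used:

```python
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Rank pixels by low-level visual interest (edges + contrast).

    A toy stand-in for the saliency algorithm described above;
    real saliency models combine many more feature channels.
    """
    img = image.astype(float)
    # Edge strength: magnitude of the local intensity gradient.
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy)
    # Contrast: deviation of each pixel from the mean luminance.
    contrast = np.abs(img - img.mean())
    # Combine the two channels and rescale to [0, 1] for ranking.
    s = edges / max(edges.max(), 1e-9) + contrast / max(contrast.max(), 1e-9)
    return s / max(s.max(), 1e-9)

# Example: a cartoon-like frame with one bright square on a dark field.
frame = np.zeros((32, 32))
frame[12:20, 12:20] = 1.0
sal = saliency_map(frame)
# The square and its edges outrank the flat background in the map.
```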

Once again, the results were surprising. The expert comedians focused most closely on the salient or conspicuous features of the cartoon, including faces.

“It’s actually a little counterintuitive because you would think, well, you have all this experience doing comedy and then you end up looking at those features that the low-level algorithm has determined to be the most salient ones,” Amir says. “Our interpretation was that it has to do with them actually using the image to generate the captions, using the input to generate associations to come up with something funny, as opposed to trying to sort of top-down impose their ideas.”

That made sense to Utterback.

“The fact that these were improv comedians in particular is relevant because that’s consistent with how comedians do improv comedy,” he says. “They’re basically trained to listen to what other people are saying first and not ruminate internally too much trying to think of something funny on their own, and sort of just be reactive. It makes perfect sense with these results because they were focusing much more on the actual content of the image to create the joke rather than trying to generate it themselves and forcing it to fit the cartoon, which is what we would expect people with no comedy experience to do.”

Justin Lee’s part of the study built on those results, adding the captions the subjects produced to the original cartoons and then asking three different people to rate the funniness of the cartoons and their captions.

“We were able to use the data to determine that this fixation on the salient parts of the image directly correlates with how funny the caption actually ends up being,” Lee says. 
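The analysis Lee describes amounts to a correlation between two measurements per caption writer. A minimal sketch—with invented numbers for illustration, not the study’s data:

```python
import numpy as np

# Hypothetical per-writer measurements (invented for illustration):
# the fraction of gaze time spent on high-saliency regions, and the
# mean funniness rating the resulting caption received from judges.
salient_fixation = np.array([0.21, 0.35, 0.48, 0.52, 0.63, 0.71])
funniness        = np.array([2.1,  2.8,  3.9,  4.2,  5.0,  5.6])

# Pearson correlation: a value near +1 means fixating on salient
# regions closely tracks how funny the caption is later rated.
r = np.corrcoef(salient_fixation, funniness)[0, 1]
print(f"r = {r:.2f}")
```

With this toy data the correlation comes out close to 1; the study’s actual effect sizes are reported in the forthcoming paper.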

The students’ findings support Amir’s earlier results from his fMRI research.

“We basically proved the same thing that he did using a different modality (eye tracking versus fMRI),” Utterback says. “In a nutshell, both experiments show that people with more comedy experience display a higher level of bottom-up, automatic control and less top-down, intentional influence on the humor creation process.”

But Can a Machine Create Comedy?

Amir plans to turn his gaze next to the potential for artificial intelligence to produce comedy. 

His initial instinct is that it is an “AI-complete problem”—one of the few things robots won’t soon be able to do better than humans. There are, however, types of humor at which computers should excel—such as puns, the proverbial lowest form of humor.

“That’s the first type of humor computers are able to do,” he says.

By the way, Amir—who performs around Los Angeles maybe a couple of times a week, sometimes at gigs, sometimes at open-mic nights—has already had the distinction of being the opening act for a joke-telling robot.

The electronic novice of the standup circuit was pretty funny, he admits. However, there was a catch.

“The robot told jokes written by a good comedy writer.”