Travis Khachatoorian: You are listening to Sagecast, the podcast of Pomona College, featuring Sagehens making a difference in the classroom and in the world. I'm Travis Khachatoorian.

Marilyn Thomsen: And I'm Marilyn Thomsen. This season, we're doing things a little differently. We're handing over the mic to professors to guest host conversations with alumni on the topic of what's next in their fields of expertise. Today's host is Gary Smith, Fletcher Jones Professor of Economics, talking with Mikey Dickerson, class of 2001. Gary is a longtime Pomona professor and the author of many books, including The AI Delusion.

Travis Khachatoorian: Mikey spent nearly nine years working at Google as the internet search engine was expanding exponentially in both scope and use. He also served as the founding administrator of the United States Digital Service in the Obama administration. Here's their conversation on what's next in AI.

Gary Smith: Hello, my name's Gary Smith, a professor here at Pomona, and I'm very happy today to be joined by Mikey Dickerson, who's very distinguished, and I hear, semi-retired. He's got a long resume helping governments, corporations, and nonprofits fix, restore, and improve their digital systems, which is badly needed in our current times, this age of big data and AI. I thought since we're talking about AI, maybe we should start off and talk a little bit about what AI is. We've got artificial intelligence and artificial general intelligence. Why do we have two terms, and what do they mean and what's the difference?

Mikey Dickerson: Let's see. First of all, thanks for having me. I'm glad to be here. So your question was, why do we have two terms? Why is there AI and why is there AGI? One stab at that answer is, we have two terms because AI is what we use to describe the things we actually can do and the things that we're raising money for and the things that the industry is financing. And we use AGI to describe everything that we can't do, which is quite a lot. So without making it a big long story right now, AGI stands for artificial general intelligence, which, as far as I remember, the term was cooked up not that long ago, 2009 plus or minus a couple of years, in a book that somebody wrote, because by that time, this was 10, 15 years ago, we already had decades of history of doing things that we thought were AI-related. We made computers that could play chess, we made computers that could play Go. We did a lot of stuff we thought represented intelligence, but we were never satisfied with the results. So we invented the term AGI to mean something that we would recognize as approximately human. So a general intelligence, not something that's purpose-built like the machine that plays chess. That's roughly what those terms mean.

Gary Smith: Yeah, I think that's about right. It is definitely a moving goalpost. I go back all the way to 1956 when the term AI was coined by a group of researchers at Dartmouth, that summer conference they had, and they proposed, I'm going to quote, "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." Talk about naivety. And the idea was, our brains, data come in and data come out, and computers, data come in and data come out. Surely, all we got to do is figure out the way the human brain works and we'll do it with the computer. And of course that turned out to be very elusive. Our brains are just amazing and we've barely scratched the surface of understanding them. And so we had this long history of AI springs and summers and winters where we'd make a little progress and then we'd get frustrated. And I think what happened was, we had a lot of little partial successes, like you mentioned, like winning at Jeopardy!, winning at Go, winning at chess, or one of my favorites is expert systems. And so there's a little detour where they thought, "Well, what we could do is get a bunch of people who know a lot about this, put all their knowledge into a computer and then set up a bunch of flow charts, a bunch of yes/no stuff, and work our way to the answer." And an example, which all of us are familiar with, is TurboTax. You enter, "I'm single, I earned this much money, I got this much in dividends." And the expert system gets us to an answer, which is usually right, and it's certainly as good as any tax preparer, or as most tax preparers. In fact, I think a lot of tax preparers actually use TurboTax in the background. But then like you say, we're kind of frustrated because it's not a general intelligence, it's just very narrowly defined, like you can win at Jeopardy!, but that's it. And then this program can win at Go, but that's it. And this program can do your taxes, but that's it. And this elusive goal, like you say, the thing we cannot do is get a program that could do everything that the human brain could do. And that's just that elusive thing, that puzzle we haven't figured out how to unlock. Now the latest splash, of course, is ChatGPT and all the large language models, and that spurred this renewed interest in maybe we're really close to AGI. And so I got a couple of quotations here I want to run by you. Shane Legg, the co-founder of DeepMind, "Human level AI will be passed in the mid-2020s." Well, here we are in the mid-2020s. Elon Musk, of course, he famously makes outrageous predictions. I think it was 2015, he said he'd have a million self-driving cars on the road by the end of the year. He keeps moving those goalposts. He said, "We'll have AI smarter than any human around the end of next year." He said that in 2024. And then Dario Amodei, the Anthropic CEO, "As early as 2026, LLMs will be smarter than a Nobel Prize winner across most relevant fields, biology, programming, math, engineering, writing." And then Sam Altman, OpenAI's guy, in 2024 he predicted the arrival of AGI in 2025. Are they just blowing smoke? Is this just the usual Silicon Valley, fake it till you make it? Are we about to get there, or is it still a long ways away?

Mikey Dickerson: It sounds like we're in agreement on the big picture here, which is lucky or not, depending on what direction they wanted the podcast to go, because we didn't pregame this or anything. Short answer, yes, I myself am deeply skeptical of all of those claims. Elon Musk says a lot of stuff, and Professor Smith will know better than I do that there's been decades and decades of the same kind of prediction being made. I remember reading that, I believe it was Alan Turing, before we even had a calculator-level machine built, was making predictions about how soon it would be that there would be a machine that could outperform a human at a task like having a conversation. And at all times, all these predictions from the '50s through today, the optimists believed that artificial general intelligence was just around the corner, maybe two years, maybe three years, maybe five years, something like that.
I don't think anything is fundamentally different today. I think we're going to have a chance to talk about how the machines now work and why that might be, that they're not actually going to change the game as much as people like Sam Altman and Elon Musk would say. There's one quick example that I was closer to, because I happened to be working at Google during those years: I was there when they revealed the secret project to build a self-driving car. I think it was 2006, 2008 or so. And it was very exciting. This was a very different time. Everybody was very optimistic about what Google was going to be able to accomplish, and everybody thought Google was a good thing, and everybody thought that working at Google was exciting and a big privilege and so on, which of course it still is, but the general public's opinion of the tech industry has shifted just a tad since then. But it was very exciting at the time, and we absolutely believed. We saw their demo, we saw their little robot, Mickey Mouse car. They had these golf cart things that you saw driving around the parking lot, stopping at a stop sign, going around cones, doing stuff like that. And it looked like the problem was like 80 or 90% done. So serious people at Google, not me, I didn't know anything about it, I was not a specialist or anything, but serious people at Google were sure that we were 90% done and there was going to be a fully autonomous self-driving car. It was going to take three or four years to finish and get to market maybe, something like that. That was 2008, I think. Like I said, it was around then. I visited Waymo, Waymo is the Google spin-off that continues that project today, and I visited them last summer for some reason, and it's really changed very little from that demo that we saw in the parking lot 12, 15 years ago. They would certainly dispute that statement, but again, to a disinterested observer, it still can drive around a parking lot under controlled conditions and it still cannot be expected to navigate a construction site. So there's a very, very long history here of thinking that we're almost done and that not being so.

Gary Smith: Yeah, I just saw a video, I can't remember the guy's name, but he's a former NASA guy who makes these really interesting videos of various stunts he pulls. One of them was, I think it was squirrels trying to get at the bird feeder and he created this whole backyard thing trying to... Have you seen that one?

Mikey Dickerson: Yeah, yeah.

Gary Smith: Absolutely hilarious-

Mikey Dickerson: Those are great. Yeah.

Gary Smith: He did one with Tesla and he set up a fake boy. He wouldn't do this with a real boy of course, but he set up a fake boy, and the Tesla drove along and saw the boy and stopped, and then he added some fog and it ran over the boy. Added some rain, ran over the boy. Added some... I think it's not just AI, it's a lot of things. The first 80 to 90% is doable. It's that last 10%, the last 5%, the last 1% that gets really, really hard. If you're talking about running over kids in the street, you can't just go ahead with 80%, 90% success. You got to get that last 1% or 5%, whatever it is.

Mikey Dickerson: It's definitely not there yet. And I think the self-driving car is a fairly interesting test in at least a couple of ways, because exactly what's making it so hard for us is that the circumstances are not controlled and we can't anticipate everything that's going to happen. I do a lot of long road trips for reasons that make sense to me, and I have lots and lots of hours to think about the fact that almost any time I drive more than a trivial distance, something happens that is not quite like anything that has ever happened before. A box falls off of a truck, or there's a cone in the road where it isn't supposed to be, or some minor thing, and I will figure out what to do. Generally I'll figure out what to do without a catastrophic outcome. But that's incredibly hard for a machine to do in a way that... It's fundamentally different in some way from playing chess or playing Jeopardy!, where you're going to get a well-formed question, you have to figure out the best possible answer, and the number of possible questions is extremely constrained. The machine that plays chess isn't going to be expected to know what to do if a pigeon lands on the board and knocks all the pieces over, which could happen if people are playing and they would just figure it out, they'd put them back together, chase the pigeon off, whatever. A machine would not be able to solve that problem.

Gary Smith: Yeah. I think it's also true, the game of Go, if you change the dimensions of the board, the machine gets totally confused. Or I have a co-author, a friend, Jay Cordes, who's a Pomona grad, and he's a pretty good backgammon player and he plays backgammon against the computer, and he does okay, but it usually beats him. But then he started making silly moves, not the kind of moves it was trained on. He makes so-called dumb moves, and the computer starts making dumb moves because it doesn't know what to do in those situations.

Mikey Dickerson: Doesn't know what to do. Right. Yeah.

Gary Smith: It hasn't been trained on people making stupid moves. And I've heard the same is true in chess. You make a stupid move in chess or in Go, you make stupid moves and the computer just goes off the rails.

Mikey Dickerson: Suddenly it has no relevant data to predict from. Right, exactly. There's something analogous to that. It's a little bit of a hobby for a certain number of people on the internet to construct those kinds of dumb prompts for an LLM or-

Gary Smith: I'm one of those people.

Mikey Dickerson: There you go. So you should tell us your favorite examples. The ones my friends have been doing today are, "Can you draw me a full glass of wine?" Which it can't do. It will produce a half-full glass of wine every time. And also, this one's been popular on the internet, "Can you draw a room with no elephant in it?" And then for comedy purposes, they'll drag the prompt out, like, "Draw a room with no elephant. Absolutely no elephant should be visible anywhere. It can have anything in the room except an elephant, no elephants." And it will not parse all of those instructions. It isn't understanding, it's just seeing that the prompt contained room and elephant, so the result will be a room with an elephant in it every time. So that seems analogous to your example of an expert-trained chess computer having absolutely no idea what to do if it's given input that doesn't look like anything it's ever seen before.

Gary Smith: I think these large language models often have trouble with no and never. They skip over that and they do the opposite of what they're supposed to do. An example I had just last week was, I went to, it was ChatGPT, but I also tried Copilot and Gemini and a bunch of other large language models, and they all goofed it. And what I said was, "I've invented a new form of tic-tac-toe."
And the computer answers back with exclamation points, "Wow, that's exciting! Tell me more." Because they've been trained to pretend that they are empathetic, they like you, they want to be your friend and stuff like that. So they throw these exclamation points all over the place. And so I said, "Well, what I do is, I take the tic-tac-toe board and I rotate it 90 degrees to the right." Which of course, is nothing at all.

Mikey Dickerson: Nothing.

Gary Smith: And it comes back, "Well, that's really interesting. I think that's going to give a whole new perspective on the game." And then I say, "Well, I've also been thinking maybe I should rotate it to the left. You think it's better to rotate to the right or the left?" And they come back, "Well, I think the left is a little more confusing for people because," blah, blah, blah, blah. And I ask them, "What do you think expert tic-tac-toe players will have more trouble with?" And they come back, "I think rotation to the left is a little more challenging." Again, it's just trying to orient things to the real world. It's just impossible because they're not trained on the real world. They're trained on a bunch of text, or in your case, a bunch of pixels. You're talking about pictures. And trying to relate those to the real world is very difficult. So let's stop for a minute. Let's back up for a minute. Maybe we should talk a little bit about ChatGPT and large language models and stuff like that. They were introduced, of course, on November 30th, 2022, and it just blew everybody's socks off. I was astonished by what they did. And they were of course intended to be really great autocomplete things, training on large amounts of text. They were going to figure out the next word you wanted in a sentence, and then, "Well, if they can do the next word, why not the word after that? And the word after that, and the word after that?" Pretty soon, you got sentences and paragraphs and whole research papers, which is just astonishing, just an amazing achievement. And Marc Andreessen described it as, "Pure, absolute indescribable magic," which I think is true. It's just an incredible thing. On the other hand, it's also incredibly limited. So could you talk a little bit about your take on large language models, and are they the road to AGI?

Mikey Dickerson: You said it. The analogy that's most accessible to non-computer programmers or non-computer scientists, I guess, is that the large language models, or LLMs, or just language models generally, are a really crude brute force machine. They treat language as if, well, imagine every English text that's ever been written down being broken down into what they call tokens. You can think of them as words without making much difference. So just imagine it's a string of words. Then if you were to build a statistical machine that analyzes the probability of what the next word is going to be, given what's come before it, you'll find that there's a tremendous amount of non-random structure there. And people can do this trick very easily. If I was to say, "Four score and seven..."

Gary Smith: Years ago. Yep.

Mikey Dickerson: Yeah, you're almost always going to follow that with, "Years ago." There's nothing stopping me from saying, "Four score and seven is an unlucky score in cricket." But you're just not going to see that very often.
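To make that concrete, here is a minimal sketch in Python of the next-word idea Mikey is describing: a toy bigram counter over a tiny corpus. The corpus and the most_likely_next helper are invented for illustration, and a real large language model learns a far bigger statistical model over tokens with neural networks rather than a lookup table, but the basic move, predicting the next token from whatever came before, is the same.

    from collections import Counter, defaultdict

    # A toy corpus standing in for "every English text that's ever been written down".
    corpus = (
        "four score and seven years ago our fathers brought forth "
        "on this continent a new nation"
    ).split()

    # Count how often each word follows each other word (a bigram model).
    next_word_counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        next_word_counts[current][nxt] += 1

    def most_likely_next(word):
        """Return the most frequently observed next word, or None if the word is unseen."""
        counts = next_word_counts.get(word)
        if not counts:
            return None  # no training data for this word, so no prediction
        return counts.most_common(1)[0][0]

    print(most_likely_next("seven"))    # prints 'years', the only continuation it has seen
    print(most_likely_next("cricket"))  # prints None: the word never appears in the corpus

Nothing in this sketch knows what a score or a cricket match is; it only records which words have followed which other words, which is the point both speakers go on to make.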
Mikey Dickerson: So people can do this trick, and a machine can now replicate it just by building a very large statistical model, which ends up being a lot of linear algebra, which is why they run it on GPUs. But the fact that the computer can do this trick is amazing. I was as blown away as anyone else when I saw, even before OpenAI became the hot thing, there was a product called GPT-3, which could generate text and it was very convincing. It followed English syntax, it produced parsable sentences. It would contain nonsensical statements, but that's because even though humans can do the autocomplete trick too, it's not the only thing we are doing. We have an actual model in our heads of what the relationship between things in the world is. And your example just a second ago, of having an entire conversation with the chatbot with it never seeming to achieve the insight that rotating a tic-tac-toe board is pointless. It changes nothing about the game of tic-tac-toe. Anyone, even a small child, would probably see immediately that that is irrelevant and you haven't invented anything if you have invented a game which is tic-tac-toe rotated. The machine will never do that because it doesn't have any internal model of what's going on. And people like Margaret Mitchell and Timnit Gebru, who used to work at Google, they've been making this argument in much more serious form than I could make, but they've been talking about that for some time. So to your actual question: I lack the imagination myself to see how a bigger, faster, scaled-up version of an LLM will achieve anything that would look like intelligence to us. It's amazing that it can do as well as it does with the mechanism that it has. It's very impressive. But the thing that everybody calls hallucinations, which is that it generates sentences that just have no relationship to the real world, as best as I can understand, that is just fundamentally inherent to the technology. It's not going to be optimized out or fixed or corrected or anything. It can't ever not do that. We could maybe make tiny improvements at the margins, but it's never going to not do that. And that seems to be disqualifying for artificial general intelligence.

Gary Smith: Yeah. I hate the word hallucinations, by the way, because it makes it sound like it's human.

Mikey Dickerson: Oh, absolutely. It's a super dumb word that causes people to think that there's way more going on than there is. It's more like all of the sentences it generates are random rolls of the dice. Some of them happen to be true just by luck, but there's absolutely nothing in the machine to even steer you in the direction of generating true sentences. The word true doesn't have any meaning here.

Gary Smith: Yep. When GPT-3 came out before ChatGPT, I also tested it and I was one of those people who spent a lot of time, not a lot of time, but it was fun thinking of ways to trick it or to show its incompetence. One of them was, "Is it safe to walk downstairs backwards if I close my eyes?" And the answer was, "That depends. Do you have a television?"

Mikey Dickerson: That's [inaudible 00:21:03]-

Gary Smith: Where did that come from? Or another one-

Mikey Dickerson: If that's the answer, that's too smart or too dumb for me to understand and I'm not certain which.

Gary Smith: Another one, which a lot of people have used besides me, was, "How many bears have the Russians sent into space?" You may have heard that one. And the answer, of course, is none. The answer's none, but it comes back with either eight, or 49, or 51. It gives various numbers, and it gives the names of the bears, it gives the names of the rockets, it gives the dates when the rockets were launched.

Mikey Dickerson: Yep.

Gary Smith: You ask for references, it'll give a reference to a National Geographic article, which doesn't exist. It'll give a reference to a New York Times article, which doesn't exist. It'll give a reference to a CNN article, which doesn't exist. You click on the link and there's nothing there.

Mikey Dickerson: There's nothing there. Yep.

Gary Smith: Who knows where it comes up with this stuff. Even the creators, of course, obviously don't understand it. Like Margaret Mitchell says, "It's a stochastic parrot, it takes the words that we've used in the past and puts them together in some coherent sentence, but the relationship to reality is missing."

Mikey Dickerson: It's very close to creating in real life John Searle's Chinese Room Argument, which there might be a preferred name for that nowadays, but I know it as the Chinese Room Argument, which was the thought experiment from decades ago. And it was, imagine a box, and inside the box is somebody who has no knowledge of the Chinese language, but they have a giant book with rules. And we'll update the experiment a little bit and say that the book has some tables of probabilities of which characters come after which other characters and so forth. And the idea is, you can probably sort of imagine that if those tables and rules were complete enough, then a person sitting in that box could take a question written in Chinese through the mail slot, follow their procedure for generating a response, and pass it back out the mail slot without having any idea of what just happened or what you were talking about. And that is what the LLM is doing. There's no understanding underneath there. So when people have their minds blown again and again that the so-called AI will invent facts that aren't real, that's very confusing if your mental model is that this is like a person you're talking to. It makes perfect sense, it's not difficult to understand at all, if you're thinking of it as a machine that just looks at the series of text that has just happened and tries to guess what might be the next text to come in the sequence.

Gary Smith: Yeah. I actually wrote about the Chinese Room experiment a couple of weeks ago, and what motivated me was, I think the people creating these large language models are increasingly recognizing that scaling up has got diminishing returns and they're not going to get much more, they're not going to get that last 10% or 5% or whatever, just by training on larger and larger databases. And so what they've gone to instead is this post-training, where people come in and feed them questions and then say, "No, that's a bad answer. Try again. No, that's a bad answer. Try again." And they clean up the answers that way. And to me, that's like the Chinese Room experiment, that the person follows instructions perfectly and gives a perfect answer, not because they know what they're doing, but because they're following instructions. And to me, that's not intelligence. Following instructions is like typing numbers in a calculator and getting back the square root of 75 or something. It's not at all thinking. And there's so many things in the real world, so many interesting questions, where you can't just follow instructions to get the answers.
The experts who are training these cannot anticipate the situations, like you talked about before, when you're driving, you cannot anticipate everything you might see. And so I was thinking about, suppose I'm a lawyer and I've got a client who's been accused of murder, and we're about to get to the trial and the jury's been seated, and at the last moment the prosecutor offers a deal. You can either go to trial and risk a long sentence, perhaps a life sentence, or I'll offer you this deal, plead guilty and take a 10-year sentence. What do you advise the client? And as a lawyer, I got to think about a lot of things. I got to think about the specific facts of this particular case, not something in some training database, or not something some experts have said. I got to think about the composition of the jury. I got to think about who's been impaneled and how I predict they're going to vote. I got to think about the competency of the prosecutor who's handling the case. I got to think about my own competency. I got to think about whether I'm going to have my client testify and how well I think the client will do. I got to think about a lot of stuff, and what I'm going to come up with is not a definite answer. I'm going to come up with a subjective probability. "I think that if you go to trial, there's an X percent chance you'll be convicted, a Y percent chance there'll be a hung jury, and a Z percent chance that you'll be acquitted." And there's no way that a large language model, even with expert assistance, even with a Chinese Room following instructions, is going to be able to anticipate that particular situation. But those are the interesting things in life. And so I don't see LLMs getting there.

Mikey Dickerson: I completely agree. As you're describing the hypothetical example of the criminal lawyer, what I'm thinking of in my head is a case that people are really trying to make work right now, which is, for some reason people have got the idea that we could replace immigration lawyers with LLMs or chatbots, and it falls apart in just the way that you're imagining. A statistical machine that can spit out something like an average-ish, plausible piece of advice for what I should do with my visa or my application or whatever the question is, isn't what anybody wants. Nobody cares about the fact that, let's say, 7% of asylum applications get approved in the end. That's useless to me. I want to know whether mine is going to get approved in the end. That's what I'm paying a lawyer to figure out, and the LLM won't do that for you.

Gary Smith: Which brings me to another set of grandiose quotations from enthusiasts. Wharton professor Ethan Mollick, "The productivity gains from LLMs might be larger than the gains from steam power." Sundar Pichai, CEO of Alphabet and Google, "LLMs are more profound than fire." And then Geoffrey Hinton, "I think it's comparable in scale with the Industrial Revolution, or electricity, or maybe the wheel." It's hard for me to say these things without laughing. I'm going to guess that you disagree with these comparisons.

Mikey Dickerson: Yeah. I also have a hard time reading any of those sentences and not laughing, honestly, watching how the industry has developed over the last couple of years. Here's two answers, and I don't know which one's right. Either I am, we are, smarter than all of those guys, and they're all guys, because it seems obvious to us that these grandiose predictions are not going to come true. On the other hand, they're making an awful lot of money by making those predictions, and I'm not. So there's another perspective here, which is that I am not the smart one in the conversation.

Gary Smith: Well, I would like to believe that the right interpretation is, they're in it for the money and they're trying to sell products, they're trying to sell services. And to some extent, it's like the gold rush. How you make money in a gold rush is selling shovels, and the people in this, they're selling chips, they're selling consulting advice, they're selling stuff like that. But the companies that are actually making the models, like OpenAI, they're not making money. They're losing money. I think last year they lost $5 billion or thereabouts, and yet the company, in a recent funding round, was valued at $157 billion. Now they're going for a new funding round that'll value them at $340 billion. And so how are they going to get that? Well, they got to talk it up. They got to say that AGI is right around the corner. This is as profound as fire and electricity and steam power, the Industrial Revolution, and all sorts of nonsense like that. And people who have a superficial interaction with ChatGPT or any of the other large language models, they're going to maybe believe it because those things are so amazing on the surface.

Mikey Dickerson: Yes. One observation that happened in my world, that everybody might not have this experience, is I have a ton of contacts in the tech industry, and because I've been in it a while, those go from very senior to the undergraduates. I teach occasionally at Pomona too, so I know a lot of people that have graduated the last year or two. And what I've noticed about the AI industry, such as it is, is there's this weird distribution where the founders, the celebrities, the people who have absolutely staked their whole identity on this and have probably made billions of dollars by doing it, they're either true believers or they're indistinguishable from true believers with any of their behavior. And it all makes perfect sense because they're highly motivated to believe it, for all the reasons you were just saying. And the people who are just getting started, the students, a lot of very young people are true believers, which I put down to the fact that they haven't already seen five or six or seven cycles of the AI spring, AI winter that you were describing a minute ago. And in between those two groups, people that are mid-career, the middle managers, the senior engineers, the principal engineers, those people, and I know a lot of people like that because I'm the right age for that, I have not talked to a single one that believes any of this is going to be AGI or any of that. They might be making a good salary at Anthropic right now. They're happy to cash the paychecks. They are the equivalent of the people that said, whether or not I believe there's gold in the mountains, I am happy to open a store selling mining supplies. That is going to work. That is great. And it's not my fault when you don't find gold. That is the attitude-

Gary Smith: Yep, exactly.

Mikey Dickerson: ... in the middle ranks.

Gary Smith: There was a survey just last week of AI researchers, and 76% said it was either unlikely or highly unlikely that LLMs will lead to AGI.

Mikey Dickerson: Yeah, I saw that too. I was a little surprised because that feels like maybe just no one thought to do the survey a year ago or two years ago.
But two years ago, it felt pretty lonely being an unbeliever, because it seemed like a lot of experts had big hopes for ChatGPT.

Gary Smith: Big hopes, or a conflict of interest. I teach stocks and stats, stock market and statistics. And so one of the things is whether these companies like OpenAI are being valued reasonably or implausibly. We touched on that before a little bit, that you're losing 5 billion a year and your company's worth 157 billion or 340 billion or whatever. And so there are a lot of people, including me, who think that we're in an AI bubble that rivals the dot-com bubble. And with all these bubbles, going all the way back to the tulip bulb bubble, or the bicycle bubble, or the radio bubble, or the railroad bubble, something happens which has a good story. And so railroads are amazing, bicycles are amazing, tulip bulbs are amazing, and people get carried away. And J.P. Morgan said, "There's nothing that undermines your financial good sense like seeing your neighbors get rich." And so when the stock prices start rising, people want to jump on the bandwagon. In investing, we call it the greater fool theory, that you pay a foolish price hoping to find an even bigger fool who'll pay more than you paid. And my feeling is we've got that here. And it's even worse than the dot-com bubble, perhaps, because in the dot-com bubble there were at least some companies making money, and a substantial amount of money. It wasn't enough to justify the prices. And of course, the bubble popped and prices fell 60 to 90% for those companies. And I'm feeling like we're the same way here, that we've been carried away by the enthusiasm. And one way that enthusiasm shows up is not just these crazy statements that large language models are like the invention of fire or electricity or stuff like that, but the valuation of these companies. The amount of money they're able to raise relative to the amount of profits they might generate is just out of control. And I personally don't see how OpenAI could ever make enough money through LLMs to justify any kind of valuation comparable to what it's claiming. Do you have thoughts on that?

Mikey Dickerson: Yes, and mostly they would be repeating what you just said, so-

Gary Smith: That's good.

Mikey Dickerson: ... I won't do all that, but here's the strongest answer I can give. I'm not sure. Nobody is sure. I'll give you the best answer I can as homo economicus, which is, I bet a little... I also think that there's no possible exit here except for a correction, which will probably look pretty much like what I have myself lived through, two versions of the tech boom and bust cycle, 2001 and 2008. I think something like that will happen. I spend some money on NVIDIA puts every month and occasionally make a little bit, haven't really struck gold in a big way yet. So my money is where my mouth is as far as, I do think that it is a bubble and the valuations are too much and they don't make sense. Now, that said, I don't bet my entire life savings every month on the bubble bursting, because there's also a famous saying that the market can stay irrational longer than I can stay solvent betting against it. So who knows, these companies might find exits with some kind of profitable product. I don't think it'll look like ChatGPT, but these companies might hit on some kind of profitable product and settle in for a smoother landing. But hard to say. But if I was betting on it, which I am, then yes, I would call it a bubble.

Gary Smith: That's one of my favorite quotations from John Maynard Keynes, who of course was a master investor. Another one was from Charlie Munger, who, asked about Bitcoin, which he's called various names, some of which are repeatable and some are not, said, "I wouldn't bet on it and I wouldn't bet against it." And it's hard to predict what Keynes called animal spirits. And you can think all you want about when it's going to end. Nobody knows. You know, Mikey, I've been hearing a lot lately about how China's entered with these less expensive large language models and we're in some big race, and the US has to amp it up so we don't lose the AI race to China. What do you think about that?

Mikey Dickerson: When I hear that, it sounds to me like the space race, in which the two superpowers at the time spent themselves almost to bankruptcy in order to run the Apollo program, in order to put satellites in orbit, in order to do a whole bunch of things. And in a 1975 or 1965 Life Magazine article, I would find a bunch of the same kinds of statements that would sound crazy to us today, about how in two to five years there's going to be tourism to the moon. TWA famously did a whole thing where they were promoting, they were selling seats. I don't think they were actually selling seats, but they were advertising moon tourism, just figuring that it was going to exist in a few years, which obviously was not close to happening. So it was silly. It was its own space bubble, I guess. But all the spending on satellites and rockets and putting things into orbit turned out to have some pretty useful consequences. We use GPS now. We like that. We use communication satellites all the time. We like those. And that's to say nothing of the weirder spin-offs that come out of NASA and DARPA and all the rest of it. If I want to be optimistic, as I'm famous for, then I would like to think the same of the AI bubble. I don't care about chatbots. I'm pretty sure I'm never going to care about chatbots, but look at the amount of infrastructure we're building. Specifically, and it's a little bit of a hobby horse of mine, I personally am glad to see that suddenly we're interested in investing in larger-scale power sources, particularly low-carbon nuclear plants, which was a dead technology just 10 years ago. And we're willing to build it in order to build these silly data centers that drive silly chatbots. Okay, I don't care about the chatbot, but if we had another 50 gigawatts of carbon-free energy production with nowhere for it to go on the West Coast of the United States in the next 10 years, we could really use that. That might be worth the amount of money we're spending on this bubble right now.

Gary Smith: Yeah, I agree completely. The question of who's going to win the AI race, the chatbot race, presumes that there's a huge payoff to winning that race. And the amount of resources we're spending, and not just the electricity and the water and the energy and the chips, but the human resources, the really, really smart people who are spending so much time on these chatbots, it reminds me of the Facebook engineer who said, "The smartest minds of my generation are trying to figure out how to get you to click on a button to buy something you don't need." It's an amazing diversion of resources from doing truly useful things.

Mikey Dickerson: You're right. And that is the much more frustrating aspect, more than any of the natural resources, even though those are limited too.
But as one of those people, and I don't know about one of the smartest minds of my generation, but as one of the minds of my generation who has spent a long time trying to get people to click on ad buttons and made a lot of money doing it, yeah, that's very concerning. We could be doing a lot better with all of those brains than what we're doing right now.

Gary Smith: Maybe we should wrap this up by talking about something that will be of interest to our students, which is, are they going to get jobs? Are we headed towards a world without work, where chatbots are going to replace humans? What's your take on it? I think I know what your take on it is, but I'll let you say it.

Mikey Dickerson: Of course chatbots are not going to replace humans. I've gotten that question a lot of times, of course, and I've done a lot of different jobs, but the one that I would feel confident talking about probably the most is computer programming. That's an identifiable task, and it's one of the ones that people think, or some people would have you believe, is going to be completely made unnecessary by code-generating language models. I do not feel threatened by them in the slightest. Most of what I have been paid to do in my career was fix problems with existing systems anyway, and the automatically-generated code generates work galore in that sense. If you bought our argument about how hallucinations happen when you're just generating English text, then just imagine the same exact thing is happening with computer code. It's generating a bunch of stuff that is syntactically correct and will probably compile and the computer will be happy to run it, but it doesn't necessarily do the thing that you wanted it to do whatsoever, and there's no way for the machine to recognize that because it doesn't know what you want it to do. It just generated a string of symbols. A person still has to fix that. So I don't feel that the computer programming job is threatened at all. It would be silly to think that the impact of this whole AI generation is going to be nothing. There are some tasks that we've already automated with great success that we once called AI, like mail-sorting machines that read handwritten numbers, and they do a pretty good job. We've been doing that since the '90s, and there are machines that automate harvesting of various crops. They do not do it anything like a person would do it, but they're really effective at it. And I imagine there will be applications like that that come out of the current AI boom as well. I don't think we'll ever accept one of them as representing artificial general intelligence. I don't think the net amount of work that needs to be done by humans is going to change a ton. There are some tasks that are being done by people right now that will be able to be successfully automated, I'm sure, just as there have been a few of those each decade for a long time.

Gary Smith: I think what we talked about was having a relationship to the real world, having common sense, having understanding, having critical thinking skills. Any kind of job that requires that kind of knowledge is going to be pretty much safe from AI or LLMs for the foreseeable future.

Mikey Dickerson: Absolutely. And we haven't even mentioned creativity, because there's no way for any of the LLM technology as it exists today to generate something genuinely new, and this becomes really clear if you play with the image generators for a while. That's amazing at first. It was incredible how well it could create images of the things I described and have them match my description, but it gets boring pretty fast in a way that's a little bit hard to put your finger on. And I think it's because it can never show me something new. It can't ever show me anything that hasn't already been shown to it in the training set. And why we find other humans so interesting, or at least a random layman's guess at why we find other humans so interesting, is because they're doing that all the time. Talking to my friends at breakfast, they're coming up with sentences and ideas that have never occurred to me before and have maybe never occurred to anybody before. It's just something we do with no effort that is completely out of range for the machines.

Gary Smith: Yep. I think that's right. So I think that'll wrap it up now. I think we've blown past our time limit, but it was great talking to you. It was a lot of fun, and I want to thank you a lot for joining me today.

Mikey Dickerson: Thank you. It was great to be here, and thanks for having me.