

Last updated: April 22, 2025

Podcast #1,065: Co-Intelligence — Using AI to Think Better, Create More, and Live Smarter

The era of artificially intelligent large language models is upon us and isn’t going away. Rather, AI tools like ChatGPT are only going to get better and better and affect more and more areas of human life.

If you haven’t yet felt both amazed and unsettled by these technologies, you probably haven’t explored their true capabilities.

My guest today will explain why everyone should spend at least 10 hours experimenting with these chatbots, what it means to live in an age where AI can pass the bar exam, beat humans at complex tests, and even make us question our own creative abilities, what AI might mean for the future of work and education, and how to use these new tools to enhance rather than detract from your humanity.

Ethan Mollick is a professor at the Wharton business school and the author of Co-Intelligence: Living and Working with AI. Today on the show, Ethan explains the impact of the rise of AI and why we should learn to utilize tools like ChatGPT as a collaborator — a co-worker, co-teacher, co-researcher, and coach. He offers practical insights into harnessing AI to complement your own thinking, remove tedious tasks from your workday, and amplify your productivity. We’ll also explore how to craft effective prompts for large language models, maximize their potential, and thoughtfully navigate what may be the most profound technological shift of our lifetimes.

Connect With Ethan Mollick

[Book cover: Co-Intelligence: Living and Working with AI by Ethan Mollick, depicting a hand reaching for an apple on a tree branch, symbolizing humanity’s evolving journey toward co-intelligence with AI.]

Listen to the Podcast! (And don’t forget to leave us a review!)

Spotify.

Apple Podcasts.

Overcast.

Listen to the episode on a separate page.

Download this episode.

Subscribe to the podcast in the media player of your choice.

Read the Transcript

Brett McKay: Brett McKay here. And welcome to another edition of the Art of Manliness Podcast. The era of artificially intelligent large language models is upon us and isn’t going away. Rather, AI tools like ChatGPT are only going to get better and better and affect more and more areas of human life. If you haven’t yet felt both amazed and unsettled by these technologies, you probably haven’t explored their true capabilities. My guest today will explain why everyone should spend at least 10 hours experimenting with these chatbots, what it means to live in an age where AI can pass the bar exam, beat humans at complex tests, and even make us question our own creative abilities, what AI might mean for the future of work and education, and how to use these tools to enhance rather than detract from your humanity. Ethan Mollick is a professor at the Wharton Business School and the author of Co-Intelligence: Living and Working with AI. Today on the show, Ethan explains the impact of the rise of AI and why we should learn to utilize tools like ChatGPT as a collaborator: a co-worker, co-teacher, co-researcher, and coach. He offers practical insights into harnessing AI to complement your own thinking, remove tedious tasks from your workday, and amplify your productivity.

We’ll also explore how to craft effective prompts for large language models, maximize their potential, and thoughtfully navigate what may be the most profound technological shift of our lifetimes. After the show is over, check out our show notes at aom.is/ai. All right, Ethan Mollick, welcome to the show.

Ethan Mollick: Thanks for having me.

Brett McKay: So I’m sure everyone listening to this episode has heard about or even used what’s called artificial intelligence. Or, you know, we’ll talk about the difference between that and large language models, with ChatGPT being the most popular one. But I think when people use the phrase artificial intelligence popularly, they probably use it without really understanding what it means. You see, like, AI this and AI that, this has AI. When computer scientists talk about artificial intelligence, what do they mean by artificial intelligence?

Ethan Mollick: So it is the world’s worst label, like one of many of them, because it actually came from the 1950s originally, and it has many different meanings. The two biggest meanings recently: before ChatGPT, when you heard artificial intelligence being used, we were talking about machine learning, which is ways that computers can recognize patterns in data and make predictions about what comes next. So if I have all this weather data, I can predict what the weather is going to be tomorrow. If I have all this data about where people order products, I can figure out where to put my warehouse. If I have all this data on what movies people watch, I can use that to predict what movie you might like, given your watching history. So you might have heard of big data, or data as the new oil, or algorithms; all of that was the kind of thing we called AI through most of the 2010s. And then OpenAI introduced ChatGPT and large language models became a big deal. Those use the same techniques as are used in the other forms of machine learning, but they apply them to human language, and it turns out that creates a whole bunch of really interesting new use cases. So AI has meant many different things as a result.

Brett McKay: Okay, so let’s talk about large language models or LLMs, because I think when most people think about AI these days, that’s typically what they’re thinking about. So we mentioned ChatGPT, then there’s Claude, Gemini, Perplexity. How do these things work? Like, whenever you type something into ChatGPT, what’s going on on the other end that gives you whatever it spits out?

Ethan Mollick: So the right way to think about this is that we don’t actually know all the details. We know technically how they work, but we don’t know why they’re as good as they are. Technically, how they work is you basically give this machine learning system all the language that you can get your hands on. And so the initial data sets these things trained on were all of Wikipedia, lots of the web, every public domain book, but also lots of weird stuff, like lots of semi-pirated Harry Potter fan fiction. Also all of Enron, the energy company that went under for financial fraud; all of their emails went in because those were freely available. And so there’s this vast amount of data, and then the AI goes through a process of learning the relationships between words or parts of words, called tokens, using all this data. So it figures out how patterns of language work, and it does that through complex statistical calculations, and it figures that out on its own. So when you actually use these systems, what it’s doing is all this complex math to figure out what the next most likely word or token in the sentence is going to be.

So it’s basically like the world’s fanciest autocomplete that happens to be right a lot of the time.
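For readers who want to see that “fanciest autocomplete” idea in miniature, here is a toy, purely illustrative Python sketch. The tiny probability table and the function names are invented for illustration; real models learn distributions over tens of thousands of tokens with neural networks, but the generate-one-token-at-a-time loop is the same basic shape.

```python
# Illustrative-only sketch of the "fancy autocomplete" loop described above.
# The toy_model table is made up; a real LLM computes these probabilities
# with a neural network over a huge vocabulary of tokens.
import random

# Toy "model": for a given two-token context, a distribution over next tokens.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def next_token(context):
    """Sample the next token from the toy distribution for the last two tokens."""
    dist = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

def generate(prompt, max_tokens=5):
    """Repeatedly append the sampled next token: the whole trick, in miniature."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<end>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the cat"))  # e.g. "the cat sat on the mat"
```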

Brett McKay: Okay. But it can also create images. Like, you can do that with ChatGPT and some of these other LLMs. So what’s going on there? Like, how does that work?

Ethan Mollick: So that’s a really interesting situation, because as of the time we’re recording this, there’s been actually a very big change. So prior to the last week or two, the way that AIs generated images tended to be through something called a diffusion model, which is kind of unrelated to large language models. And it involves taking random static and then kind of carving it away until you get an image. And those models, which we’ve all seen operate, tend to produce a lot of distortion. So they didn’t do language very well; if you talk to them, they’re not really that smart. And so when AIs were creating images, they were prompting one of these diffusion models to make an image for them. That all changed in the last week or so, because two different systems, OpenAI’s GPT-4o and Google’s Gemini, gained the ability to create images directly. This is called multimodal image creation. So now what the AI does is, remember we talked about how it creates language by adding one word after another, one token after another? It now can do that with images. Basically, it’s painting little patches of images.

And just like words, it can create images or voice or any other thing that way. So when it makes an image now, it can actually make it accurately. So there’s been a huge change in a very short period of time.

Brett McKay: Okay, we’ll dig more into how people are using this on a practical basis. But let’s talk about the different LLMs that are out there. So there’s the popular ones, ChatGPT, that’s run by OpenAI. There’s Claude, there’s Gemini. What’s the difference between these different large language models?

Ethan Mollick: So there are a lot of things that are different between them that probably don’t matter that much, because they’re all evolving pretty quickly. So the most important thing to think about, if you’re thinking about which AI to use, is that they all have different features, but they’re all adding features all the time and converging. You want to make sure, at least when you’re trying to do hard problems, that you’re using the largest, most capable AI you have access to; we call these frontier models. So ChatGPT has a lot of options available. GPT-4o was just updated, but GPT-4.5 or o1, their most recent models, tend to be better. So if you are listening to this and you last used AI 18 months ago or 12 months ago and thought, okay, it doesn’t do that well right now, it makes a lot of mistakes, all of those things change as models get bigger. So as models get bigger, they get smarter at everything and more capable at everything. We call this the scaling law. And as a result, you want to have access to a tool that is actively being developed, so you have a very large model.

So Anthropic, OpenAI’s ChatGPT, and Google through their Gemini system are all very good choices, because they all have a lot of options about what they can do and very big recent models to use.

Brett McKay: So researchers have given a lot of tests to these LLMs, the kind of tests that, you know, a human would take, to try to figure out how good these things are. So how do the models do?

Ethan Mollick: So we’re getting to the point where it’s getting hard to test these things. So to give you one example, there is a famous test that’s used to evaluate these models called the GPQA, which stands for Google-Proof Question and Answer, of all things. And it’s designed so that a human PhD student using Google and given a half hour or more to answer each question will get around 31% right outside their area of expertise. And inside their expertise, they’ll get around 82% right. So with Google and access to tools, that’s what they get. What’s happened very recently is that until last summer, the average AI was getting around, you know, 35%, so better than a human outside their expertise, which is pretty impressive, but not as good as a human expert. As of late this fall and into this spring, the models are performing better than humans at that test. So they’re getting 84%, 85%, beating humans at this. So they’ve gotten so good at tests that we’ve had to create new tests. So the most famous of these is something called Humanity’s Last Exam, where a company put together a bunch of human experts in everything ranging from archeology and foreign languages to biochemistry to math.

And they’ve all created really hard problems, problems that the professors who created them would have trouble solving themselves. And when that came out in January, the best models were getting around 2 or 3% right. Now they’re getting between 18 and 28% right, just about six or eight weeks later. So they’re doing really well on exams.

Brett McKay: Yeah, and ChatGPT, when it takes the bar exam, it’s passing it. When it takes the AP exams in biology and history and psychology, it’s scoring fours and fives. So, I mean, yeah, it’s really impressive.

Ethan Mollick: Yeah, I mean, we’re in a place where the AI will beat most humans on most tests.

Brett McKay: So going back to this idea of how AI works, like a fancy autocomplete: so what’s going on? Like, if you give it a question, how is it figuring out the answer? Is it just saying, well, based on this question, the most probable answer is this? Is that what’s going on?

Ethan Mollick: So two things are happening. The comforting thing that’s happening sometimes is they cheat, right? So they’ve already seen these questions, so they can predict the next answer because it’s already been in the data set before. But we find that if we create new questions the AI has never seen before, they still get things right. And the truth is, this is where we’re not 100% sure why they’re as good as they are at this. We’re actually trying to understand that right now. So we know how these systems work technically, but we don’t actually understand why they’re as creative and good and persuasive and interesting as they are. We don’t have great theories on that yet.

Brett McKay: People listening to this who have kept apace of computer science have probably heard of the Turing Test. For those who aren’t familiar with the Turing Test, what is that? And have these large language models passed the Turing Test?

Ethan Mollick: So the Turing Test is one of a series of, like, kind of mediocre measures of what makes artificial intelligence artificial intelligence that we used to use to judge the quality of AI, because it didn’t matter; no AI came close to it. So the Turing Test is this test by Alan Turing, the famous World War II scientist who laid much of the groundwork for artificial intelligence. And he came up with the idea of what he called the Imitation Game. You may have even seen the movie about this. But the idea is that if you talk to an AI via typing and you talk to a human, could you tell which was the AI and which was the human in natural conversation? Until very recently, the idea of this was kind of laughable, right; if you spent time talking to a computer, you would know it was a computer. And in some ways, it’s become kind of irrelevant, because I think everybody thinks that they could be fooled by AI, and they can be. So the Turing Test seems pretty decisively passed. In fact, what’s pretty funny is that at this point, in some small studies, humans actually are more likely to judge the AI as human than to judge a human as human. So we’re still figuring this out. But I think the Turing Test is passed.

Brett McKay: So AI, these large language models, they’re really good at a lot of things. What are the limitations that these LLMs have right now, and what do they not do well?

Ethan Mollick: It’s a good question, because that’s changing all the time. We have this concept in our research we call the jagged frontier, which is that AI is good at some things you wouldn’t expect and bad at some things you wouldn’t expect. So until very recently, for example, you could ask the AI to write a sonnet for you about strawberries where every line starts with a vowel and it has to also include, you know, a line about space travel, and you’d get a pretty good sonnet. But if you asked it to write a 25-word sentence about strawberries or even count the number of Rs in strawberry, it would get that wrong. So the AI has these weird weak spots and weird strong spots. Now, the other thing is, this is always changing. So that R test, how many Rs are there in strawberry, worked really well until January 2025. And now it doesn’t work anymore, because the AIs are good enough that they can count the number of Rs in strawberry. So this is an evolving standard.

Brett McKay: I’m sure people who have been keeping on top of what’s going on with large language models have heard of this idea of hallucinations. What are those? And is that still happening?

Ethan Mollick: So remember, what an AI is doing is predicting the next word in a sentence. It’s not looking things up in a database, it’s predicting. And so oftentimes what it predicts as the next word in a sentence may not be true. So if you ask it, especially the older models, about a book I’ve written, it might make up the title of a different book that could be something I wrote, because it’s predicting something that’s likely to be true, but it doesn’t know whether it’s true or not. We call these hallucinations. They’re basically errors the AI makes, but they’re really pernicious or dangerous errors, because the AI makes things up that sound real, right? If you ever ask for a citation or quote, it’s really good at making up quotes; like, I bet you Abraham Lincoln did say that, but he never did. So it’s not just an obvious error where it makes something up, like, you know, “Abraham Lincoln said the robots will rise and murder us all.” It will say something that sounds like an Abraham Lincoln quote. So we call those things hallucinations. There’s sort of good news and bad news about hallucinations, which is they’re kind of how AI works.

It’s always making something up; that’s the only way it works. It’s always generating, with probability, the next word in the sentence. So it’s always kind of hallucinating. The fact that hallucinations are right so much of the time is kind of weird. And also it’s what makes the AI creative. If it wasn’t making stuff up some of the time, the answers would be very boring and the text would be very boring. So it’s very hard to get rid of hallucinations entirely. But as AIs get bigger and better, they hallucinate less. So just last week, a new study came out that looked at hallucination rates when the AI answers questions about New England Journal of Medicine medical vignettes. And the hallucination rates used to be 25%; a quarter of the vignettes it talked about were hallucinated. Now the latest models, like o1 Pro, are hallucinating 0% of the time. So that is changing over time. That doesn’t mean hallucinations go away, but again, that is a thing that decreases over time.

Brett McKay: Yeah. I remember a couple years ago I wrote this article for our website called “Why Are Dumbbells Called Dumbbells?” And I wanted to see what ChatGPT had to say. So I asked it. I think this was maybe ChatGPT 3.5 when I asked it, and it just gave me this nonsense answer. It was like, well, dumbbells are called dumbbells because Lord Dumbbell in 1772, blah, blah, blah. And I mean, it was well written. And if you didn’t know why dumbbells are actually called dumbbells, you’d think, okay, this sounds like a reasonable answer, you know. But there’s no Lord Dumbbell. That was totally made up. And I just typed the same question in now, so I’m using GPT-4.5, and it actually gave me a closer answer about why dumbbells are called dumbbells. So, yeah, that’s a perfect example of a hallucination.

Ethan Mollick: That’s right. And it’s a great example. And you kind of want it sometimes to tell you the Lord Dumbbell story, because otherwise it wouldn’t be interesting or fun or, you know, come up with creative ideas. And these systems are actually creative, which sort of goes back to when you asked me the question, what are they bad at? People want to hear the answer that they’re bad at creativity, for example, or bad at emotion. Except that they aren’t. So that’s what makes it kind of weird to talk about what AI is good or bad at.

Brett McKay: Yeah, there are creativity tests that they’ve run on the LLMs, and they do pretty well on those creativity tests.

Ethan Mollick: Yeah, I mean, there are some colleagues of mine at Wharton who run a famous entrepreneurship class where they teach design thinking. One of the professors involved actually wrote the textbook on product development. They had their students generate 200 startup ideas, and they had GPT-4, which was the model at the time, generate 200 startup ideas. And they had outside human judges judge those ideas by willingness to pay. Of the top 40 ideas as judged by other humans, 35 came from the AI, only five from the humans in the room, which is pretty typical of what we see: it’s pretty good at creative ideas, and it especially beats most people at coming up with creative ideas. If you’re really creative, you’ll be more creative than the AI. But for a lot of people, at the start of almost every ideation process, you should write down your own ideas first and then ask the AI to come up with ideas for you.

Brett McKay: Before we get into the potential benefits of AI, let’s talk about the concerns people have about it. So in your research about artificial intelligence, and in your conversations with companies and educators, what are the biggest concerns people have about artificial intelligence, particularly the LLMs?

Ethan Mollick: It’s a great question. I mean, there are a lot of concerns. So first off, just to put this in context, we consider AI to be, ironically, a GPT, which in this case stands for General Purpose Technology. So these are those rare technologies that come around once in a generation or two, like the computer and the Internet, or steam power, that transform everything in ways good or bad. So there are lots of effects when you have a general purpose technology that are good or bad. So we could talk in detail about all the little effects, and they may not be that little to you, right? You can make fake images of people; it can convince people to give up their money. There are all kinds of effects that might be negative, job impacts, other stuff. A lot of AI researchers are also worried about long-term issues. So they’re also concerned about what they call existential threats: the idea that, what if an AI is powerful enough that it tries to control the world or kill everybody on earth, or what happens if people can use AI to create weapons of mass destruction?

So there are sort of these two levels of worry. There’s a worry about the kind of impacts that are already happening in the world. And then there are worries, which some people dismiss as science fiction and other people think are very plausible, that AI might be dangerous to all of humanity.

Brett McKay: On that existential threat, there’s this idea that the AI might become sentient. You hear about that. Is that an actual concern? Do people actually think that’s potentially going to happen?

Ethan Mollick: I don’t think anyone knows. We don’t have a good sense of where things are going. And I think people’s predictions are often off. And I think you don’t even need sentience. We don’t even know what sentience is, but we don’t even need sentience to have this kind of danger, right? The classic example of the AI gone wild is called the paperclip problem. Imagine you have an AI that’s programmed, or given the goal, of making as many paperclips as possible as part of a paperclip factory, and this is the first AI to become semi-sentient or self-controlled. It becomes super smart, but still has the goal of making paperclips. Well, the only thing that’s standing in its way is the fact that not everything is a paperclip. So it figures out ways to manipulate the stock market to make more money so it can instruct humans to build machines that will mine the earth to find more metal for paperclips. And along the way a human tries to shut it off, so it kills all the humans incidentally, without worrying about it, and turns them into paperclips, because why would it take the risk that it gets shut off and can’t make enough paperclips? So all it does is make paperclips without caring about humans one way or another. So that’s sort of this model of AI superintelligence. But, you know, again, nobody knows whether this stuff is real or not or just science fiction.

Brett McKay: You write in the book that when people start using LLMs like ChatGPT or Claude, they’ll have three sleepless nights. Why is that?

Ethan Mollick: So this is an existentially weird thing. I mean, it is very hard to use these systems and really use them. I find, by the way, a lot of people kind of bounce off them precisely because they feel this kind of dread, and they sort of walk away. But, like, you’ve got a system that seems to think like a human being, that can answer questions for you, that can often do parts of your job for you, that can write really well, that can be fun to talk to, that seems creative. And, like, these are things humans did. No one else did this. There was no other animal that did this. And it really can provoke this feeling of, what does it mean to think? What does it mean to be alive? What will I do for a living? I don’t know if you’ve seen NotebookLM create podcasts right on demand. Like, you start to worry, what does this mean if this gets good enough? What does it mean for my kids’ jobs, for my job? And I think that creates, you know, some excitement, but also some real anxiety.

Brett McKay: No, I’d agree. If you haven’t had those sleepless nights while using AI, it’s because you haven’t used it enough or gone deep with it. Because, you know, both my wife and I, we have the podcast, but we also write for a living. That’s what we’ve done for the past 17 years. And sometimes, you know, we’ll go to ChatGPT and ChatGPT will spit out something, and I’m like, that was really good. Like, why am I here? What am I doing? Or NotebookLM. I’ve used that. So I’ve used NotebookLM to help me organize my notes, kind of create outlines and things like that. And I’ve used that podcast feature, and it sounds just like two people having a back-and-forth conversation on a podcast. That blows my mind.

Ethan Mollick: And you can jump in with a call-in, by the way; there’s a call-in button now, you know, and this will only get better. And so that is this existential moment. Like, you know, I also write for a living, and of the AIs right now, the best writer is still probably Claude, although some of them are getting better. And, like, it’s kind of crazy. I ask it for feedback on my writing and it has really good insights. You know, I write everything myself. But then I do ask the AI, what am I missing here for a general audience? And sometimes it’s like, it would be really good to tighten up this paragraph. And I’m like, oh, that’s really good advice. And I’ve had editors for years, and, like, it is weird to have this AI be so good at these kinds of very human tasks.

Brett McKay: You call AI a co-intelligence. What do you mean by that?

Ethan Mollick: So as of right now, the most effective way to use AI is as a human working with it. Now, that doesn’t mean that it isn’t better than us at some things, but part of what you need to think about is how to use AI to do better at what you do, to do more of what you love. So you’re not handing over your thinking to it. You’re working with it to solve problems and address things. And one of the really cool things about AI is it’s just pretty good at filling in your gaps, right? So we all have jobs where we have to do a lot of things. Take the example of a doctor. So to be a good doctor, you have to be good at, you know, doing diagnosis.

You have to be good at hand skills and being able to manipulate the patient, figure out what’s going on. You probably have to be good at bedside manner. You’re probably managing a staff; you have to do that. You have to keep up on medical research. You probably have to be a social worker for some of your employees and your patients. No one’s going to be good at all of those things, and probably nobody likes all of those things. The things you’re bad at, you probably like least. So those are the things the AI can help you most with, so you can concentrate on the things you like to do most. The question is whether this maintains itself in the long term. But for right now, AI really is a thing to work with to achieve more than it is something that replaces you.

Brett McKay: So in the book, you provide four guidelines for using AI. The first is always invite AI to the table. So what does that look like in practice, and why do you recommend doing that?

Ethan Mollick: So one of the things we’ve talked about is the idea that with AI, you need to know what it’s good or bad at. And it’s often hard to figure that out in advance, and it’s often uncomfortable to figure that out. So you kind of have to force yourself to do it. And the easiest way to do it is to use AI in an area you have expertise in. So the magic number seems to be around 10 hours of use. And if you use AI for 10 hours to try and do everything at your job you ethically can with AI, then you’re going to find pretty quickly where it can help you, where it can help you more if you learn to use it better, where it’s not that useful, and where it might be heading. And that lets you become good at using AI. So it’s hard to give you rules that make you great at AI use other than: use AI in your job, and you will figure it out. So the first rule, and the rule that I think has become the most useful for people, is just use it.

If you haven’t put 10 hours in because you’re avoiding it for some reason, you just need to do it.

Brett McKay: The second guideline is be the human in the loop. What do you mean by that?

Ethan Mollick: So this is an idea from control systems, that there should always be a human making decisions. I’m using it a little more loosely than that, which is that you want to figure out how to integrate AI into your work in a way that increases your own importance and control and agency over your own life. So you don’t want to give up important things or important thinking to the AI. You want to use it to support what you do, to do it better. Oftentimes when people start using AI, they find out it’s good at some stuff that they actually thought they were good at, and the AI is better than them. That is an okay conclusion to come to. And you then figure out, how do I use this in a way that enhances my own agency and control and doesn’t give it up?

Brett McKay: Yeah, going back to that co-intelligence idea, when I’m working with an LLM, I imagine myself like Winston Churchill. When he was a writer, you know, Winston Churchill was a big writer, wrote histories, he’d have a team of research assistants. So I kind of think of it like, I’m Winston Churchill, and the LLMs are like my research assistants. They go out and find things for me, compile things, summarize things. Then I take a look at it and, okay, now I’m going to take this stuff and write things out myself.

Ethan Mollick: I love that analogy, the research team. I mean, that’s how I used it in my book for the same kind of purposes. Like, I got feedback from it. You know, did my jokes land in this section? It’s not that great at humor, but it actually is pretty good at reading humor. You know, when I got stuck: give me 30 versions of how to end this sentence. You know, did I summarize this research paper properly? So that kind of team of supporters is a really helpful way to think about things. Yeah.

Brett McKay: Then also, I mean, you know, I know these LLMs are really good at things, but I still don’t trust them completely. Same thing as with a person. Even when I delegate a task to a person, I trust, but I gotta verify. Like, well, you gave me this answer; let me make sure that’s right.

Ethan Mollick: Yeah, I mean, I think that’s exactly right. You should be nervous about this in the same way you kind of are nervous about a person, but you also kind of learn its idiosyncrasies, right? So you learn, oh, it’s actually pretty good at these tasks and I can pay less attention, but this one I’m going to be very nervous about.

Brett McKay: Yeah. So the third guideline is treat AI like a person. I think this goes back to our co-intelligence idea, correct?

Ethan Mollick: Well, a little bit. It’s also just general advice. So I think a lot of people think about AI as, you know, software, and it is software. But software shouldn’t argue with you, it shouldn’t make stuff up, it shouldn’t try and solve your marital issues when you’re discussing things with it. It shouldn’t give you different answers every time. But AI does all of those things.

And what turns out to be a pretty good model, even though it’s not a person, is that if you treat it like a human being, you are 90% of the way there to prompting it. We’ve actually found some evidence that computer programmers are actually worse at using AI than non-programmers, because they want it to work like software code. But if you treat it like a person, in the same way as you’ve been discussing here, right, what’s it good at? What do I trust it for? What’s its personality? If you use different models, you’ll find Claude has a different personality than GPT-4, which has a different personality than GPT-4.5. And so treating it like a person gets you a large part of the way there and also demystifies this a bit. And so if you’re a good manager, if you’re a good teacher, if you’re a good parent, you’re probably going to be pretty good at using AI.

Brett McKay: Well, I imagine people hearing this are thinking, well, AI is not a person, and it’s ethically questionable to tell humans to treat this code like a living person. What’s your response to that?

Ethan Mollick: You’re absolutely right. And that battle is lost. So one of the first things people talked about in computer science is that it’s unethical to anthropomorphize AI, to treat AI like a person. And yet every single computer scientist does that anyway, right? We anthropomorphize everything around us, right? Ships are, you know, “she”; we curse the weather like a person or name storms. Like, we do this anyway. So I think it’s really important to emphasize that it is not a person. This is a technique. But for better or for worse, all the AI companies are very happy to blur the line. So a lot of the models have voice modes where they talk to you like a person. They all talk in first person. They’re happy to tell you stories about their own lives, even though they don’t have lives. So I think it is important to remember this is a product, it’s a software product. So view this as a tip for getting things done, but don’t forget that you are talking to software.

Brett McKay: Yeah, I think the danger of anthropomorphism, of just treating it like an actual human being, I mean, you are seeing that at an extreme level where people are actually developing, like, emotional relationships with artificial intelligence. And that’s not good.

Ethan Mollick: I agree. I mean, I think it’s inevitable, but not good, right? Now, there is some early evidence that people who have these relationships with AI may actually be helped by them psychologically. We’re still unclear, but some early papers suggested that may actually be the case for people who are desperately lonely. We don’t know. But as a general rule, I would be nervous about treating a technology like a person emotionally, or having an emotional attachment to it. It is software in the end. But, you know, I think that we can recognize both things are true: that there is a limit at which this becomes unhealthy to do, but as a useful tip or mental model, there’s value in that.

Brett McKay: Yeah, in my use of these different LLMs, treating it like a human, I don’t know, maybe I treat it like an alien, almost like it’s human, but not. I don’t know. Anyways, I’ve noticed that if it gives me a bad answer and I’m kind of mean to it, like a stern boss, like, that was not a good answer, that was terrible, I know you can do better, do better, it does better when I give that response.

Ethan Mollick: Yes. I mean, so it turns out that, you know, giving it clear feedback like a stern boss is actually very valuable. Now, the sternness or politeness doesn’t matter so much. We have a study we put out a couple weeks ago where we found that being very polite to the AI had very mixed effects: on some questions, it would actually be more accurate at math if you were very polite, but on some questions, if you were very polite, it would be less accurate at math. So I don’t worry so much about things like politeness per se, although most people are polite to AI because they kind of fall into that; it feels like a person. But I think you hit a very big secret there, which is the interaction. It gives you a bad answer, you don’t walk away. You say, this is what you did wrong, do better. And it will do better. Not so much because you’re being stern to it, but because you’re acting like an actual manager, right? You’re saying, like a boss or parent, this is what’s wrong, please fix it, or just fix it. You don’t have to say the please, and you get better results.

Brett McKay: With the idea of being polite to the AI, it’s definitely weird because the AI, it’s typically really affirmative, even when it’s giving you a critique, and because it’s being nice to you, like, you feel like you need to be nice back to it. And I’ve noticed that sometimes when it gives me, like, a really good answer to a question I asked it, I feel this impulse to tell it, oh, hey, thanks. That was really helpful. That was great. And then you think, wait a minute, this is weird. Like, what does it mean to feel gratitude for a machine? Yeah, it can be a mind trip sometimes.

Ethan Mollick: It is. And it’s really hard to be rude to these things, especially when you use, like, a voice mode and it’s being like, hey, how are you doing today? Like, you want to answer it, and you are being tricked. So, I mean, that’s why treating it like a human is a technique for using AI. It is not a philosophy.

Brett McKay: Gotcha. Yeah, treat it like a human when you’re interacting with it, but not emotionally, maybe.

Ethan Mollick: Absolutely. And don’t get fooled.

Brett McKay: Yeah, the fourth guideline for AI is assume this is the worst AI you will ever use. Why is that a guideline?

Ethan Mollick: Probably the most accurate thing I said in the book. We talked about test scores earlier. These systems are getting better faster than I expected a year ago. There have been a whole bunch of innovations that have made development happen faster. And, you know, I know enough about what’s happening inside the AI labs themselves to say I don’t think most of them expect the development to end anytime soon. So you should assume that if AI can’t do something now, it’s probably worth checking in a month or two to see if it can do it then. You know, we were talking about writing. I mean, that’s something I’ve been paying a lot of attention to as somebody who writes a lot also, right? That’s my job, both as a professor, as a blogger, and as somebody who’s on social media a lot. And, you know, a year ago, AI’s writing was absolute crap. And now when I use Claude, you know, like you said, it sometimes comes up with a turn of phrase where you’re like, ooh, this is pretty good. You were talking about using GPT-4.5. You can feel that model writes better; it’s cleverer.

And so there is this idea that things that were impossible stop being impossible.

Brett McKay: We’re going to take a quick break for a word from our sponsors. And now back to the show. So I think there’s this fear that, okay, you know, the AI can just do anything and humans are cooked. Like, we’re done. So there’s no point in knowing anything because the AI knows everything. But studies have found that people with a humanities background, you know, who know a lot of history, philosophy, art, things like that, are actually able to make the most of AI. Why is that?

Ethan Mollick: So AI systems are trained on our collective knowledge. The data that goes into building these statistical models comes from everything humanity’s ever written, essentially. And all the art that goes into this comes from not just, you know, the most recent animations, the Simpsons or Studio Ghibli or whatever, but also from the entire history of art for humanity. And part of how you can be successful, like, there was a sort of second caveat to "treat the AI like a person," which is also tell it what kind of person it is. You can invoke styles, personas, approaches. Think about this like you are, you know, Marc Antony. Think about this as if you were Machiavelli. And you get very different kinds of answers, because you’re priming the AI to find different statistical connections than before. So if you have a wide set of knowledge to draw from, like, if you think about AI art, everybody knows about Studio Ghibli or the Simpsons or Muppet style, but if you know German Expressionism and Baroque paintings and, you know, classic 1970s slasher posters, you can get the AI to work in those kinds of styles.

And that gives you edges that other people don’t have, because you can create things that are different than what other people see and get different perspectives than other people. So having that wide knowledge of human endeavor is actually really helpful.

Brett McKay: I’ve noticed that. So I have a humanities background, and I have found that I just get a lot out of it because, like, I can make connections in my head and then I can prompt the LLM with, you know, here’s this weird connection I want to make, or is there any connection there, or how can we make that connection? And I imagine if you didn’t have that background, you can’t do that. Like, the AI is only as good as the prompt or the information you give it. And if you don’t have anything to give it, you’re just going to get kind of mediocre results.

Ethan Mollick: Yeah, I mean, it’s getting easier to prompt, right? So there are not that many tricks to it. But there is this kind of core truth you’re pointing out, and it comes down not just to that first prompt, but to the interaction. The fact that you can see the results and be like, this is dull, add more variation in the sentences. Or, you know, I told you to write this as if you were Stephen King, but I didn’t want you to add so many horror elements, so let’s take those out, right? It’s an interactive experience, and if you know connections, and that’s what the AI is, a connection machine, you’ll be more effective at using it yourself.

Brett McKay: So we’ve talked about treating LLMs like a person, and I think a lot of people don’t realize that because LLMs are trained on how humans think and write, if you talk to one not like a blunt Google search but more like a person, you get better results. But beyond that general advice, are there any other tips for prompt construction so you get better results?

Ethan Mollick: Yeah, there are four things that research backs up. And the first is really boring, which is: be direct. If a human intern would be confused by your instructions, the AI will be too. So you want to be direct about what you want. I need a report for this circumstance, for this reason, and that gets better results. So be very direct about what you want. The second thing you want to do is give the AI context. The more context it has, the better. Context can be: here are some documents I like, or here’s, you know... But it can also be things like, act like this kind of person, or, this is going to be used in this kind of way. The more context the AI gets, the better off it is. The third is what’s called chain-of-thought prompting. This turns out to be a very powerful technique, and it’s actually become a key way that AI has improved: the newest models of AI do this automatically. So it’s no longer as important to do chain of thought, but it used to be the most useful way to do this, which is you literally say, at the end, think step by step.

First, do this, you know: come up with 300 ideas for an article. Two, rate the ideas on a scale of 1 to 10, and then pick the top 5. Then consolidate them together into a new paragraph. Now write the document. So that step-by-step reasoning makes the AI work better. If you think about how AIs work, right, they’re just predicting one word at a time. They don’t have a chance to pause and think. So the way they think is by writing. So if you have them write a bunch before giving you an answer, they’re going to end up with better answers. So chain of thought makes them write out some stuff and go through a logical process. It also makes it easier to figure out what’s going right or wrong. And the fourth tip is called few-shot: give the AI examples of the kinds of things you want to see, either good or bad, and it will deliver things that are more like the examples.
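To make those four tips concrete, here is a minimal sketch of how a prompt might be assembled from them. The `build_prompt` helper, the wording, and the newsletter scenario are illustrative assumptions, not an official template from the book; the resulting text could be pasted into any of the chatbots discussed above.

```python
# Illustrative prompt assembly using the four research-backed tips Ethan lists:
# be direct, give context, ask for step-by-step work (chain of thought), and
# include a few examples (few-shot). Helper name and wording are hypothetical.
def build_prompt(task, context, steps, examples):
    parts = [
        f"Task: {task}",                                       # 1. Be direct
        "Context:\n" + "\n".join(f"- {c}" for c in context),   # 2. Give context
        "Work step by step:\n" + "\n".join(                    # 3. Chain of thought
            f"{i}. {s}" for i, s in enumerate(steps, 1)),
        "Examples of the style I want:\n" + "\n".join(         # 4. Few-shot examples
            f"- {e}" for e in examples),
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Write a 150-word newsletter blurb announcing our new podcast episode.",
    context=["Audience: longtime readers of a men's-interest blog",
             "Tone: warm, plainspoken, no hype"],
    steps=["Brainstorm 20 possible angles",
           "Rate each angle 1-10 for reader interest and pick the top 3",
           "Draft the blurb using the best angle"],
    examples=["'This week we sat down with...' (conversational opener)",
              "'You'll come away knowing how to...' (benefit-focused close)"],
)
print(prompt)
```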

Brett McKay: Okay, yeah. I think that earlier tip of just telling the AI what you want it to be can be really useful. So I used this the other day. For the past couple of years, I’ve had this pain in the back of my knee from squatting, from powerlifting, and it’s gotten better, but I’ve gone to an orthopedic surgeon, did an MRI, and they’re like, well, nothing’s going on there. Went to a physical therapist, and he really didn’t know what was going on. And so just the other day I was like, I haven’t asked ChatGPT this. What would ChatGPT say? So I told it, I want you to be the world’s best physical therapist/orthopedic surgeon. I don’t know if this is actually very good, but I said, that’s what I want you to be. Here’s the situation I have. I took a picture and pointed to where the pain was in the back of my knee. Here’s when I experience the pain, etcetera. Like, what’s going on there? And then generate a rehab protocol. And it generated this rehab protocol. Then I started doing some of the exercises, and it actually feels like it’s working, because I can feel it in the spot that’s been hurting.

And I haven’t been able to do that with, like, the advice that my physical therapist gave me.

Ethan Mollick: Huh. I mean, listen, I think, you know, with all the qualifiers around this: if you’re not using AI for a second opinion, you should be. Cheap second opinions are super easy, and you absolutely should be doing it. All the research shows it’s a pretty good doctor, right? Do not throw out your doctor for this yet. But that exact kind of use, I’ve used it for the same thing where I hurt my shoulder, you know, and I’m like, tell me what the issues could be. And it’s not bad, right? It’s certainly better than searching Google for this stuff. And the research on medicine shows it works pretty well. What you actually did there was give it both context and a persona: act like this person. That’s a very reasonable way to start; that’s part of the advice in the book, tell it what kind of person it is. And then you gave it all the background, including, I love that you gave it the picture with the arrow pointing to it, because these things can see images. And so giving it that context made it more accurate.

Just like with a person, you could put in your medical history and numbers. Again, I would not use this as your only physician, but as a backup to empower yourself, it’s incredibly powerful.

Brett McKay: Okay, so let’s talk about some practical ways you’ve been seeing companies and educators use AI. Let’s talk about work first.

So what are some, you know, brass-tacks ways people can use AI in their work? We’ve kind of mentioned some things, but what are some things that a general worker, maybe someone who’s in management or something like that, how can they use AI in their workflow?

Ethan Mollick: So it’s pretty good for advice. There’s a really nice study showing that small business entrepreneurs in Kenya who were already performing well and who just got advice from the AI had profits increase 12 to 18% from that advice alone; those who were performing badly didn’t have the resources to do anything with it. So it’s pretty good at giving you advice or helping you talk through issues. It’s obviously pretty good at writing and reading. Like, it’s pretty good at summarizing an entire meeting and telling you what action points people can take. Increasingly, if you use the deep research modes, it writes an incredibly good market research report. There might still be some errors, but it’s a great starting point. It can save you 20 or 30 hours of work. And those deep research modes are available right now in Gemini, in OpenAI’s ChatGPT, and in Grok from Elon Musk’s xAI. But those deep research modes are very, very useful. I mean, I’ve worked with them with lawyers and accountants, and they’re also very impressed by the results. It’s very good if you can’t code; I build little coding tools all the time. Help me work through the financials here by building an interactive spreadsheet for me.

So you have to experiment. That’s the 10 hours thing. But there are a lot of use cases. The thing I tend to point out to people in a work environment is two things. One is, you will know what it’s good or bad for pretty quickly, because you’re an expert at your own job. So if you’re like, this is not good for that, great, you’ve learned something. If it is good, you often know how to give feedback to make it even better. The second thing I would tell people about using AI at work is that the thing you have to overcome is this mindset of working with a human, where you can only get so many answers. I think you should take a maximalist approach to working with AI. Don’t ask it for one way to write this email; ask it for 30 and then pick the best one. Don’t ask it for one idea; ask for 200. It doesn’t get tired, it will never get annoyed at you. So part of the value of it is this abundance.

Brett McKay: You also talk in the book about how you’ve got to figure out how to decide what to delegate to the AI and which tasks you should keep doing yourself. So is there a rubric you use to make that decision?

Ethan Mollick: So I think part of it is about personal responsibility and ethics. What do you think you ethically have to keep for yourself to do? For example, we actually know from research that AI is a better grader than I am, but I don’t use the AI to do grading on papers, even though it’s better, because I feel like my job as a professor is that I am providing the feedback, right? Or if I’m using, you know, teaching assistants or something, I would delegate to those humans. But I don’t use AI to do that even though it could do a better job. On the other hand, you know, there are things I know the AI is not going to be great at, where I know I have to take over. And I know that because I’ve spent my 10 hours working with AI. So I think it’s either ethics or "the AI can’t do that" that creates that line.

Brett McKay: Gotcha. And I think, too, with this idea of thinking about AI in your work, I’ve read about this, maybe you talked about this in your book too, I can’t remember, but you’re now in a position where everyone who has access to AI can do a lot of jobs at an 80% level, whereas it used to be, you know, if you were bad at writing a memo or doing other kinds of tasks, then your career was going to be kind of stunted. But with AI, you can write a pretty decent memo, but everyone else can also write pretty decent memos. So now it’s like, okay, if AI can get everybody 80% of the way on the more basic stuff, then you’ve got to figure out how to do the other 20% super well. And, you know, that’s what’s going to separate you from the pack, if you can do that extra 20%. So you’ve got to ask, what can I add to get all the way there? And that’s often the hardest part.

Ethan Mollick: So I’ll push back a little, because I think when I say it does things at an 80% level, that’s not always the easy part. Sometimes it actually does the hard part, and it’s very good at that. I think the question is how you attach it together, how you work together with it, and focus on the areas where you’re definitely better than AI. You know, I think about this a lot. I’m a former entrepreneur myself. I teach entrepreneurship classes at Wharton; you know, I’ve founded companies, I work with companies. And one of the things that’s really interesting about being an entrepreneur is you generally are really good at one or two things and you suck at everything else. But you have to do all that other stuff to do the one or two things you’re good at. So you’re really good at coding, or you’re really good at running a podcast like this, you write compelling content nobody else is able to write, but you also have to keep the books and fill out forms and give your employees performance reviews and all the other stuff that comes with running a business that you may not be good at: writing emails, you know, writing marketing copy, PR.

So the idea is that if the AI does that as well as an 80th percentile person, that’s not bad, right? That was stuff you were doing at the 20th percentile. So that lets you focus on the things you do really well and give up the stuff you don’t do well.

Brett McKay: That makes sense. Are there any, like, specific prompts that you’ve found useful for the world of work?

Ethan Mollick: So there’s a whole bunch of things you could think about. I find one really good thing is to ask the AI to have arguments on your behalf: what are some pros and cons of this? Another really nice piece of advice is to ask it to think about frameworks you can use to address a problem. Examples of frameworks might be things like a two-by-two matrix or a strategy matrix. Give me two different frameworks that I can use to think about this problem and tell me what those frameworks would say. So you can force the AI to kind of think like a high-end consultant on those kinds of problems.
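As a concrete illustration of that "think like a consultant" advice, here is one hypothetical way such a prompt could read. The business scenario and wording are invented for illustration; the string could be pasted as-is into any chatbot.

```python
# A hypothetical "pros and cons plus frameworks" prompt of the kind described
# above. The small-business scenario is made up purely for illustration.
frameworks_prompt = """
Act as an experienced strategy consultant.

Situation: I run a small online store and am deciding whether to add a
subscription box alongside one-off sales.

1. Argue the strongest case FOR and the strongest case AGAINST the idea.
2. Analyze the decision with two different frameworks, for example a
   2x2 matrix (effort vs. expected payoff) and a simple SWOT, and tell me
   what each framework would conclude.
3. Finish with the single question I most need to answer before deciding.
"""

print(frameworks_prompt)
```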

Brett McKay: So how do you think AI will affect more creative work? Like, what role do you think humans can play in a world where AI can create pretty good art, write good copy, even do a podcast? Where do you think humans fit in there?

Ethan Mollick: So I think if AI stayed at the level where it is right now, it’s quite good, but it’s not as good at podcasts as you. I’m trying to butter you up for good editing here. It’s not, I don’t think, as good a professor as me, right, or as good a writer as a good writer. Whatever you’re best at, you’re probably better than the AI at it. The question is whether that stays the same, right? It hasn’t so far, right? Next year it’s going to be better than it is now. At some point it might be a better podcaster than you, it might be a better professor than me, or better at writing research papers or whatever else. And I think that becomes the big question: what do we do in that world? And that’s a decision we get to make in some ways. AI is something being developed, but it’s not something that we don’t have any control over. And what I worry about is when people just sort of throw up their hands and say, well, AI does stuff. What do we want the future to look like? We get to make decisions about that.

Brett McKay: Yeah. So, I mean, you’ve talked about how you’re still using AI in your own creative process. Like when you write, you know, you’re trying to figure out how to end a sentence, and you’re just thinking, thinking, thinking, and nothing’s coming to you. So you ask the AI, well, you know, what are 30 different ways I could end this, and it spits some things out. Then you’re like, oh, well, that’s a good one. Or you start mish-mashing, you know, kind of different sentences that it spit out to get a good one.

Ethan Mollick: I mean, that kind of method of working with it, that co-intelligence piece, is ultimately the message here, at least for right now, at the level AIs are at. It has weaknesses, and you have the ability to use it as a starter for information, as a fill-in, as a way to get more done, right? So maybe there’s a world where the AI is very good at podcasting and you develop a way so that it’s doing personalized podcasts for everyone who downloads one, right? In this model, you’re hearing the two of us talk, but we’re talking specifically about the issues that you, listener X, are experiencing. I mean, there are more ambitious future worlds where, if everyone has a thousand PhDs, what do you do with those? So I don’t think this takes away all choice and agency for us. It does make us rethink how we work.

Brett McKay: Okay, so we’ve talked about using AI at work. Let’s talk about using AI at school. And you’re a professor, so you’ve got a front-row seat to see how this is all playing out. But before we talk about some of the potential upsides of AI in the classroom, let’s talk about the disruption. It seems like AI has pretty much blown up homework. Like it’s caused the homework apocalypse. You know, when a student gets an assignment, they can just go to AI and say, AI, write me an essay. AI, here’s a picture of a math problem, a calculus problem. Solve it. So what do we do in a world where students can just get the answer right from AI? I mean, is school over?

Ethan Mollick: So, I mean, right now it’s absolute chaos, right? As of last July, 70% of undergrads and 70% of K-12 students were using AI for “help with homework.” So everyone’s using it. AI detectors don’t work, by the way. All of them have a high false positive rate. Some people just write like AI, and they get accused all the time of using AI, and they can never prove they didn’t use it.

Brett McKay: Yeah, like, AI uses the word delve a lot. And before AI, I’d use that word. I’d use the word delve. And now I can’t use delve anymore because it’s kind of an AI thing.

Ethan Mollick: Yes.

Brett McKay: And I don’t want people to think that AI wrote it.

Ethan Mollick: Well, what’s actually pretty funny is there’s a statistical analysis showing that the use of delve has dropped off dramatically, because the models no longer say delve that much and no humans want to use it anymore, right? So it’s very funny to react negatively to it. But you can’t ever prove that you’re not using AI, right? I’ve just kind of given up. I mean, what you end up doing is leaving spelling errors in or something like that and hoping that proves it. But you’re facing the exact same problem we all are, right? You could be accused of using AI anytime, and you can’t prove it. So teachers really have two choices. Choice number one is the same thing we dealt with in math classes after the calculator came out in the 1970s, which is you go back to basics and you say, listen, do the homework or don’t do the homework; the homework helps you with the tests. In class, we’re going to have active learning. I’m going to ask you questions about the essays you wrote, you’re going to do in-class assignments, you’re going to do blue book tests.

And that’s a completely reasonable way to respond to AI in the short term. That’s exactly what we did in math classes, right? You do the math homework, it might be graded, it might not be graded, but the big deal is the tests you do in class. And we could do that for other things like writing; we just don’t. The second option is you transform how you’re teaching. My classes are 100% AI-based; everything you do involves AI. So you teach an AI that pretends to be a bad student, you co-write a case with it, the AI quizzes you about problems. Because I teach entrepreneurship, I’m also able to give previously impossible assignments, like, you know, come up with a new idea and launch a working product by the end of the week. We can do things we couldn’t do before. So we’ll figure this out. But schools are definitely in chaos right now.

Brett McKay: Well, I think going back to that point you made, that people with humanities degrees or a humanities background do better with AI. I mean, I think that makes the case that we still have to teach young people general knowledge. That becomes even more important if you want to actually make this AI useful.

Ethan Mollick: Absolutely. General knowledge is more important than ever. Expertise is more important than ever. And we can teach people this. I mean, we really can; they’re in the classroom already. And the most effective way of teaching has always been active learning, where people are doing things actively in the classroom and not just hearing a lecture. So the trend even before AI was, how do we create flipped classrooms, where you watch videos of lectures or read textbooks outside of class, then in class you apply that knowledge? That kind of approach is very AI-proof. And there are lots of ways we can use AI to make learning more engaging. I’ve been building games and simulations where, you know, you don’t just learn how to negotiate, there’s an AI you negotiate with, and that turns out to be really easy to build. You can use AI to do all kinds of really interesting teaching things. There’s research out of Harvard that shows an AI tutor improves test scores. There’s another big study done by the World Bank in Nigeria on six weeks of after-school AI tutoring with teachers in the room.

It’s actually important to have teachers involved, because when students just use AI by themselves to learn, it turns out they don’t learn very well at all. They just kind of cheat, don’t realize they’re not learning, and do worse on tests. But if you make it part of assignments and teachers work with you on this, then you actually get huge increases in learning outcomes. So there is a really good future where AI supercharges learning, makes it more personal, makes it better. And I think we’re close to that. It’s just, you can’t say to your kids, use AI and it’ll all work out, because that’s not actually the case. Learning requires effort, and letting AI skip that effort can actually hurt you. So we have a lot of potential for the future, but also a lot of misconceptions and some real thinking to do about how to use this properly.

Brett McKay: This is something my wife and I discuss quite a bit, since we’re writers, and we look at what AI can do with writing. Is there even a point to my kids learning grammar and how to diagram a sentence and whatnot? You’re a writer. Is there still a case to be made for learning those fundamentals of writing in a world where ChatGPT can just spit out something for you?

Ethan Mollick: I mean, again, I think the key is really building true expertise. And I think what this hopefully does is sharpen things for us. You know, math classes became a lot more organized after the calculator, because people had to actually think about what we want people to learn: how much should they learn to do multiplication and division by hand, what is that valuable for, and when should they switch over to using calculators? And I think we can do the same thing with writing education. I mean, I understand that it kind of sucks, right? Essays used to be a great tool for teachers. They could just assign essays and assume people learned. A lot of people didn’t learn, or were already cheating. By the way, prior to ChatGPT, there were 40,000 people in Kenya whose full-time job was writing essays for American college students. So this isn’t a new problem. Whether diagramming sentences is the right approach, or just writing a lot with creative prompts, I think writing remains really important, because we want people to learn to be good writers and readers, and that’s what school’s for.

But we have to start approaching this a different way. We can’t just assume we give people a take-home essay assignment and they’re learning something from it. That also hasn’t been true for a long time; since the Internet came out, people have been cheating. So I think we have to face the fact that, you know, this is something we have to learn how to do better and actively work to do better.

Brett McKay: Any advice for parents who’ve got kids in middle school or high school and are seeing their kids use AI for homework help? Any advice on guiding them so they can use it not just as a way to cheat and get the answer and get the homework done, but as something that can actually enhance their learning? What are some prompts or guidelines for that?

Ethan Mollick: So we have a bunch of free prompts that you can use, and you can find those at the Generative AI Lab at Wharton. There’s a list of tutor prompts you can use. But aside from those, I don’t think prompts are really the key. They’re important, but I think the real key is thinking about, as a parent, how to use it. So for example, when you want to give your kids homework help, don’t just let them use AI, or at least suggest they don’t use it on their own. Instead, you take your phone, take a picture of that calculus problem, and ask the AI: explain this to me in a way that lets me teach my kid how to do it, and here’s what they’re good at or bad at. Or even better, have an ongoing conversation where it knows the strengths and weaknesses of your kids. When your kids do use AI, have them ask it for practice help for quizzes: generate problems for me from this unit of AP Social Studies and quiz me on what I know or don’t know. The key is that it has to be effortful work.

So if they’re just getting answers from the AI, they’re not getting anything valuable. If they’re being quizzed by the AI, if they’re asking questions and getting answers back, they’re indulging their curiosity. And you’re the one using this to help you become a better teacher. We’re all, you know, amateur teachers to our kids on lots of topics. I mean, I can’t remember calculus, but the AI can, and you can use these tools to do this. But it’s like any other form of media or experience: you need to be an active parent.
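For parents comfortable with a little scripting, here’s a minimal sketch of the quiz-me pattern described above, again using the OpenAI Python SDK. The subject, the system prompt, and the model name are assumptions for illustration; the point is that the AI asks the questions and the student does the effortful work:

```python
# Minimal sketch: a quizzing tutor that asks questions instead of handing over answers.
# Assumes the openai Python SDK (>= 1.0) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

system = (
    "You are a patient tutor for a high-school student studying a unit of AP Social Studies. "
    "Ask one practice question at a time, wait for the student's answer, then give feedback on "
    "what they got right or wrong. Never give the answer before the student has tried."
)

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "Quiz me on what I know or don't know."},
]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    print(text)
    messages.append({"role": "assistant", "content": text})
    answer = input("> ")  # the student types an answer; an empty line ends the session
    if not answer.strip():
        break
    messages.append({"role": "user", "content": answer})
```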

Brett McKay: And I think even if you don’t have kids and you’re just an adult who wants to continue your education, AI can be a really powerful co-learner or co-teacher. I’ve been using it in my own personal reading. Right now I’m reading Invisible Man by Ralph Ellison. I read it back in high school and decided to read it again as a middle-aged guy, and I’ve been reading it along with AI. So I’ll finish a chapter and go to the AI and say, hey, you’re an American literature professor, I want to talk with you about Ralph Ellison’s Invisible Man. Let’s talk about chapter three. And it says, okay, yeah, here’s chapter three, here’s what happens. But then I’ll just start asking it more and more questions, kind of drilling down into more and more specific questions, like, you know, what do you think is going on in this line? What does that mean? And it starts spitting out ideas, and it just gets me thinking about the text in a deeper way.

Ethan Mollick: And by the way, that co-thinking partner thing is often important. I spoke to a quantum physicist at Harvard, and he said his best ideas come from talking to the AI. And I’m like, is it good at quantum physics? He said, no, not really, but it’s very good at asking me good questions and getting me to think. And I think you’re spotting maybe the ultimate form of co-intelligence. Even with, you know, a supportive spouse who’s doing the same work you’re doing and is intellectually engaged with you, we still lack thinking partners in the world, right? So it can help spur your own thinking. I love your examples of use. They show what happens when you get comfortable with this system and start to think about, how can I use AI to help? And what I love is that all the examples you’ve given, how it helps with your writing, how it helps with this reading project, are about having it supplement your thinking, not replace it.
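As a rough illustration of the chapter-by-chapter reading-companion approach Brett describes, here’s a short sketch that keeps the whole conversation as context so follow-up questions can drill down on specific lines. The persona wording and model name are assumptions:

```python
# Minimal sketch: a literature-professor persona you can question chapter by chapter.
# Assumes the openai Python SDK (>= 1.0) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "system",
    "content": (
        "You are an American literature professor. We are discussing Ralph Ellison's "
        "Invisible Man one chapter at a time. Summarize briefly when asked, answer follow-up "
        "questions about specific lines and themes, and pose questions back to the reader."
    ),
}]

def ask(question: str) -> str:
    """Send one question, keeping the full conversation as context for drill-down follow-ups."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(ask("Let's talk about chapter three. What happens, and what themes stand out?"))
print(ask("Pick one line from that chapter you find significant and unpack it for me."))
```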

Brett McKay: Yeah, that’s the way I think of it. It’s supplementing, not replacing. So what do you think is the future of AI? Where do you see it going?

Ethan Mollick: So I think it’s worth noting that the big thing that’s happened over the last few months is there have been a couple of technical breakthroughs in AI that make it much smarter and that are pretty easy to implement, so people have been implementing them. These are called reasoners, models that think before answering questions. It turns out that makes the AIs a lot smarter. And as a result of that, plus a few other breakthroughs, when you talk to people at the AI labs, and they talk about this publicly too, they genuinely believe that in the next couple of years, two to three years, they might be able to achieve AGI, artificial general intelligence, a machine smarter than a human at every intellectual task. I don’t know if they’re right. Nobody knows if they’re right. They might be, you know, high on their own supply, but they believe that this is true. The message to take away from that is that these systems will keep getting better. So I think there’s an advantage to learning what they’re good or bad at right now. But I also think we need to be flexible. The future is changing. I mean, it’s a very good time to be an entrepreneur.

It’s a very good time to try and learn more about the world. It’s a very good time to use this in your job to become much more successful, because a lot of people don’t realize what these things can do yet. But I don’t know what the future holds in the long term. I think these systems will keep getting smarter. They’ll still be jagged, not great at everything, but they are getting smarter.

Brett McKay: Well, Ethan, this has been a great conversation. Where can people go to learn more about the book and your work?

Ethan Mollick: So I’ve got a free Substack at oneusefulthing.org, and that’s probably the best way to keep up to date on AI. My book, Co-Intelligence, is available at every major bookstore, and I think it’s a fun read also. And I’m very active on social media, on Twitter and Bluesky and LinkedIn, so you can look for me there.

Brett McKay: Fantastic. Well, Ethan Mollick, thanks for coming on. It’s been a pleasure.

Ethan Mollick: Thank you. It’s been terrific.

Brett McKay: My guest today was Ethan Mollick. He’s the author of the book Co-Intelligence. It’s available on Amazon.com and in bookstores everywhere. You can learn more about his work at oneusefulthing.org. Also check out our show notes at aom.is/AI, where you’ll find links to resources so you can delve deeper into this topic.

Well, that wraps up another edition of the AOM podcast. Make sure to check out our website at artofmanliness.com, where you can find our podcast archives. And make sure to sign up for our new newsletter. It’s called Dying Breed. You can sign up at dyingbreed.net, and it’s a great way to support the show directly. And if you haven’t done so already, I’d appreciate it if you’d take one minute to give us a review on Apple Podcasts or Spotify. It helps out a lot. And if you’ve done that already, thank you. Please consider sharing the show with a friend or family member who you think would get something out of it. As always, thank you for the continued support. Until next time, this is Brett McKay, reminding you to not only listen to the AOM podcast, but to put what you’ve heard into action.
