
Artificial intelligence continues to make startling advances while becoming ever more integrated into millions of people’s lives. But as America’s most valuable companies pour enormous sums into a high-stakes race to dominate the field, many fundamental questions remain unanswered. Foremost among them: Will AI actually be as disruptive an economic force as its adherents say? To get some perspective on that question, I spoke with Ethan Mollick, a Wharton professor who frequently writes and comments on AI and its applications. Mollick is the author of the book Co-Intelligence: Living and Working With AI, and in a paper he co-authored, he pioneered the concept of AI’s “jagged frontier” — meaning that the technology excels in some areas while lagging, sometimes surprisingly, in others.

**You recently wrote on your Substack that “AIs have quietly crossed a threshold. They can now perform real, economically relevant work.” That was in reference to a study that gave both humans and AI agents a series of tasks across fields like finance, law, and retail and compared the results. The AIs were close to human level, and they’re getting better all the time. But you were careful to say that this doesn’t mean AI is close to replacing actual jobs, because it’s performing “tasks” and not jobs. Can you explain more about those limitations, and how long you think they will last?**

We coined the term “jagged frontier” in our paper from a while ago, where my friends at Harvard, MIT, and the University of Warwick and I did a study with Boston Consulting Group. AI is good at some stuff you wouldn’t expect and bad at some stuff you wouldn’t expect. What’s happened, which is what we kind of predicted, is that what it’s bad at has changed. It used to be terrible at math and now it’s winning math awards.

As researchers work on this and as the capabilities of the models increase more generally, I don’t think the jaggedness is going away. The shape of the jaggedness and how bad the jagged parts are is changing, but there’s still lots of unknown stuff about how to integrate AIs with a job. Can it go to a meeting for you? Do we still want to have meetings the way we did before? AI doesn’t have permanent memory at this point or the ability to learn on an ongoing basis. That holds back what it can do. So it’s hard to make predictions about where things end up in the medium term, but there are definitely still real gaps right now in AI ability.

**This probably gets into math I don’t understand, but why does that jaggedness persist? Why hasn’t AI figured out how to do everything equally well?**

Part of that is that large language models are weird. We don’t actually know why they’re as good as they are. All they do is statistically predict what word, or “token,” comes next in a sentence. We don’t actually have very good theories about why a system that statistically predicts the next token of language seems to be able to develop original thoughts and have theory of mind and all the other weird things AI can do. A few kinds of jaggedness we understand really well. The reason these systems were bad at math is that prediction alone wasn’t going to be good enough for that. But it turned out we could embed new forms of reasoning in the models, which solves the problem of being bad at math to a large extent. Hallucination, the problem of the AI making stuff up, is still there; it has not gone away. But larger AI models hallucinate less, so it becomes less of a problem.

So we know some of the reasons why these systems are good or bad at particular tasks, but we don’t understand all of them. And we’re really just at the very beginning of understanding what their biases might be and how you integrate them into work.

**You recently tweeted about a couple of papers, which predicted that true AGI-level AI equivalent to a human genius, if achieved, would eventually displace most human labor. It seems you don’t think this will happen anytime soon, so I’ll ask you a more meta question: What is the value of these kinds of bold predictions? This seems like something that is very popular in the AI world — to make very specific forecasts about its advancement.**

First of all, as everyone knows at this point, AI is big business. So there’s lots of people with lots of incentives. There’s people who want to raise money. There’s also just a lot of true believers. The people at the AI labs genuinely believe they’re building Artificial General Intelligence, a machine smarter than a human, over the next few years. Whether or not they can is a separate issue, but they believe it. So why are they saying this? Because they’re like, “we’re building this, and everyone has to be ready.” But they also can’t help themselves. AI people have, you know, famously predicted that there’d be no radiologists by now. I think there’s a misunderstanding of what jobs look like.

And by the way, when you look at the economists’ papers, it’s on the theoretical basis that there is a machine smarter than a human at any intellectual task. It can do all the work that a human could do — that’s the assumption. But there’s a long road between here and there. A computer scientist looks at the issue like: What does a radiologist do? They say they read x-rays, but they don’t think about all the other things that are bundled into the job. They also have to be able to make judgment calls, and they have to be able to interface with other people in the organization and deliver good and bad news and help troubleshoot issues when they arise and all these other things besides just reading an x-ray.

**There are very few jobs that are so one-way, where it’s all about producing data and that’s it. I don’t know if one even exists.**

Exactly. And that, I think, is the heart of why these predictions are so weird. It may very well be that you would never want only a human being reading an x-ray in a few years. But that doesn’t mean radiology disappears.

**And in fact, there are more radiologist openings than ever, right?**

Yeah. This is complicated. The other thing that’s complicated is we have good models for replacing manual labor. We saw what happened in the Industrial Revolution. But we don’t have great models for large-scale, general-purpose technologies that replace or supplement intellectual labor. We just don’t know. The internet was a different thing. We could talk about spreadsheets and how they changed the job of accountants. We could talk about the replacement of telephone workers in the 1930s. But we don’t have a good general model, so we’re all sort of flailing a little bit in the dark here.

**I’m just hoping journalism is safe for another year or two.**

Well, again, let’s say it can write a better article than you can. Let’s say it could do really good research. I think it’s worth asking, would that be enough to replace the job of journalists? And I would probably say no.

**The other thing the forecasts discount is people’s possible revulsion at that idea. I don’t think anyone wants to read New York for AI journalism. Maybe that’ll become normalized, but I don’t think anytime soon.**

It’s the same way as — there’s some early evidence that AI is a better grader than most humans, but I don’t let the AI grade my papers. I still grade them even though I’m a worse grader, because that’s my contract with my students. So social contracts are another piece that holds things back, and legal pieces. It’s a complicated world. At the same time, we shouldn’t discount — well, what does it mean when AI can write better than a human, and do the independent research piece, and interview someone? How do we start thinking about that? So there’s something happening. It’s hard to know exactly its dimensions.

**I don’t want to discount the impressiveness of what it can do, because what it can do is incredible. But the debate around it has become really polarized. A certain faction maintains that AI is actually completely worthless, and then there’s another side that acts like a hype machine.**

“All jobs will disappear.”

**Yeah. It’s difficult to be in the middle.**

That’s right. And then you’re pushed to make bold predictions. But the answer is we don’t know. And anyone who tells you they have certainty is probably wrong.

**You teach and study innovation, and you’re very well-versed in these AI models. So I imagine you get a lot of solicitations for advice from CEOs and business types. What do they most want to talk about these days? Do they not fully understand this technology? Are they unsure how or whether to deploy it?**

People are still worrying about whether there’s going to be a return on investment from this — those conversations are still happening, and there’s a whole adoption curve. But increasingly what I’m hearing is: “Look, we’re seeing early value.” You can’t help but look at programming and say, okay, there’s actually a pretty big impact from AI right now. And then the question is, “What do we do with this?” Founders are competing against the AI labs, so they’re asking, “What can I do that’s sustainable in the long term?” And big companies are asking the same question: “If the AI does a lot of stuff, what’s our competitive advantage against other people?” And I think a lot of the more forward-looking ones are viewing AI not just as an automation tool that replaces people but as a way of making us all perform better, happier kinds of work. It’s a little more rare than I’d like.

**I imagine a lot of CEOs are more wondering how they can use it to get rid of people.**

Both things are happening. The issue with getting rid of people is that no one knows how AI works for jobs very well. The actual experts inside your company are the ones who are going to figure out how to use AI. And if they think they’re going to get fired because they made themselves more efficient, they just won’t become more efficient, right? One of the answers to the puzzle about why we’re not seeing bigger AI impacts on productivity — which is partially timing and partially that it always takes time to do these things — is that if you were smart and AI wrote this article for you, you wouldn’t tell anyone, because why would you want anyone to know? So leaders also have to realize that innovation has to be incentivized and shared; otherwise there’s no incentive to innovate.

**What are the big AI labs and companies missing when they think about how their creation will affect work?**

I think the core people at these companies are coders at heart or startup people, and they don’t understand how most work operates. I don’t think they understand innately that work is complicated and contingent, and that being the smartest person in the room isn’t always the way to succeed at work. There are complicated interconnections between jobs and tasks and processes. There’s politics, there’s change management, there are a thousand things that make the real world complicated. And that is what slows down adoption of technology. That’s what is important for making technology matter. And I think that gets missed.

The idea is that if we just build the smartest tool, work will change overnight, and we can have AGI. Some economists, like Tyler Cowen, have said we have AGI already, that o3 and GPT-5 are smart enough already to count as AGI. But it still wouldn’t immediately change everyone’s work. Right now there is this hypothetical superintelligence where the machines can do anything, and that’s a very different beast. But I think it takes a long time for the actual impact — a long time being five or ten years. And there’ll be a lot of disruption during that time. But technology alone doesn’t change the world. It’s technology plus people and social systems.

**What you just said reminds me of the way these same kinds of people, with the coder mindset, approached the internet in the first place. It was like “Just connect the world and things will work out.”**

We were all optimists back then, thinking that if we gave everybody access to all the information in the world, obviously good things would happen. So we made some mistakes. I mean “we” collectively.

**No, it’s true, that was more of a society-wide thing. But I agree with you.**

I think the world’s complicated, but we also have a new technology that really could create fast change. And it’s a general-purpose technology, which means it affects everything. So we might see very rapid change in some industries and very slow change in others. This is a long-term, or medium-long-term, experience.

**One thing that strikes me is that there’s far less optimism about this than there was about the internet at the beginning, because of all those mistakes you mentioned. There’s a lot more dread. Some people are very happy about this stuff, but there’s definitely a general wariness of technology, too.**

Yeah, and also there are books coming out about how AI will murder you, and if enough people tell you that, it’s a legitimate thing to worry about. I think a lot of people see what’s coming here in a way that was hard with the internet. There’s a lot of possibilities for incredibly good things, but I think we’re going to see a mix of incredibly good and incredibly bad things. And our big work for the next five to 10 years is mitigating the bad and encouraging the good.

**You’re a professor. I keep reading things about how students can’t get through books anymore and whatnot. What do you think AI is doing to kids?**

First of all, education is always in crisis. So it’s not something new.

**And the world is always in crisis, too.**

Right, the end is always near. And maybe this is the time, but the end is always near. But I think there is a real effect on education. And I think it’s an interesting one. First of all, people were cheating before AI. We actually have evidence that the internet made cheating endemic in university settings and high-school settings, because you could look up information. So AI has made cheating easier. Sometimes people get deluded as they’re learning, when the AI tells them something and they’re not actually thinking for themselves. But I do think that there is a positive to this, too. We have early evidence that AI can be an incredible tutor when used properly.

So on the one hand, we have a technology that is undermining a lot of how we do teaching today. Take-home essays and the like are now much weaker than they were before. But at the same time, a universally available one-on-one tutor, used with guidance, has been the dream of education forever. We have absolute disruption in classrooms right now, but we also know what things will look like at the other end: flipped classrooms, which have always been the way to do this — active learning inside of class. Not passive lectures, but active experiences.

Outside of class, you’ll work with an AI tutor to make sure you’re up to speed on a concept. And education can be better as a result. And we can use blue books — I can make you write an essay live in class, with no computer access. We’ll figure that out. And I think the end result could be very good if we do a good job with this in the short term.

This interview has been edited for length and clarity.
