University education as we know it is over
Take-home assignments are dead, "one prompt away" is one prompt too far, and what we should do next

I’m talking, of course, about AI.
Some context first. I have a PhD in economics and have taught courses ranging from venture capital to macroeconomics. I left academia to lead a data team in a tech unicorn in June 2022—a quainter, pre-ChatGPT era. Last year, academia pulled me back in, and I began teaching undergraduate econometrics part-time.
My main job, however, is Research Manager at the Forecasting Research Institute. In this role, I constantly think about AI progress. The progress has been wild. In a recent study, we found that even superforecasters—folks with proven track records of accurate predictions—were caught off guard by AI progress. For example, they assigned just a 2.3% probability to an AI winning a gold medal at the International Mathematical Olympiad by 2025; models from several AI companies achieved this in July 2025.
Teaching at a university while tracking AI progress closely provides an interesting vantage point.
The view isn’t that pretty.
TL;DR: AI now solves university assignments perfectly in minutes. Students often use LLMs as a crutch rather than as a tutor, getting answers without understanding. To address these problems, I propose a barbell strategy: pure fundamentals (no AI) on one end, full-on AI projects on the other, with no mushy middle. Universities should focus on fundamentals.
I. View from the ground
I.A. Take-home assignments are dead
AI is now so good that take-home assignments are obsolete.
Last year (2024), I assigned a data project worth 30% of the final grade. My tactic was to make the assignment hard enough that AI couldn’t solve it but a smart undergraduate could. I spent many hours designing the data project; it’s here, if you’re interested.

This approach kind of worked last year. AI models couldn’t yet solve the problem in one go. Some students produced hilarious Frankenstein solutions—prompting AI for part 1(a), then separately for 1(b), then once more for 1(c), etc., because LLMs couldn’t handle the full assignment at once. The code was a mess with replicated steps everywhere.
That’s no longer true. I gave the data project PDF to two models (Claude Sonnet 4.5 and GPT-5). Both solved it more or less perfectly in minutes.1 Claude was particularly excellent, providing both the code and a lovely report.
Of course, I could make the data project harder. LLMs aren’t superhuman (yet). Given enough time, I could design something they’d struggle with. But that defeats the purpose of a take-home assignment. My goal is to provide something students can actually do and learn from. Making the assignment more ambiguous or ill-defined wouldn’t help either—students need scaffolding to learn.
With a heavy heart, I dropped take-home assignments entirely this year.
I.B. LLMs are great learning partners, but they’re mostly used as crutches
In theory, LLMs could be infinitely patient, infinitely knowledgeable tutors. You could talk to them about material you’re struggling with, request custom-generated quizzes, or turn boring course notes into exciting podcasts. Most leading LLMs, in fact, provide learning modes designed specifically for this.
Maybe that’s true for 5% of students. The rest just want to get stuff done as quickly as possible.
I saw clear examples of this in my classroom. I assigned two problem sets and asked students to solve them at home, then present solutions at the whiteboard. Students provided perfect solutions but often couldn’t explain why they did what they did. One student openly said “ChatGPT gave this answer, but I don’t know why.”
A single prompt would have resolved that! But many students don’t bother. “One prompt away” is often one prompt too far.
II. How should universities respond?
That depends on what you think university education is for.
If you buy the view that “university education is mostly signaling,” you don’t need to worry much. Make courses tough enough to filter students. AI doesn’t change that.
That’s not my view. University education serves multiple goals, including:
Provide practical skills for professional life;
Train critical thinking;
Build an informed citizenry;
Expose people to humanity’s greatest achievements.
Advanced AI does reduce the value of teaching practical skills (goal 1). However, that was never the main reason universities exist. For example, I teach asymptotic theory of least-squares estimators. Maybe 1–2 students will do a PhD where this matters. A few more will do data analysis professionally and might appreciate where t-statistics come from. But for most students, this stuff just isn’t that practically useful.
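For the curious, the result I have in mind is nothing exotic: it is the standard asymptotic-normality statement for the OLS estimator, sketched below in textbook form (under the usual i.i.d. and moment assumptions; this is just an illustration, not course material).

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Asymptotic normality of the OLS estimator (standard textbook statement,
% under i.i.d. sampling, E[x_i e_i] = 0, and finite fourth moments):
\[
\sqrt{n}\,\bigl(\hat{\beta} - \beta\bigr)
  \;\xrightarrow{d}\;
  N\!\bigl(0,\; Q^{-1} \Omega\, Q^{-1}\bigr),
\qquad
Q = \mathbb{E}\bigl[x_i x_i'\bigr],
\quad
\Omega = \mathbb{E}\bigl[\varepsilon_i^2\, x_i x_i'\bigr].
\]

% With a consistent standard error, the t-statistic for one coefficient is
% asymptotically standard normal; that is where the usual critical values
% come from, even when the errors themselves are not normal:
\[
t_j \;=\;
  \frac{\hat{\beta}_j - \beta_j}{\widehat{\operatorname{se}}\bigl(\hat{\beta}_j\bigr)}
  \;\xrightarrow{d}\; N(0,1).
\]

\end{document}
```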
Goals 2–4, though, remain valid even in a world with extremely advanced AI (say, artificial general intelligence, or AGI). Even if humans no longer push the knowledge frontier forward, you can still appreciate the beauty of the Central Limit Theorem. You want people who understand causality, even if AI runs all the analyses. You need citizens who can think critically about how we should live in an AGI world.

III. Why standard responses fail
A common response I get when I complain about AI is this: “Don’t work against AI; show students how to use it.”
This argument is appealing. People have worried that new technology would rot human brains since time immemorial. Socrates argued that writing would make us forgetful. He was wrong then, the argument goes, and the AI naysayers are wrong now.
I disagree. AI isn’t eliminating some boring, mechanical part of learning. It’s replacing the very core.
Consider coding. Pre-ChatGPT, you had to understand the problem, break it into steps, write code, debug, etc. Now you just ask Claude. Maybe you read the output. Maybe you don’t. The entire learning process—the entire difficult part—gets bypassed.
That’s why AI isn’t like books. Books require you to do the cognitive work. You have to read, understand, synthesize. Books are tools for thinking. AI makes thinking optional.
IV. Modest proposal: Barbell strategy
Waging a crusade against LLMs would be self-defeating. I use LLMs all the time, and they’re great. However, there’s a real trade-off between short-term gains and long-term costs. “No pain, no gain” is often exaggerated, but there’s truth to it. You need cognitive friction to train your mental muscles.
Here’s what I propose instead: a barbell strategy.

One end of the barbell: courses that are deliberately non-AI. Work through proofs by hand. Read academic papers. Write essays without AI. It’s hard, but you build mental strength.
The other end of the barbell: embrace AI fully for applied projects. Attend vibecoding hackathons. Build apps with Cursor. Use Veo to create videos. Learn to wield these tools effectively.
Universities should focus on the fundamentals. That’s our comparative advantage. Fundamentals don’t change quickly. The AI landscape, on the other hand, changes so rapidly that any specific lessons become obsolete within months. Last year, chain-of-thought prompting was state-of-the-art; now, with reasoning models, it’s likely counterproductive. Meanwhile, tech companies have every incentive to encourage frictionless adoption. We don’t teach TikTok—we shouldn’t teach ChatGPT, either.2
A barbell has heavy weights on both ends and nothing in the middle. The nothing-in-the-middle part is crucial. You don’t want the mushy middle where students “use AI responsibly” or instructors teach basic prompting as an afterthought. That’s the worst of both worlds. Students don’t build thinking skills, but they also don’t learn the full potential of AI.
Implementation details will vary, and I don’t have the answer for every possible edge case. Like, what about bachelor’s theses? Maybe they continue unchanged (with the understanding that students will heavily use AI), or maybe they get stricter oral defenses, or maybe they’re dropped altogether and replaced by comprehensive exams. But the principle should hold: fundamentals without AI, applied work with full AI, no mushy middle.
V. Conclusion
What happens if we get this wrong?
You may have seen that viral Financial Times graph showing reading and math scores declining worldwide.
A 2024 OECD study found that “literacy and numeracy skills among adults have largely declined or stagnated over the past decade in most OECD countries.” If you want more depression, read James Marriott’s Substack post, titled menacingly “The dawn of the post-literate society.”
We don’t know definitively what’s causing this decline. But suggestive evidence points to smartphones being partially to blame. If smartphones have caused this much trouble, the coming wave from AI will be far worse. TikTok and Snapchat may have messed with our attention spans, but that’s not the same ballgame as a wholesale replacement of thinking.
The barbell metaphor is a useful mental model, and not just for education. The more AI develops, the more important it will be to have spaces where we think without AI help and experience the full spectrum of cognitive friction. In the office, too, we’ll need to sometimes code without Cursor and write difficult emails without ChatGPT.
Universities are uniquely positioned to become a cognitive gym, a place to train deep thinking in the age of AI. But this requires taking a stance. By default, we’ll get a “mushy middle” compromise that serves no one in particular.
We can do better than that. Let’s pick a side of the barbell.
1. The models’ solutions had some tiny issues. For example, the data project requires some data that, while easily accessible on the internet, wasn’t automatically retrieved by the LLMs. However, it’s trivial for a student to attach a CSV file to the prompt.
2. Obviously, teaching the technical foundations of AI—neural networks, transformers, attention mechanisms—is valuable, even in a rapidly changing field. I’m talking about teaching tool usage, not technical understanding.



Thanks for a very thoughtful piece and a pithy takeaway: "We don’t teach TikTok—we shouldn’t teach ChatGPT, either."
What confounds me is how a professor like me (and many others), teaching at a large public state university with undergraduate classes that are typically no smaller than 100 students (and some, but not a lot of, TA help), can feasibly implement the no-AI, deep-thinking-and-fundamentals approach (with reasonable assessment to assign grades).
Would love to hear your thoughts (or those of anyone else who reads this) on what this could look like in practice for a professor at an R1 research university operating under the constraints of large class sizes, insufficient TA resources, and other expectations (i.e., publishing for continued advancement).
I've been coming around to this point of view, partly based on some of my own teaching. When I do digital storytelling, the class has two parts: high tech for video editing, sound recording, image work, etc.; no tech for generating story ideas and working with classmates. It's effective.
Historian Niall Ferguson has a similar approach, which he calls the Starship and Cloister model. Check my Substack for an open description, as his is behind a paywall.