LLMs and the end of the Soviet era of academic research
Facing up to the coming status collapse
To a first approximation, large language models have made it free to generate research. Our current generation of reasoning models can create shockingly good research outputs. Are they perfect? No. Is innovation in AI finished? Also no. With near-free production we can expect abundance.
Tyler Cowen has a nice framework for thinking through the meaning of events that considers which professions, organisations or practices rise and fall in status. The pandemic raised the status of healthcare workers, and lowered the status of the World Health Organisation, for instance. The FTX collapse raised the status of mainstream finance and truly decentralised blockchains, and lowered the status of effective altruism.
Abundant automated research production seems very bad for the status of researchers.
AI is disrupting research everywhere in the economy, across consulting, public policy, finance, marketing, law, and product management and development. But it is academic research that is most vulnerable to status collapse. Because academia is a profession obsessed with status: it is a field where intelligence is flaunted, and one we have overlaid with even more markers of status (the PhD that grants its holders the right to a privileged honorific, and the medieval orders of academic rank all the way up to full professor).
The rituals of academic research have left academia more exposed to disruption than any other research industry.
If producing research is trivial, then what happens to university research?
Research metrics for a research workforce
Universities have been in continual operation since at least the 11th century. Now, I’ve hung around enough conservatives to know not to bet against ancient institutions – there is information in survival. But the modern university is nothing like the University of Bologna in 1088. Indeed, it is nothing like the university 100 years ago.
In 1939 there were just under 14,000 students at Australian universities. In 2023 there were 1.6 million students.
As the student cohort has grown so has the staff. I’ve been reading some of the Menzies-era inquiries into education, and they are replete with observations about the remarkable growth in academic staff to service the growth in students. In 2025 there are around 45,000 academics (level B and above) working at Australian universities - plus the professional administrative staff.
The sheer size of the universities makes them a qualitatively different institution than they were prior to the Second World War. Mass education has affected everything that the university does and how it operates. Our practices of academic research are a reflection of this scale: the system has to manage an enormous workforce of employees who expect to do research as a condition of their employment as educators.
Many of the perversities that characterize academic research come from this scaling problem. Universities are not designed to optimize the quality or the impact of research. If you have a large workforce whose performance you need to manage, what matters are metrics and indices by which you can judge productivity in a pseudo-objective manner.
When I joined academia, I was naively shocked by the lack of interest that universities expressed about the content of the research that is done by their researchers.
With very few exceptions (the sorts of politically attractive innovations that senior leadership can tout to the education minister and parliamentarians), the most important aspect of research for academic researchers is the journal in which it is published.
The very understandable reason for this is that journals can be ranked. We can come up with metrics that distinguish a good journal from a bad one. Through rankings, research productivity is made legible to research managers who are otherwise not interested in the content of the research.
We’ve all read James C Scott. We all know how making things legible changes how well those things function.
Likewise, our grant agencies, like the Australian Research Council, are implicitly structured to function less as subsidies to research and innovation, and more as vehicles for the management of academic careers. Practicing academics have extremely fine-tuned and nuanced understandings of which grants are most prestigious, because the point of a grant is to receive the grant. Again, this is another mechanism to sort an academic workforce without managers having to decide for themselves what constitutes valuable research or valuable time spent researching.
There have been many attempts to reverse engineer better incentives into the research ecosystem in Australia.
For example, there are grant programs that are only available if a researcher can get a non-university partner (what we euphemistically call “industry”) to share the cost. There are incentives for academics to encourage their PhD students to take internships. There are promotions processes where an academic can make a case that they have translated their research into real-world outcomes.
But none of these mechanisms have made a dent in the basic calculation faced by academics with a research allocation in their workplan. The thing that matters is publishing in venues and receiving grants that are recognized by line managers as high quality.
The system is horrendously wasteful. The amount of time spent working to rule is astonishing. My conservative friends have had great fun complaining about the content of academic research. But what is shocking from the inside is how little the content actually matters.
We talk about research as if it is the pursuit of passion projects, but we assess it as if we are measuring crop yields.
No surprise that the status of university research has been going down for a long time.
Research needs to become more about coordination than production
With these incentives, academic researchers have been pretty quick to adopt AI (as I urged in 2023!). Most of the peer reviews I’ve received in the last two years have had the telltale signs of chatbots – the formal language, the vagaries, the cliches, the delving.
AI referee reports didn’t cause the peer review system to be absurd, but they make those absurdities particularly stark. When authors return autonomously generated responses to autonomously generated peer reviews it seems silly to have academics in the loop at all.
LLMs cannot displace all academic research. They cannot run clinical trials, or test material fire resistance. But in the social sciences (and the humanities, and law, and …) there are whole genres of research that LLMs can displace.
In economics, for instance, there are lots of scholars who have made a good career regressing one public dataset against another public dataset and then reverse engineering any correlations they find into the literature. This has been possible to do automatically at scale since ChatGPT’s Code Interpreter was released in May 2023. Robots are just more efficient p-hackers than humans. The same is true for a lot of theoretical and conceptual work.
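The multiple-comparisons problem behind this is easy to demonstrate. Below is a minimal sketch (pure Python, entirely simulated data, not drawn from any real study): generate a batch of independent noise series, correlate every pair, and count how many pairs clear a conventional significance threshold. Even with no true relationships, roughly 5% of comparisons will look "significant" – exactly the raw material an automated pipeline can spin into papers.

```python
# Toy illustration of p-hacking at scale: with enough pure-noise
# variables, some pairwise correlations will always pass p < 0.05.
import math
import random

random.seed(42)

N_VARS, N_OBS = 20, 50
# 20 independent Gaussian noise series: no true relationships exist.
data = [[random.gauss(0, 1) for _ in range(N_OBS)] for _ in range(N_VARS)]

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# For n = 50, a two-sided p < 0.05 corresponds to roughly |r| > 0.28.
CRITICAL_R = 0.28
hits = [(i, j) for i in range(N_VARS) for j in range(i + 1, N_VARS)
        if abs(pearson_r(data[i], data[j])) > CRITICAL_R]

total_pairs = N_VARS * (N_VARS - 1) // 2  # 190 comparisons in total
print(f"'Significant' correlations found: {len(hits)} of {total_pairs}")
```

At about 5% of 190 comparisons, the run typically reports a handful of spurious "findings", each of which could be written up as if it were the only regression ever attempted.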
It is so much easier to pump stuff out in this technological environment that the gating mechanisms we have used for half a century cannot hold. Research managers are going to have to radically redefine how they assess research productivity or simply submit to a world where academic research is much lower status.
Maybe we could say this is only a continuation of an existing trend: academic research has already been disrupted by 21st century communications. In some disciplines, publication is more of a formality, where prestige comes directly from releasing preprints on arXiv or SSRN. And universities have been trying to separate research from education for a while (first with the use of sessional staff and now with the development of “teaching intensive” full time roles with little research responsibility).
I think that one of the sources of this problem is the way we think about the purposes of academic research. Academic research is almost entirely production-centric. It is based around Sovietesque targets and quotas that are structured for the needs of research managers rather than the needs of research users. More technology can help researchers hit their quotas faster. But creating value from research requires aligning knowledge with human needs.
Economic value comes from coordination, not just production. What matters is not merely creating things, but ensuring they reach those who value them.
The Australian government has for a very long time been trying to push universities towards things like research commercialization. It has mostly failed to do so.1 There are simply too few incentives for academic researchers to focus on having their work read, let alone adopted, by the external world.
There are two paths from here. The first is that academic research is simply commoditized by AI. Researchers compete to produce research at lowest cost, using as many robots as they can to help. Research managers respond by increasing quotas. In this world there will still be research superstars – those who can adapt to this environment better than their peers. I expect the standards will go up even further for the top journals.
But overall the status of academic research will decline. Those who can’t keep up will be shuffled into more education intensive roles. Eventually the research-education nexus that has defined academia’s self-image will break.
The second path is that researchers move up the value chain and focus on how the commodity they produce is used in the economy. The best researchers will be those who are best at communicating their research to practitioners and the wider world - the researchers who specialise in coordination. They find value not from research production but from research use. They remain specialists in their field but generalists in application.
For me, the point about bond voting isn’t that it is an interesting idea, or a novel theoretical finding - let alone that it is published in a top management journal. The point is that it is an idea that can be used by the crypto industry right now to achieve their own goals. It came from the industry and I am working to deliver it back to the industry.
Successful research is research that is framed for and delivered to the audiences that can use it. This is something that the social sciences have been extraordinarily poor at. We don’t all have to start firms or build products. But to the extent that academic research has a future, that future must be one where research is more entrepreneurial.
Artificial intelligence is great for the economy of ideas. But an economy is about coordination, not production. If academic research cannot pivot to that, then it will have to accept its decline.
In 1993, a committee of vice chancellors decided that the intellectual property for academic research should be vested in the university rather than with individual academic researchers, ending what used to be called the professor's privilege.
The vice chancellors’ argument was that only the university had the capability and infrastructure to support the development and commercialization of products that came out of research, and that universities needed to be able to find some revenue to support their research program.
You can see how this argument would be practically and politically appealing. But you will be shocked to learn that it did not usher in a great age of university commercialization.