Innovation, science, technology research in Australia
A scrapbook of notes about how research is done in Australia
2024-04-13 — 2025-06-16
Research in Australia is weird, which is a thing you rapidly learn from inside it. Here are my notes on the economics and politics of research in Australia, including how it got this way, for my own understanding.
I am not an economist of scientific research. This notebook is for some thinkpieces I’ve collected along the way, but I have not done a thorough review and examined them for bias, quality etc.
For now I will take it as given that being good at science and technology is broadly a “good idea”; but if you don’t think science matters, or if you think state policy plays a minimal role in innovation, you will likely find this uninteresting.
At the risk of sounding opinionated about something I’ve already indicated I am not deeply qualified to comment on at a policy level, I think it is relatively uncontroversial that research funding in Australia is substandard for an advanced economy. You find a lot of people saying the system is broken and few cheerleaders for the status quo. That knowledge is folkloric, though. We all say that the publication incentives are “bad” and the funding is “inadequate”, but we, or at least I, don’t have a good idea of how bad it is, what the alternatives are, or how much they would cost.
That’s why I have this notebook. It’s a learning exercise for me to document my evolving understanding and learn what is going on.
1 Priors
I think the current consensus is probably not too far from
- we are broadly agreed about what is broken in the system but
- we have difficulty agreeing on how to fix it.
I assume this means that the effort of fixing the problems does not seem to decision-makers to be “worth” it, in terms of popular enthusiasm, political capital, or whatever metric you are supposed to use. Or even just risk: every big upheaval is a chance to make things worse, not better. I can imagine it is hard to fix a very large and very complicated system with many vested interests, many stakeholders, and little political enthusiasm. Even if everyone wants some kind of change, it would be tedious to get them all to agree on which changes. Maybe it is completely reasonable to spend a limited supply of political effort on fixing the hospitals or the schools or defence or whatever, and hope that the research system will sort itself out, or acquire greater public salience later.
2 Funding channels
- DISR & Dept of Education: Direct grants (Discovery, Linkage, Industrial Transformation) sit with DISR; block-grant schemes for universities (e.g. Research Training Program) with Education (arc.gov.au).
- CSIRO: Beyond its own mission research, CSIRO runs fellowships and accelerators to spin technologies into markets
- Universities: Performed $14 billion of R&D in 2022, cross-subsidised by international student fees to the tune of about $3 billion annually.
3 Tax Incentives & Industry Engagement
- R&D Tax Incentive: Refundable offset = company tax rate + 18.5% premium (for turnover under $20 m); non-refundable offset = company tax rate + 8.5% above that (australianreview.net).
- Total government R&D support: Forecast at $14.4 billion (0.52% of GDP) in 2024–25.
Top universities rely on fee income—roughly 25–34% of total revenue—to keep labs running and maintain global rankings.
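The offset arithmetic above can be sketched in a few lines of Python. The rates and the $20 m turnover threshold are the figures quoted in the bullet point; the real scheme has further wrinkles (intensity tiers, expenditure caps, the definition of aggregated turnover), so treat this as illustrative only.

```python
def rdti_offset_rate(turnover: float, company_tax_rate: float) -> float:
    """Return the R&D tax offset rate implied by the figures above.

    Companies under $20 m aggregated turnover get a refundable offset of
    tax rate + 18.5 percentage points; larger companies get a
    non-refundable offset of tax rate + 8.5 points.
    """
    if turnover < 20e6:
        return company_tax_rate + 0.185  # refundable offset
    return company_tax_rate + 0.085      # non-refundable offset

# A small company on the 25% rate gets a 43.5% refundable offset;
# a large company on the 30% rate gets a 38.5% non-refundable one.
print(f"{rdti_offset_rate(5e6, 0.25):.1%}")
print(f"{rdti_offset_rate(50e6, 0.30):.1%}")
```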
4 Australia’s research spend
AFAICT we account for the nation’s research spend in terms of both how much the state directly spends on it and how much business claims to spend on it.
Australia channels about 1.68% of GDP into R&D ($38.8 billion in 2021–22), a long way below the OECD average of 2.7%.
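Back-of-envelope, using only the figures above (and assuming they refer to the same year): $38.8 billion at 1.68% of GDP implies a GDP of roughly $2.3 trillion, so lifting R&D intensity to the OECD average would mean finding on the order of $24 billion more per year.

```python
# Back-of-envelope: what would closing the R&D intensity gap cost?
# Inputs are the figures quoted above (2021-22); everything else is derived.

rd_spend = 38.8e9        # total R&D spend, AUD
rd_intensity = 0.0168    # R&D as a share of GDP
oecd_average = 0.027     # OECD average R&D intensity

implied_gdp = rd_spend / rd_intensity
target_spend = implied_gdp * oecd_average
gap = target_spend - rd_spend

print(f"Implied GDP:   ${implied_gdp / 1e9:,.0f}b")
print(f"Spend at 2.7%: ${target_spend / 1e9:,.1f}b")
print(f"Annual gap:    ${gap / 1e9:,.1f}b")
```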
So we under-perform relative to other advanced economies. But how much you spend on research is not the only measure of effectiveness. Maybe Australia is twice as efficient as everyone else, so we do more research even with half the funds? Let’s find out!
drum roll
5 Bureaucratic overheads
Uh oh, that heading looks ominous.
Australia’s research is reputedly dogged by heavy administrative overheads that sap academic time and push up costs. Critics from the Australian Academy of Science and independent reviews warn that grant bodies like the ARC struggle under their own bureaucracy, while universities have hired an ever-rising tide of non-academic staff. Empirical and anecdotal studies describe growing “administrative burden” that directly pulls researchers away from core teaching and discovery work. Calls from Universities Australia and the OECD’s peer reviewers urge consolidation of programs and streamlining of grant processes if Australia is to lift its R&D efficiency closer to OECD peers.
5.1 Funding processes are complicated and wasteful
OECD urges Australia to simplify grant processes and bolster university-industry collaboration to lift productivity and innovation (OECD 2024)
A 2022 consultancy review (the Sheil review) found that:
“The ARC had only ‘limited’ ability to run an effective ‘principles-based’ national grants programme,” —Research Professional News, Nov 2023
Meanwhile, the ARC’s own discussion paper notes that the National Competitive Grants Program spans dozens of schemes and requires institutions and applicants to navigate a maze of guidelines and online systems. Professor Chennupati Jagadish, President of the Australian Academy of Science, warns:
“For years, Australia’s R&D system has been groaning under the weight of 14 government portfolios and 151 programs that, whilst well-intentioned, overlap and struggle to support our scientists”… —Australian Academy of Science, Dec 2024
5.2 High, Rising Administrative Burden
Several recent studies and commentaries document how universities have ballooned their back-office functions, often at the expense of academic efficiency:
Woelert (2023):
“Anecdotal evidence suggests that there is growing concern about increasing administrative burden within universities around the world. …some of the key changes that were meant to make universities more efficient may have inadvertently increased levels of administrative burden”.
I don’t know Woelert, but they write a lot about this.
The think tank IPA says:
“The higher education sector already has a bureaucracy problem. …non-academic staff numbers have grown 70 per cent faster than academic staff, and now cost Australia’s elite tertiary institutions … more than $3 billion in salaries each year”.
Universities Australia itself has recommendations:
“Reducing administrative overheads and creating synergy through appropriate Machinery of Government changes and program consolidation” (see UA’s response to Strategic Examination of Research and Development Discussion Paper)
There is a miniature academic field ranting about how much time is spent on administrative tasks (Woelert 2023; Woelert et al. 2025; Woelert and Stensaker 2025). Note however that this is not unique to Australia; see e.g. similar material from the US: Pearce (2025).
Also, solving bureaucracy is hard. I’m not actually sure there is anything special about universities here, except that maybe we should try all the solutions that work in other bureaucracies and see if they work here.
5.3 Impacts on Academics
The administrative drag shows up in multiple ways:
- Time Lost: Preparing, submitting and managing grants through systems like RMS demands detailed forms, budgeting exercises and compliance checks—tasks outside a researcher’s expertise.
- High Costs: Universities often allocate millions each year to hire compliance officers, research managers and reporting teams—funds that could otherwise support labs or PhD candidates.
- Duplication of Reporting: Academics must report on the same project to multiple agencies (ARC, NHMRC, institutional bodies), inflating overheads and delaying project milestones (Woelert 2023)
6 Start-up & VC Headwinds
6.1 VC Drought
Also received wisdom: it’s hard to bankroll startups here. Australia has seen a slowdown in IPOs (e.g., only 29 in 2024 vs. 87 in 2022, per some sources), potentially making exit pathways for startups more challenging (source: The Australian).
TODO: Update with most recent IPO data if available. Explore trends in early-stage vs. late-stage VC funding.
6.2 Tech companies leave
Or do they actually?
This is received wisdom, so I thought it would be easy to quantify, but I failed to find good data. Let’s look at anecdotes, I guess. Examples of Australian-founded tech companies (roughly 2000–2024) whose substantive R&D and/or strategic control reportedly shifted overseas, primarily after acquisition by foreign firms, include:
- Kaggle (Data Science): Founded 2010, acquired by Google in 2017, with operations and platform subsequently integrated into Google Cloud US.
- Afterpay (Fintech): Founded 2014, acquired by US-based Block (Square) in 2022. Post-acquisition, reports indicated significant global layoffs and brand consolidation, with a downsized Sydney presence.
- Radiata (Wireless Technology): Developed early Wi-Fi tech with CSIRO, founded 1997, acquired by Cisco (US) in 2000, with its Sydney R&D team and tech absorbed into Cisco’s global operations.
- Aconex (Construction Software): Launched 2000, acquired by Oracle (US) in 2017, and subsequently integrated into Oracle’s global construction software suite.
- Other reported examples from a similar period include: Lake Technology (Audio, acquired by Dolby US), Conry Tech (Green HVAC, reportedly relocated to Canada for funding opportunities), Fathom (Financial Analytics, acquired by The Access Group UK), Lucidity Software (Safety Tech, acquired by Ideagen UK).
Common themes:
- Acquisition-Driven Relocations: A frequent pattern is acquisition by larger US or UK firms, followed by the centralization of R&D and strategic decision-making overseas, sometimes leaving Australian offices as sales/support hubs or leading to their eventual phase-out.
- Funding & Scale: For some, access to larger overseas venture capital markets or the scale of global acquirers was a key factor. (e.g., Conry Tech cited Canadian government funding as a pull).
- Talent & Specialisation: Integration into global entities was sometimes linked to accessing broader talent pools or the specialized ecosystems of larger tech hubs.
This pattern of successful local innovation leading to acquisition and then a potential reduction of local R&D intensity raises questions. What are the trade-offs? Are there policy settings or investment strategies that could better incentivize the retention and growth of core R&D in Australia post-acquisition? The “Delaware Flip” phenomenon, where startups restructure into a US entity early on to attract US investment, probably needs analysing.
Overall, that was not quite the story I was expecting TBH. Maybe companies do start here but they fail to scale here? Do they fail to scale here relative to the size of the market? Maybe companies fail to start here so there are few companies that leave.
Work in progress, I guess.
7 Public attitudes
Biddle (2023):
This paper uses data collected as part of the ANUpoll series of surveys, collected in April 2023 with information on almost four-and-a-half thousand Australian adults. The paper describes Australians’ interests in science; their knowledge about science; support for and concerns about science; and their views on scientists and science policy. Data presented in the paper suggest that Australians have a deep and long-standing interest in science and that many are confident in their own knowledge, though not necessarily with some of the more recent technologies like AI. Although the vast majority of Australians think science has a positive impact, far fewer think science had a positive impact with regards to COVID-19. In addition, only 17.4 per cent of Australians agree or strongly agree that ’Artificial intelligence and automation will create more jobs than they eliminate.’ Australians still think that AI will have a positive effect but there is far less support for AI having a positive effect compared to solar or wind energy, or vaccines. They think that politicians should rely more on the advice of scientists, that scientists should be free to comment on government policies, and that governments should be funding science. But, they also don’t think that research conducted by industry is well controlled or regulated, and they certainly think that there should be limits on what science is allowed to investigate.
From the same poll, this cute explainer Who pays the cost for science? For the record, I have yet to see anything useful be deduced from “willingness to pay”-type surveys, but it’s what we have to work with.
8 CSIRO
CSIRO will get its own write up.
9 Data ecosystem
See Australia in data for my own notes on data of the kind I have been using.
10 Policy suggestions
An OECD peer review (OECD 2024) and domestic stakeholders alike advocate:
- Program Consolidation: Merging overlapping grant schemes to reduce the number of applications researchers must complete.
- Principles-Based Oversight: Shifting from prescriptive rules to outcome-focused frameworks to cut red tape.
- Strengthened Research Offices: Equipping universities with unified grant-support teams, rather than scattered units for each funding body.
Universities Australia claims that these steps are critical if Australia is to reverse its decline in R&D intensity (1.68% of GDP vs. the 2.7% OECD average) and better leverage its research talent pool.
11 Publication Incentives
Academic careers and university funding in Australia are significantly shaped by publication metrics. What are the main drivers? What do they incentivise? Common wisdom is that these metrics are problematic, widely gamed and possibly counter-productive but they are still widely used. This problem is not unique to Australia, and I am curious to know how other places fare.
- Excellence in Research for Australia (ERA): National assessment run by the Australian Research Council (ARC). ERA ratings influence university reputation and some block funding. Assesses publications and other outputs by discipline. [Source: ARC website on ERA].
- Global University Rankings (ARWU, THE, QS): These heavily influence university strategy. Many ranking methodologies prioritize publication counts in specific indexed international journals and citation metrics.
- Grant Applications (ARC & NHMRC): A strong track record of “high-quality” publications (often judged by venue prestige) is critical for funding success.
- Internal University Metrics: Promotion and performance assessment within universities often explicitly reward publications in “high-impact” or “Q1” journals.
The 2023 ARC Review (the “Trusty Review”) explicitly recommended the ARC should:
“Discontinue use of journal impact factor or other journal-level metrics in grant assessment processes” and “Discontinue use of university rankings in grant assessment processes, and advocate for similar change across the sector.” (Recommendation 9).
This signals official acknowledgement of problems, but the metrics are already baked into many layers of bureaucratic and personal performance indicators so there is no sign of the inertia being overcome any time soon.
- Journal Impact Factor (JIF) & Metric Over-reliance: Despite criticisms (e.g., DORA principles, which many Australian institutions endorse) and the ARC Review’s stance, JIF and journal quartiles are still widely used as proxies for research quality.
- Neglect of Local/National Relevance?: A strong focus on “world-leading” defined by international journal metrics can de-incentivize research primarily focused on Australian issues or specific local contexts, especially if these topics aren’t of immediate interest to top-tier global journals.
- The Universities Australia submission to the Strategic Examination of R&D noted: “Current incentives may not adequately reward research that addresses national priorities but [which] may not be published in the highest impact factor journals.” Source
- “Publish or Perish” Pressure: Remains intense, especially for Early and Mid-Career Researchers (EMCRs), driven by the need to meet these metric-focused expectations.
- The ARC Review also highlighted the need for “greater recognition of the diverse forms of research and a broader range of research outputs and impacts” (Key Finding 7).
- Focusing more on the intrinsic quality and significance of the research content, rather than predominantly on publication venue metrics would be nice. But I bet the question of how to measure that is a hotly contested one where no-one trusts anyone else not to do a worse/more biased job than is already done. This is essentially how the spoils are divided. Epistemic communities are difficult. Peer review is difficult.
I should probably mention the battleground that is the whole academic publishing industry.
Are there good international examples of national assessment systems that better balance these competing demands?
12 Incoming
TIL that national supercomputing centres etc. are managed by the Department of Education (!): National Collaborative Research Infrastructure Strategy (NCRIS) - Department of Education, Australian Government.