Scientist’s Survival Guide

How to navigate the research labyrinth without losing your mind, and, if selling your soul, at least getting a good price

2015-06-28 — 2025-07-27

Wherein the quotidian manoeuvres of research are set forth: tactics for navigating funding labyrinths, global post‑COVID seminars and networking rituals, and practical habits of mind.

academe
adaptive
collective knowledge
diy
how do science
mind
Figure 1

This is a guide to the messy, practical business of being a researcher. It’s less about the grand philosophy of science and more about the day-to-day tactics for surviving—and perhaps even thriving—within the strange institutions we’ve built. We’ll cover everything from navigating the funding labyrinth and the art of networking to the quieter, internal habits of mind that foster discovery.

1 Surviving the Institution

Details on surviving academia and adjacent institutions.

1.1 Post-COVID-19 seminars

Yes, research seminars are now even more global. See researchseminars.org.

1.2 General menus

Figure 2

1.3 Specific morsels

1.4 Funding

See research funding.

1.5 Communicate better

See communication and science communication.

1.6 Strategic ignorance

See strategic ignorance.

1.7 Networking hacks

Figure 3

Ah, networking — the part of the job that makes even the most extroverted researcher feel like a rare specimen being traded at a pet show, where for some reason the currency isn’t dollars but awkwardness. For those of us who’d rather be pipetting, the art of the schmooze can feel like a dark art. Here are some thoughts and tools on how to make connections without turning into a snake‑oil merchant, and how to do it efficiently enough that we can get back to the bit that feels more like actual work.

2 Habits of Mind

In which I collect tips from esteemed and eminent minds about how to proactively discover stuff. More meta-tips than specific agendas for discovery.

2.1 Habits of highly effective scientists

Classic:

  • Curiosity

According to SLIME MOLD TIME MOLD:

  • Stupidity
  • Arrogance
  • Laziness
  • Carefreeness
  • Beauty
  • Rebellion
  • Humour

Hardtowrite:

In science, the process is just as important as the discoveries. Improving our scientific processes will speed up our rate of discovery. Feyerabend claims contemporary research has over-indexed on processes such as the scientific method, and that this rigidity has restrained innovation. The crux of his book Against Method (Feyerabend and Hacking 2010) is that paradigm shifts in science stem from epistemological anarchism. Epistemology here refers to how beliefs are formed. This anarchy, for any Thomas Kuhn fans, is what is necessary to reach Kuhn’s fourth phase of science, the “revolutionary phase” in which new paradigms are created. In recent decades we have placed too much importance on science being consistent, while forgetting that paradigm shifts often come from those who refute mainstream assumptions. In other words, the geniuses who generated scientific paradigm shifts were anarchists to their contemporaries.

Jan Hendrik Kirchner, Via productiva

Be wary, be very wary indeed, of engaging with a Hydra problem. You might be able to identify a Hydra problem by keeping track of the branching factor: how many subproblems does each step of your derivation introduce? If this stays larger than 1, your approach is probably incorrect, or at least infeasible. This might appear obvious, but history is full of famous examples of researchers getting stuck on epicycles: they produced “pseudo-answers” to questions, but these opened more questions than they closed. I observe the pattern in my colleagues, my students, and myself.
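The branching-factor heuristic above can be sketched numerically. Here is a toy model (the function name and the sample factors are my own illustration, not from the quoted text): treat a derivation as a process in which resolving each open subproblem spawns, on average, some number of new ones; any factor that persistently exceeds 1 compounds into an explosion.

```python
def open_subproblems(branching_factor: float, steps: int) -> float:
    """Expected number of unresolved subproblems after `steps` rounds,
    assuming each resolved problem spawns `branching_factor` new ones."""
    count = 1.0  # start with a single problem
    for _ in range(steps):
        count *= branching_factor
    return count

# Below 1 the problem tree dies out; at exactly 1 you tread water;
# above 1 you are fighting a Hydra.
for b in (0.8, 1.0, 1.2):
    print(f"branching factor {b}: ~{open_subproblems(b, 20):.2f} open after 20 steps")
```

Even a mild factor of 1.2 leaves roughly 38 open subproblems after 20 steps, which is why the heuristic says to abandon or reframe an approach whose branching factor stays above 1.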

2.2 How precious is my idea?

My understanding of which parts of research are hard and which are valuable has changed.

Let us suppose I were worried about being “scooped” — beaten to publication — which is something people in science tend to worry about a little, though not as much as I imagined before I joined the profession myself. How valuable is an insight? It depends. An insight like “this might be worth trying, who knows” isn’t valueless, but the opportunity cost of trying it can be months of specialist time or years of a grad student’s time. Let us call such insights intuitions. Even when our confidence in an intuition is high, the cost of testing it is also high, so it’s not worth keeping this kind of thing quiet; rather, we want to share it as widely as possible so the work and risk can be spread among multiple researchers. Having good intuitions is useful, but if anything we want to cultivate the [TODO clarify]

If my idea is instead “I can definitely solve this particular problem in a particular way,” the value of this intelligence is higher. Let us call these solutions. Often, simply knowing a solution exists in a particular domain can lead someone else rapidly to a solution of their own, because it constrains the search enough to let other people find it quickly. Most of mathematics, at least for me, involves trying bone-headed things that turn out to be silly; merely narrowing the search space can make things drastically easier. In a competitive, idea‑stealing environment, then, knowledge of existing solutions is valuable intelligence. It’s not the whole road to a paper that will earn us credit for being clever (and I personally value a paper that has more than one great idea), but a good solution is an important chunk.

On the other hand, academia can often operate like a cooperative endeavour, or even a competitively cooperative potlatch, and giving away valuable things is a way to signal status and attract collaborators and funding.

Overcoming Bias: The Indirect-Check Sweet Spot

Tony Kulesa, in Tyler Cowen is the best curator of talent in the world, gives a fannish profile of Tyler Cowen, including ideas about how one becomes expert at identifying underexploited talent. Part of that process, interestingly, is not having a committee process, and being shamelessly personal-judgment-driven. Insert speculation here about the wisdom of crowds versus the madness of committees. Also, equity concerns.

2.3 Sidling up to the truth

As a researcher motivated by big-picture ideas (How can we survive on planet earth without being consumed in battles over dwindling resources and environmental crises?) as much as aesthetic ideas, I am sometimes considered damaged goods. Many claim a fine scientist is safely myopic; the job of discovery is detailed piecework.

On one hand, the various research initiatives that pay my way are tied to various real-world goals (“Predict financial crises!” “Tell us the future of the climate!”). On the other hand, researchers tell me it’s useless to try to solve these large issues wholesale; instead we should identify small, retail questions where we can hope to make progress. Meanwhile, they’ve just agreed to take a lot of money to solve big problems. In this school, then, the presumed logic is that we take a large research grant to shed light on lots of small problems that lie in the penumbra of the large issue, in the hope that one will flare up to illuminate the shadow, or burn the lot to the ground. The example given by the Oxonian scholar who most recently expounded this to me was Paul David and the path dependence of the QWERTY keyboard. It’s a deep issue — the contingency of the world — seen through the tiny window opened by substandard keyboard design.

Truth, in these formulations, is a cat: don’t look at it directly or it will perversely slope off to rub against someone else’s leg. Our acceptance is all in the sidling-up, the feigned disinterest, the waiting for truth to come up and show us its belly. I’m not sure I am persuaded by this. It’s the kind of science that would be expounded in an educational film directed by Alejandro Jodorowsky.

On the other hand, I’m not sure I buy the grant-makers’ side of this story either, at least the version they seem to tell in Australia, which is that they give out money to go and find something out. There are productivity outcomes on the application form where we fill out the goals that our research will fulfil; this rules out much research by restricting us largely to marginally refining a known-good idea rather than trying something new. I romantically imagine that in much research, we would not know what we were discovering in advance.

The compromise is that we meet in the middle and swap platitudes. We will “improve our understanding of X”, we will “find strategies to better manage Y”. We certainly don’t mention that we might spend a while pondering keyboard layouts when the folks ask us to work out how to manage a complex non-linear economy.

2.4 Disruption by field outsiders

Figure 4

Are fields plagued by hyperselection? Can we find fields ripe for disruption by outsiders with radical, out-of-the-box ideas? Do we need left-field eccentrics roaming about, asking if the emperor has clothes?

Is it just stirring the pot? How many physicists, to take one example, can get published by ignoring everyone else’s advances?

How do we know that our left-field idea is a radically simple idea that causes the entire field to advance? And how do we know that it’s not the crazed ramblings of someone who’s missed the advances of the last several decades, an inmate wandering out of the walled disciplinary asylum in a dressing gown, railing against the Vietnam War?

2.5 Optimizing versus satisficing

2.6 History and philosophy of [TODO clarify]

Check out the amusing curmudgeon D.C. Stove, Popper and After: Four Modern Irrationalists.

3 Research Prioritization

Figure 5: Thepracticaldev, Half listening to conference talks

A current meta-question. One starting point is John Schulman’s Opinionated Guide to ML Research, which discusses things like this:

4 Idea-Driven vs Goal-Driven Research

Roughly speaking, there are two different ways that you might go about deciding what to work on next.

  1. Idea-driven. Follow some sector of the literature. As you read a paper showing how to do X, you have an idea of how to do X even better. Then you embark on a project to test your idea.
  2. Goal-driven. Develop a vision of some new AI capabilities you’d like to achieve, and solve problems that bring you closer to that goal. (Below, I give a couple case studies from my own research, including the goal of using reinforcement learning for 3D humanoid locomotion.) In your experimentation, you test a variety of existing methods from the literature, and then you develop your own methods that improve on them.

John links to some other articles on this theme. Richard Hamming’s You and your research, based on his work at Bell Labs, offers some useful ideas about social engineering, genius, and effort.

On this matter of drive Edison says, “Genius is 99% perspiration and 1% inspiration.” He may have been exaggerating, but the idea is that solid work, steadily applied, gets you surprisingly far. The steady application of effort, with a little bit more work intelligently applied, is what does it. That’s the trouble; drive, misapplied, doesn’t get you anywhere. I’ve often wondered why so many of my good friends at Bell Labs, who worked as hard as or harder than I did, didn’t have so much to show for it.

James Propp on genius:

The notion of stereotype threat has gotten some press lately; I want to also bring people’s attention to the slightly less-discussed notion of solo-status.

This essay jumps off from Moon Duchin’s work on the sexual politics of genius (Duchin 2004).

Figure 6

Michael Nielsen, Principles of Effective Research:

People who concentrate mostly on self-development usually make early exits from their research careers. They may be brilliant and knowledgeable, but they fail to realize their responsibility to make a contribution to the wider community. The academic system usually ensures that this failure is recognised, and they consequently have great difficulty getting jobs. Although this is an important problem, in this essay I will focus mostly on the converse problem, the problem of focusing too much on creative research, to the exclusion of self-development.

4.1 Fostering Innovation at Project-level

See science projects.

4.1.1 Consultant as co-genius

Edward Kmett seems popular.

5 Incoming

6 References

Alon. 2009. “How to Choose a Good Scientific Problem.” Molecular Cell.
Arbesman, and Christakis. 2011. “Eurekometrics: Analyzing the Nature of Discovery.” PLoS Comput Biol.
Azoulay, Fons-Rosen, and Zivin. 2015. “Does Science Advance One Funeral at a Time?” Working Paper 21788.
Burroughs Wellcome Fund, and Howard Hughes Medical Institute. 2006. Making the Right Moves: A Practical Guide to Scientific Management for Postdocs and New Faculty.
Devezer, Nardin, Baumgaertner, et al. 2019. “Scientific Discovery in a Model-Centric Framework: Reproducibility, Innovation, and Epistemic Diversity.” PLOS ONE.
Duchin. 2004. “The Sexual Politics of Genius.”
Dyba, Kitchenham, and Jorgensen. 2005. “Evidence-Based Software Engineering for Practitioners.” IEEE Software.
Feyerabend, and Hacking. 2010. Against Method.
Gosztyla. 2022. “How to Manage Your Time as a Researcher.” Nature.
Kearns, and Gardiner. 2011. “The Care and Maintenance of Your Adviser.” Nature.
Montagnes, Montagnes, and Yang. 2022. “Finding Your Scientific Story by Writing Backwards.” Marine Life Science & Technology.
Newman. 2009. “The First-Mover Advantage in Scientific Publication.” EPL (Europhysics Letters).
Nissen, Magidson, Gross, et al. 2016. “Publication Bias and the Canonization of False Facts.” arXiv:1609.00494 [Physics, Stat].
Rekdal. 2014. “Academic Urban Legends.” Social Studies of Science.
Schwartz. 2008. “The Importance of Stupidity in Scientific Research.” Journal of Cell Science.
Smaldino, and O’Connor. 2020. “Interdisciplinarity Can Aid the Spread of Better Methods Between Scientific Communities.”
Thagard. 1993. “Societies of Minds: Science as Distributed Computing.” Studies in History and Philosophy of Modern Physics.
———. 1997. “Collaborative Knowledge.” Noûs.
———. 2005. “How to Be a Successful Scientist.” Scientific and Technological Thinking.
Weng, Flammini, Vespignani, et al. 2012. “Competition Among Memes in a World with Limited Attention.” Scientific Reports.
Westrum. 2013. Sidewinder: Creative Missile Development at China Lake.