Scientist’s Survival Guide

How to navigate the research labyrinth without losing your mind and, if selling your soul, at least getting a good price

2015-06-28 — 2025-07-27

academe
adaptive
collective knowledge
diy
how do science
mind
Figure 1

This is a guide to the messy, practical business of being a researcher. It’s less about the grand philosophy of science and more about the day-to-day tactics for surviving—and perhaps even thriving—within the strange institutions we’ve built. We’ll cover everything from navigating the funding labyrinth and the art of networking to the quieter, internal habits of mind that foster discovery.

1 Surviving the Institution

On the details of surviving the environment of academia and adjacent institutions.

1.1 Post-COVID-19 seminars

Yes, research seminars are now even more global. See researchseminars.org.

1.2 General menus

Figure 2

1.3 Specific morsels

1.4 Funding

See research funding.

1.5 Communicate better

See communication, science communication.

1.6 Strategic ignorance

See strategic ignorance.

1.7 Networking hacks

Figure 3

Ah, networking. The part of the job that makes even the most extroverted researcher feel like a rare specimen being traded at a pet show, where for some reason the currency is not dollars but awkwardness. For those of us who would rather be pipetting, the art of the schmooze can feel like a dark one. Here are some thoughts and tools on how to make connections without turning into a snake oil merchant, and how to do it efficiently enough that you can get back to the bit that feels more like actual work.

2 Habits of Mind

In which I collect tips from esteemed and eminent minds about how to go about proactively discovering stuff. More meta-tips than specific agendas for discovery.

2.1 Habits of highly effective scientists

Classic:

  • Curiosity

According to SLIME MOLD TIME MOLD:

  • Stupidity
  • Arrogance
  • Laziness
  • Carefreeness
  • Beauty
  • Rebellion
  • Humour

Hardtowrite:

In science, the process is just as important as the discoveries. Improving our scientific processes will speed up our rate of discovery. Feyerabend claims contemporary research has over-indexed on processes such as the scientific method, and that this rigidity has restrained innovation. The crux of his book Against Method (Feyerabend and Hacking 2010) is that paradigm shifts in science stem from epistemological anarchism. Epistemology refers to the formation of beliefs. This anarchy, to any Thomas Kuhn fans, is what is necessary to reach Kuhn’s fourth stage of science, the ‘revolutionary phase’ in which new paradigms are created. In recent decades we have placed too much importance on science being consistent, while forgetting that paradigm shifts often come from those who refute mainstream assumptions. In other words, the geniuses who generated scientific paradigm shifts were anarchists to their contemporaries.

Jan Hendrik Kirchner, Via productiva

Be wary, be very wary indeed, of engaging with a Hydra problem. You might be able to identify a Hydra problem by keeping track of the branching factor: how many subproblems does each step of your derivation introduce? If this stays larger than 1, your approach is probably incorrect, or at least infeasible. This might appear obvious, but history is full of famous examples of researchers getting stuck on epicycles: they produced “pseudo-answers” to questions, but these opened more questions than they closed. I observe the pattern in my colleagues, my students, and myself.
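The branching-factor heuristic can be made concrete with a toy calculation (my own sketch, not from Kirchner’s post): if each derivation step opens on average b new subproblems, the number of open subproblems after a given depth grows like b to the power of the depth, so any sustained b > 1 means exponential proliferation — a Hydra.

```python
# Toy illustration of the branching-factor heuristic (illustrative sketch;
# the function name and numbers are my own, not from the quoted source).
# If each derivation step spawns `b` subproblems on average, the expected
# number of open subproblems after `depth` steps is b**depth.

def open_subproblems(branching_factor: float, depth: int) -> float:
    """Expected open subproblems after `depth` derivation steps."""
    return branching_factor ** depth

# b = 0.5: each step closes more than it opens; the problem shrinks away.
print(open_subproblems(0.5, 10))  # 0.0009765625
# b = 2: a Hydra -- ten steps in, roughly a thousand heads.
print(open_subproblems(2, 10))    # 1024
```

The point of the toy model is the asymmetry: a branching factor even slightly above 1 compounds ruinously, while anything below 1 guarantees the derivation eventually terminates.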

2.2 How precious is my idea?

My understanding of which parts of research are hard, and which things are valuable, has changed.

Let us suppose that I was worried about being “scooped” — beaten to publication — which is something people in science tend to worry about, though less than I imagined before I joined in myself. How valuable is an insight? It depends. An insight like “this might be worth trying, who knows” is not valueless, but the opportunity cost of trying it is possibly months of specialist time or years of grad-student time. Let us call such insights intuitions. Even if the certainty is high, the cost of exercising them is also high, so it is not worth keeping this kind of thing quiet; rather, you want to share it as widely as possible so the work and risk can be shared between multiple researchers. Having good intuitions is useful, but if anything you want to cultivate a reputation for giving them away.

If my idea is “I can definitely solve this particular problem in a particular way,” the value of this intelligence is higher. Let us call these solutions. Often, simply knowing a solution exists in a particular domain can lead someone else rapidly to it, because it constrains the search enough for other people to find it quickly. This is because most of mathematics, at least for me, is trying bone-headed things that turn out to be silly, and merely narrowing the search space can make things drastically easier. So if you were in a competitive, idea-stealing environment, the existence of a solution tends to be valuable intelligence. It is not the whole way to a paper which will get you credit for being clever (and I personally value a paper which has more than one great idea in it), but a good solution is an important chunk.

On the other hand, academia can often operate like a cooperative endeavour, or even a competitively cooperative potlatch, in which giving away valuable things is a way to signal status and attract collaborators and funding.

Overcoming Bias: The Indirect-Check Sweet Spot

Tony Kulesa, in Tyler Cowen is the best curator of talent in the world, gives a fannish profile of Tyler Cowen, including ideas about how one might become expert at identifying underexploited talent. Part of that process, interestingly, is not having a committee process, and being shamelessly personal-judgment-driven. Insert speculation here about the wisdom of crowds versus the madness of committees. Also, equity concerns.

2.3 Sidling up to the truth

As a researcher motivated by big picture ideas (How can we survive on planet earth without being consumed in battles over dwindling resources and environmental crises?) as much as aesthetic ideas, I am sometimes considered damaged goods. A fine scientist, many claim, is safely myopic, the job of discovery being detailed piecework.

On one hand, the various research initiatives that pay my way are tied to various real-world goals (“Predict financial crises!” “Tell us the future of the climate!”). On the other, the researchers involved tell me that it is useless to try to solve these large issues wholesale; one must instead identify small retail questions on which one can hope to make progress. Then again, they have just agreed to take a lot of money to solve big problems. In this school, then, the presumed logic is that one takes a large research grant to strike a light on lots of small problems that lie in the penumbra of the large issue, in the hope that one of them flares up to illuminate the shade. Or burns the lot to the ground. The example given by the Oxonian scholar who most recently expounded this to me was Paul David and the path dependence of the QWERTY keyboard: a deep issue of the contingency of the world, seen through the tiny window opened by substandard keyboard design.

Truth, in these formulations, is a cat: don’t look at it directly or it will perversely slope off to rub against someone else’s leg. The art is all in the sidling-up, the feigned disinterest, the waiting for truth to come up and show you its belly. I’m not sure I am persuaded by this. It’s the kind of science that would be expounded in an educational film directed by Alejandro Jodorowsky.

On the other hand, I’m not sure that I buy the grant-makers’ side of this story either, at least the story that grant-makers seem to expound in Australia, which is that they give out money for you to go out and find something out. There are productivity outcomes on the application form where you fill out the goals that your research will fulfil. This rules out much of the research actually done, by restricting you largely to marginally refining a known-good idea rather than trying something new. I romantically imagine that in much research, you would not know what you were discovering in advance.

The compromise is that we meet in the middle and swap platitudes. We will “improve our understanding of X”, we will “find strategies to better manage Y”. We certainly don’t mention that we might spend a while pondering keyboard layouts when the folks ask us to work out how to manage a complex non-linear economy.

2.4 Disruption by field outsiders

Figure 4

Are fields plagued by hyperselection? Can we find fields that are ripe for disruption by outsiders with radical out-of-the-box ideas? Do we need left-field eccentrics to roam about asking if the emperor has clothes?

Or is it just stirring the pot? How many physicists, to choose an example, can get published while ignoring everyone else’s advances?

How do you know that your left-field idea is a radically simple one that causes the entire field to advance? And how do you know that it is not the crazed ramblings of someone who has missed the advances of the last several decades, an inmate wandering out of the walled disciplinary asylum in a dressing gown, railing against the Vietnam War?

2.5 Optimising versus satisficing

2.6 History and philosophy of

Check out an amusing curmudgeon: D.C. Stove, Popper and After: Four Modern Irrationalists.

3 Research Prioritization

Figure 5: Thepracticaldev, Half listening to conference talks

A current meta-question. One starting point: John Schulman’s An Opinionated Guide to ML Research, which discusses questions like this:

4 Idea-Driven vs Goal-Driven Research

Roughly speaking, there are two different ways that you might go about deciding what to work on next.

  1. Idea-driven. Follow some sector of the literature. As you read a paper showing how to do X, you have an idea of how to do X even better. Then you embark on a project to test your idea.
  2. Goal-driven. Develop a vision of some new AI capabilities you’d like to achieve, and solve problems that bring you closer to that goal. (Below, I give a couple case studies from my own research, including the goal of using reinforcement learning for 3D humanoid locomotion.) In your experimentation, you test a variety of existing methods from the literature, and then you develop your own methods that improve on them.

John links to some other articles on this theme. Richard Hamming’s You and Your Research, based on his work at Bell Labs, voices some useful ideas about social engineering, genius and effort.

On this matter of drive, Edison says, “Genius is 99% perspiration and 1% inspiration.” He may have been exaggerating, but the idea is that solid work, steadily applied, gets you surprisingly far. The steady application of effort, with a little bit more work intelligently applied, is what does it. That’s the trouble; drive, misapplied, doesn’t get you anywhere. I’ve often wondered why so many of my good friends at Bell Labs, who worked as hard as or harder than I did, didn’t have so much to show for it.

James Propp, on genius:

The notion of stereotype threat has gotten some press lately; I want to also bring people’s attention to the slightly less-discussed notion of solo-status.

This essay jumps off from Moon Duchin’s on the sexual politics of genius (Duchin 2004).

Figure 6

Michael Nielsen, Principles of Effective Research:

People who concentrate mostly on self-development usually make early exits from their research careers. They may be brilliant and knowledgeable, but they fail to realize their responsibility to make a contribution to the wider community. The academic system usually ensures that this failure is recognised, and they consequently have great difficulty getting jobs. Although this is an important problem, in this essay I will focus mostly on the converse problem, the problem of focusing too much on creative research, to the exclusion of self-development.

4.1 Fostering Innovation at the Project Level

See science projects.

4.1.1 Consultant as co-genius

Edward Kmett seems popular.

5 Incoming

6 References

Alon. 2009. “How to Choose a Good Scientific Problem.” Molecular Cell.
Arbesman, and Christakis. 2011. “Eurekometrics: Analyzing the Nature of Discovery.” PLoS Comput Biol.
Azoulay, Fons-Rosen, and Zivin. 2015. “Does Science Advance One Funeral at a Time?” Working Paper 21788.
Burroughs Wellcome Fund, and Howard Hughes Medical Institute. 2006. Making the Right Moves: A Practical Guide to Scientific Management for Postdocs and New Faculty.
Devezer, Nardin, Baumgaertner, et al. 2019. “Scientific Discovery in a Model-Centric Framework: Reproducibility, Innovation, and Epistemic Diversity.” PLOS ONE.
Duchin. 2004. “The Sexual Politics of Genius.”
Dyba, Kitchenham, and Jorgensen. 2005. “Evidence-Based Software Engineering for Practitioners.” IEEE Software.
Feyerabend, and Hacking. 2010. Against Method.
Gosztyla. 2022. “How to Manage Your Time as a Researcher.” Nature.
Kearns, and Gardiner. 2011. “The Care and Maintenance of Your Adviser.” Nature.
Montagnes, Montagnes, and Yang. 2022. “Finding Your Scientific Story by Writing Backwards.” Marine Life Science & Technology.
Newman. 2009. “The First-Mover Advantage in Scientific Publication.” EPL (Europhysics Letters).
Nissen, Magidson, Gross, et al. 2016. “Publication Bias and the Canonization of False Facts.” arXiv:1609.00494 [Physics, Stat].
Rekdal. 2014. “Academic Urban Legends.” Social Studies of Science.
Schwartz. 2008. “The Importance of Stupidity in Scientific Research.” Journal of Cell Science.
Smaldino, and O’Connor. 2020. “Interdisciplinarity Can Aid the Spread of Better Methods Between Scientific Communities.”
Thagard. 1993. “Societies of Minds: Science as Distributed Computing.” Studies in History and Philosophy of Modern Physics.
———. 1997. “Collaborative Knowledge.” Noûs.
———. 2005. “How to Be a Successful Scientist.” Scientific and Technological Thinking.
Weng, Flammini, Vespignani, et al. 2012. “Competition Among Memes in a World with Limited Attention.” Scientific Reports.
Westrum. 2013. Sidewinder: Creative Missile Development at China Lake.