Survival analysis and reliability

Estimating survival rates

Here’s the set-up: given a data set of individuals’ lifespans, we would like to infer their distribution; that is, to analyse when people die, machines break, and so on. The statistical problem of estimating lifetime distributions is complicated somewhat by the particular structure of the data (loosely, “every person dies at most once”), and certain characteristic difficulties arise, such as right-censoring. (If you are looking at data from an experiment and not all your subjects have died yet, they presumably die later, but you don’t know when.)

Handily, the tools one invents to solve this kind of problem turn out to be useful for other problems, such as point process inference, and look not so far from the relations between densities and intensities.

So let’s say you have a random variable $$X$$ with positive support, according to which the lifetimes of your people (components, machines, whatever) are distributed, and which possesses a pdf $$f_X(t)$$ and cdf $$F_X(t)$$.

We define several useful functions:

The survival function (which is also the right tail CDF)
$$S(t):=1-F(t)$$
the hazard function
$$\lambda(t):=f(t)/S(t)$$
the cumulative hazard function
$$\Lambda(t) :=\int_0^t\lambda(s) \textrm{d} s.$$

Why? Because things happen to come out nicely if we do that, and these functions acquire intuitive interpretations once we squint at them a bit. The survival function is the probability of an individual surviving beyond time $$t$$, and so on. The hazard function will turn out to be the instantaneous rate of death at time $$t$$ given survival up to that time.
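For example, an exponential lifetime with rate $$r$$, i.e. $$f(t)=re^{-rt}$$, has

$S(t)=e^{-rt},\quad \lambda(t)=\frac{re^{-rt}}{e^{-rt}}=r,\quad \Lambda(t)=rt,$

a constant hazard, which is the memoryless property: the instantaneous death rate does not depend on age.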

Noting that $$\lambda(t)=f(t)/S(t)=-\frac{\textrm{d}}{\textrm{d} t}\log S(t),$$ we can integrate to find the following useful relation:

$S(t)=\exp[-\Lambda (t)]={\frac {f(t)}{\lambda (t)}}$
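A quick numerical sanity check of these relations, here for a Weibull lifetime (the distribution and parameters are chosen purely for illustration), with the cumulative hazard computed by quadrature:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# a Weibull lifetime, chosen purely for illustration
dist = stats.weibull_min(c=1.5)
t = 2.0

f = dist.pdf(t)
S = dist.sf(t)   # survival function S(t) = 1 - F(t)
lam = f / S      # hazard lambda(t) = f(t) / S(t)

# cumulative hazard Lambda(t) by numerical integration of the hazard
Lam, _ = quad(lambda s: dist.pdf(s) / dist.sf(s), 0.0, t)

assert np.isclose(S, np.exp(-Lam))  # S(t) = exp(-Lambda(t))
assert np.isclose(f, lam * S)       # f(t) = lambda(t) S(t)
```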

The hazard function can be pretty much any non-negative function with non-negative support whose integral diverges, i.e. $$\Lambda(t)\to\infty$$ as $$t\to\infty$$ if lifetimes are to be almost-surely finite (or, more generally, a Schwartz distribution, but let’s ignore that possibility for the moment).

Life table method

Over intervals of time $$[t,u]$$ we define (writing $$h:=\lambda$$, $$H:=\Lambda$$ and $$\chi:=S$$ for this section) the cumulative hazard increment

$H(t,u) :=\int_t^u h (s) \textrm{d} s = H(u)-H(t)$

and the survival increment

$\chi(t,u) :=\frac{\chi(u)}{\chi(t)}$

The following relations are useful

$\chi(t)=\exp[-H (t)]={\frac {f(t)}{h (t)}}.$

and

$\chi(t,u)=\frac{\exp[-H (u)]}{\exp[-H (t)]}=\exp[H (t)-H (u)]=\exp[-H (t,u)]$

and so

$-\log\chi(t,u)=H (t,u).$

We estimate the hazard via the life table method. Given a time interval $$[t_{i}, t_{i+1})$$ and survival counts $$N(t_{i})$$ and $$N(t_{i+1})$$ at, respectively, the beginning and end of that interval, and assuming no immigration, the life table estimate of a survival increment is

$\hat{\chi}(t_i, t_{i+1}):= \frac{N(t_{i+1})}{N(t_{i})}$

Plugging this in, we obtain cumulative hazard increment estimates

\begin{aligned} \hat{H} (t_i, t_{i+1})&=-\log \hat{\chi}(t_i, t_{i+1})\\ &=\log \frac{ N(t_{i}) }{ N(t_{i+1}) } \end{aligned}

From this we construct further point estimates of $$H$$ at $$t\in[0, t_1, t_2,\dots]$$ as

$\hat{H} (t)=\sum_{t_i\leq t}\hat{H}(t_{i},t_{i+1})$

By introducing assumptions on the functional form, we can estimate the entire hazard function. For example, we can take $$h (t)$$ to be piecewise constant, so that

\begin{aligned} h (t)=\sum_i\mathbb{I}\{t_{i}<t<t_{i+1}\} h_i \end{aligned}

This corresponds to the assumption that $$H$$ is piecewise linear and continuous; we are constructing a piecewise linear interpolant. That is, we construct such an interpolant $$\hat{H}$$ for $$t\in[0,t_M]$$ as a first-order polynomial spline with knots $$0,t_1,t_2,\dots, t_M$$ and values $$\hat{H}(0), \hat{H}(t_1), \hat{H}(t_2), \dots,\hat{H}(t_M).$$
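The whole life-table pipeline fits in a few lines. Here is a sketch with made-up survival counts (assuming, as above, no immigration and no within-interval censoring):

```python
import numpy as np

# knot times and survival counts at each knot (synthetic data)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
N = np.array([100, 80, 65, 50, 30])

# cumulative-hazard increments H_hat(t_i, t_{i+1}) = log(N_i / N_{i+1})
increments = np.log(N[:-1] / N[1:])

# point estimates H_hat(t_i) at the knots, with H_hat(0) = 0
H_hat = np.concatenate([[0.0], np.cumsum(increments)])

# the piecewise-constant-hazard assumption makes H_hat piecewise linear,
# so values between knots come from linear interpolation
def H_interp(s):
    return np.interp(s, t, H_hat)

# the increments telescope: H_hat(t_M) = log(N(0) / N(t_M))
assert np.isclose(H_hat[-1], np.log(100 / 30))
```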

Nelson-Aalen estimates

a.k.a. Empirical Cumulative Hazard Function estimator.

The original Aalen paper (Aalen 1978) is famously beautiful because of its clever construction of a lifetime point process and an associated martingale. Clear and worth reading. Spoiler: despite the elegant derivation, the actual estimator is something a high-school student could discover by guessing.
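To preview the punchline: at each observed death time the estimator adds the number of deaths divided by the number still at risk, $$\hat{\Lambda}(t)=\sum_{t_i\le t} d_i/n_i$$, and censored subjects simply drop out of the risk set. A from-scratch sketch on made-up data (a real analysis would use an established package):

```python
import numpy as np

def nelson_aalen(times, observed):
    """Nelson-Aalen cumulative hazard from possibly right-censored data.

    times:    observed durations
    observed: 1 if the death was observed, 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    event_times = np.unique(times[observed])
    increments = []
    for s in event_times:
        n_at_risk = np.sum(times >= s)       # alive just before s
        d = np.sum((times == s) & observed)  # deaths at exactly s
        increments.append(d / n_at_risk)     # the estimator's increment
    return event_times, np.cumsum(increments)

# toy data: five deaths and one right-censored subject
times = [1.0, 2.0, 2.0, 3.0, 4.0, 5.0]
observed = [1, 1, 1, 0, 1, 1]
ts, H_hat = nelson_aalen(times, observed)
```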

TBC.

Other reliability stuff

Reliawiki has handy stuff, e.g. comprehensive documentation on the Weibull law. It’s in support of some software package they are trying to sell, I think?

We can calculate an “effective age” if we want an intuitive risk measure.

Score function versus hazard function

• The score function and log-hazard rate are similar beasts. We can exploit that, e.g. in a Langevin dynamics algorithm? But would we gain anything useful from that?
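Concretely, since $$f=\lambda S$$ and $$S=\exp(-\Lambda)$$, the (time) score of a lifetime density decomposes in terms of the hazard:

$\frac{\textrm{d}}{\textrm{d} t}\log f(t)=\frac{\textrm{d}}{\textrm{d} t}\left[\log\lambda(t)-\Lambda(t)\right]=\frac{\lambda'(t)}{\lambda(t)}-\lambda(t),$

so a model for the log hazard gives us the score after one differentiation, which is the ingredient a Langevin-type sampler would need.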

Incoming

Social Desirability Bias: How Psych Can Salvage Econo-Cynicism

Social desirability bias is the tendency of respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting “good” behavior or under-reporting “bad” or undesirable behavior. The tendency poses a serious problem for research based on self-reports, especially questionnaires. This bias interferes with the interpretation of average tendencies as well as individual differences.

References

Aalen, Odd O. 1978. The Annals of Statistics 6 (4): 701–26.
Aalen, Odd O., Ørnulf Borgan, and S. Gjessing. 2008. Survival and Event History Analysis: A Process Point of View. Statistics for Biology and Health. New York, NY: Springer.
Andersen, Per Kragh, Ornulf Borgan, Richard D. Gill, and Niels Keiding. 1997. Statistical models based on counting processes. Corr. 2. print. Springer series in statistics. New York, NY: Springer.
Andersen, Per Kragh, and Niels Keiding. 2014. In Wiley StatsRef: Statistics Reference Online. American Cancer Society.
Andersen, Per K., and Michael Vaeth. 2015. In Wiley StatsRef: Statistics Reference Online, 1–14. American Cancer Society.
2011. In Applied Survival Analysis, 355–58. John Wiley & Sons, Ltd.
Bagnoli, Mark, and Ted Bergstrom. 1989. “Log-Concave Probability and Its Applications,” 17.
Brenner, H., O. Gefeller, and S. Greenland. 1993. Epidemiology (Cambridge, Mass.) 4 (3): 229–36.
Cox, D. R. 1972. Journal of the Royal Statistical Society: Series B (Methodological) 34 (2): 187–202.
Cox, D. R, and D. O Oakes. 2018. Analysis of Survival Data.
Cutler, S. J., and F. Ederer. 1958. Journal of Chronic Diseases 8 (6): 699–712.
Deddens, James A., and Gary G. Koch. 2014. In Wiley StatsRef: Statistics Reference Online. American Cancer Society.
Efron, Bradley. 1988. Journal of the American Statistical Association 83 (402): 414–25.
Fink, Scott A, and Robert S. Brown. 2006. Gastroenterology & Hepatology 2 (5): 380–83.
Griffiths, Thomas L., and Zoubin Ghahramani. 2011. Journal of Machine Learning Research 12 (32): 1185–1224.
Hjort, Nils Lid. 1990. The Annals of Statistics 18 (3): 1259–94.
———. 1992. International Statistical Review / Revue Internationale de Statistique 60 (3): 355–87.
Hjort, Nils Lid, Mike West, and Sue Leurgans. 1992. In Survival Analysis: State of the Art, edited by John P. Klein and Prem K. Goel, 211–36. Nato Science 211. Springer Netherlands.
Hosmer, David W., and Stanley Lemeshow. 1999. Applied Survival Analysis: Regression Modeling of Time to Event Data. Wiley Series in Probability and Statistics. New York: Wiley.
Hosmer, David W., Stanley Lemeshow, and Susanne May. 2008. In Applied Survival Analysis: Regression Modeling of Time-to-Event Data. Wiley Series in Probability and Statistics. Hoboken, NJ, USA: John Wiley & Sons, Inc.
Klein, John P. 2014. In Wiley StatsRef: Statistics Reference Online. American Cancer Society.
Kleinbaum, David G. 2010. Survival Analysis: A Self-Learning Text. Statistics for Biology and Health 1.0. Springer.
Laird, Nan, and Donald Olivier. 1981. Journal of the American Statistical Association 76 (374): 231–40.
Lu, W., Y. Goldberg, and J. P. Fine. 2012. Biometrika 99 (3): 717–31.
Nelson, Wayne. 1969. Journal of Quality Technology 1 (1): 27–52.
———. 2000. Technometrics 42 (1): 12–25.
Omi, Takahiro, Naonori Ueda, and Kazuyuki Aihara. 2020. arXiv:1905.09690 [Cs, Stat], January.
2011. In Applied Survival Analysis, 244–85. John Wiley & Sons, Ltd.
Peng, Limin. 2021. Annual Review of Statistics and Its Application 8 (1): 413–37.
Peterson, Arthur V. 1977. Journal of the American Statistical Association 72 (360): 854–58.
Pölsterl, Sebastian. 2020. Journal of Machine Learning Research 21 (212): 1–6.
Schoenberg, Frederic Paik. 2003. Journal of the American Statistical Association 98 (464): 789–95.
Shaked, Moshe, and J. George Shanthikumar. 1988. Journal of Applied Probability 25 (3): 501–9.
Sill, Joseph. 1997. In Proceedings of the 10th International Conference on Neural Information Processing Systems, 661–67. NIPS’97. Cambridge, MA, USA: MIT Press.
Simon, Noah, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2011. Journal of Statistical Software 39 (5).
Sy, Judy P., and Jeremy M. G. Taylor. 2000. Biometrics 56 (1): 227–36.
Taleb, Nassim Nicholas. 2020. International Journal of Forecasting, April.
Tibshirani, Robert. 1997. Statistics in Medicine 16 (4): 385–95.
