A few months ago, a paper was published in the International Journal of Epidemiology that caused a sensation in its home country, Denmark. Using registry data, with a diagnosis of non-melanoma skin cancer as a proxy for sun exposure, the authors found that the OR for mortality in individuals who developed skin cancer compared with those who did not was 0.53, with an unfeasibly low p-value of 2x10^-308. However, when the results were stratified by age, the OR in the sun-exposed group was a much more reasonable 0.97. This fact was conveniently forgotten when the study was reported in the media, which uncritically stated that people who get more sun will live longer - something that upset the Danish cancer societies immensely.
So why was there such a disparity between the two results, and how could individuals with cancer possibly live so much longer (8.5 years on average) than those who did not get cancer? This month, a mea culpa editorial was published in IJE explaining the mistake, along with an article discussing the entire issue. What the authors of the original paper failed to do was account for immortal time bias. In this study, individuals entered the cohort when they were 40 years old; however, most did not develop skin cancer until they were in their 60s or 70s. As a result, the skin cancer cohort had approximately 20 years during which its members could not have died - they had to survive until they developed cancer and were "immortal" until that time - whereas someone who did not develop skin cancer could have died at any point during those 20 years. To demonstrate how this works, the authors of the follow-up study, using the same dataset, randomly allocated a "lottery prize" to a proportion of individuals with a mean age of 68 who lived in Denmark over the last 20 years. Again, a person who received the prize had to have lived until the time it was awarded, while those who did not get the prize could die at any time. The results of this simulation were very similar to those of the skin cancer study: an OR for all-cause mortality of 0.5 for the prizewinners, with a similarly outrageous p-value.
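The mechanics of the bias are easy to reproduce in a few lines of code. The sketch below is my own toy simulation, not the follow-up authors' analysis: it draws random lifespans with no prize effect whatsoever, then awards a "prize" only to people still alive at a randomly drawn award age around 68. The winners mechanically outlive everyone else, purely because surviving to the award date was a condition of winning.

```python
import random
import statistics

random.seed(42)

N = 100_000
# Lifespans with no relation to the prize at all (illustrative numbers).
lifespans = [max(40.0, random.gauss(78, 12)) for _ in range(N)]

winners, losers = [], []
for life in lifespans:
    award_age = random.gauss(68, 5)    # age at which the prize would be given
    if random.random() < 0.2:          # 20% are drawn for the prize
        if life > award_age:           # must survive to the award date...
            winners.append(life)       # ...so winners are selected survivors
        else:
            losers.append(life)        # drawn, but died first - never a winner
    else:
        losers.append(life)

print(f"mean age at death, winners: {statistics.mean(winners):.1f}")
print(f"mean age at death, others:  {statistics.mean(losers):.1f}")
```

Even though the prize does nothing, the winners' mean age at death comes out several years higher than everyone else's - the pre-award years are immortal time that only winners are guaranteed to have survived.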
This issue was first described in the 1800s, when it was noticed that generals and bishops live longer than lieutenants and curates. Again, this is because one has to survive to an older age to become a bishop or a general, not because there is something inherent in these ranks that leads to an improvement in mortality.
This example is particularly egregious, and the editors of IJE are to be congratulated for the way they dealt with it. However, there are more subtle examples, one of which, highlighted in the follow-up article, appeared in JASN in 2010. This paper found that survival after transplant failure was improved by nephrectomy. However, the mean follow-up in this paper was only 2.93 years and the mean time to nephrectomy was 1.66 years. Thus, over half of the follow-up in individuals who had a nephrectomy was immortal time - they could not die during that period because they had to survive to the time of surgery. Most of the difference in survival could be accounted for by this bias.
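One way to see both the bias and its standard fix is to simulate a cohort with a constant death hazard and no true effect of surgery, then compare a naive analysis (exposure classified by whether surgery ever happened, with all follow-up counted as exposed) against one that treats surgery as a time-varying exposure, crediting pre-surgery person-time to the unexposed group. The hazard, follow-up, and surgery time below are illustrative assumptions, not figures from the JASN paper.

```python
import random

random.seed(1)

N = 50_000
HAZARD = 0.2        # constant yearly death hazard, identical for everyone
FOLLOW_UP = 3.0     # years of study follow-up
SURGERY_TIME = 1.5  # (hypothetical) time of nephrectomy after study entry

naive_exp_time = naive_exp_deaths = naive_unexp_time = naive_unexp_deaths = 0.0
tv_exp_time = tv_exp_deaths = tv_unexp_time = tv_unexp_deaths = 0.0

for _ in range(N):
    death = random.expovariate(HAZARD)          # surgery has NO true effect
    scheduled = random.random() < 0.5           # half are slated for surgery
    operated = scheduled and death > SURGERY_TIME
    end = min(death, FOLLOW_UP)
    died = death <= FOLLOW_UP

    # Naive analysis: classify by whether surgery EVER happened and count
    # all follow-up, including the pre-surgery immortal time, as exposed.
    if operated:
        naive_exp_time += end
        naive_exp_deaths += died
    else:
        naive_unexp_time += end
        naive_unexp_deaths += died

    # Time-varying analysis: person-time before surgery counts as unexposed.
    if operated:
        tv_unexp_time += SURGERY_TIME
        tv_exp_time += end - SURGERY_TIME
        tv_exp_deaths += died
    else:
        tv_unexp_time += end
        tv_unexp_deaths += died

print("naive rate ratio:       ",
      (naive_exp_deaths / naive_exp_time) / (naive_unexp_deaths / naive_unexp_time))
print("time-varying rate ratio:",
      (tv_exp_deaths / tv_exp_time) / (tv_unexp_deaths / tv_unexp_time))
```

With no real effect, the naive analysis still produces a rate ratio well below 1 for the operated group, while the time-varying analysis returns a ratio near 1. Splitting person-time at the exposure date is the same correction the reanalyses described here had to apply.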
Similarly, a study in JASN in 2007 found that individuals enrolled in a multidisciplinary care (MDC) clinic were more likely to survive than those who were not. Again, the first MDC clinic visit occurred about a year after enrollment in the study, so individuals who entered the MDC program had a year of immortal time compared with those who did not. This was pointed out in a follow-up article in KI. The authors of the original paper reanalyzed their data to account for the bias and found that MDC was still associated with improved survival, although the magnitude of the effect was substantially smaller.
This is a fascinating issue and probably affects more cohort studies than we think. As a reviewer, I'll certainly try to look out for it in the future.