“False positive” data from the European screening trial and other matters
Posted Jan 11 2010 12:00am
The “game” of retrospectively analyzing every possible piece of data from the European Randomized Screening Study for Prostate Cancer (ERSPC) has been under way for a while now … and we can expect this game to produce lots and lots of controversial information and media coverage. But much of the data from these subset analyses will need to be treated with great caution.
Let’s look at just two current examples.
A new analysis of data from the Finnish section of the trial (where the men received a PSA test once every 4 years) has claimed that for every eight Finnish men who met the criteria for a biopsy, one man had a “false positive” result. But if you read even the abstract of the actual paper, you wonder whether this is really what the authors meant to state.
Writing in the British Journal of Cancer, Kilpeläinen et al. note that Finland was the largest recruiter of patients for the ERSPC. (Finland actually enrolled 80,379 of the total of 162,243 men in the ERSPC, nearly half of the total enrollment.) The Finnish men who were recruited had a PSA test every 4 years and (to date) have been followed for an average of 9.2 years, including three rounds of PSA tests. Patients were referred for a biopsy if they had a PSA of 4.0 ng/ml or higher, or a PSA of 3.0 to 3.9 ng/ml and “a positive auxiliary test” (a digital rectal examination until 1998 and, starting in 1999, calculation of the ratio of the free PSA value to the total PSA value, with a ratio of ≤ 0.16 counting as positive, according to the original report of the results of the trial in the New England Journal of Medicine).
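For clarity, the referral rule as described can be sketched as a small function. This is purely illustrative (the function name and argument handling are our own assumptions, not part of the trial protocol); only the thresholds come from the text above.

```python
# Illustrative sketch of the Finnish ERSPC biopsy referral rule described
# above. Thresholds (4.0 ng/ml; 3.0-3.9 ng/ml plus a positive ancillary
# test; free/total PSA ratio <= 0.16) are from the text; everything else
# is a hypothetical framing for clarity.

def referred_for_biopsy(psa, free_psa=None, ancillary_positive=None):
    """Return True if a man would have been referred for biopsy.

    psa: total PSA in ng/ml
    free_psa: free PSA in ng/ml (post-1999 free/total ratio criterion)
    ancillary_positive: pre-1999 ancillary test result (DRE)
    """
    if psa >= 4.0:
        return True                        # PSA alone suffices
    if 3.0 <= psa < 4.0:
        if free_psa is not None:           # post-1999 criterion
            return (free_psa / psa) <= 0.16
        return bool(ancillary_positive)    # pre-1999 criterion (DRE)
    return False

print(referred_for_biopsy(4.2))                # True: PSA >= 4.0
print(referred_for_biopsy(3.5, free_psa=0.5))  # True: ratio 0.14 <= 0.16
print(referred_for_biopsy(3.5, free_psa=0.7))  # False: ratio 0.20
```

The switch of ancillary test in 1999 is modeled here simply by which optional argument is supplied; the actual trial applied one test or the other depending on the calendar year.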
The researchers defined a false positive result as “a positive screening result without cancer in [biopsy cores taken] within 1 year from the screening test.” But why on Earth would one define this as a “false positive” result? It is well understood that PSA levels rise for all sorts of reasons. It is perhaps shocking that, by this definition, as many as 7 of every 8 men biopsied in Finland in this trial would have had a positive biopsy result! It is the biopsy that defines a diagnosis of cancer, not the PSA test. The PSA test only indicates the possibility of cancer!
Unsurprisingly, the Finnish researchers go on to state that more than half of the men who had a negative biopsy for prostate cancer after a finding of an elevated PSA level in their first or second round of screening had another negative biopsy result after an elevated PSA at a subsequent round of screening. Is it possible they just had BPH, or prostatitis? That would be a shock!
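For what it is worth, the arithmetic behind the 1-in-8 claim is straightforward; the sketch below (with illustrative counts, not figures from the paper) simply makes the implication explicit.

```python
# If 1 in every 8 men meeting the biopsy criteria is labelled a "false
# positive" (positive screen, negative biopsy), then under that definition
# the remaining 7 in 8 must have had a positive biopsy. The counts here
# are illustrative ratios only.

screen_positive = 8   # men meeting the biopsy criteria
false_positive = 1    # negative biopsy within 1 year of the screen

biopsy_positive = screen_positive - false_positive
fraction = biopsy_positive / screen_positive
print(f"implied biopsy-positive fraction: "
      f"{biopsy_positive}/{screen_positive} = {fraction:.0%}")
# implied biopsy-positive fraction: 7/8 = 88%
```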
So let us be clear … There is nothing wrong with this paper except that the term “false positive” has been used to describe biopsy results where “negative” might have been the more appropriate term. This could be no more than a language issue resulting from Finnish epidemiologists writing a paper for an English-language journal (although it is mildly surprising that someone at the British Journal of Cancer didn’t notice it!).
Perhaps the two more important things that we learn from this paper are as follows:
The prevalence of undiagnosed prostate cancer in Finland at the time the trial was conducted was very high (if 7/8 men with a PSA level meeting the study criteria went on to have a positive biopsy).
The men who had a negative biopsy after an elevated PSA at their first or second round of screening were 1.5 to 2.0 times more likely not to participate in subsequent rounds of screening than men with a normal screening result.
But the second of these two facts suggests a further possibility … which is that these men really were being told that their PSA result in the prior screening round had been a “false positive,” as opposed to just being told that their biopsy showed no sign of cancer at that time. If that was what was happening, then the expectations of the Finnish researchers would seem to have been sadly out of sync with the entire premise behind using the PSA test to screen for prostate cancer!
There are also data “missing” from the report of the PLCO trial in the New England Journal of Medicine. It would have been nice to know what percentage of men enrolled in the screening arm of that trial who had a PSA meeting the criteria for a biopsy actually went on to have positive and negative biopsy results. Unfortunately, the study report does not provide these data, but we suspect the positive fraction was significantly less than 7/8, or nearly 88 percent.
Roobol and her colleagues have re-analyzed data from 1,850 men who were initially screened (with a PSA level of ≥ 3.0 ng/ml) and biopsied in the Rotterdam section of the ERSPC. They then used these data to calculate the probabilities that individual patients would have a positive biopsy [P(biop+)] and the probabilities that they would have an indolent cancer [P(ind)] if prostate cancer was, in fact, detected at biopsy. In fact, 541/1,850 men in the first round of screening and 225/1,201 men in the second round had positive biopsies in the Rotterdam section of the ERSPC (and it is worth contrasting that with the 7/8 men in the Finnish section who were getting positive biopsies!).
Now Roobol and her colleagues were trying to develop strategies that would potentially reduce the number of unnecessary biopsies while still detecting most of the clinically important cases of prostate cancer.
What Roobol et al. did, therefore, was to ask, “Well, what would have happened if we had applied an additional [P(biop+)] threshold of 12.5 percent to these 1,850 men?” Using the algorithm they had constructed, 613/1,850 men would not have been biopsied at all. In that case you get the following results:
The positive predictive value of a biopsy in the first screening round would increase from 29 percent (541/1,850) to 38 percent (468/1,237).
At repeat screening, the positive predictive value of a biopsy would again increase — from 19 percent (225/1,201) to 25 percent (188/760).
14 percent of prostate cancer cases (110/776) would not have been diagnosed.
Of these 110 cases, 70 percent could be considered potentially indolent in the initial screening round and 81 percent in the repeat screening rounds.
None of the “deadly” cases would have been missed.
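The percentages in the list above can be checked directly from the quoted counts (note that 1,850 minus the 613 men spared a biopsy leaves the 1,237 who would still have been biopsied); a quick sketch:

```python
# Checking the Rotterdam figures quoted above: positive predictive value
# (PPV) of biopsy before and after applying the 12.5 percent P(biop+)
# cut-off. All counts are taken from the text.

def ppv(positive, biopsied):
    """Fraction of biopsied men whose biopsy was positive."""
    return positive / biopsied

assert 1850 - 613 == 1237  # men still biopsied after the cut-off

# First screening round
print(f"round 1, all biopsied: {ppv(541, 1850):.0%}")  # 29%
print(f"round 1, filtered:     {ppv(468, 1237):.0%}")  # 38%

# Repeat screening round
print(f"repeat, all biopsied:  {ppv(225, 1201):.0%}")  # 19%
print(f"repeat, filtered:      {ppv(188, 760):.0%}")   # 25%

# Cancers that would have gone undetected under the cut-off
print(f"missed diagnoses:      {ppv(110, 776):.0%}")   # 14%
```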
According to Roobol and her colleagues, if a PSA cut-off of ≥ 4.0 ng/ml had been used (instead of the cut-off of ≥ 3.0 ng/ml that was actually used in the Rotterdam section), a similar number of biopsies could have been avoided, but a considerably higher number of diagnoses would have been missed.
Roobol et al. conclude that their individualized screening algorithm, which is based on the patient’s actual PSA level together with other, available prebiopsy information, can significantly reduce the number of unnecessary biopsies currently being carried out. They further note that very few important cases of prostate cancer (i.e., cases for which diagnosis at a subsequent screening visit might be too late for treatment with curative intent) would be missed.
The abstract of the paper by Roobol and her colleagues does not give the details of how they calculated the [P(biop+)] and [P(ind)] values that underlie this analysis, but the idea that one could use such a model to avoid unnecessary biopsies in men at minimal risk for clinically significant prostate cancer is, at least, enticing.