Research published in Cancer indicated that conventional mammography screening performance metrics underestimate the interval cancer rate of a mammography screening episode, especially in women with dense breasts or an elevated risk of breast cancer.
Given these findings, the investigators suggested that women, clinicians, policymakers, and researchers consider screening outcome measures based on the final assessment to support informed decisions about routine screening and the need for supplemental breast cancer screening.
“These differences have consequences for women, health care providers, and policymakers considering primary and supplemental breast cancer screening strategies,” the authors wrote.
Using data from 2,512,577 screening episodes among 791,347 women, collected from 2005 to 2017 at 146 US facilities participating in the Breast Cancer Surveillance Consortium (BCSC), researchers compared screening performance metrics based on the final assessment of the screening episode with conventional metrics based on the initial assessment.
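To make the distinction concrete, the minimal sketch below computes both sets of metrics from episode-level records. The Episode fields, the simple positive/negative dichotomy, and the toy data are illustrative assumptions, not the BCSC data model or the study's exact definitions.

```python
# Hypothetical sketch: the same episodes yield different metrics depending on
# whether positivity is taken from the initial or the final assessment.
# Field names and data are illustrative only.

from dataclasses import dataclass

@dataclass
class Episode:
    initial_positive: bool   # initial (prospective) screening assessment positive?
    final_positive: bool     # final assessment after diagnostic work-up positive?
    cancer_within_1yr: bool  # breast cancer diagnosed within 1 year of screening?

def metrics(episodes, use_final):
    """Per-1000 cancer detection and interval cancer rates, plus sensitivity."""
    n = len(episodes)
    positive = lambda e: e.final_positive if use_final else e.initial_positive
    detected = sum(e.cancer_within_1yr and positive(e) for e in episodes)
    interval = sum(e.cancer_within_1yr and not positive(e) for e in episodes)
    return {
        "cancer_detection_rate_per_1000": 1000 * detected / n,
        "interval_cancer_rate_per_1000": 1000 * interval / n,
        "sensitivity": detected / (detected + interval),
    }

# An episode that is recalled initially but resolved to negative on additional
# imaging, then followed by a cancer within 1 year, counts as a detected cancer
# under the initial assessment but as an interval cancer under the final
# assessment; that reclassification is the gap the study quantifies.
episodes = [
    Episode(True,  True,  True),   # detected under both definitions
    Episode(True,  False, True),   # detected initially, interval under final
    Episode(False, False, False),  # true negative
    Episode(False, False, True),   # interval cancer under both
]
print(metrics(episodes, use_final=False))
print(metrics(episodes, use_final=True))
```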
A total of 12,131 cancers were diagnosed during the 1-year period following the screening mammogram. Notably, compared with women not diagnosed with breast cancer, a higher proportion of women diagnosed with breast cancer were older and postmenopausal, had a first-degree family history of breast cancer, had heterogeneously or extremely dense breasts, and had an elevated BCSC 5-year risk.
Overall, the cancer detection rates were similar for the final assessment (4.1 per 1000; 95% CI, 3.8-4.3 per 1000) and the initial assessment (4.1 per 1000; 95% CI, 3.9-4.3 per 1000).
However, the interval cancer rate was 12% higher when based on the final assessment (0.77 per 1000; 95% CI, 0.71-0.83 per 1000) than when based on the initial assessment (0.69 per 1000; 95% CI, 0.64-0.74 per 1000), translating into a modestly lower sensitivity (84.1% [95% CI, 83.0%-85.1%] vs 85.7% [95% CI, 84.8%-86.6%], respectively). Moreover, absolute differences in the interval cancer rate between the final and initial assessments increased with breast density and breast cancer risk (e.g., a difference of 0.29 per 1000 for women with extremely dense breasts and a 5-year risk >2.49%).
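As a rough consistency check (an approximation, not the study's formal definitions), treating sensitivity as the cancer detection rate (CDR) divided by the sum of the CDR and the interval cancer rate (ICR) reproduces the reported sensitivities to within rounding and recovers the roughly 12% relative increase:

\[
\text{sensitivity} \approx \frac{\text{CDR}}{\text{CDR} + \text{ICR}},
\qquad
\frac{4.1}{4.1 + 0.69} \approx 85.6\%,
\qquad
\frac{4.1}{4.1 + 0.77} \approx 84.2\%,
\qquad
\frac{0.77}{0.69} \approx 1.12.
\]

In other words, because the number of detected cancers was essentially unchanged between the two definitions, the sensitivity drop is driven by the small number of cancers reclassified from detected to interval.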
“The difference reflects cancers that had a positive initial screening mammogram but were resolved to a negative final assessment upon additional imaging,” the authors wrote. “These results indicate that the mammography screening process has a higher failure rate than previously appreciated on the basis of established screening mammography performance metrics that are based on the initial assessment alone.”
In sensitivity analyses, the researchers also found that classifying BI-RADS category 3 (probably benign) final assessments as negative, as opposed to positive, resulted in a slightly lower cancer detection rate, increases in the interval cancer rate and specificity, and a moderate decrease in sensitivity.
“Our findings are particularly relevant to women with dense breasts, who in most US states are now informed of the limitations of mammography and advised to discuss supplemental screening options with their health care providers,” the authors continued. “Although a federal US law is pending, widespread debate and uncertainty remain regarding the appropriateness of supplemental screening for women with dense breasts.”
Importantly, researchers were unable to definitively determine whether a cancer diagnosis was directly attributable to the screening episode. However, in the study, 97.3% of the cancers within 1 year of a category 4 or 5 final assessment were diagnosed within 90 days of the screening examination, suggesting that the impact of this limitation was minimal.
Reference:
Sprague BL, Miglioretti DL, Lee CI, Perry H, Tosteson AAN, Kerlikowske K. New Mammography Screening Performance Metrics Based on the Entire Screening Episode. Cancer. doi:10.1002/cncr.32939.