An ongoing debate exists in the literature about the risk of carcinogenesis from the radiation associated with medical imaging tests. Some position papers and societal guidelines advocate that the choice of a diagnostic imaging modality for any clinical indication be influenced by whether the test involves ionizing radiation.1 However, the scientific evidence underlying current estimates of cancer risk from low-level radiation (doses <100 mSv), which rest on the linear no-threshold (LNT) model, is sparse. Although the Biological Effects of Ionizing Radiation (BEIR) VII report2 from the US National Academies recommends the LNT model as the best simple model for purposes of radiation protection, as do reports from the United Nations3 and the leading international4 and US5 radiological protection organizations, several widely divergent alternative models have also been posited, ranging from hormesis (a protective effect of low-dose radiation) at one extreme to hypersensitivity (greater sensitivity to low doses than to high doses) at the other.6 A major confounding factor in analyzing the observational data on this subject is the high background incidence of cancer in the population, which makes it difficult to identify any small additional risk attributable to radiation from medical tests. Figure 1 from the BEIR VII report illustrates this point.
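The competing dose–response models named above can be sketched schematically in terms of the excess relative risk of cancer, ERR(D), at dose D. The forms below are illustrative shapes only, not the specific parameterizations used in BEIR VII or the other cited reports; the slope parameter β is hypothetical.

```latex
% Schematic dose-response shapes for excess relative risk ERR(D), beta > 0.
% Linear no-threshold (LNT): risk proportional to dose, with no safe threshold:
\mathrm{ERR}_{\mathrm{LNT}}(D) = \beta D
% Hormesis: a net protective effect at low doses,
\mathrm{ERR}_{\mathrm{hormesis}}(D) < 0 \quad \text{for small } D
% Hypersensitivity: a supralinear response, i.e. greater risk per unit dose
% at low doses than linear extrapolation from high doses would predict:
\mathrm{ERR}_{\mathrm{hyper}}(D) > \beta D \quad \text{for small } D
```

Under the LNT form, even the smallest dose carries some proportional risk, which is why it is conservative and convenient for radiation-protection purposes; the hormesis and hypersensitivity curves diverge from it only in the low-dose region (<100 mSv) where, as noted above, direct epidemiologic evidence is sparse.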