Abstract
Purpose: Artificial intelligence (AI) is rapidly being integrated into radiation oncology, yet a comprehensive understanding of its adoption, user perceptions, and implementation challenges remains lacking. Although a recent survey reports AI usage rates between 35% and 60%, actual utilization patterns among engaged clinical users may differ substantially. This study, conducted by the Radiation Oncology Education Collaborative Study Group's (ROECSG) AI Working Group, surveyed the current landscape of AI use to quantify adoption rates, assess clinical impact, identify key barriers, and inform future educational and integration strategies.
Methods: A 32-item anonymous survey was distributed at the 2025 AAPM Annual Meeting and through the Wayne State University Medical Physics listserv to radiation oncology professionals, primarily medical physicists. The instrument combined Likert-scale questions (1–6 scale) and free-text responses to evaluate the prevalence of AI use and its applications, the perceived impact on clinical workflow and decision-making, and user attitudes regarding AI understanding, trust, and transparency. Descriptive statistics and thematic analysis of qualitative responses were performed.
Results: Among 34 respondents (33 medical physicists, 1 physician), a substantial majority (82%) reported using AI tools in their clinical practice, considerably exceeding rates in prior surveys. The predominant application was auto-contouring (79%), with treatment planning (15%) and quality assurance (6%) being less common. Respondents reported substantial efficiency gains, estimating an average time savings of 4.7 hours per week, approximately 12% of clinical time. Regarding clinical integration, most users (58%) indicated that AI currently serves a task-automation and workflow-efficiency function rather than decision support, while 29% reported AI informing or driving clinical decisions. Despite this limited decision-support role, enthusiasm for expanded AI integration was overwhelming, with 94% welcoming greater use. This enthusiasm contrasted with moderate self-reported understanding of AI development (60%, mean score 3.6 on a 6-point scale) and trust in AI outputs (62%), whereas clarity of intended use (73%) and workflow integration (70%) scored higher. Qualitative feedback echoed this dichotomy, highlighting AI's potential to improve consistency and reduce workload while raising concerns about algorithmic transparency, potential overreliance, and the need for robust validation and user training.
Conclusion: This survey reveals a bifurcated AI implementation landscape, with early adopters achieving high integration while the broader community lags behind. The stark contrast between near-universal enthusiasm and modest technical understanding represents the primary barrier to widespread adoption. Current applications remain largely confined to task automation, suggesting untapped clinical potential. Realizing AI's full value will require formal education curricula in training programs and transparent, explainable solutions from vendors to bridge the fundamental trust-understanding gap.
