
Knowledge, Attitude and Practice of Radiologists Regarding Artificial Intelligence in Medical Imaging

Authors Huang W, Li Y, Bao Z, Ye J, Xia W, Lv Y, Lu J, Wang C, Zhu X

Received 22 November 2023

Accepted for publication 18 June 2024

Published 4 July 2024. Volume 2024:17, Pages 3109–3119

DOI https://doi.org/10.2147/JMDH.S451301


Editor who approved publication: Dr Scott Fraser



Wennuo Huang,1,* Yuanzhe Li,2,* Zhuqing Bao,3,* Jing Ye,1 Wei Xia,1 Yan Lv,1 Jiahui Lu,4 Chao Wang,1 Xi Zhu1

1Department of Radiology, Northern Jiangsu People’s Hospital Affiliated to Yangzhou University, Yangzhou, Jiangsu, 225002, People’s Republic of China; 2Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, 362000, People’s Republic of China; 3Department of Emergency, Northern Jiangsu People’s Hospital Affiliated to Yangzhou University, Yangzhou, Jiangsu, 225002, People’s Republic of China; 4School of Medical Imaging, Hangzhou Medical College, Hangzhou, Zhejiang, 310053, People’s Republic of China

*These authors contributed equally to this work

Correspondence: Xi Zhu, Department of Radiology, Northern Jiangsu People’s Hospital Affiliated to Yangzhou University, Yangzhou, 225002, Jiangsu, People’s Republic of China, Tel +8618051062318, Email [email protected]

Purpose: This study aimed to investigate the knowledge, attitudes, and practice (KAP) of radiologists regarding artificial intelligence (AI) in medical imaging in the southeast of China.
Methods: This cross-sectional study was conducted among radiologists in the Jiangsu, Zhejiang, and Fujian regions from October to December 2022. A self-administered questionnaire was used to collect demographic data and assess the KAP of participants towards AI in medical imaging. A structural equation model (SEM) was used to analyze the relationships between KAP.
Results: The study included 452 valid questionnaires. The mean knowledge score was 9.01±4.87, the attitude score was 48.96±4.90, and 75.22% of participants actively engaged in AI-related practices. Having a master’s degree or above (OR=1.877, P=0.024), 5–10 years of radiology experience (OR=3.481, P=0.010), AI diagnosis-related training (OR=2.915, P<0.001), and engaging in AI diagnosis-related research (OR=3.178, P<0.001) were associated with sufficient knowledge. Participants with a junior college degree (OR=2.139, P=0.028), 5–10 years of radiology experience (OR=2.462, P=0.047), and AI diagnosis-related training (OR=2.264, P<0.001) were associated with a positive attitude. Higher knowledge scores (OR=5.240, P<0.001), an associate senior professional title (OR=4.267, P=0.026), 5–10 years of radiology experience (OR=0.344, P=0.044), utilizing AI diagnosis (OR=3.643, P=0.001), and engaging in AI diagnosis-related research (OR=6.382, P<0.001) were associated with proactive practice. The SEM showed that knowledge had a direct effect on attitude (β=0.481, P<0.001) and practice (β=0.412, P<0.001), and attitude had a direct effect on practice (β=0.135, P<0.001).
Conclusion: Radiologists in southeastern China hold a favorable outlook on AI-assisted medical imaging, showing solid understanding and enthusiasm for its adoption, despite half lacking relevant training. There is a need for more AI diagnosis-related training, an efficient standardized AI database for medical imaging, and active promotion of AI-assisted imaging in clinical practice. Further research with larger sample sizes and more regions is necessary.

Keywords: artificial intelligence, medical imaging, knowledge, attitude, practice, radiologists, cross-sectional study

Introduction

Medical imaging is a widely used modality for medical diagnosis and treatment, and computer technology has become a key technical support for its advancement.1,2 In recent years, the development and implementation of artificial intelligence (AI) in medical imaging have attracted significant attention.3–5 Within healthcare, medical imaging has emerged as a prime arena for potential AI breakthroughs because of its vast image data and the universally standardized DICOM storage format.6 Currently, the clinical application of AI in medical imaging centers on improving imaging diagnosis, with a primary focus on lesion detection,7 identification,8 and the distinction between benign and malignant conditions.9 On one hand, AI’s perceptual and cognitive capabilities support medical image recognition, extraction of key information, and assistance for less experienced radiologists;10 on the other hand, integrating large volumes of image data and clinical insights through machine learning allows AI models to be trained and refined.11 This equips such systems to diagnose diseases and may help reduce radiologists’ missed or erroneous diagnoses. Unlike the existing working mode of imaging departments, an AI system is unaffected by external influences and maintains an efficient, continuous operational state, which helps improve the efficiency and quality of radiologists’ image interpretation.

The Knowledge, Attitude, and Practice (KAP) model is the most frequently used framework for explaining how personal knowledge and beliefs influence health behavior change.12 The theory divides behavioral change into three continuous stages: knowledge acquisition, belief formation, and behavior adoption. KAP research is a systematic survey methodology that centers on designing questionnaires tailored to the research subjects and objectives and using them to measure the relevant knowledge, beliefs, and behavioral patterns of the study population.13 Analysis of the questionnaire responses and between-group comparisons then inform interventions and practical recommendations, which are planned, implemented, and evaluated so that the resulting experience can be disseminated more widely. In recent years, KAP research on the perspectives of radiologists or radiographers towards AI-assisted medical imaging has attracted growing attention.14–16 Overall, the majority of radiologists and technicians view the integration of AI technology into medical imaging positively.

Nevertheless, factors such as age, educational background, years of experience, and understanding of AI have a discernible influence on radiologists’ attitudes and practices.15,17 Moreover, differing perspectives on the application of AI in imaging persist. As the principal users of AI, radiologists’ KAP regarding AI in medical imaging plays a pivotal role in determining whether AI can capitalize on its strengths and mitigate its weaknesses in the medical field, thereby optimizing its positive influence. This study therefore aimed to investigate the KAP of radiologists concerning AI in medical imaging.

Methods

Study Design and Participants

This cross-sectional study was carried out among radiologists in the Jiangsu, Zhejiang, and Fujian regions from October to December 2022. Ethical approval was obtained from the Clinical Medical College and the University’s Ethics Committee (2022sbky221). Before participating, all individuals provided informed consent. Inclusion criteria were as follows: 1) Registered medical practitioners specializing in radiology. 2) Radiologists with at least 6 months of professional experience. Exclusion criteria included trainee radiologists and individuals undergoing standardized resident training or rotation.

Questionnaire and Quality Control

The questionnaire design was informed by previous studies.14,18,19 After incorporating input from four experts, the initial draft was piloted through a limited distribution of 166 copies. Reliability and validity testing yielded a Cronbach’s α of 0.842, indicating good internal consistency of the questionnaire.
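
As a point of reference, the internal-consistency statistic reported above can be reproduced from raw item responses with a short script. The following is a minimal sketch, assuming the pilot responses sit in a pandas DataFrame with one column per item; the file name and column layout are illustrative and do not come from the study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale whose items are the columns of `items`."""
    items = items.dropna()
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage with the 166 pilot questionnaires mentioned above:
# pilot = pd.read_csv("pilot_responses.csv")   # one row per respondent
# print(round(cronbach_alpha(pilot), 3))       # the paper reports 0.842
```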

The final questionnaire, in Chinese, consisted of four dimensions comprising 37 items: general information (10 items), knowledge (10 items), attitude (13 items), and practice (6 items). Within the knowledge dimension, correct responses were assigned 1 point, while incorrect or ambiguous answers received 0 points; the achievable score ranged from 0 to 20. The attitude dimension used a five-level Likert scale ranging from very positive (5 points) to very negative (1 point), yielding a score range of 13 to 65. Responses of “strongly agree” and “agree” were combined as positive, “neither agree nor disagree” as neutral, and “strongly disagree” and “disagree” as negative. Within the practice dimension, items 1–4 were scored 1 point for a “Yes” response and 0 points for a “No” response; items 5–6 were not assigned values.
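
To make the scoring rules concrete, the sketch below applies them to a hypothetical response table; the column names (K1–K10, A1–A13, P1–P4) and response codings are assumptions for illustration and are not taken from the questionnaire itself.

```python
import pandas as pd

def score_kap(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the scoring rules described above, one row per respondent."""
    k_cols = [f"K{i}" for i in range(1, 11)]   # knowledge items, coded "correct"/"incorrect"
    a_cols = [f"A{i}" for i in range(1, 14)]   # attitude items, Likert responses coded 1-5
    p_cols = [f"P{i}" for i in range(1, 5)]    # practice items 1-4, coded "Yes"/"No"

    scores = pd.DataFrame(index=df.index)
    scores["knowledge"] = (df[k_cols] == "correct").sum(axis=1)  # 1 point per correct answer
    scores["attitude"] = df[a_cols].sum(axis=1)                  # possible range 13-65
    scores["practice"] = (df[p_cols] == "Yes").sum(axis=1)       # items 5-6 are descriptive, not scored
    return scores
```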

The questionnaire was distributed to participants through a questionnaire platform, and QR codes were circulated across various provinces and prefecture-level cities via the Medical Imaging Society of the Chinese Medical Association, with each province being supervised by a dedicated research assistant. After data collection, the research team carefully reviewed the questionnaire responses for quality assurance. Any data showing apparent discrepancies underwent telephonic verification. Questionnaires displaying redundant or incomplete responses were excluded when it was not feasible to establish contact via phone.

Statistical Analysis

The analysis was conducted using Stata 17.0 (Stata Corporation, College Station, TX, USA). Continuous variables were presented as mean and standard deviation; Student’s t-test was used to compare two groups and ANOVA to compare three or more groups. Categorical variables were reported as n (%). Multivariate logistic regression was performed to identify factors independently associated with knowledge, attitude, and practice. Knowledge, attitude, and practice scores equal to or exceeding 70% of the maximum possible score were considered indicative of “sufficient knowledge”, a “positive attitude”, and “proactive practice”, respectively.20 Variables with P<0.05 in univariate logistic regression were entered into the multivariate regression analysis. The Pearson correlation test was used to evaluate associations between KAP scores, and a structural equation model (SEM) was used to assess the relationships between KAP. The SEM hypotheses were: 1) knowledge positively influences participants’ attitude; 2) knowledge positively influences participants’ practice; 3) attitude positively influences participants’ practice. Model fit was evaluated with the discrepancy divided by degrees of freedom (CMIN/DF), root mean square error of approximation (RMSEA), incremental fit index (IFI), Tucker–Lewis index (TLI), comparative fit index (CFI), and goodness of fit index (GFI). A two-sided P-value less than 0.05 was considered statistically significant.
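
For readers who want to retrace the analytic steps, the sketch below shows how the 70% cutoff and the odds ratios could be computed in Python rather than Stata; the variable names and the use of statsmodels are assumptions for illustration, not the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def dichotomize(score: pd.Series, maximum: float) -> pd.Series:
    """1 = at or above 70% of the maximum possible score, 0 = below."""
    return (score >= 0.7 * maximum).astype(int)

def logistic_or_table(y: pd.Series, X: pd.DataFrame) -> pd.DataFrame:
    """Fit a logistic regression and report odds ratios, 95% CIs, and P-values."""
    X = sm.add_constant(X.astype(float))
    fit = sm.Logit(y, X).fit(disp=False)
    ci = np.exp(fit.conf_int())
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "95% CI lower": ci[0],
                         "95% CI upper": ci[1],
                         "P": fit.pvalues})

# Hypothetical usage, with `scores` from the scoring sketch and `demographics`
# holding the covariates retained after univariate screening (P < 0.05):
# sufficient_knowledge = dichotomize(scores["knowledge"], maximum=20)
# covariates = pd.get_dummies(demographics, drop_first=True)
# print(logistic_or_table(sufficient_knowledge, covariates))
```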

Results

A total of 506 questionnaires were collected; after excluding 50 questionnaires with entirely duplicated responses and 4 in which the same option was selected throughout the KAP dimensions, 452 valid questionnaires (89.34%) were included in the study. The demographic characteristics of the participants are presented in Table 1. The majority of participants (69.91%) were male and 30.09% were female. The largest segment of radiologists (43.81%) fell within the 36–50 age range, followed by 27.65% in the 20–35 years category, and 28.54% were aged over 50 years. In terms of education, most respondents (59.51%) held bachelor’s degrees, 29.65% held master’s or higher degrees, and only 10.84% had junior college qualifications. Most participants (66.59%) were employed in Class III public hospitals, whereas fewer than 5% worked in private hospitals. A large proportion (79.87%) worked in hospitals that had already adopted AI-related technologies for image-assisted diagnosis, and 82.74% of participants had used AI diagnostic systems. However, less than half of the respondents (45.35%) had received training on the principles and operation of AI-assisted diagnosis, and a minority (32.52%) had engaged in AI-assisted diagnosis research.

Table 1 Participants’ Demographics and KA Score

The mean knowledge score was 9.01±4.87. This score showed no significant association with participants’ gender but was notably correlated with age, educational attainment, hospital tier, professional designation, and years of work experience (P<0.05). Among the knowledge items, the one with the highest correct response rate (71.46%) was “Model performance of deep learning is independent of the quality and quantity of the training dataset”, while the one with the lowest correct response rate (31.64%) was “During the establishment of an AI model to assist image diagnosis, no matter which model is used, it is necessary to verify robustness” (Supplementary Table 1). Having a master’s degree or above (OR=1.877, 95% CI=1.086–3.245, P=0.024), 5–10 years of experience in radiology (OR=3.481, 95% CI=1.345–9.013, P=0.010), AI diagnosis-related training (OR=2.915, 95% CI=1.776–4.786, P<0.001), and AI diagnosis-related research (OR=3.178, 95% CI=1.881–5.371, P<0.001) were associated with sufficient knowledge (Table 2).

Table 2 Multivariate Analysis of Sufficient Knowledge

The mean attitude score was 48.96±4.90. This score showed no significant correlation with gender or educational attainment, but was associated with age, hospital tier, professional designation, and years of work experience (all P<0.05). Notably, the majority of participants (96.46%) supported the introduction of AI-assisted diagnostic systems in their departments, and most (73.9%) disagreed with the notion that AI would replace radiologists in the future (Supplementary Table 2). The multivariate regression analysis showed that a junior college degree (OR=2.139, 95% CI=1.084–4.220, P=0.028), 5–10 years of radiology experience (OR=2.462, 95% CI=1.013–5.983, P=0.047), and AI diagnosis-related training (OR=2.264, 95% CI=1.430–3.586, P<0.001) were associated with a positive attitude (Table 3).

Table 3 Multivariate Analysis of Positive Attitude

When their own assessments aligned with those of AI, 96.24% of radiologists chose to trust their own judgment; when their evaluations differed from AI, 96.00% of respondents opted to place their trust in AI. Notably, 75.22% of participants actively sought out relevant knowledge and research advancements in medical imaging AI. Most participants acquired information about AI-assisted diagnosis through training lectures, medical literature, and online media. The factors most frequently reported as influencing radiologists’ acquisition of AI-assisted diagnosis knowledge were a lack of authoritative learning materials and demanding work schedules (Figure 1).

Figure 1 Distribution of practice dimension. (A) Who do you believe more when your own judgment is consistent with that of AI; (B) Who do you believe more when your own judgment is inconsistent with that of AI; (C) Whether you actively understand the relevant knowledge and research progress of AI medical imaging; (D) Understand the information channels related to AI-assisted diagnosis; (E) Factors influencing imaging physicians’ learning of AI-assisted diagnosis related knowledge. AI, artificial intelligence.

Multivariate regression analysis demonstrated that higher knowledge scores (OR=5.240, 95% CI=2.391–11.486, P<0.001), an associate senior professional title (OR=4.267, 95% CI=1.187–15.337, P=0.026), 5–10 years of experience in radiology (OR=0.344, P=0.044), use of AI diagnosis (OR=3.643, 95% CI=1.739–7.631, P=0.001), and AI diagnosis-related research (OR=6.382, 95% CI=2.949–13.812, P<0.001) were associated with proactive practice (Table 4). An SEM was used to analyze the relationships between KAP (Figure 2). The SEM showed that knowledge had a direct effect on attitude (β=0.481, P<0.001) and attitude had a direct effect on practice (β=0.135, P<0.001), indicating an indirect effect of knowledge on practice through attitude. Knowledge also had a direct effect on practice (β=0.412, P<0.001) (Table 5). The fit indices showed that the SEM fit was good (Supplementary Table 3).
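
As a quick check of the reported path coefficients (assuming they are standardized estimates), the indirect effect of knowledge on practice via attitude follows from the usual product-of-paths rule:

```latex
\beta_{\text{indirect}} = \beta_{K \rightarrow A}\,\beta_{A \rightarrow P}
                        = 0.481 \times 0.135 \approx 0.065,
\qquad
\beta_{\text{total}} = \beta_{K \rightarrow P} + \beta_{\text{indirect}}
                     \approx 0.412 + 0.065 = 0.477.
```

Under this reading, most of the total effect of knowledge on practice (about 0.477) is direct, with a smaller portion (about 0.065) mediated by attitude.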

Table 4 Multivariate Analysis of Proactive Practice

Table 5 Results of SEM

Figure 2 SEM for KAP.

Discussion

In the modern context, medical imaging functions as a digital diagnostic field in which computers play a crucial role. This discipline primarily focuses on the skillful utilization of advanced technical tools, such as AI, to accurately detect lesions, pinpoint their precise locations, and evaluate their conditions. These capabilities are immensely valuable in improving the diagnostic efficiency of radiologists.

While previous research has explored patient attitudes towards AI within healthcare contexts,21,22 scant attention has been directed towards its specific application among radiologists.23,24 This cross-sectional study gauges the KAP of radiologists concerning the integration of AI within medical imaging, a critical undertaking for the development of AI systems in medical diagnosis. Our multivariate analysis results highlight that higher educational attainment, 5–10 years of experience in radiology, and engagement in AI-related diagnostic training collectively contribute to elevated knowledge and attitude scores. Moreover, active participation in AI-assisted diagnosis research is positively associated with higher knowledge and practice scores. Furthermore, factors such as knowledge proficiency, professional designation, years of professional experience, and familiarity with AI diagnostic systems significantly influence practice scores.

Although our study largely supports previous research, some discrepancies with earlier findings may arise for underlying reasons. Huisman et al emphasized the specific need for radiologists to acquire expertise in AI.25 Our multivariate regression analysis shows that radiologists who had received training on the principles and operation of AI-assisted diagnosis, or who had engaged in related research, were more likely to have comprehensive knowledge of medical imaging AI technology. Participation in AI medical imaging training or related research is therefore an important avenue for acquiring professional insight into AI-assisted diagnosis. A previous questionnaire-based investigation demonstrated radiologists’ keenness to engage in vocational training or AI research, with over three-quarters expressing an intent to strengthen their AI-related knowledge.26 We observed that the majority of participants were not well acquainted with the fundamental principles and applications of deep learning, diverging from the findings of Qurashi et al.14 In conclusion, integrating AI into radiologist training is a necessary measure, and both undergraduate and continuing education should incorporate structured, professional programs to prepare adequately for the effective implementation of AI.27,28

The findings clearly demonstrate a prevailing positive attitude among participants towards AI-assisted diagnosis. This outcome is consistent with a previous study by Waymel et al, which explored radiologists’ perceptions and expectations of AI in radiology and revealed a broadly favorable disposition.29 Similarly, Coakley et al reported radiographers’ widespread enthusiasm and constructive outlook towards the proliferation of AI.15 In line with these findings, a significant majority of our participants favored the introduction of AI-assisted diagnostic systems and agreed that such systems are inevitable within the medical sphere. Furthermore, our KAP study revealed that radiologists with junior college degrees were more likely to hold a positive attitude towards AI medical imaging, which may be attributed to the potential improvement in diagnostic proficiency achievable with the aid of AI systems.

Additionally, participants with associate senior titles demonstrated more proactive practice, while those without professional titles exhibited comparatively lower levels of practice; this inference may, however, be influenced by the limited pool of participants without professional titles, numbering only 16. Notably, radiologists who had used an AI diagnostic system or engaged in AI-assisted diagnosis research showed higher practice scores, which may reflect their more comprehensive grasp of AI-assisted diagnosis systems. A prior study likewise highlights the recognition of AI’s presence within medical imaging practice.30 Moreover, the results showed that the main factors limiting engagement with AI-assisted diagnosis knowledge were demanding work schedules and the scarcity of authoritative learning materials.

However, this survey has several limitations. First, because of the cross-sectional design, no definitive causal relationships can be established from the results. Second, because the questionnaires were self-administered, participants may have deliberately omitted certain information, introducing social desirability bias and compromising the validity of the findings. Third, the study was confined to the Jiangsu, Zhejiang, and Fujian regions, which are economically developed within China and, despite the study’s multicenter nature, may not accurately represent the broader population. The pronounced economic disparities in China produce varying levels of technology adoption and expertise across regions: affluent areas such as Beijing, Shanghai, Zhejiang, Jiangsu, Guangdong, and Fujian tend to attract individuals with higher educational attainment and to integrate advanced technologies like AI-assisted diagnostic systems quickly, whereas less economically developed regions may lack the requisite equipment and training, creating disparities in KAP levels. The generalizability of our findings should therefore be interpreted cautiously. Lastly, the modest sample size constitutes a further weakness of this study.

AI-assisted diagnosis brings enormous potential to the medical field, but it also faces security risks and unresolved issues. Data privacy and security are crucial: given the sensitivity of medical data, protecting patient information is essential. Algorithm interpretability is another challenge, as doctors and patients need to understand how AI-assisted diagnosis systems work in order to trust and comprehend the diagnostic results. Data bias is a further potential problem: if the training dataset is not comprehensive or is biased, AI systems may produce inaccurate diagnoses, especially for minority groups or specific disease types. Finally, the reliability and stability of the technology must be considered, since vulnerabilities or errors in AI systems may severely affect diagnostic results and even lead to serious consequences. To fully harness the potential of AI-assisted diagnosis, these security risks and issues need to be addressed to ensure patient safety and diagnostic accuracy. In addition, owing to workload constraints, most previous studies have been based on limited or specific populations and have lacked thorough consideration of the limitations of AI-assisted diagnosis and comprehensive evaluation of its clinical utility, neglecting some challenges and constraints of real medical settings. Research in more diverse populations would be more conducive to real clinical application.

Overall, the future trend of AI-assisted diagnosis in the medical field is diversification and popularization. With continuous technological advancements and data accumulation, the performance and accuracy of AI algorithms will further improve. This will enable diagnostic assistance systems to better identify diseases, predict the progression of patient conditions, and provide personalized treatment recommendations, thereby assisting physicians and enhancing diagnostic accuracy and efficiency. Additionally, AI-assisted diagnosis will promote the rational allocation of medical resources, help healthcare institutions optimize diagnosis and treatment processes, and improve the quality and efficiency of medical services. Lastly, as understanding of AI-assisted diagnosis technology deepens, acceptance by both physicians and patients will gradually increase. AI-assisted diagnosis will become an important direction for development in the medical field, providing more intelligent and personalized healthcare services.

Ethics Approval

This research adhered to the principles outlined in the Declaration of Helsinki. Ethical approval was obtained from the Clinical Medical College and the University’s Ethics Committee (2022sbky221).

Author Contributions

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

Funding

This research received no external funding.

Disclosure

The authors report no conflicts of interest in this work.

References

1. He J, Luo A, Yu J, et al. Quantitative assessment of spasticity: a narrative review of novel approaches and technologies. Front Neurol. 2023;14:1121323. doi:10.3389/fneur.2023.1121323

2. Babic A, Rosenthal MH, Sundaresan TK, et al. Adipose tissue and skeletal muscle wasting precede clinical diagnosis of pancreatic cancer. Nat Commun. 2023;14(1):4317. doi:10.1038/s41467-023-40024-3

3. Föllmer B, Williams MC, Dey D, et al. Roadmap on the use of artificial intelligence for imaging of vulnerable atherosclerotic plaque in coronary arteries. Nat Rev Cardiol. 2023:51–64. doi:10.1038/s41569-023-00900-3

4. Nurmohamed NS, Bom MJ, Jukema RA, et al. AI-guided quantitative plaque staging predicts long-term cardiovascular outcomes in patients at risk for atherosclerotic CVD. JACC Cardiovasc Imag. 2023;17:269–280. doi:10.1016/j.jcmg.2023.05.020

5. Dey D, Arnaout R, Antani S, et al. Proceedings of the NHLBI Workshop on Artificial Intelligence in Cardiovascular Imaging: translation to Patient Care. Cardiovasc Imag. 2023. doi:10.2139/ssrn.4391146

6. RN V, Chandra SS V. ExtRanFS: An automated lung cancer malignancy detection system using extremely randomized feature selector. Diagnostics. 2023;13(13):2206.

7. Joskowicz L, Szeskin A, Rochman S, et al. Follow-up of liver metastases: A comparison of deep learning and RECIST 1.1. Eur Radiol. 2023;33:9320–9327. doi:10.1007/s00330-023-09926-0

8. Hong Y, Fu C, Xing Y, et al. Delayed (18)F-FDG PET imaging provides better metabolic asymmetry in potential epileptogenic zone in temporal lobe epilepsy. Front Med Lausanne. 2023;10:1180541. doi:10.3389/fmed.2023.1180541

9. Jiang J, Chao WL, Culp S, Krishna SG. Artificial intelligence in the diagnosis and treatment of pancreatic cystic lesions and adenocarcinoma. Cancers. 2023;15(9):2410. doi:10.3390/cancers15092410

10. Zhang R, Wang P, Bian Y, et al. Establishment and validation of an AI-aid method in the diagnosis of myocardial perfusion imaging. BMC Med Imag. 2023;23(1):84. doi:10.1186/s12880-023-01037-y

11. Roberts M, Hinton G, Wells AJ, et al. Imaging evaluation of a proposed 3D generative model for MRI to CT translation in the lumbar spine. Spine J. 2023;23:1602–1612. doi:10.1016/j.spinee.2023.06.399

12. Tang H, Wang R, Yan P, et al. Dietary behavior and its association with nutrition literacy and dietary attitude among breast cancer patients treated with chemotherapy: A multicenter survey of hospitals in China. Patient Prefer Adh. 2023;17:1407–1419. doi:10.2147/PPA.S413542

13. Siddiquea BN, Shetty A, Bhattacharya O, Afroz A, Billah B. Global epidemiology of COVID-19 knowledge, attitude and practice: a systematic review and meta-analysis. BMJ Open. 2021;11(9):e051447. doi:10.1136/bmjopen-2021-051447

14. Qurashi AA, Alanazi RK, Alhazmi YM, et al. Saudi radiology personnel’s perceptions of artificial intelligence implementation: a cross-sectional study. J Multidiscip Healthc. 2021;14:3225–3231. doi:10.2147/JMDH.S340786

15. Coakley S, Young R, Moore N, et al. Radiographers’ knowledge, attitudes and expectations of artificial intelligence in medical imaging. Radiography. 2022;28(4):943–948. doi:10.1016/j.radi.2022.06.020

16. Botwe BO, Antwi WK, Arkoh S, Akudjedu TN. Radiographers’ perspectives on the emerging integration of artificial intelligence into diagnostic imaging: the Ghana study. J Med Radiat Sci. 2021;68(3):260–268. doi:10.1002/jmrs.460

17. Sun Q, Yu C, Zheng Z, et al. Knowledge, attitude, and practices on COVID-19 prevention and diagnosis among medical workers in the radiology department: a multicenter cross-sectional study in China. Front Public Health. 2023;11:1110893. doi:10.3389/fpubh.2023.1110893

18. Park SH, Han K. Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology. 2018;286(3):800–809. doi:10.1148/radiol.2017171920

19. Abuzaid MM, Elshami W, Tekin H, Issa B. Assessment of the willingness of radiologists and radiographers to accept the integration of artificial intelligence into radiology practice. Acad Radiol. 2022;29(1):87–94. doi:10.1016/j.acra.2020.09.014

20. Lee F, Suryohusodo AA. Knowledge, attitude, and practice assessment toward COVID-19 among communities in East Nusa Tenggara, Indonesia: a cross-sectional study. Front Public Health. 2022;10:957630. doi:10.3389/fpubh.2022.957630

21. Ongena YP, Haan M, Yakar D, Kwee TC. Patients’ views on the implementation of artificial intelligence in radiology: development and validation of a standardized questionnaire. Eur Radiol. 2020;30(2):1033–1040. doi:10.1007/s00330-019-06486-0

22. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 2021;4(1):140. doi:10.1038/s41746-021-00509-1

23. Sharma M, Savage C, Nair M, et al. Artificial intelligence applications in health care practice: scoping review. J Med Internet Res. 2022;24(10):e40238. doi:10.2196/40238

24. Haan M, Ongena YP, Hommes S, Kwee TC, Yakar D. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol. 2019;16(10):1416–1419. doi:10.1016/j.jacr.2018.12.043

25. Huisman M, Ranschaert E, Parker W, et al. An international survey on AI in radiology in 1041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. Eur Radiol. 2021;31(9):7058–7066. doi:10.1007/s00330-021-07781-5

26. Ooi SKG, Makmur A, Soon AYQ, et al. Attitudes toward artificial intelligence in radiology with learner needs assessment within radiology residency programmes: a national multi-programme survey. Singapore Med J. 2021;62(3):126–134. doi:10.11622/smedj.2019141

27. Abuzaid MM, Tekin HO, Elhag IR, Elshami W. Assessment of MRI technologists in acceptance and willingness to integrate artificial intelligence into practice. Radiography. 2021;27(1):S83–S87. doi:10.1016/j.radi.2021.07.007

28. Abuzaid MM, Elshami W, Fadden SM. Integration of artificial intelligence into nursing practice. Health Technol. 2022;12(6):1109–1115. doi:10.1007/s12553-022-00697-0

29. Waymel Q, Badr S, Demondion X, Cotten A, Jacques T. Impact of the rise of artificial intelligence in radiology: What do radiologists think? Diagn Interv Imag. 2019;100(6):327–336. doi:10.1016/j.diii.2019.03.015

30. Ng CT, Roslan SNA, Chng YH, et al. Singapore radiographers’ perceptions and expectations of artificial intelligence - A qualitative study. J Med Imag Radiat Sci. 2022;53(4):554–563. doi:10.1016/j.jmir.2022.08.005
