International Journal of General Medicine, Volume 17

Transforming Healthcare with AI: Promises, Pitfalls, and Pathways Forward

Authors Shuaib A 

Received 31 December 2023

Accepted for publication 17 April 2024

Published 1 May 2024 Volume 2024:17 Pages 1765–1771

DOI https://doi.org/10.2147/IJGM.S449598

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Dr Woon-Man Kung



Ali Shuaib

Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Safat, 13110, Kuwait

Correspondence: Ali Shuaib, Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, P.O. Box 24923, Safat, 13110, Kuwait, Email [email protected]

Abstract: This perspective paper provides a comprehensive examination of artificial intelligence (AI) in healthcare, focusing on its transformative impact on clinical practices, decision-making, and physician-patient relationships. By integrating insights from evidence, research, and real-world examples, it offers a balanced analysis of AI’s capabilities and limitations, emphasizing its role in streamlining administrative processes, enhancing patient care, and reducing physician burnout while maintaining a human-centric approach in medicine. The research underscores AI’s capacity to augment clinical decision-making and improve patient interactions, but it also highlights the variable impact of AI in different healthcare settings. The need for context-specific adaptations and careful integration of AI technologies into existing healthcare workflows is emphasized to maximize benefits and minimize unintended consequences. Significant attention is given to the implications of AI for the roles and competencies of healthcare professionals. The emergence of AI necessitates new skills in data literacy and technology use, prompting a shift in educational curricula towards digital health and AI training. Ethical considerations are a pivotal aspect of the discussion. The paper explores the challenges posed by data privacy concerns, algorithmic biases, and ensuring equitable access to AI-driven healthcare. It argues for the development of comprehensive ethical frameworks and ongoing research to guide the responsible use of AI in healthcare. In conclusion, the paper advocates a balanced approach to AI adoption in healthcare, highlighting the importance of ongoing research, strategic implementation, and the synergistic combination of human expertise with AI technologies for optimal patient care.

Keywords: artificial intelligence, healthcare transformation, AI integration strategies

Introduction

The healthcare industry faces pressing challenges, including inefficient systems, suboptimal decisions, and strained physician-patient relationships.1 This situation has led to substantial interest in leveraging artificial intelligence (AI) to improve healthcare quality, access, and delivery. AI collectively refers to computational techniques that enable systems to perform tasks that normally require human intelligence, such as visual perception, speech recognition, and decision-making.2 By analyzing complex medical datasets beyond human capabilities, AI has the potential to transform clinical care and healthcare operations.

However, realizing AI’s full potential in enhancing health outcomes necessitates grappling with the complex practical barriers and risks involved in integrating these rapidly evolving technologies into established healthcare workflows, systems, and practices. For instance, AI-powered mobile health applications have the potential to provide vital healthcare information and services in remote areas, addressing the challenges of diverse healthcare systems globally.3 These applications can leverage machine learning, cloud computing, and wearable technologies to gather and analyze health data, detect diseases, and improve healthcare prediction systems.4 The integration of health services into technology-based systems, such as mobile health applications, offers convenience and better health services for individuals and healthcare personnel.5 Additionally, the use of AI algorithms in healthcare can aid in diagnosis, drug development, personalized medicine, patient monitoring, and care.6 However, it is important to consider the positive and negative effects of mobile health applications, especially regarding privacy and trust.7 By developing adaptable AI solutions that consider the challenges and needs of diverse healthcare systems, as discussed recently in depth by Wang and Preininger8 and Chakraborty et al,9 these applications can have a transformative impact on healthcare access and outcomes globally. Understanding and addressing these varied global needs is crucial for the responsible and effective implementation of AI in healthcare worldwide.

This perspective paper aims to provide an in-depth and balanced analysis of the major opportunities as well as the risks and challenges associated with effectively incorporating AI capabilities into healthcare environments. It integrates insights from emerging evidence regarding AI applications in medicine, drawing from research studies, expert opinions, and real-world examples of AI implementation in healthcare settings. By critically evaluating the evidence and considering both the potential benefits and limitations of AI in healthcare, this paper offers a measured perspective on how healthcare organizations can harness the transformative potential of AI while proactively addressing its pitfalls and unintended consequences.

Administrative Burdens in Healthcare

Healthcare administrators face steadily growing clerical burdens from excessive documentation, coding, insurance approvals, and other regulatory requirements that detract from direct patient care activities.1,3 According to some estimates, for every hour spent on direct face-to-face patient care, physicians spend nearly two hours on electronic health record (EHR) and desk work related to administrative responsibilities.1 Such burdens are a key contributor to widespread physician burnout, which in turn exacerbates the strain on patient-physician relationships and the overall quality of care delivery.4,5

Significantly, AI in EHRs has been shown to reduce the time clinicians spend on documentation compared to traditional methods. This efficiency not only streamlines administrative processes but also helps alleviate the physician burnout associated with excessive administrative tasks.10 AI-powered virtual assistants and chatbots can likewise handle a substantial share of patient appointment bookings, insurance authorizations, and other clerical tasks.11

However, significant barriers exist in integrating such tools into entrenched legacy systems and convincing users to adopt new workflows. For instance, the implementation of AI scheduling tools at the Mayo Clinic encountered adoption issues owing to poor integration with existing health IT systems.12 The prevailing obstacles to AI implementation include the lack of interoperability standards, cost constraints in system upgrades, and cultural resistance to disruptive changes. Nevertheless, with deliberate efforts to adapt systems and persuade users, thoughtfully implemented automation has vast potential to optimize workflows and allow clinicians to refocus on core patient care activities.

Clinical Decision Support Systems

In addition to indirect workflow enhancements, AI can perform direct clinical functions through clinical decision support (CDS) systems, aiding physicians in tasks such as diagnosis and treatment planning.2 For instance, deep learning techniques allow computers to rapidly analyze massive sets of diverse clinical data from sources such as medical records, imaging studies, and laboratory test results to provide doctors with evidence-based diagnostic and treatment recommendations at the point of care. In certain diagnostic specialties, such as radiology and pathology, AI has demonstrated accuracy rivaling or exceeding human clinicians in interpreting medical images and tissue slides.12,13 In addition, an AI-powered CDS utilizing comprehensive patient information could predict future disease trajectories to guide targeted preventive care. At the same time, effectively implementing AI-based CDS in healthcare requires understanding the various determinants of its success.14

However, legitimate risks exist around over-trusting AI guidance. For example, well-documented cases reveal that AI diagnostic models can inadvertently perpetuate and amplify race, gender, and other biases due to unrepresentative training data and poor generalizability, or simply reflect existing societal prejudices.15–17 Errors can also emerge in CDS systems due to a range of technical factors.17 Therefore, AI should aim to augment clinician capabilities rather than replace human expertise and responsibility in medical decision-making. In addition, healthcare teams should maintain the perspective that AI tools are only as good as the quality of their underlying data. The thoughtful integration of AI entails guarding against overreliance on it and emphasizing that physicians’ professional judgments should take precedence over algorithmic outputs in the case of conflicts. Ongoing audits of AI system performance after deployment, together with recalibration, will also be key to ensuring patient safety and positive outcomes.18
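To make the notion of post-deployment auditing concrete, the following minimal sketch (with a hypothetical tolerance threshold and synthetic predictions, not drawn from any cited system) flags a deployed model for recalibration when its recent accuracy drifts below its validated baseline:

```python
# Illustrative sketch: post-deployment monitoring of a CDS model's accuracy.
# The tolerance, predictions, and labels are hypothetical, for demonstration only.

def audit_model_performance(baseline_accuracy, recent_predictions, recent_labels,
                            tolerance=0.05):
    """Flag a deployed model for recalibration if its recent accuracy
    drops more than `tolerance` below the validated baseline."""
    correct = sum(p == y for p, y in zip(recent_predictions, recent_labels))
    recent_accuracy = correct / len(recent_labels)
    needs_review = recent_accuracy < baseline_accuracy - tolerance
    return recent_accuracy, needs_review

# Hypothetical example: a model validated at 90% accuracy now scores 80%.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
acc, flag = audit_model_performance(0.90, preds, labels)
```

In practice such checks would run continuously over clinically validated metrics rather than raw accuracy, but the principle is the same: a pre-registered baseline, a drift tolerance, and an explicit trigger for human review.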

Impact on Physician-Patient Relationships

In addition to direct clinical applications, the thoughtful implementation of AI offers indirect potential benefits for physician-patient relationships and the overall quality of care delivery. By automating burdensome administrative duties, AI can allow physicians to devote more time to holistically engaged interactions with patients.19 More attentive and personalized care can in turn enhance patients’ trust and satisfaction with their treatments.20 The increased time available for patient interaction also provides a valuable opportunity for physicians to demonstrate empathy, a key factor in patient satisfaction and treatment adherence.21,22

Empirical research, including studies by Gidwani et al, indicates that AI tools such as digital medical scribes significantly increase the time physicians can allocate to patient care, thus improving the quality of these interactions. Findings from Holtz et al20 further suggest that AI-assisted consultations are perceived as more comprehensive and attentive, enhancing patient satisfaction.

However, the impact of AI on physician-patient relationships varies significantly, underscoring the need for context-specific evaluations and adaptations. Sauerbrei et al23 highlight the importance of redefining doctor-patient dynamics within an AI-enhanced environment. This redefinition involves incorporating digital health and AI training in medical education to adeptly navigate the evolving technological landscape. Addressing user acceptance of AI, along with its unintended consequences, is critical and hinges on optimal integration into existing workflows and adaptive change management.24 A key consideration is the potential shift in physicians’ focus from building deeper patient relationships to managing increased patient volumes, a change facilitated by AI-driven administrative efficiencies. Proactive engagement with potential AI pitfalls, especially those that could undermine patient trust, is paramount. Ongoing monitoring and quality assurance of AI systems are essential to maintain and bolster this trust, ensuring AI serves as an augmentation rather than a detriment to healthcare relationships. Ultimately, maintaining realistic expectations regarding the limits and capabilities of AI technology is crucial. This balanced approach ensures that AI is leveraged to enhance, rather than inadvertently impair, the fundamentally human aspects of healthcare, preserving the essence of patient-centered care in an increasingly tech-driven world.25

The integration of AI in healthcare is not only revolutionizing clinical practices but also calling for fresh competencies in data comprehension and technological proficiency for healthcare providers. This change has implications for the educational curricula of healthcare professionals, underlining the need for digital health and AI training. AI is transforming the patient care landscape, necessitating a novel approach to understanding how technology can complement, not replace, human-centered care. The effects of AI on healthcare professionals are significant, necessitating a reassessment of roles and obligations in a tech-driven healthcare ecosystem.

Data Privacy and Security Challenges

The use of vast amounts of patient data to develop and implement AI algorithms also raises critical challenges in terms of privacy protection and security. Robust technical safeguards encompassing encryption, access controls, and cybersecurity measures are fundamental to preventing data breaches and ensuring patient confidentiality.26 Therefore, strong data governance policies, transparency protocols, personnel training, and ethical values embedded across teams are essential in the handling of health data by AI systems.27

Instilling an organizational culture that treats patient information with respect and accountability is vital for earning public trust. Numerous cases of unauthorized access to patient datasets have undermined patients’ and families’ confidence, exacerbated the risks of AI, and widened disparities in healthcare. For instance, Google’s DeepMind obtained access to millions of identifiable UK patient records from the National Health Service with inadequate consent controls.28 Therefore, promoting policies developed through multi-stakeholder engagement and implementing oversight mechanisms can help align AI data practices with key ethical principles, such as informed consent, minimal use, and non-discrimination.27 Ultimately, responsible data stewardship requires ingraining conscientious mindsets across all personnel involved in collecting, processing, and administering health data.
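As one illustration of the technical safeguards discussed above, the following minimal sketch (with a hypothetical key and record fields) pseudonymizes direct patient identifiers via keyed hashing before records are shared with an AI development team; real deployments would additionally require key management, access controls, audit logging, and encryption in transit and at rest:

```python
# Illustrative sketch: keyed pseudonymization of patient identifiers.
# The key and record fields are hypothetical, for demonstration only.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "NHS-1234567", "age_band": "60-69", "diagnosis": "I10"}
shared = {**record, "patient_id": pseudonymize(record["patient_id"])}
```

Because the same identifier always maps to the same token, records can still be linked across datasets for model training without exposing the underlying identity to the receiving team.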

Algorithmic Bias Risks

Well-documented cases have revealed that AI systems can inadvertently inherit, perpetuate, and amplify biases along race, gender, and other dimensions due to flawed or unrepresentative training data.15,16 In healthcare, such biases can significantly compromise patient safety and the quality of care for marginalized groups. For example, an AI algorithm used to prioritize patients for additional care resources exhibited racial bias against Black patients, even though lack of access to care was already disproportionately harming minority communities.16

Mitigating such potential harm will require that diversity and inclusion be integral throughout the AI development lifecycle, from design choices in data sampling and labeling to testing systems on representative populations.15 However, technical remedies alone are insufficient without concomitant human initiatives to make healthcare provision and technological innovation more equitable. Thoughtful AI governance entails empowering patients and communities to participate in technology deployment decisions that affect them through open dialogue regarding the technology’s values and risks. Fostering institutional reflexivity regarding deficiencies and blind spots is therefore vital. Overall, AI should not be viewed as an easy solution that overlooks the biases deeply rooted in healthcare systems and social structures.
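As a purely illustrative sketch (with synthetic groups, labels, and predictions, not drawn from any cited study), a subgroup audit of the kind described above can start with something as simple as comparing false-negative rates across patient groups, since a model that disproportionately misses truly positive cases in one group is failing that group clinically:

```python
# Illustrative sketch: auditing a model's false-negative rate by patient group.
# The groups, labels, and predictions are synthetic, for demonstration only.
from collections import defaultdict

def false_negative_rates(groups, y_true, y_pred):
    """Per-group share of truly positive cases the model missed."""
    missed = defaultdict(int)
    positives = defaultdict(int)
    for g, yt, yp in zip(groups, y_true, y_pred):
        if yt == 1:
            positives[g] += 1
            if yp == 0:
                missed[g] += 1
    return {g: missed[g] / positives[g] for g in positives}

groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
y_true = [1,   1,   0,   1,   1,   1,   1,   0]
y_pred = [1,   1,   0,   0,   0,   1,   1,   1]
rates = false_negative_rates(groups, y_true, y_pred)
```

In this synthetic example the model misses none of group A’s positive cases but two-thirds of group B’s, the kind of disparity that representative test populations and routine subgroup reporting are meant to surface before deployment.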

Governance Frameworks and Future Research Directions in AI for Healthcare

Governing responsible and ethical AI integration in healthcare is a complex challenge that demands iterative, adaptive approaches.29 While prescriptive, fixed regulations often lag behind rapid technological advancements, effective governance requires nuanced, contextual decision-making and continual updating as AI systems evolve.30 This encompasses a comprehensive framework based on key guiding principles:

  • Transparency: Ensuring AI decision-making processes are understandable and transparent to patients and healthcare providers.31
  • Accountability: Creating mechanisms to address errors or biases in AI systems, with clearly defined responsibilities.32
  • Equity: Designing and implementing AI systems that are fair and do not exacerbate healthcare disparities.16
  • Respect for Patient Autonomy: Upholding patient rights and privacy in AI-assisted decisions.33

Effective structures, such as the European Union’s General Data Protection Regulation (GDPR) for data protection and the US Food and Drug Administration’s (FDA) approach to AI-based medical devices, serve as models for balancing innovation with safety and privacy concerns.34

Collaborative engagement through forums and policy sandboxes is crucial for strengthening oversight and evolving governance structures. These platforms allow diverse stakeholders to identify blind spots and gain insights on balancing risks and benefits in specific contexts.

Future Research Priorities

As the integration of AI into healthcare continues to evolve, several key areas of research emerge as critical for the future:

  • Development of Robust AI Models: The focus should be on overcoming data heterogeneity and bias to develop AI algorithms that are both robust and generalizable. This involves crafting algorithms capable of accurate performance across various real-world clinical datasets, thereby enhancing their reliability and applicability in multiple healthcare contexts.
  • Understanding AI Decision-Making: Enhancing the transparency and interpretability of AI systems is imperative. Research should aim at unpacking AI decision-making processes to foster clinical trust and enable informed decision-making by healthcare professionals, possibly through developing methodologies that render AI algorithms more comprehensible to non-experts.
  • Long-Term Impacts Assessment: There is a need to systematically study the long-term effects of AI on patient outcomes and the overall efficiency of healthcare systems. This includes evaluating aspects such as treatment efficacy, cost-effectiveness, patient satisfaction, and the impact on healthcare workflows. Such studies will provide valuable insights into the tangible benefits and potential drawbacks of AI in healthcare.
  • Ethical Considerations: Ethical issues surrounding AI in healthcare, such as data privacy, algorithmic biases, and equitable access to AI-driven care, must be addressed comprehensively. Research should focus on developing ethical guidelines and frameworks specific to AI in healthcare, ensuring that AI applications are developed and used in a manner that upholds ethical standards and promotes equity.

These areas highlight the need for a flexible, learn-as-we-go approach to AI governance in healthcare, emphasizing transparent cooperation among patients, providers, developers, and other stakeholders. The ongoing evolution of AI technologies and their application in healthcare requires a dynamic and ethically grounded framework to fully realize their potential while mitigating risks.

Conclusion

The integration of Artificial Intelligence (AI) into healthcare marks a transformative era, redefining clinical practices, healthcare administration, and the essence of patient care. AI’s capacity to analyze extensive data sets extends beyond augmenting clinical decision-making and operational efficiencies; it signifies a paradigm shift toward enhancing the human elements of healthcare—empathy, understanding, and patient-centeredness.

Administrative efficiency achieved through AI not only alleviates the workload on healthcare professionals but also redirects their focus toward patient-centric care, potentially revitalizing the ethos of healthcare. In the clinical realm, AI’s precision and analytical prowess promise a new horizon of diagnostic and therapeutic accuracy, yet they underscore the indispensable value of human judgment and the irreplaceable nuances of physician-patient interactions.

However, this technological evolution does not come without its challenges. Ethical considerations, data privacy, and algorithmic fairness remain at the forefront of the discourse, demanding robust governance and a steadfast commitment to ethical principles. These challenges necessitate a collaborative, interdisciplinary approach to navigate the complexities of AI integration, ensuring that technological advancements align with the core values of healthcare.

Future research must therefore not only pursue the advancement of AI technologies but also delve into the socio-ethical dimensions of their application in healthcare. The development of transparent, accountable, and equitable AI systems should be prioritized to foster trust and uphold the dignity and rights of patients.

In conclusion, as we stand on the brink of this new era, the integration of AI in healthcare presents a narrative of balanced optimism. It beckons a future where technology and human compassion converge to enhance healthcare delivery, patient outcomes, and professional satisfaction. Yet, it also calls for a conscientious journey forward, guided by the principles of ethical integrity, inclusivity, and a deep-seated respect for the sanctity of the patient-caregiver relationship. By embracing these tenets, the healthcare community can harness the potential of AI to create a more efficient, empathetic, and equitable healthcare system for all.

Abbreviations

AI, Artificial Intelligence; EHR, electronic health record; GDPR, General Data Protection Regulation; FDA, Food and Drug Administration.

Disclosure

The author reports no conflicts of interest in this work.

References

1. Sinsky C, Colligan L, Li L, et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med. 2016;165(11):753. doi:10.7326/M16-0961

2. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56. doi:10.1038/s41591-018-0300-7

3. Rudd J, Igbrude C. A global perspective on data powering responsible AI solutions in health applications. AI Ethics. 2023. doi:10.1007/s43681-023-00302-8

4. Eşiyok A, Uslu Divanoğlu S, Çelik R. Digitalization in healthcare - Mobile Health (M-Health) applications. Aksaray Üniversitesi İktisadi Ve İdari Bilim Fakültesi Derg. 2023;15(2):165–174. doi:10.52791/aksarayiibd.1241287

5. Sharma S, Kumari B, Ali A, et al. Mobile technology: a tool for healthcare and a boon in pandemic. J Fam Med Prim Care. 2022;11(1):37. doi:10.4103/jfmpc.jfmpc_1114_21

6. Sharma HK, Tomar R, Ahlawat P. AI-enabled cloud-based intelligent system for telemedicine. In: Mathematical Modeling for Intelligent Systems. 1st ed. Chapman and Hall/CRC; 2022:75–84. doi:10.1201/9781003291916-5

7. Tekin E, Emikönel S. Comparison of mobile health application examples in Turkey and the World. In: Akkucuk U, editor. Advances in Healthcare Information Systems and Administration. IGI Global; 2023:223–236. doi:10.4018/978-1-6684-8103-5.ch013

8. Wang F, Preininger A. AI in health: state of the art, challenges, and future directions. Yearb Med Inform. 2019;28(01):016–026. doi:10.1055/s-0039-1677908

9. Chakraborty C, Bhattacharya M, Pal S, Lee SS. From machine learning to deep learning: advances of the recent data-driven paradigm shift in medicine and healthcare. Curr Res Biotechnol. 2024;7:100164. doi:10.1016/j.crbiot.2023.100164

10. Glover WJ, Li Z, Pachamanova D. The AI-enhanced future of health care administrative task management. NEJM Catal Innov Care Deliv. 2022;3(2). doi:10.1056/CAT.21.0355

11. Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021;8(2):e188–e194. doi:10.7861/fhj.2021-0095

12. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. 2019;1(6):e271–e297. doi:10.1016/S2589-7500(19)30123-2

13. Steiner DF, MacDonald R, Liu Y, et al. Impact of deep learning assistance on the histopathologic review of lymph nodes for metastatic breast cancer. Am J Surg Pathol. 2018;42(12):1636–1646. doi:10.1097/PAS.0000000000001151

14. Bajgain B, Lorenzetti D, Lee J, Sauro K. Determinants of implementing artificial intelligence-based clinical decision support tools in healthcare: a scoping review protocol. BMJ Open. 2023;13(2):e068373. doi:10.1136/bmjopen-2022-068373

15. Char DS, Shah NH, Magnus D. Implementing machine learning in health care — addressing ethical challenges. N Engl J Med. 2018;378(11):981–983. doi:10.1056/NEJMp1714229

16. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi:10.1126/science.aax2342

17. Aquino YSJ. Making decisions: bias in artificial intelligence and data-driven diagnostic tools. Aust J Gen Pract. 2023;52(7):439–442. doi:10.31128/AJGP-12-22-6630

18. Falco G, Shneiderman B, Badger J, et al. Governing AI safety through independent audits. Nat Mach Intell. 2021;3(7):566–571. doi:10.1038/s42256-021-00370-7

19. Juang WC, Hsu MH, Cai ZX, Chen CM. Developing an AI-assisted clinical decision support system to enhance in-patient holistic health care. PLoS One. 2022;17(10):e0276501. doi:10.1371/journal.pone.0276501

20. Holtz B, Nelson V, Poropatich RK. Artificial intelligence in health: enhancing a return to patient-centered communication. Telemed E-Health. 2023;29(6):795–797. doi:10.1089/tmj.2022.0413

21. Quiroz JC, Laranjo L, Kocaballi AB, Berkovsky S, Rezazadegan D, Coiera E. Challenges of developing a digital scribe to reduce clinical documentation burden. Npj Digit Med. 2019;2(1):1–6. doi:10.1038/s41746-019-0190-1

22. Shuaib A, Arian H, Shuaib A. The increasing role of artificial intelligence in health care: will robots replace doctors in the future? Int J Gen Med. 2020;13:891–896. doi:10.2147/IJGM.S268093

23. Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak. 2023;23(1):73. doi:10.1186/s12911-023-02162-y

24. Aung YYM, Wong DCS, Ting DSW. The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. Br Med Bull. 2021;139(1):4–15. doi:10.1093/bmb/ldab016

25. Jotterand F, Bosco C. Keeping the “Human in the Loop” in the age of Artificial Intelligence. Sci Eng Ethics. 2020;26(5):2455–2460. doi:10.1007/s11948-020-00241-1

26. Kaissis GA, Makowski MR, Rückert D, Braren RF. Secure, privacy-preserving and federated machine learning in medical imaging. Nat Mach Intell. 2020;2(6):305–311. doi:10.1038/s42256-020-0186-1

27. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare. Elsevier; 2020:295–336. doi:10.1016/B978-0-12-818438-7.00012-5

28. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics. 2021;22(1):122. doi:10.1186/s12910-021-00687-3

29. Morley J, Floridi L. The limits of empowerment: how to reframe the role of mHealth tools in the healthcare ecosystem. Sci Eng Ethics. 2020;26(3):1159–1183. doi:10.1007/s11948-019-00115-1

30. Gilbert S, Anderson S, Daumer M, Li P, Melvin T, Williams R. Learning from experience and finding the right balance in the governance of artificial intelligence and digital health technologies. J Med Internet Res. 2023;25:e43682. doi:10.2196/43682

31. Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLOS Med. 2018;15(11):e1002689. doi:10.1371/journal.pmed.1002689

32. Kroll JA, Huey J, Barocas S, et al. Accountable algorithms. Univ Pa Law Rev. 2017;165:633.

33. Cohen IG, Amarasingham R, Shah A, Xie B, Lo B. The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Aff. 2014;33(7):1139–1147. doi:10.1377/hlthaff.2014.0048

34. Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med. 2019;25(1):37–43. doi:10.1038/s41591-018-0272-7

Creative Commons License © 2024 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.