Complexities of artificial intelligence in health care
The rise of artificial intelligence (AI) has the potential to drastically improve physicians’ professional lives. With the release of ChatGPT—a generative AI tool that engages in human-like conversation equipped with the vast knowledge of the Internet—public attention has been captivated, along with mixed feelings of fear and awe regarding future implications.
At its core, the power of modern-day AI lies in its ability to self-learn and recognize patterns. Combined with fast computational speed and a seemingly limitless memory, AI programs work much like a human mind by anticipating problems, adapting, and learning from past mistakes. Recently, this formidable tool has made strides in health care. While AI solutions expand and work toward optimizing patient care, it is especially important that policy keeps up with this technological surge to ensure effective and safe implementation in our health care system.
AI in health care is developing rapidly, particularly in the realm of automating routine tasks. For example, primary care and emergency physicians are overwhelmed with patients. AI’s ability to assist with paperwork, prioritize labs, and solve scheduling challenges should alleviate administrative burden and return time to physicians. Administrative load remains a significant contributor to physician burnout and compassion fatigue, as highlighted by the Canadian Medical Association’s National Physician Health Survey.[1] Companies have created tools to address these concerns—for example, automatic SOAP notes, where AI listens to a patient encounter, analyzes it, and organizes key points into a chart instantaneously. Some applications are integrated with scheduling, allowing follow-up appointments, lab requisitions, and specialist referrals to be sent out by voice. The time saved allows physicians to focus on human interaction in patient care, without sacrificing efficiency.
AI’s development in diagnostics may be its most striking application. The Massachusetts Institute of Technology trained a program with over 32 000 mammogram images of women diagnosed with cancer. The algorithm demonstrated remarkable accuracy in detecting the presence of disease, including subtleties often imperceptible to humans.[2]
The utility of such tools is exciting; however, concerns regarding data quality and security persist. If poor-quality, biased, or incomplete data are used to train an algorithm, AI may perpetuate or exacerbate social inequities present in our health care system today—a phenomenon known as “algorithmic bias.”[3]
Additionally, security risks may arise during the development of AI algorithms, which have historically lacked privacy measures. A notable instance was when DeepMind (an AI company owned by Google) partnered with the Royal Free London NHS Foundation Trust to use machine learning in the management of acute kidney injury.[4] The UK’s Department of Health and Social Care noted that no privacy measures were discussed, and patient data were obtained on an “inappropriate legal basis.” Google subsequently took over DeepMind’s application, transferring control of patient data from the UK to the US. While the actions taken were legal, it is reasonable to believe individuals would have concerns about their health data being used in this manner.
As AI in health care evolves, policy frameworks must adapt to ensure ethical, legal, and societal considerations are addressed in tandem with the technology. Policies may differ between provinces. In March and April 2024, the College of Physicians and Surgeons of BC issued two statements: one for all registrants and one specific to registrants who work in diagnostic facilities. During medical encounters, registrants may use AI provided they adhere to principles such as privacy, confidentiality, and consent.[5] Importantly, physicians retain responsibility for interpreting AI output and making final decisions about patient care. For registrants working in diagnostic facilities, only AI approved by the Diagnostic Accreditation Program may be used, and only as a supplementary aid for triage, diagnostics, and quantification in practice.[6] For BC, this is a great step in the right direction.
The ever-evolving nature of AI requires continuous reassessment of regulatory frameworks from a multidisciplinary lens of law, ethics, and medicine. By being aware of developments, we can harness the power of AI while prioritizing patient-physician benefit, societal trust in our health care system, and ethical standards.
—William Liu, BHSc
Member of the Council on Health Promotion
—Birinder Narang, MBBS, CCFP
Member of the Council on Health Promotion
This article was updated on 27 May 2024.
This article is the opinion of the authors and not necessarily the Council on Health Promotion or Doctors of BC. This article has not been peer reviewed by the BCMJ Editorial Board.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
1. Canadian Medical Association. Addressing physicians’ administrative burden—The invisible crisis in family medicine. 2023. Accessed 26 March 2024. www.cma.ca/latest-stories/addressing-physicians-administrative-burden-invisible-crisis-family-medicine.
2. Reardon S. Rise of robot radiologists. Nature. 2019. Accessed 27 March 2024. www.nature.com/articles/d41586-019-03847-z.
3. Igoe KJ. Algorithmic bias in health care exacerbates social inequities—How to prevent it. Harvard T.H. Chan School of Public Health. 2021. Accessed 27 March 2024. www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care.
4. Murdoch B. Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Med Ethics 2021;22:122.
5. College of Physicians and Surgeons of British Columbia. Interim guidance: Ethical principles for artificial intelligence in medicine. Revised 11 April 2024. Accessed 24 May 2024. www.cpsbc.ca/files/pdf/IG-Artificial-Intelligence-in-Medicine.pdf.
6. College of Physicians and Surgeons of British Columbia. Position statement: Introducing new technology and artificial intelligence in diagnostic facilities. 2024. Accessed 26 March 2024. www.cpsbc.ca/files/pdf/DAP-PS-Introducing-New-Technology-Artificial-Intelligence-Diagnostic-Facilities.pdf.