
As artificial intelligence (AI) becomes increasingly integrated into health care, it is essential for clinicians to remain vigilant regarding its ethical use. Without appropriate oversight, AI can inadvertently perpetuate or amplify health disparities, including racial inequities.[1] A recent policy statement from Doctors of BC emphasizes the importance of examining the ethical dimensions of AI tools.[2]
Clinicians who are well informed about AI and use it in an evidence-based manner can help reduce the associated ethical risks. The quality of an AI system depends on the quality of the data it is trained on. Data sets may suffer from availability bias, often mirroring present-day health care inequities. One study found that AI chest X-ray prediction models consistently underdiagnosed Hispanic women and other underserved populations.[3] This illustrates how structural bias can become embedded in AI algorithms and go undetected.
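To make the idea of auditing for underdiagnosis bias concrete, the following is a minimal sketch in Python (using the pandas library) that compares false-negative rates across patient subgroups; the data, column names, and subgroup labels are hypothetical illustrations, not figures from the study cited above.

```python
# Minimal sketch: auditing a classifier for underdiagnosis bias by comparing
# false-negative rates across demographic subgroups. The data here are
# hypothetical placeholders, not taken from the cited study.
import pandas as pd

# Each row: patient subgroup, true label (1 = disease present), model prediction
df = pd.DataFrame({
    "subgroup":   ["A", "A", "A", "B", "B", "B", "B"],
    "true_label": [1,   1,   0,   1,   1,   1,   0],
    "prediction": [1,   0,   0,   0,   0,   1,   0],
})

# Underdiagnosis = model predicts "no disease" when disease is actually present.
positives = df[df["true_label"] == 1]
fnr_by_group = (
    positives.assign(missed=lambda d: (d["prediction"] == 0).astype(int))
             .groupby("subgroup")["missed"]
             .mean()
)
print(fnr_by_group)  # e.g., A: 0.50, B: 0.67 -- a gap like this warrants review
```

In a real evaluation, these rates would be computed on a held-out clinical data set with appropriate statistical uncertainty, and a persistent gap between subgroups would prompt review of the training data and the model before deployment.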
To build more equitable systems, it is important to involve equity-deserving populations in both the design and validation of AI tools. With Indigenous communities, data sovereignty should be respected, and partnerships should be established to explore how to implement AI in culturally appropriate ways. Similarly, rural communities, older adults, and people with disabilities may face distinct barriers to accessing AI tools. Including the perspectives of these groups in AI model development helps ensure that models are shaped by the populations they are meant to serve. It is also crucial to recognize that some patients may be uncomfortable with, or lack access to, AI systems. Therefore, providers should proactively develop alternative care plans to ensure equitable access for all patients, respecting individual preferences and needs.
AI serves as a valuable adjunct to providers’ decision making. Like other established diagnostic tools, AI should be employed with a clear understanding of its strengths and limitations. Investment in AI literacy is also essential to ensure fair access to care. Clinicians are encouraged to pursue continuing education on AI in health care, including accredited online courses from providers such as Coursera, to remain current with evolving technologies. Health care organizations should prioritize training for providers who are unfamiliar with these tools, with an emphasis on ethical considerations. This will help bridge the digital divide, as AI uptake is currently concentrated among younger, more experienced individuals living in urban centres.[4]
When using AI-driven platforms, providers should give clear clinical context, specify relevant patient factors, and ask for verifiable references.
Here is an example of an effective prompt: “I am a family physician in Vancouver. What is the best antihypertensive medication for my 55-year-old Indigenous patient with comorbidities including heart failure and chronic kidney disease? Search PubMed for relevant publications and provide references for your answer. Select medications covered by Non-Insured Health Benefits.”
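As an illustration only, the sketch below shows one way such a structured prompt could be assembled so that the clinician’s role, patient context, sourcing instructions, and coverage constraints are stated consistently each time; the function and field names are hypothetical and are not tied to any particular AI platform.

```python
# Hypothetical sketch of a reusable prompt template for clinical queries.
# The function name and fields are illustrative assumptions; adapt them to
# whatever AI platform and local policies apply.

def build_prompt(role: str, patient_summary: str, question: str,
                 sourcing: str, constraints: str) -> str:
    """Assemble a structured prompt: who is asking, about whom, what is asked,
    how the answer should be sourced, and any coverage constraints."""
    return (
        f"I am a {role}. "
        f"Patient: {patient_summary}. "
        f"Question: {question} "
        f"{sourcing} "
        f"{constraints}"
    )

prompt = build_prompt(
    role="family physician in Vancouver",
    patient_summary="55-year-old Indigenous patient with heart failure and chronic kidney disease",
    question="What is the best antihypertensive medication for this patient?",
    sourcing="Search PubMed for relevant publications and provide references for your answer.",
    constraints="Select medications covered by Non-Insured Health Benefits.",
)
print(prompt)
```

Keeping the pieces of the prompt explicit makes it easier to review exactly what information is being sent to an external tool and to adapt the constraints to local formularies.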
Finally, we must advocate for greater accountability among AI companies, which share responsibility for the impact of their tools. The key issue is transparency, specifically around how data is sourced, processed, and stored. Not only is transparency the cornerstone of more inclusive AI models, but it also fosters trust in how one’s information will be used.
As AI continues to evolve, its potential should be recognized but also matched with a commitment to ethical integration. By prioritizing education, accountability, and community involvement, we can leverage AI to provide quality patient-centred care across all BC communities.
—William Liu, BHSc
Council on Health Promotion member
—Colin Siu, MD, CCFP, MPH
Council on Health Promotion former member
This article is the opinion of the authors and not necessarily the Council on Health Promotion or Doctors of BC. This article has not been peer reviewed by the BCMJ Editorial Board.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (creativecommons.org/licenses/by-nc-nd/4.0/).
1. Haider SA, Borna S, Gomez-Cabello CA, et al. The algorithmic divide: A systematic review on AI-driven racial disparities in healthcare. J Racial Ethn Health Disparities 2024. https://doi.org/10.1007/s40615-024-02237-0.
2. Doctors of BC. Artificial intelligence in health care. Policy statement. Updated April 2025. Accessed 8 July 2025. www.doctorsofbc.ca/sites/default/files/documents/2025-04-11-chep-ai-in-hc-policy-statement.pdf.
3. Seyyed-Kalantari L, Zhang H, McDermott MBA, et al. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nat Med 2021;27:2176-2182. https://doi.org/10.1038/s41591-021-01595-0.
4. McElheran K, Li JF, Brynjolfsson E, et al. AI adoption in America: Who, what, and where. National Bureau of Economic Research. October 2023. Accessed 15 July 2025. www.nber.org/papers/w31788.