The grandchild of a good friend of mine is about to graduate as a paralegal. Little does she know that, according to predictions by Oxford researchers, there is a 94% likelihood that paralegals’ work will be taken over by algorithms by 2033.[1] The same predictions give archivists a 76% chance and bus drivers an 89% chance of losing their jobs to automation, and so the list of the future unemployed goes. There are no specific predictions yet for physicians’ job security, but health care workers whose jobs revolve around collecting data may not be safe.
Simply put, algorithms are the decision-making steps of everything we do. A detailed recipe for making scrambled eggs or a coach’s step-by-step instructions for teaching someone to play tennis are simple examples of algorithms in everyday life. In medical practice, a patient presenting with stepwise symptoms A, B, and C may fit the specific algorithm of condition D, prompting the doctor to consider treatment E. Even a medical prescription, such as “take this pill twice a day on an empty stomach,” is a type of algorithm. Other examples in medical or surgical practice, in research, or in teaching include the decision-tree approaches to the assessment and treatment of all manner of conditions. In one way or another, consciously or otherwise, algorithms are used by all living creatures in decision making.
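To make the idea concrete, here is a minimal sketch, in Python, of the kind of stepwise decision algorithm described above. The symptoms A, B, and C, condition D, and treatment E are the same hypothetical placeholders used in the paragraph, not real clinical rules.

```python
# A minimal sketch of the hypothetical decision algorithm described above:
# stepwise symptoms A, B, and C suggest condition D, prompting treatment E.
# All names are illustrative placeholders, not clinical guidance.

def assess_patient(symptoms: set[str]) -> str:
    """Walk the decision tree one step at a time, as a clinician might."""
    if "A" not in symptoms:
        return "Condition D unlikely; reassess."
    if "B" not in symptoms:
        return "Partial match; gather more history."
    if "C" in symptoms:
        return "Pattern fits condition D: consider treatment E."
    return "A and B present without C; monitor and re-examine."

print(assess_patient({"A", "B", "C"}))  # -> consider treatment E
```

Each branch is a step a human wrote down in advance; that, as the next paragraphs note, is precisely where machine-derived algorithms differ.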
In over 40 years of medical work, I made many decisions, some with the guidance of decision trees, but I never thought about the myriad minutiae that may make up the steps in an algorithm. Yet, as a family doctor in the mid-1950s and 60s, regularly making two or three house calls a day, I filed away in the back of my subconscious mind information about my patients’ circumstances, their lifestyle, their home environment—all details relevant to my final assessment and my prescriptions. I was subconsciously collecting data for an algorithm that encompassed even the smallest steps in my management of the whole situation.
Then came computers. In the early 1950s, Alan Turing, a British mathematician, asked why these machines couldn’t use available information and reasoning to solve problems and make decisions. The basic principles of data collection were known, and optimism and expectations for an intelligent machine were high, but the obstacles were higher still: lack of computational power, speed, and machine memory. It took close to 50 years, well into the early 2000s, before the required decision-making programs and speech-recognition software became available. Now, in the 2020s, artificial intelligence (AI) technology can mimic doctors’ attempts to gather and make sense of health care data to a depth and detail previously unimagined. However, AI techniques differ from human thinking: the algorithms are identified or developed exclusively from the input data. As for the thinking part, even the experts don’t yet quite understand the machine logic behind AI predictions.
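That contrast can be shown in a few lines of code. The sketch below, which assumes the scikit-learn library and uses made-up toy data, lets a decision-tree model derive its own branching rules entirely from example cases rather than having a programmer write them out. It illustrates the principle only, not any real clinical system.

```python
# A toy illustration of the point above: in machine learning, the decision
# rules are not written by hand but derived entirely from the input data.
# Requires scikit-learn; the "symptom" data here are synthetic.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row records presence (1) or absence (0) of symptoms A, B, and C.
X = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 0], [1, 0, 1]]
y = ["D", "not D", "not D", "not D", "not D", "D"]  # diagnosis for each row

model = DecisionTreeClassifier().fit(X, y)

# The learned tree IS the algorithm: no human specified its branching rules,
# and inspecting it is the only way to see what logic it settled on.
print(export_text(model, feature_names=["A", "B", "C"]))
print(model.predict([[1, 1, 1]]))  # -> ['D']
```

Even in this toy case, the rules must be read back out of the trained model; with the far larger models used in practice, that opacity is exactly why experts struggle to explain individual AI predictions.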
Clinical applications of AI are relatively recent, and AI is seen as potentially helpful in several important areas. In diagnosis, combining the clinician’s skills with AI’s ability to process complex images such as CT scans, along with its capacity for in-depth searches of health records, is very promising. In prognosis, AI may serve well through its prediction models. In the emerging field of precision medicine, AI may assist with managing massive amounts of data about a patient’s behavior, environment, genome, and medical history, allowing the physician to understand the patient better. In surgical robotics, programming the instruments is another important AI task.
Physicians, and the health care system in general, must overcome two major hurdles to realize the potential benefits of AI. One hurdle is that AI technologies require significant expertise: health professionals may have to learn to think like data scientists, while technology professionals have to learn about the details of health care.
The other hurdle relates to ethical concerns, of which there are essentially three. First, machine learning and the use of AI require massive amounts of personal data to be gathered, perhaps at the cost of patient privacy. Automation is the second issue: while the use of AI promises more time to attend to patients’ needs, and perhaps a way to reduce burnout, it may also lead to staff who handle digital information in health care being replaced by technology. The third problem is that, because AI makes decisions solely on the basis of the information it receives, the data may be unfairly coded for a variety of reasons. This may lead to discrimination, unintended biases toward or neglect of segments of the population, or even a bias toward profitmaking.
The potential for the use of AI in health care is interesting, exciting, and in some ways still mysterious, but in the end it makes me yearn for the days of house calls.
—George Szasz, CM, MD
Reference
1. Harari YN. Homo Deus. Penguin Random House; 2017. p. 380-381.
Suggested reading
Harvard T.H. Chan School of Public Health. Applied artificial intelligence for health care. Accessed 26 April 2022. https://www.hsph.harvard.edu/ecpe/programs/applied-artificial-intelligence-for-health-care.
Harvard University. The history of artificial intelligence. Accessed 29 April 2022. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence.
Wikipedia. History of artificial intelligence. Accessed 29 April 2022. https://en.wikipedia.org/wiki/History_of_artificial_intelligence.
Wikipedia. Medical algorithm. Accessed 29 April 2022. https://en.wikipedia.org/wiki/Medical_algorithm.
This post has not been peer reviewed by the BCMJ Editorial Board.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.