Imagine the very edge of a plateau, the point where firm land finally meets open sky. People unfortunate enough to meet this edge face the risk of plunging to their doom. Some purposely tempt fate by leaning over the edge, careful to keep just enough balance to avoid falling over. To onlookers, it is hard to tell whether these daredevils are about to topple over the precipice. As such, many people stay far away from the edge.
Alright, good. Now that you’ve imagined the scenario, switch up a few of the elements. Let the plateau become the entirety of the health industry and let the edge represent the murky ethics of health care. On any given day, physicians, nurses, and pharmacists alike approach this “edge of ethics” when dealing with patients and their medical records. Like the daredevils in the scenario above, these healthcare professionals have to take the “proper measures” (more on that another time) to keep themselves from violating healthcare ethics. Ethics is so paramount to physicians that they all must vow non-maleficence through the Hippocratic Oath. Even so, making ethical decisions is a struggle for physicians because they must weigh vastly different cultural views of death and pain, humanitarian ideals, and patient confidentiality.
Now picture a deep learning algorithm taking the place of a physician. This AI has no consciousness, no human needs, no human desires. With ethical dilemmas already being thorny issues for real human beings, it’s no wonder that only 47% of 81 surveyed healthcare organizations had implemented or were planning to implement AI imaging software in 2018 (KLAS Report on Artificial Intelligence in Imaging 2018). The fear of AI botching a patient diagnosis or leaking information that should have remained confidential is palpable in the air surrounding hospitals, clinics, and pharmacies. The worry that physicians could lose their jobs to AI is not misplaced either, considering that AI technology already exists that can detect atrial fibrillation 19% or more accurately than physicians (check out the 3M™ Littmann® CORE Digital Stethoscope if you’re interested in learning more).
When approaching the concept of ethics in healthcare, one will be blown away by the sheer number of views and questions regarding it. For instance:
Is it okay to leave someone’s life in the hands of AI?
Is it okay to be prescribed medication by a device without having your doctor review the diagnosis first?
Is it okay to have an algorithm fill in for a physician?
To answer all of these questions in one blog post would be akin to stuffing an elephant in a fridge. So be sure to check back in on the “That’s What You Want Me To Think…” blog for our series of posts on AI ethics, where we’ll try to unpack theories on the ethical treatment of patients, their health records, and even physicians.
- Jon Cili