AI in healthcare



Artificial intelligence (AI) will eventually become part of healthcare. Though the A in AI stands for artificial, it can be augmenting, amplifying, accelerating, assisting and analysing in an ambient milieu. AI techniques are unlocking clinical information hidden in massive data sets. AI is pole-vaulting, not leapfrogging. As much as 80% of the world's 41 zettabytes (41 trillion gigabytes) of digital information is unstructured. AI will detect patterns and trends that human grey matter is unable to decipher.

As a neurosurgeon trained five decades ago, I am concerned that in our enthusiasm to be future-ready, we may not look at the other side. Good and evil are two sides of a coin. Queries will, no doubt, be addressed. Transitions offer great opportunities. We should never forget that it was natural, native intelligence (NI) that led to AI. In a world where algorithms make diagnoses, wearables track vital signs and remotely controlled robots operate on people, AI should be subservient to NI.

AI enthusiasts argue that specialists should not spend valuable time extracting information from voluminous data, clinical findings and reports. Specialists will instead manage information extracted by AI, enabling more time with the patient. AI will assist clinicians, providing current information from journals, textbooks and clinical practice to improve individual care. AI can draw on a much larger population of similar patients, assisting in real-time inferences.

When AI recommends CAR T cell therapy, does it realise that the patient cannot raise ₹40 lakh for it? Nothing is more devastating than being advised a treatment beyond one's reach. Using NI, I would not discuss this line of management at all. We factor in what the patient and family want before recommending treatment.

Lars Leksell, inventor of the Gamma Knife, remarked, "A fool with a tool is still a fool." When one has a hammer, everything round looks like a nail — an expensive hammer more so! Technology is a means to an end, not an end in itself. AI is an enabler. Future doctors schooled in an AI milieu may not be taught that every clinician needs to get into the patient's mind. Will empathising with the patient and family be part of the standard operating protocol prescribed by AI? All are equal, but some are more equal than others. Socioeconomic status plays a part in implementing management plans.

AI development must include the enforcement of ethics: constant human oversight, technical robustness, real-time continuous retraining on unbiased data, safety, privacy, data governance, transparency, diversity, non-discrimination, societal and environmental well-being, and accountability. To be accountable and trustworthy, machine learning algorithms must embed ethics and humane values. Trust is the key word for doctors. AI systems are becoming autonomous, resulting in more direct-to-patient advice that bypasses human intervention. The decision to rely on AI is itself an NI judgment. Culture-sensitive AI systems must have moral and ethical behaviour patterns aligned with human interests. Constantly re-evaluated, they need fresh, additional training data sets. AI should not lead to depersonalisation and dehumanisation. A smart, empathetic clinician using AI will become smarter. A mediocre clinician may not. There will be no change in healthcare outcomes when a below-average clinician uses AI.

drkganapathy@gmail.com
